StoGO is a global optimization algorithm that works by systematically dividing the search space (which must be bound-constrained) into smaller hyper-rectangles via a branch-and-bound technique, and searching them using a gradient-based local-search algorithm (a BFGS variant), optionally including some randomness; a brief call sketch follows the argument list below.
xtol_rel: stopping criterion for relative change reached.
randomized: logical; shall a randomizing variant be used?
nl.info: logical; shall the original NLopt info be shown?
...: additional arguments passed to the function.
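A minimal sketch of how these arguments combine in a call, assuming stogo() is the wrapper provided by the nloptr package (the package name is not stated on this page) and reusing the Rosenbrock function from the Examples section; the tolerance and flag values are illustrative, not defaults.

## Assumption: stogo() is provided by the nloptr package.
library(nloptr)
## Rosenbrock function from the Examples section.
fn <- function(x) (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2
res <- stogo(x0 = c(-1.2, 1), fn = fn,
             lower = c(-3, -3), upper = c(3, 3),  # bound constraints (required)
             xtol_rel = 1e-8,    # stop when the relative change is small
             randomized = TRUE,  # use the randomizing variant
             nl.info = FALSE)    # do not print the original NLopt info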
Returns
List with components:
par: the optimal solution found so far.
value: the function value corresponding to par.
iter: number of (outer) iterations, see maxeval.
convergence: integer code indicating successful completion (> 0) or a possible error number (< 0).
message: character string produced by NLopt and giving additional information.
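Continuing the sketch above (same assumption that stogo() comes from nloptr), the returned components can be inspected as follows:

library(nloptr)  # assumption: stogo() is provided by nloptr
res <- stogo(x0 = c(-1.2, 1),
             fn = function(x) (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2,
             lower = c(-3, -3), upper = c(3, 3))
res$par          # optimal solution found so far
res$value        # objective value at res$par
res$iter         # number of (outer) iterations
res$convergence  # > 0 on success, < 0 on error
res$message      # additional information from NLopt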
Note
Only bounds-constrained problems are supported by this algorithm.
Examples
## Rosenbrock Banana objective function
rbf <- function(x) {
    (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2
}
x0 <- c(-1.2, 1)
lb <- c(-3, -3)
ub <- c(3, 3)

## The function as written above has a minimum of 0 at (1, 1)
stogo(x0 = x0, fn = rbf, lower = lb, upper = ub)
References
S. Zertchaninov and K. Madsen, "A C++ Programme for Global Optimization," IMM-REP-1998-04, Department of Mathematical Modelling, Technical University of Denmark.