Determine the nonlinear least-squares estimates of the parameters of a large nonlinear model system using the gsl_multilarge_nlinear module present in the GNU Scientific Library (GSL).
gsl_nls_large(fn, ...)

## S3 method for class 'formula'
gsl_nls_large(
  fn,
  data = parent.frame(),
  start,
  algorithm = c("lm", "lmaccel", "dogleg", "ddogleg", "subspace2D", "cgst"),
  control = gsl_nls_control(),
  jac,
  fvv,
  trace = FALSE,
  subset,
  weights,
  na.action,
  model = FALSE,
  ...
)

## S3 method for class 'function'
gsl_nls_large(
  fn,
  y,
  start,
  algorithm = c("lm", "lmaccel", "dogleg", "ddogleg", "subspace2D", "cgst"),
  control = gsl_nls_control(),
  jac,
  fvv,
  trace = FALSE,
  weights,
  ...
)
Arguments
fn: a nonlinear model defined either as a two-sided formula including variables and parameters, or as a function returning a numeric vector, with first argument the vector of parameters to be estimated. See the individual method descriptions below.
data: an optional data frame in which to evaluate the variables in fn if defined as a formula. Can also be a list or an environment, but not a matrix.
y: numeric response vector if fn is defined as a function, equal in length to the vector returned by evaluation of the function fn.
start: a named list or named numeric vector of starting estimates. start is only allowed to be missing if fn is a selfStart model. If fn is a formula, a naive guess for start is tried, but this should not be relied on.
algorithm: character string specifying the algorithm to use. The following choices are supported:
"lm" Levenberg-Marquardt algorithm (default).
"lmaccel" Levenberg-Marquardt algorithm with geodesic acceleration. Can be faster than "lm" but less stable. Stability is controlled by the avmax parameter in control, setting avmax to zero is analogous to not using geodesic acceleration.
"dogleg" Powell's dogleg algorithm.
"ddogleg" Double dogleg algorithm, an improvement over "dogleg"
by including information about the Gauss-Newton step while the iteration is still far from the minimum.
"subspace2D" 2D generalization of the dogleg algorithm. This method searches a larger subspace for a solution, it can converge more quickly than "dogleg"
on some problems.
"cgst" Steihaug-Toint Conjugate Gradient algorithm, a generalization of the dogleg algorithm that avoids solving for the Gauss-Newton step directly, instead using an iterative conjugate gradient algorithm. The method performs well at points where the Jacobian is singular, and is also suitable for large-scale problems where factoring the Jacobian matrix is prohibitively expensive.
control: an optional list of control parameters to tune the least squares iterations and multistart algorithm. See gsl_nls_control for the available control parameters and their default values.
jac: a function returning the n by p dimensional Jacobian matrix of the nonlinear model fn, where n is the number of observations and p the number of parameters. The first argument must be the vector of parameters of length p. Can also be TRUE, in which case jac is derived symbolically with deriv; this only works if fn is defined as a (non-selfstarting) formula. If fn is a selfStart model, the Jacobian specified in the "gradient" attribute of the self-start model is used instead.
fvv: a function returning an n dimensional vector containing the second directional derivatives of the nonlinear model fn, with n the number of observations. This argument is only used if geodesic acceleration is enabled (algorithm = "lmaccel"). The first argument must be the vector of parameters of length p and the second argument must be the velocity vector, also of length p. Can also be TRUE, in which case fvv is derived symbolically with deriv; this only works if fn is defined as a (non-selfstarting) formula. If the model function in fn also returns a "hessian" attribute (similar to the "gradient" attribute in a selfStart model), this Hessian matrix is used to evaluate the second directional derivatives instead. A worked sketch combining jac, fvv and algorithm = "lmaccel" follows this argument list.
trace: logical value indicating if a trace of the iteration progress should be printed. Default is FALSE. If TRUE, the residual (weighted) sum-of-squares, the squared (Euclidean) norm of the current parameter estimates and the condition number of the Jacobian are printed after each iteration.
subset: an optional vector specifying a subset of observations to be used in the fitting process. This argument is only used if fn is defined as a formula.
weights: an optional numeric vector of (fixed) weights. When present, the objective function is weighted least squares.
na.action: a function which indicates what should happen when the data contain NAs. The default is set by the na.action setting of options, and is na.fail if that is unset. The 'factory-fresh' default is na.omit. Value na.exclude can be useful. This argument is only used if fn is defined as a formula.
model: a logical value. If TRUE, the model frame is returned as part of the object. Defaults to FALSE. This argument is only used if fn is defined as a formula.
...: additional arguments passed to the calls of fn, jac and fvv if defined as functions.
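The jac and fvv arguments are most easily illustrated together with algorithm = "lmaccel". The following sketch is not part of the package documentation: it fits a made-up exponential decay model y = A * exp(-lam * x) through the function interface, supplying the analytic n by p Jacobian and the second directional derivatives by hand. The data, starting values and the avmax value are illustrative only; only the call structure follows the usage above.

## illustrative data (assumed, not from the package docs)
set.seed(1)
x <- seq(0, 10, length.out = 100)
y <- 2.5 * exp(-1.3 * x) + rnorm(length(x), sd = 0.05)

## model function: theta = c(A, lam)
fn <- function(theta) theta[1] * exp(-theta[2] * x)

## analytic n by p Jacobian
jac <- function(theta) {
  e <- exp(-theta[2] * x)
  cbind(A = e, lam = -theta[1] * x * e)
}

## second directional derivatives: v' H_i v for each observation i
fvv <- function(theta, v) {
  e <- exp(-theta[2] * x)
  e * (theta[1] * x^2 * v[2]^2 - 2 * x * v[1] * v[2])
}

gsl_nls_large(
  fn = fn,                         ## model function
  y = y,                           ## observed responses
  start = c(A = 1, lam = 0.5),     ## start values (illustrative)
  algorithm = "lmaccel",           ## geodesic acceleration
  jac = jac,                       ## analytic Jacobian
  fvv = fvv,                       ## second directional derivatives
  control = list(avmax = 0.75)     ## acceleration/velocity ratio bound (illustrative value)
)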
Returns
If fn is a formula, returns a list object of class nls. If fn is a function, returns a list object of class gsl_nls. See the individual method descriptions for the structure of the returned list and the generic functions applicable to objects of both classes.
Methods (by class)
gsl_nls_large(formula): If fn is a formula, the returned list object is of classes gsl_nls and nls. Therefore, all generic functions applicable to objects of class nls, such as anova, coef, confint, deviance, df.residual, fitted, formula, logLik, nobs, predict, print, profile, residuals, summary, vcov and weights are also applicable to the returned list object. In addition, a method confintd is available for inference of derived parameters. A short sketch of this interface follows the method descriptions below.
gsl_nls_large(function): If fn is a function, the first argument must be the vector of parameters and the function should return a numeric vector containing the nonlinear model evaluations at the provided parameter and predictor or covariate vectors. In addition, the argument y needs to contain the numeric vector of observed responses, equal in length to the numeric vector returned by fn. The returned list object is (only) of class gsl_nls. Although the returned object is not of class nls, the following generic functions remain applicable for an object of class gsl_nls: anova, coef, confint, deviance, df.residual, fitted, formula, logLik, nobs, predict, print, residuals, summary, vcov and weights. In addition, a method confintd is available for inference of derived parameters.
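As a sketch of the formula interface, the following is not taken from the package documentation: the data, model and starting values are invented for illustration, and jac = TRUE requests a Jacobian derived symbolically via deriv as described above. The usual nls generics listed above can then be applied to the fitted object.

## illustrative data (assumed)
set.seed(1)
dat <- data.frame(x = 1:50)
dat$y <- 5 * exp(-0.1 * dat$x) + rnorm(50, sd = 0.1)

fit <- gsl_nls_large(
  fn = y ~ A * exp(-lam * x),     ## model formula
  data = dat,
  start = c(A = 1, lam = 1),      ## start values (illustrative)
  algorithm = "lm",
  jac = TRUE                      ## Jacobian derived symbolically via deriv()
)

coef(fit)                                        ## parameter estimates
summary(fit)                                     ## nls-style summary
predict(fit, newdata = data.frame(x = 51:60))    ## predictions for new data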
Examples
# Large NLS example
# (https://www.gnu.org/software/gsl/doc/html/nls.html#large-nonlinear-least-squares-example)

## number of parameters
p <- 250

## model function
f <- function(theta) {
  c(sqrt(1e-5) * (theta - 1), sum(theta^2) - 0.25)
}

## jacobian function
jac <- function(theta) {
  rbind(diag(sqrt(1e-5), nrow = length(theta)), 2 * t(theta))
}

## dense Levenberg-Marquardt
gsl_nls_large(
  fn = f,                         ## model
  y = rep(0, p + 1),              ## (dummy) responses
  start = 1:p,                    ## start values
  algorithm = "lm",               ## algorithm
  jac = jac,                      ## jacobian
  control = list(maxiter = 250)
)

## dense Steihaug-Toint conjugate gradient
gsl_nls_large(
  fn = f,                         ## model
  y = rep(0, p + 1),              ## (dummy) responses
  start = 1:p,                    ## start values
  jac = jac,                      ## jacobian
  algorithm = "cgst"              ## algorithm
)

## sparse Jacobian function
jacsp <- function(theta) {
  rbind(Matrix::Diagonal(x = sqrt(1e-5), n = length(theta)), 2 * t(theta))
}

## sparse Levenberg-Marquardt
gsl_nls_large(
  fn = f,                         ## model
  y = rep(0, p + 1),              ## (dummy) responses
  start = 1:p,                    ## start values
  algorithm = "lm",               ## algorithm
  jac = jacsp,                    ## sparse jacobian
  control = list(maxiter = 250)
)

## sparse Steihaug-Toint conjugate gradient
gsl_nls_large(
  fn = f,                         ## model
  y = rep(0, p + 1),              ## (dummy) responses
  start = 1:p,                    ## start values
  jac = jacsp,                    ## sparse jacobian
  algorithm = "cgst"              ## algorithm
)
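## A possible follow-up, not part of the GSL reference example: store the
## sparse conjugate gradient fit and inspect it with the generics listed
## above. Reuses f, jacsp and p defined earlier in these examples.
fit <- gsl_nls_large(
  fn = f,                         ## model
  y = rep(0, p + 1),              ## (dummy) responses
  start = 1:p,                    ## start values
  jac = jacsp,                    ## sparse jacobian
  algorithm = "cgst"              ## algorithm
)
coef(fit)[1:5]                    ## first few parameter estimates
deviance(fit)                     ## residual sum-of-squares
sum(coef(fit)^2) - 0.25           ## residual of the quadratic constraint in f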
References
M. Galassi et al., GNU Scientific Library Reference Manual (3rd Ed.), ISBN 0954612078.