estimatePopsizeFit function

Regression fitting in single-source capture-recapture models

estimatePopsizeFit does for estimatePopsize what glm.fit does for glm: it is called internally by estimatePopsize to perform the actual regression fitting. Since estimatePopsize does much more than just regression fitting, estimatePopsizeFit is much faster.

estimatePopsizeFit( y, X, family, control, method, priorWeights, coefStart, etaStart, offset, ... )

Arguments

  • y: vector of dependent variables.
  • X: model matrix, in the vglm form.
  • family: same as the model argument in estimatePopsize.
  • control: control parameters created by controlMethod().
  • method: method of estimation, same as in estimatePopsize.
  • priorWeights: vector of prior weights; the same argument as weights in estimatePopsize.
  • etaStart, coefStart: initial values of the regression parameters or linear predictors.
  • offset: offset, by default passed from estimatePopsize().
  • ...: arguments to pass to other methods.

Returns

A list with the regression parameters, the working weights (if the IRLS fitting method was chosen) and the number of iterations taken.

Details


If the method argument was set to "optim", the stats::optim function will be used to fit the regression, with the analytically computed gradient and (minus) log-likelihood functions passed as the gr and fn arguments. Unfortunately optim does not allow for a Hessian to be specified. More information about how to modify the optim fitting is included in controlMethod().
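As a rough, self-contained illustration of this route (not the package's internal code), the same mechanism can be sketched for an ordinary Poisson regression: the minus log-likelihood goes in as fn and its analytic gradient as gr. The Poisson model and all object names below are assumptions made purely for the sake of the example.

```r
# Illustrative sketch only: fit a Poisson regression with stats::optim,
# passing minus log likelihood as fn and its analytic gradient as gr.
fn <- function(b, y, X) {
  eta <- drop(X %*% b)
  -sum(y * eta - exp(eta))             # minus log likelihood (up to a constant)
}
gr <- function(b, y, X) {
  eta <- drop(X %*% b)
  -drop(crossprod(X, y - exp(eta)))    # analytic gradient of fn
}

set.seed(123)
X <- cbind(1, rnorm(200))
y <- rpois(200, exp(0.5 + 0.3 * X[, 2]))

opt <- optim(
  par = c(0, 0), fn = fn, gr = gr, y = y, X = X,
  method = "BFGS", control = list(reltol = 1e-10)
)
ref <- coef(glm(y ~ X[, 2], family = poisson()))
max(abs(opt$par - ref))                # close to zero
```

Note that, as the text says, optim accepts fn and gr but no analytic Hessian; when a Hessian is needed it is computed numerically via hessian = TRUE.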

If the method argument was set to "IRLS", the iteratively reweighted least squares algorithm will be used. The algorithm is well known for generalised linear models; Thomas W. Yee later extended it to vector generalised linear models. In more general terms it can roughly be described as follows (this is Yee's description after changing some conventions):

  1. Initialize with:

    • converged <- FALSE
    • iter <- 1
    • $\boldsymbol{\beta}$ <- start
    • $\boldsymbol{W}$ <- prior
    • $\ell$ <- $\ell(\boldsymbol{\beta})$

  2. If converged or iter > Maxiter, move to step 7.

  3. Store values from the previous algorithm step:

    • $\boldsymbol{W}_{-}$ <- $\boldsymbol{W}$
    • $\ell_{-}$ <- $\ell$
    • $\boldsymbol{\beta}_{-}$ <- $\boldsymbol{\beta}$

    and assign values at the current step:

    • $\boldsymbol{\eta}$ <- $\boldsymbol{X}_{\mathrm{vlm}}\boldsymbol{\beta}$
    • $Z_{i}$ <- $\eta_{i} + \frac{\partial \ell_{i}}{\partial \eta_{i}}\,\mathbb{E}\left(\frac{\partial^{2}\ell_{i}}{\partial \eta_{i}^{T}\partial \eta_{i}}\right)^{-1}$
    • $\boldsymbol{W}_{ij}$ <- $\mathbb{E}\left(\frac{\partial^{2}\ell}{\partial \boldsymbol{\eta}_{j}^{T}\partial \boldsymbol{\eta}_{i}}\right)$

    where $\ell_{i}$ is the i-th component of the log-likelihood function, $\eta_{i}$ is the vector of linear predictors associated with the i-th row, $\mathbb{E}\left(\frac{\partial^{2}\ell_{i}}{\partial \eta_{i}^{T}\partial \eta_{i}}\right)$ corresponds to the weights associated with the i-th row, and $\boldsymbol{W}$ is a block matrix made of the diagonal matrices $\mathbb{E}\left(\frac{\partial^{2}\ell}{\partial \boldsymbol{\eta}_{j}^{T}\partial \boldsymbol{\eta}_{i}}\right)$.

  4. Regress $\boldsymbol{Z}$ on $\boldsymbol{X}_{\mathrm{vlm}}$ to obtain $\boldsymbol{\beta}$ as:

    $$\boldsymbol{\beta} = \left(\boldsymbol{X}_{\mathrm{vlm}}^{T}\boldsymbol{W}\boldsymbol{X}_{\mathrm{vlm}}\right)^{-1}\boldsymbol{X}_{\mathrm{vlm}}^{T}\boldsymbol{W}\boldsymbol{Z}$$

  5. Assign:

    • converged <- $\ell(\boldsymbol{\beta}) - \ell_{-} < \varepsilon \cdot \ell_{-}$ or $\lVert\boldsymbol{\beta} - \boldsymbol{\beta}_{-}\rVert_{\infty} < \varepsilon$
    • iter <- iter + 1

    where $\varepsilon$ is the relative tolerance level, by default 1e-8.

  6. Return to step 2.

  7. Return $\boldsymbol{\beta}$, $\boldsymbol{W}$, iter.

In this package we use a different convention for the $\boldsymbol{X}_{\mathrm{vlm}}$ matrix, hence slight differences are present in the algorithm description above, but the results are identical.
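To make the steps above concrete, here is a minimal sketch of the same IRLS loop written out for an ordinary Poisson GLM with log link, where there is a single linear predictor per observation so $\boldsymbol{W}$ reduces to a diagonal matrix. The function irls_poisson and the simulated data are assumptions made for this example; they are not part of the package.

```r
# Illustrative sketch only: the IRLS loop for a Poisson GLM with log link.
irls_poisson <- function(y, X, eps = 1e-8, maxiter = 100) {
  beta  <- rep(0, ncol(X))                  # beta <- start
  ll    <- function(b) {
    eta <- drop(X %*% b)
    sum(y * eta - exp(eta))                 # log likelihood up to a constant
  }
  l_old <- ll(beta)
  for (iter in 1:maxiter) {
    eta <- drop(X %*% beta)
    mu  <- exp(eta)
    W   <- mu                               # expected information for eta
    Z   <- eta + (y - mu) / W               # Z_i = eta_i + (dl_i/deta_i) * W_i^-1
    # weighted least squares step: beta = (X' W X)^-1 X' W Z
    beta_new  <- drop(solve(crossprod(X, W * X), crossprod(X, W * Z)))
    l_new     <- ll(beta_new)
    converged <- abs(l_new - l_old) < eps * abs(l_old) ||
      max(abs(beta_new - beta)) < eps
    beta  <- beta_new
    l_old <- l_new
    if (converged) break
  }
  list(beta = beta, iter = iter)
}

# quick check against stats::glm, which uses the same algorithm
set.seed(123)
X <- cbind(1, rnorm(200))
y <- rpois(200, exp(0.5 + 0.3 * X[, 2]))
fit <- irls_poisson(y, X)
ref <- coef(glm(y ~ X[, 2], family = poisson()))
max(abs(fit$beta - ref))                    # close to zero
```

In the vector-GLM setting used by the package, W is instead a block matrix coupling the several linear predictors of each observation, but the weighted least squares step has the same shape.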

Examples

summary(farmsubmission)

# construct vglm model matrix
X <- matrix(data = 0, nrow = 2 * NROW(farmsubmission), ncol = 7)
X[1:NROW(farmsubmission), 1:4] <- model.matrix(
  ~ 1 + log_size + log_distance + C_TYPE,
  farmsubmission
)
X[-(1:NROW(farmsubmission)), 5:7] <- X[1:NROW(farmsubmission), c(1, 3, 4)]

# this attribute tells the function which elements of the design matrix
# correspond to which linear predictor
attr(X, "hwm") <- c(4, 3)

# get starting points
start <- glm.fit(
  y = farmsubmission$TOTAL_SUB,
  x = X[1:NROW(farmsubmission), 1:4],
  family = poisson()
)$coefficients

res <- estimatePopsizeFit(
  y = farmsubmission$TOTAL_SUB,
  X = X,
  method = "IRLS",
  priorWeights = 1,
  family = ztoigeom(),
  control = controlMethod(verbose = 5),
  coefStart = c(start, 0, 0, 0),
  etaStart = matrix(X %*% c(start, 0, 0, 0), ncol = 2),
  offset = cbind(rep(0, NROW(farmsubmission)), rep(0, NROW(farmsubmission)))
)

# extract results
# regression coefficient vector
res$beta

# check likelihood
ll <- ztoigeom()$makeMinusLogLike(y = farmsubmission$TOTAL_SUB, X = X)
-ll(res$beta)

# number of iterations
res$iter

# working weights
head(res$weights)

# Compare with optim call
res2 <- estimatePopsizeFit(
  y = farmsubmission$TOTAL_SUB,
  X = X,
  method = "optim",
  priorWeights = 1,
  family = ztoigeom(),
  coefStart = c(start, 0, 0, 0),
  control = controlMethod(verbose = 1, silent = TRUE),
  offset = cbind(rep(0, NROW(farmsubmission)), rep(0, NROW(farmsubmission)))
)

# extract results
# regression coefficient vector
res2$beta

# check likelihood
-ll(res2$beta)

# number of calls to the log likelihood function,
# since optim does not return the number of iterations
res2$iter

# optim does not calculate working weights
head(res2$weights)

References

Yee, T. W. (2015). Vector Generalized Linear and Additive Models: With an Implementation in R. New York, USA: Springer. ISBN 978-1-4939-2817-0.

See Also

stats::glm(), estimatePopsize(), controlMethod(), stats::optim()

Author(s)

Piotr Chlebicki, Maciej Beręsewicz

  • Maintainer: Maciej Beręsewicz
  • License: MIT + file LICENSE
  • Last published: 2025-02-13