Logit Models w/Preference & WTP Space Utility Parameterizations
Extract Model Fitted Values
Predict choices
Tidy a logitr class object
Validate data formatting for logitr models
Calculate the variance-covariance matrix
Get WTP estimates for a preference space model
Extract Model Confidence Interval
Add dummy-coded variables to a data frame
Obtain a confidence interval from coefficient draws
Glance a logitr class object
Predict probabilities and/or outcomes
Compute logit fraction for sets of alternatives given coefficient draws
The main function for estimating logit models (see the usage sketches after this index)
Methods for logitr objects
Extracting the Model Frame from a Formula or Fit
Construct Design Matrices
Predict expected choice probabilities
Returns a list of the design matrix X and updated `pars` and `randPars`
Objects exported from other packages
Extract Model Residuals
Extract standard errors
Simulate expected shares
View a description of the nloptr status codes
Compare WTP from preference and WTP space models
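The entries above cover model estimation, WTP computation, and prediction. Below is a minimal sketch of a typical preference space / WTP space workflow, assuming the yogurt example dataset that ships with the package and the argument names shown in the package documentation (outcome, obsID, pars, scalePar, numMultiStarts); treat it as illustrative rather than definitive.

library(logitr)

# Preference space MNL model: price enters utility with its own coefficient
mnl_pref <- logitr(
  data    = yogurt,
  outcome = "choice",   # 1/0 indicator for the chosen alternative
  obsID   = "obsID",    # identifies each choice occasion
  pars    = c("price", "feat", "brand")
)
summary(mnl_pref)

# Implied WTP from the preference space model
wtp(mnl_pref, scalePar = "price")

# WTP space MNL model: price becomes the scale parameter
mnl_wtp <- logitr(
  data     = yogurt,
  outcome  = "choice",
  obsID    = "obsID",
  pars     = c("feat", "brand"),
  scalePar = "price",
  numMultiStarts = 10   # multistart helps with this non-convex problem
)

# Compare WTP estimates from the two utility parameterizations
wtpCompare(mnl_pref, mnl_wtp, scalePar = "price")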
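Continuing from the mnl_pref model sketched above, choice probabilities and outcomes can be obtained with predict(); the argument names used here (newdata, obsID, type, returnData) follow the predict.logitr documentation, and the example is a hedged sketch rather than a definitive template.

# Expected choice probabilities for each alternative in each choice set
probs <- predict(
  mnl_pref,
  newdata    = yogurt,
  obsID      = "obsID",
  type       = "prob",   # return probabilities
  returnData = TRUE      # keep the covariates alongside the predictions
)
head(probs)

# Simulated choice outcomes rather than probabilities
outcomes <- predict(mnl_pref, newdata = yogurt, obsID = "obsID", type = "outcome")

# Descriptions of the nloptr solver status codes reported after estimation
statusCodes()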
Fast estimation of multinomial (MNL) and mixed logit (MXL) models in R. Models can be estimated using "Preference" space or "Willingness-to-pay" (WTP) space utility parameterizations, and weighted models can also be estimated. An option is available to run a parallelized multistart optimization loop with random starting points in each iteration, which is useful for non-convex problems such as MXL models or models with WTP space utility parameterizations. The main optimization loop uses the 'nloptr' package to minimize the negative log-likelihood function.

Additional functions are available for computing and comparing WTP from both preference space and WTP space models, and for predicting expected choices and choice probabilities for sets of alternatives based on an estimated model.

Mixed logit models can include uncorrelated or correlated heterogeneity covariances and are estimated using maximum simulated likelihood based on the algorithms in Train (2009) <doi:10.1017/CBO9780511805271>. More details can be found in Helveston (2023) <doi:10.18637/jss.v105.i10>.
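As a companion to the description above, here is a hedged sketch of a mixed logit (MXL) model with normally distributed heterogeneity estimated by maximum simulated likelihood with a multistart loop. The argument names (panelID, randPars, numDraws, numMultiStarts, correlation) follow the package documentation; the yogurt dataset and its column names are assumptions carried over from the package examples.

library(logitr)

mxl_pref <- logitr(
  data     = yogurt,
  outcome  = "choice",
  obsID    = "obsID",
  panelID  = "id",                       # repeated choices by the same respondent
  pars     = c("price", "feat", "brand"),
  randPars = c(feat = "n", brand = "n"), # "n" = normally distributed heterogeneity
  numDraws = 500,                        # draws for maximum simulated likelihood
  numMultiStarts = 10                    # random starting points for the non-convex problem
  # add correlation = TRUE to estimate correlated heterogeneity covariances
)
summary(mxl_pref)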