Unified Algorithm for Non-convex Penalized Estimation for Generalized Linear Models
coef.cv.ncpen: extracts the optimal coefficients from cv.ncpen.
coef.ncpen: extracts the coefficients from an ncpen object.
control.ncpen: does preliminary work for ncpen.
cv.ncpen: cross validation for ncpen
Check whether a pair should be excluded from interactions.
fold.cv.ncpen: extracts fold ids for cv.ncpen.
gic.ncpen: computes the generalized information criterion (GIC) for the...
Construct Interaction Matrix
Create ncpen Data Structure Using a Formula
Native ncpen function.
Native object function.
Native object gradient function.
Native object Hessian function.
Native point ncpen function.
Native Penalty function.
Native Penalty Gradient function.
Native QLASSO function.
ncpen: A package for non-convex penalized estimation for generalized l...
ncpen: nonconvex penalized estimation
ncpen.reg: nonconvex penalized estimation
plot.cv.ncpen: plot cross-validation error curve.
plot.ncpen: plots coefficients from an ncpen object.
Power Data
predict.ncpen: makes predictions from an ncpen object.
sam.gen.ncpen: generates a simulated dataset.
Check whether column names are derivations of the same base.
Construct Indicator Matrix
Convert a data.frame to an ncpen-usable matrix.
An efficient unified nonconvex penalized estimation algorithm for Gaussian (linear), binomial logit (logistic), Poisson, multinomial logit, and Cox proportional hazards regression models. The unified algorithm is implemented via the convex-concave procedure (CCCP) and can be applied to most existing nonconvex penalties. The algorithm also supports the convex LASSO (least absolute shrinkage and selection operator) penalty. Supported nonconvex penalties include the smoothly clipped absolute deviation (SCAD), minimax concave penalty (MCP), truncated LASSO penalty (TLP), clipped LASSO (CLASSO), sparse ridge (SRIDGE), modified bridge (MBRIDGE), and modified log (MLOG). For high-dimensional data (data sets with many variables), the algorithm selects relevant variables, producing a parsimonious regression model. Kim, D., Lee, S. and Kwon, S. (2018) <arXiv:1811.05061>; Lee, S., Kwon, S. and Kim, Y. (2016) <doi:10.1016/j.csda.2015.08.019>; Kwon, S., Lee, S. and Kim, Y. (2015) <doi:10.1016/j.csda.2015.07.001>. (This research is funded by the Julian Virtue Professorship from the Center for Applied Research at Pepperdine Graziadio Business School and by the National Research Foundation of Korea.)