lam: positive tuning parameter for the elastic net penalty.
alpha: elastic net mixing parameter contained in [0, 1]. 0 = ridge, 1 = lasso. Defaults to alpha = 1.
diagonal: option to penalize the diagonal elements of the estimated precision matrix (Ω). Defaults to FALSE.
rho: initial step size for the ADMM algorithm.
mu: factor for the primal and dual residual norms in the ADMM algorithm. This is used to adjust the step size rho after each iteration (see the sketch after this list).
tau_inc: factor by which to increase the step size rho.
tau_dec: factor by which to decrease the step size rho.
crit: criterion for convergence (ADMM or loglik). If crit = loglik, then iterations will stop when the relative change in the log-likelihood is less than tol_abs. Default is ADMM and follows the procedure outlined in Boyd et al. (2011).
tol_abs: absolute convergence tolerance. Defaults to 1e-4.
tol_rel: relative convergence tolerance. Defaults to 1e-4.
maxit: maximum number of iterations. Defaults to 1e4.
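When crit = ADMM, the step-size adjustment driven by rho, mu, tau_inc, and tau_dec and the stopping check built from tol_abs and tol_rel follow the scheme in Boyd et al. (2011). The R sketch below illustrates that logic in isolation; the helper names and residual variables (update_rho, admm_converged, r_norm, s_norm, eps_pri, eps_dual, p) are illustrative placeholders, not part of this package's internals, and the defaults shown (mu = 10, tau = 2) are the typical values suggested by Boyd et al.

```r
# Illustrative ADMM bookkeeping after each iteration (Boyd et al., 2011).
# r_norm : Frobenius norm of the primal residual, e.g. ||Omega - Z||_F
# s_norm : Frobenius norm of the dual residual,   e.g. ||rho * (Z - Z_old)||_F

update_rho <- function(rho, r_norm, s_norm, mu = 10, tau_inc = 2, tau_dec = 2) {
  # Inflate rho when the primal residual dominates, deflate it when the
  # dual residual dominates, and otherwise leave it unchanged.
  if (r_norm > mu * s_norm) {
    rho * tau_inc
  } else if (s_norm > mu * r_norm) {
    rho / tau_dec
  } else {
    rho
  }
}

admm_converged <- function(r_norm, s_norm, eps_pri, eps_dual) {
  # Stop once both residual norms fall below their tolerances, where the
  # tolerances combine tol_abs and tol_rel, e.g. for a p x p problem
  #   eps_pri  = p * tol_abs + tol_rel * max(||Omega||_F, ||Z||_F)
  #   eps_dual = p * tol_abs + tol_rel * ||Y||_F
  r_norm <= eps_pri && s_norm <= eps_dual
}
```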
Returns
Returns a list which includes:
- Iterations: number of iterations.
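A minimal usage sketch, assuming the estimator is exposed as a single function; the name admm_precision() below is purely hypothetical and only shows how the arguments documented above and the returned list fit together.

```r
# Hypothetical call illustrating the arguments documented above; the
# function name admm_precision() is a placeholder, not this package's API.
set.seed(1)
X <- matrix(rnorm(100 * 5), nrow = 100, ncol = 5)  # n = 100 observations, p = 5

fit <- admm_precision(
  X,
  lam      = 0.1,     # elastic net tuning parameter
  alpha    = 1,       # 1 = lasso penalty
  diagonal = FALSE,   # leave the diagonal of Omega unpenalized
  rho      = 2,       # initial ADMM step size
  mu       = 10, tau_inc = 2, tau_dec = 2,
  crit     = "ADMM",  # stopping rule of Boyd et al. (2011)
  tol_abs  = 1e-4, tol_rel = 1e-4, maxit = 1e4
)

fit$Iterations  # number of ADMM iterations until convergence
```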
References
Boyd, Stephen, Neal Parikh, Eric Chu, Borja Peleato, Jonathan Eckstein, and others. 2011. "Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers." Foundations and Trends in Machine Learning 3 (1). Now Publishers, Inc.: 1-122. https://web.stanford.edu/~boyd/papers/pdf/admm_distr_stats.pdf
Hu, Yue, Chi, Eric C., and Allen, Genevera I. 2016. "ADMM Algorithmic Regularization Paths for Sparse Statistical Machine Learning." Splitting Methods in Communication, Imaging, Science, and Engineering. Springer: 433-459.
Zou, Hui and Hastie, Trevor. 2005. "Regularization and Variable Selection via the Elastic Net." Journal of the Royal Statistical Society: Series B (Statistical Methodology) 67 (2). Wiley Online Library: 301-320.
Rothman, Adam. 2017. "STAT 8931 notes on an algorithm to compute the Lasso-penalized Gaussian likelihood precision matrix estimator."