y: Numeric vector of observations (n) describing the trait to be analyzed. NA is allowed.
Z: Right-hand side formula or list of numeric matrices (n by p) with incidence matrices for random effects. NA is not allowed.
X: Right-hand side formula or incidence matrices (n by p) for fixed effects. NA is not allowed.
iK: Numeric matrix or list of numeric matrices (p by p) corresponding to the inverse kinship matrix of each random effect with p parameters.
iR: Numeric matrix (n by n) corresponding to the inverse residual correlation matrix.
Iter: Integer. Number of iterations or samples to be generated.
Burn: Integer. Number of iterations or samples to be discarded.
Thin: Integer. Thinning parameter, used to save memory by storing only one out of every 'Thin' samples.
DF: Integer. Hyperprior degrees of freedom of variance components.
S: Integer or NULL. Hyperprior shape of variance components. If NULL, the hyperprior solution for the scale parameter is calculated as proposed by de los Campos et al. (2013).
nor: Logical. If TRUE, it normalizes the response variable(s).
GSRU: Logical. If TRUE, it updates the regression coefficients using Gauss-Seidel Residual Update (Legarra and Misztal 2008). Useful when p>>n, but does not work when iK or iR are provided.
Y: Numeric matrix of observations for multivariate mixed models. Each column represents a trait. NA is allowed.
Details
The general model is y = Xb + Zu + e, where u ~ N(0, A sigma2a) and e ~ N(0, R sigma2e). The function solves Gaussian mixed models in a Bayesian framework, as described by Garcia-Cortes and Sorensen (1996) and Sorensen and Gianola (2002), with conjugate priors. The alternative function, "ml", finds the solution iteratively using the full-conditional expectations. The function "gibbs2" can be used for the multivariate case; see Xavier et al. (2017) for an example of a multivariate mixed model fitted by Gibbs sampling.
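A minimal usage sketch of the model above, on simulated data. The data, dimensions, and sampler settings are illustrative assumptions, not taken from the package; only the argument and return-value names documented above are used.

```r
# Hypothetical example: fit y = Xb + Zu + e by Gibbs sampling on simulated data.
set.seed(1)
n = 200; p = 5
X = matrix(1, n, 1)                    # fixed effect: intercept only
Z = matrix(rnorm(n * p), n, p)         # incidence matrix of one random effect
u = rnorm(p, sd = 2)                   # simulated random-effect coefficients
y = as.vector(10 + Z %*% u + rnorm(n)) # response with residual noise

fit = gibbs(y, Z = Z, X = X, Iter = 2000, Burn = 500, Thin = 5)
fit$VC.estimate     # posterior mode of the variance components
head(fit$Fit.mean)  # fitted values from the posterior mean coefficients
```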
Returns
The function gibbs returns a list containing the posterior distribution of the variance components (Posterior.VC) and their mode estimates (VC.estimate), the posterior distribution of the regression coefficients (Posterior.Coef) and their posterior means (Coef.estimate), and the fitted values computed from the posterior mean coefficients (Fit.mean).
References
de los Campos, G., Hickey, J. M., Pong-Wong, R., Daetwyler, H. D., and Calus, M. P. (2013). Whole-genome regression and prediction methods applied to plant and animal breeding. Genetics, 193(2), 327-345.
Legarra, A., and Misztal, I. (2008). Technical note: Computing strategies in genome-wide selection. Journal of Dairy Science, 91(1), 360-366.
Garcia-Cortes, L. A., and Sorensen, D. (1996). On a multivariate implementation of the Gibbs sampler. Genetics Selection Evolution, 28(1), 121-126.
Sorensen, D., and Gianola, D. (2002). Likelihood, Bayesian, and MCMC methods in quantitative genetics. Springer Science & Business Media.
Xavier, A., Hall, B., Casteel, S., Muir, W. and Rainey, K.M. (2017). Using unsupervised learning techniques to assess interactions among complex traits in soybeans. Euphytica, 213(8), p.200.