update_reg function

Update coefficient vector of multiple linear regression

This function updates the coefficient vector of a multiple linear regression.

Usage

update_reg(mu0, Tau0, XSigX, XSigU)

Arguments

  • mu0: The mean vector of the normal prior distribution for the coefficient vector.
  • Tau0: The precision matrix (i.e. inverted covariance matrix) of the normal prior distribution for the coefficient vector.
  • XSigX: The matrix \sum_{n=1}^N X_n'\Sigma^{-1}X_n. See below for details.
  • XSigU: The vector \sum_{n=1}^N X_n'\Sigma^{-1}U_n. See below for details.
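
As a small sketch of how these two inputs are structured (assuming, for illustration only, a single decider with a 3 x 2 design matrix X_n, a known 3 x 3 error covariance Sigma, and utility vector U_n; these object names are hypothetical and not part of the function's interface):

  # Hypothetical single-decider objects (illustrative only)
  X_n <- matrix(rnorm(6), nrow = 3, ncol = 2)   # J x P design matrix
  Sigma <- diag(3)                              # J x J error covariance
  U_n <- X_n %*% c(-1, 1) + rnorm(3)            # J x 1 utility vector

  Sigma_inv <- solve(Sigma)
  XSigX_n <- t(X_n) %*% Sigma_inv %*% X_n       # P x P contribution of decider n
  XSigU_n <- t(X_n) %*% Sigma_inv %*% U_n       # P x 1 contribution of decider n

  # Summing these contributions over n = 1,...,N yields the XSigX and XSigU arguments.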

Returns

A vector, a draw from the normal posterior distribution of the coefficient vector in a multiple linear regression.

Details

This function draws from the posterior distribution of \beta in the linear utility equation

U_n = X_n\beta + \epsilon_n,

where U_n is the (latent, but here assumed to be known) utility vector of decider n = 1,\dots,N, X_n is the design matrix built from the choice characteristics faced by n, \beta is the unknown coefficient vector (this can be either the fixed coefficient vector \alpha or the decider-specific coefficient vector \beta_n), and \epsilon_n is the error term, assumed to be normally distributed with mean 0 and (known) covariance matrix \Sigma. A priori, we assume the (conjugate) normal prior distribution

\beta \sim N(\mu_0, T_0)

with mean vector \mu_0 and precision matrix (i.e. inverted covariance matrix) T_0. The posterior distribution for \beta is normal with covariance matrix

\Sigma_1 = (T_0 + \sum_{n=1}^N X_n'\Sigma^{-1}X_n)^{-1}

and mean vector

\mu_1 = \Sigma_1 (T_0\mu_0 + \sum_{n=1}^N X_n'\Sigma^{-1}U_n).

Note the analogy of \mu_1 to the generalized least squares estimator

\hat{\beta}_{GLS} = (\sum_{n=1}^N X_n'\Sigma^{-1}X_n)^{-1} \sum_{n=1}^N X_n'\Sigma^{-1}U_n,

which becomes weighted by the prior parameters \mu_0 and T_0.
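
As a rough sketch of this update (assuming mu0, Tau0, XSigX, and XSigU have already been built as described above; the computation shown here illustrates the formulas and is not necessarily the package's exact implementation):

  Sigma1 <- solve(Tau0 + XSigX)             # posterior covariance matrix Sigma_1
  mu1 <- Sigma1 %*% (Tau0 %*% mu0 + XSigU)  # posterior mean vector mu_1
  beta_gls <- solve(XSigX, XSigU)           # GLS estimator, for comparison with mu_1

  # A single posterior draw can then be obtained as
  # mu1 + t(chol(Sigma1)) %*% rnorm(length(mu1)).

With a flat prior (Tau0 close to the zero matrix), mu1 approaches beta_gls; a more informative prior pulls mu1 towards mu0.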

Examples

### true coefficient vector
beta_true <- matrix(c(-1,1), ncol=1)

### error term covariance matrix
Sigma <- matrix(c(1,0.5,0.2,0.5,1,0.2,0.2,0.2,2), ncol=3)

### draw data
N <- 100
X <- replicate(N, matrix(rnorm(6), ncol=2), simplify = FALSE)
eps <- replicate(N, rmvnorm(mu = c(0,0,0), Sigma = Sigma), simplify = FALSE)
U <- mapply(function(X, eps) X %*% beta_true + eps, X, eps, SIMPLIFY = FALSE)

### prior parameters for coefficient vector
mu0 <- c(0,0)
Tau0 <- diag(2)

### draw from posterior of coefficient vector
XSigX <- Reduce(`+`, lapply(X, function(X) t(X) %*% solve(Sigma) %*% X))
XSigU <- Reduce(`+`, mapply(function(X, U) t(X) %*% solve(Sigma) %*% U, X, U, SIMPLIFY = FALSE))
beta_draws <- replicate(100, update_reg(mu0, Tau0, XSigX, XSigU), simplify = TRUE)
rowMeans(beta_draws)