get_entropy function

Shannon entropy

Computes the Shannon entropy of the posterior probabilities of class membership for each observation in a g-component multivariate normal mixture model.

get_entropy(dat, n, p, g, pi, mu, sigma, ncov = 2)

Arguments

  • dat: An n×p matrix where each row represents an individual observation.
  • n: Number of observations.
  • p: Dimension of the observation vector.
  • g: Number of multivariate normal classes.
  • pi: A g-dimensional vector for the initial values of the mixing proportions.
  • mu: A p×g matrix for the initial values of the location parameters.
  • sigma: A p×p covariance matrix if ncov = 1, or an array of dimension p×p×g containing the g covariance matrices if ncov = 2.
  • ncov: Option for the structure of the covariance matrices; the default value is 2. ncov = 1 for a common covariance matrix; ncov = 2 for unequal covariance/scale matrices.

Returns

  • clusprobs: The posterior probability that the i-th entity belongs to the j-th group.

Details

The concept of information entropy was introduced by Shannon (1948). The entropy of y_j is formally defined as

e_j(y_j; \theta) = -\sum_{i=1}^{g} \tau_i(y_j; \theta) \log \tau_i(y_j; \theta).
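
As an illustration of this formula (a minimal sketch, not the package's implementation), the per-observation entropy can be computed directly from an n×g matrix of posterior probabilities. The names tau and entropy_from_tau below are hypothetical and used only for this example.

# Sketch: per-observation Shannon entropy from an n x g matrix `tau`
# of posterior probabilities (hypothetical helper, not a package function).
entropy_from_tau <- function(tau) {
  # Terms with tau = 0 contribute 0 to the sum (0 * log 0 is taken as 0).
  -rowSums(ifelse(tau > 0, tau * log(tau), 0))
}
tau <- matrix(c(0.7, 0.2, 0.1,
                1/3, 1/3, 1/3), nrow = 2, byrow = TRUE)
entropy_from_tau(tau)
# The first observation is assigned with near certainty (low entropy);
# the second is maximally uncertain over g = 3 groups (entropy = log 3).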

Examples

n <- 150
pi <- c(0.25, 0.25, 0.25, 0.25)
sigma <- array(0, dim = c(3, 3, 4))
sigma[, , 1] <- diag(1, 3)
sigma[, , 2] <- diag(2, 3)
sigma[, , 3] <- diag(3, 3)
sigma[, , 4] <- diag(4, 3)
mu <- matrix(c(0.2, 0.3, 0.4, 0.2, 0.7, 0.6, 0.1, 0.7, 1.6, 0.2, 1.7, 0.6), 3, 4)
# Simulate n observations from a 4-component trivariate normal mixture
dat <- rmix(n = n, pi = pi, mu = mu, sigma = sigma, ncov = 2)
# Entropy based on the parameter values used for the simulation
en <- get_entropy(dat = dat$Y, n = 150, p = 3, g = 4, mu = mu, sigma = sigma, pi = pi, ncov = 2)
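
Assuming en contains one entropy value per observation (the structure of the returned object should be checked in the installed version of the package), the observations with the most uncertain group membership can then be ranked:

# Hypothetical follow-up: rank observations from most to least uncertain,
# assuming `en` is a numeric vector of per-observation entropies.
head(order(en, decreasing = TRUE))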
  • Maintainer: Ziyang Lyu
  • License: GPL-3
  • Last published: 2022-10-18
