Multivariate normal distribution
Multivariate normal
[Plots of the probability density function and cumulative distribution function omitted]
Parameters \mu = [\mu_1, \dots, \mu_N] location (real vector)
\Sigma covariance matrix (positive-definite real N\times N matrix)
Support x \in \mathbb{R}^N
pdf f_X(x_1, \dots, x_N) = \frac{1}{(2\pi)^{N/2} \left|\Sigma\right|^{1/2}} \exp\left( -\frac{1}{2} (x - \mu)^\top \Sigma^{-1} (x - \mu) \right)
cdf no closed-form expression
Mean \mu
Median \mu
Mode \mu
Variance \Sigma
Skewness 0
Kurtosis 0
Entropy \frac{1}{2} \ln\left\{ (2\pi e)^N \left|\Sigma\right| \right\}
mgf M_X(t) = \exp\left( \mu^\top t + \frac{1}{2} t^\top \Sigma t \right)
Char. func. \phi_X(t;\mu,\Sigma) = \exp\left( i \mu^\top t - \frac{1}{2} t^\top \Sigma t \right)


In probability theory and statistics, a multivariate normal distribution, also sometimes called a multivariate Gaussian distribution, is a generalization of the one-dimensional normal distribution (also called a Gaussian distribution) to higher dimensions.

General case

A random vector X = [X_1, \dots, X_N] follows a multivariate normal distribution if it satisfies the following equivalent conditions:

  • there is a random vector Z = [Z_1, \dots, Z_M], whose components are independent standard normal random variables, a vector \mu = [\mu_1, \dots, \mu_N] and an N \times M matrix A such that X = A Z + \mu.

  • there is a vector \mu = [\mu_1, \dots, \mu_N] and a symmetric, positive semi-definite N \times N matrix \Sigma such that the characteristic function of X is

\phi_X(u;\mu,\Sigma) = \exp\left( i \mu^\top u - \frac{1}{2} u^\top \Sigma u \right)

The following is not quite equivalent to the conditions above, since it fails to allow for a singular covariance matrix \Sigma:


f_X(x_1, \dots, x_N) = \frac{1}{(2\pi)^{N/2} \left|\Sigma\right|^{1/2}} \exp\left( -\frac{1}{2} (x - \mu)^\top \Sigma^{-1} (x - \mu) \right)

where \left| \Sigma \right| is the determinant of \Sigma. Note how the equation above reduces to that of the univariate normal distribution when N = 1, so that \Sigma is a scalar (i.e., a single real variance).

The vector \mu in these conditions is the expected value of X and the matrix \Sigma = A A^T is the covariance matrix of the components X_i.

It is important to realize that the covariance matrix must be allowed to be singular. That case arises frequently in statistics; for example, in the distribution of the vector of residuals in ordinary linear regression problems. Note also that the X_i are in general not independent; they can be seen as the result of applying the linear transformation A to a collection of independent Gaussian variables Z.

The multivariate normal can be written in the following notation:

X \sim N(\mu, \Sigma)

or, to make it explicit that X is N-dimensional,

X \sim N_N(\mu, \Sigma)

Cumulative distribution function

The cumulative distribution function (cdf) F(x) is defined as the probability that all values in a random vector X are less than or equal to the corresponding values in vector x. Though there is no closed form for F(x), there are a number of algorithms that estimate it numerically. For example, see MVNDST under [1] (includes FORTRAN code) or [2] (includes MATLAB code).
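As a minimal illustration (a plain Monte Carlo sketch, not the MVNDST algorithm itself; the function name mvn_cdf_mc is our own), the cdf can be estimated in NumPy by counting the fraction of random draws that fall below x componentwise:

import numpy as np

def mvn_cdf_mc(x, mean, cov, n_samples=100_000, seed=0):
    """Estimate F(x) = P(X_1 <= x_1, ..., X_N <= x_N) by Monte Carlo."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(mean, cov, size=n_samples)
    # Fraction of draws in which every component is below its threshold.
    return np.mean(np.all(samples <= x, axis=1))

# Example: standard bivariate normal with correlation 0.5.
cov = np.array([[1.0, 0.5], [0.5, 1.0]])
print(mvn_cdf_mc(np.array([0.0, 0.0]), np.zeros(2), cov))  # ~0.333

Dedicated algorithms such as MVNDST converge far faster, but the estimator above needs nothing beyond a sampler for the distribution.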

A counterexample

The fact that two random variables X and Y are each normally distributed does not imply that the pair (X, Y) has a joint normal distribution. A simple example is one in which Y = X if |X| > 1 and Y = −X if |X| < 1.

Normally distributed and independent

If X and Y are normally distributed and independent, then they are "jointly normally distributed", i.e., the pair (X, Y) has a bivariate normal distribution. There are of course also many bivariate normal distributions in which the components are correlated.

Bivariate case

In the 2-dimensional nonsingular case, the probability density function (with mean (0,0)) is


f(x,y) = \frac{1}{2 \pi \sigma_x \sigma_y \sqrt{1-\rho^2}} \exp\left( -\frac{1}{2 (1-\rho^2)} \left( \frac{x^2}{\sigma_x^2} + \frac{y^2}{\sigma_y^2} - \frac{2 \rho x y}{\sigma_x \sigma_y} \right) \right)

where \rho is the correlation between X and Y. In this case,


\Sigma =
\begin{bmatrix}
\sigma_x^2              & \rho \sigma_x \sigma_y \\
\rho \sigma_x \sigma_y  & \sigma_y^2
\end{bmatrix}.
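As a quick consistency check (a sketch with arbitrarily chosen values of \sigma_x, \sigma_y, \rho), the bivariate formula above should agree with the general N-dimensional pdf evaluated with this \Sigma:

import numpy as np

sigma_x, sigma_y, rho = 1.5, 0.8, 0.6
x, y = 0.7, -0.3

# Bivariate formula.
z = x**2 / sigma_x**2 + y**2 / sigma_y**2 - 2 * rho * x * y / (sigma_x * sigma_y)
f_biv = np.exp(-z / (2 * (1 - rho**2))) / (
    2 * np.pi * sigma_x * sigma_y * np.sqrt(1 - rho**2)
)

# General formula with the covariance matrix Sigma given above.
Sigma = np.array([[sigma_x**2, rho * sigma_x * sigma_y],
                  [rho * sigma_x * sigma_y, sigma_y**2]])
v = np.array([x, y])
f_gen = np.exp(-0.5 * v @ np.linalg.solve(Sigma, v)) / (
    2 * np.pi * np.sqrt(np.linalg.det(Sigma))
)

print(np.isclose(f_biv, f_gen))  # True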

Linear transformation

If Y = B X \, is a linear transformation of X \sim N(\mu, \Sigma), where B\, is an m \times N matrix, then Y\, has a multivariate normal distribution with expected value B \mu\, and variance B \Sigma B^\top\, (i.e., Y \sim N\left(B \mu, B \Sigma B^\top\right)).

Corollary: any subset of the X_i\, has a marginal distribution that is also multivariate normal. To see this, consider the following example: to extract the subset (X_1, X_2, X_4)^\top\,, use


B
=
\begin{bmatrix}
 1 & 0 & 0 & 0 & 0 & \ldots & 0 \\
 0 & 1 & 0 & 0 & 0 & \ldots & 0 \\
 0 & 0 & 0 & 1 & 0 & \ldots & 0
\end{bmatrix}

which extracts the desired elements directly.
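A short NumPy sketch (with an arbitrarily chosen 4-dimensional example of our own) illustrates both results: the transformed mean and covariance follow the B \mu and B \Sigma B^\top rule, and a selection matrix B extracts a marginal:

import numpy as np

rng = np.random.default_rng(0)

# A 4-dimensional normal with an arbitrary positive-definite covariance.
mu = np.array([1.0, 2.0, 3.0, 4.0])
M = rng.standard_normal((4, 4))
Sigma = M @ M.T + 4 * np.eye(4)

# Selection matrix extracting (X_1, X_2, X_4); zero-based indices 0, 1, 3.
B = np.zeros((3, 4))
B[0, 0] = B[1, 1] = B[2, 3] = 1.0

# Theory: Y = B X ~ N(B mu, B Sigma B^T).
print(B @ mu)           # marginal mean
print(B @ Sigma @ B.T)  # marginal covariance

# Empirical check from samples.
X = rng.multivariate_normal(mu, Sigma, size=200_000)
Y = X @ B.T
print(Y.mean(axis=0))           # ~ B mu
print(np.cov(Y, rowvar=False))  # ~ B Sigma B^T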

Correlations and independence

In general, random variables may be uncorrelated but highly dependent. But if a random vector has a multivariate normal distribution then any two or more of its components that are uncorrelated are independent. This implies that any two or more of its components that are pairwise independent are independent.

But it is not true that two random variables that are (separately, marginally) normally distributed and uncorrelated are independent. Two random variables that are normally distributed may fail to be jointly normally distributed, i.e., the vector whose components they are may fail to have a multivariate normal distribution. For an example of two normally distributed random variables that are uncorrelated but not independent, see normally distributed and uncorrelated does not imply independent.

Higher moments

The kth-order moments of X are defined by

\mu_{1,\dots,N}(X) \ \stackrel{\mathrm{def}}{=}\ \mu_{r_1,\dots,r_N}(X) \ \stackrel{\mathrm{def}}{=}\ E\left[ \prod_{j=1}^{N} x_j^{r_j} \right]

where r_1 + r_2 + \cdots + r_N = k.

The central kth-order moments are given as follows.

(a) If k is odd, \mu_{1,\dots,N}(X - \mu) = 0.

(b) If k is even with k = 2\lambda, then

\mu_{1,\dots,2\lambda}(X - \mu) = \sum \left( \sigma_{ij} \sigma_{kl} \cdots \sigma_{xz} \right)

where the sum is taken over all allocations of the set \left\{ 1, \dots, 2\lambda \right\} into \lambda (unordered) pairs, giving (2\lambda - 1)!/(2^{\lambda-1}(\lambda-1)!) terms in the sum, each being the product of \lambda covariances. The covariances are determined by replacing the terms of the list \left[ 1, \dots, 2\lambda \right] by the corresponding terms of the list consisting of r_1 ones, then r_2 twos, etc., after each of the possible allocations of the former list into pairs.

In particular, the fourth-order moments are

E\left[ x_{i}^{4}\right] = 3( \sigma _{ii}) ^{2}
E\left[ x_{i}^{3}x_{j}\right] = 3\sigma _{ii}\sigma _{ij}
E\left[ x_{i}^{2}x_{j}^{2}\right] = \sigma _{ii}\sigma _{jj}+2\left( \sigma _{ij}\right) ^{2}
E\left[ x_{i}^{2}x_{j}x_{k}\right] = \sigma _{ii}\sigma _{jk}+2\sigma _{ij}\sigma _{ik}
E\left[ x_{i}x_{j}x_{k}x_{n}\right] = \sigma _{ij}\sigma _{kn}+\sigma _{ik}\sigma _{jn}+\sigma _{in}\sigma _{jk}
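For instance, the identity E\left[ x_i^2 x_j^2 \right] = \sigma_{ii}\sigma_{jj} + 2\sigma_{ij}^2 can be checked by simulation (a sketch with an arbitrary 2×2 covariance of our own choosing):

import numpy as np

# Monte Carlo check of a fourth-order moment in the zero-mean case.
rng = np.random.default_rng(1)
Sigma = np.array([[2.0, 0.7], [0.7, 1.0]])
X = rng.multivariate_normal(np.zeros(2), Sigma, size=1_000_000)

empirical = np.mean(X[:, 0]**2 * X[:, 1]**2)
theory = Sigma[0, 0] * Sigma[1, 1] + 2 * Sigma[0, 1]**2
print(empirical, theory)  # both ~ 2.98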

Conditional distributions

If \mu and \Sigma are partitioned as follows

\mu = \begin{bmatrix} \mu_1 \\ \mu_2 \end{bmatrix} \quad \text{with sizes} \quad \begin{bmatrix} q \times 1 \\ (N-q) \times 1 \end{bmatrix}

\Sigma = \begin{bmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{bmatrix} \quad \text{with sizes} \quad \begin{bmatrix} q \times q & q \times (N-q) \\ (N-q) \times q & (N-q) \times (N-q) \end{bmatrix}

then the distribution of X_1 conditional on X_2 = a is multivariate normal, X_1 | X_2 = a \sim N(\bar{\mu}, \overline{\Sigma}), where

\bar{\mu} = \mu_1 + \Sigma_{12} \Sigma_{22}^{-1} \left( a - \mu_2 \right)

and covariance matrix

\overline{\Sigma} = \Sigma_{11} - \Sigma_{12} \Sigma_{22}^{-1} \Sigma_{21}.

This matrix is the Schur complement of {\mathbf\Sigma_{22}} in {\mathbf\Sigma}.

Note that knowing the value of X_2 to be a alters the variance; perhaps more surprisingly, the mean is shifted by \Sigma_{12} \Sigma_{22}^{-1} \left( a - \mu_2 \right); compare this with the situation of not knowing the value of X_2, in which case X_1 would have distribution N_q \left( \mu_1, \Sigma_{11} \right).

The matrix \Sigma_{12} \Sigma_{22}^{-1} is known as the matrix of regression coefficients.
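The conditional mean and the Schur complement are straightforward to compute; the following NumPy sketch (the helper condition_mvn and its inputs are our own illustration) conditions the first q components on the rest:

import numpy as np

def condition_mvn(mu, Sigma, q, a):
    """Parameters of X_1 | X_2 = a for X ~ N(mu, Sigma), X_1 = first q coords."""
    mu1, mu2 = mu[:q], mu[q:]
    S11, S12 = Sigma[:q, :q], Sigma[:q, q:]
    S21, S22 = Sigma[q:, :q], Sigma[q:, q:]
    reg = S12 @ np.linalg.inv(S22)   # matrix of regression coefficients
    mu_bar = mu1 + reg @ (a - mu2)   # conditional mean
    Sigma_bar = S11 - reg @ S21      # Schur complement of S22 in Sigma
    return mu_bar, Sigma_bar

mu = np.array([0.0, 1.0, 2.0])
Sigma = np.array([[2.0, 0.5, 0.3],
                  [0.5, 1.0, 0.2],
                  [0.3, 0.2, 1.5]])
print(condition_mvn(mu, Sigma, q=2, a=np.array([2.5])))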

Fisher information matrix

The Fisher information matrix (FIM) for a normal distribution takes a special form. The (m,n) element of the FIM for X \sim N(\mu(\theta), \Sigma(\theta)) is

\mathcal{I}_{m,n} = \frac{\partial \mu^\top}{\partial \theta_m} \Sigma^{-1} \frac{\partial \mu}{\partial \theta_n} + \frac{1}{2} \mathrm{tr}\left( \Sigma^{-1} \frac{\partial \Sigma}{\partial \theta_m} \Sigma^{-1} \frac{\partial \Sigma}{\partial \theta_n} \right)

where \partial\mu/\partial\theta_m is the vector of partial derivatives of \mu with respect to \theta_m, \partial\Sigma/\partial\theta_m is the matrix of elementwise partial derivatives of \Sigma with respect to \theta_m, and \mathrm{tr} denotes the trace.
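As a small worked instance (our own example, not from the original article), take a univariate normal parametrized by \theta = (\mu, \sigma^2); the formula then reproduces the classical result \mathcal{I} = \mathrm{diag}(1/\sigma^2, 1/(2\sigma^4)):

import numpy as np

sigma2 = 2.0
Sigma = np.array([[sigma2]])
dmu = [np.array([1.0]), np.array([0.0])]      # dmu/dtheta_m for theta = (mu, sigma^2)
dSigma = [np.zeros((1, 1)), np.ones((1, 1))]  # dSigma/dtheta_m

Sinv = np.linalg.inv(Sigma)
I = np.zeros((2, 2))
for m in range(2):
    for n in range(2):
        I[m, n] = (dmu[m] @ Sinv @ dmu[n]
                   + 0.5 * np.trace(Sinv @ dSigma[m] @ Sinv @ dSigma[n]))
print(I)  # [[0.5, 0], [0, 0.125]] = diag(1/sigma^2, 1/(2 sigma^4))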

Kullback-Leibler divergence

The Kullback-Leibler divergence from \mathcal{N}_0 = N(\mu_0, \Sigma_0) to \mathcal{N}_1 = N(\mu_1, \Sigma_1) is:

D_{\mathrm{KL}}(\mathcal{N}_0 \| \mathcal{N}_1) = \frac{1}{2} \left( \log\left( \frac{\det \Sigma_1}{\det \Sigma_0} \right) + \mathrm{tr}\left( \Sigma_1^{-1} \Sigma_0 \right) + \left( \mu_1 - \mu_0 \right)^\top \Sigma_1^{-1} \left( \mu_1 - \mu_0 \right) - N \right).
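A direct NumPy transcription of this formula (the function name kl_mvn is ours; slogdet is used for numerical stability):

import numpy as np

def kl_mvn(mu0, Sigma0, mu1, Sigma1):
    """KL divergence from N(mu0, Sigma0) to N(mu1, Sigma1)."""
    N = mu0.shape[0]
    diff = mu1 - mu0
    _, logdet0 = np.linalg.slogdet(Sigma0)
    _, logdet1 = np.linalg.slogdet(Sigma1)
    Sinv1 = np.linalg.inv(Sigma1)
    return 0.5 * (logdet1 - logdet0
                  + np.trace(Sinv1 @ Sigma0)
                  + diff @ Sinv1 @ diff
                  - N)

# The divergence is zero when the two distributions coincide.
mu = np.zeros(2)
Sigma = np.array([[1.0, 0.3], [0.3, 1.0]])
print(kl_mvn(mu, Sigma, mu, Sigma))  # 0.0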

Estimation of parameters

The derivation of the maximum-likelihood estimator of the covariance matrix of a multivariate normal distribution is perhaps surprisingly subtle and elegant. See estimation of covariance matrices.

In short, the probability density function (pdf) of an N-dimensional multivariate normal is

f(x)=(2 \pi)^{-N/2} \det(\Sigma)^{-1/2} \exp\left({-1 \over 2} (x-\mu)^T \Sigma^{-1} (x-\mu)\right)

and the ML estimator of the covariance matrix is

\hat\Sigma = {1 \over n}\sum_{i=1}^n (X_i-\overline{X})(X_i-\overline{X})^T

which is simply the sample covariance matrix for sample size n. This is a biased estimator whose expectation is

E[\hat\Sigma] = {n-1 \over n}\Sigma.

An unbiased sample covariance is

\hat\Sigma = {1 \over n-1}\sum_{i=1}^n (X_i-\overline{X})(X_i-\overline{X})^T.
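The two estimators differ only in the denominator, as a NumPy sketch (with arbitrary example data of our own) makes concrete; note that np.cov uses the unbiased n−1 denominator by default:

import numpy as np

rng = np.random.default_rng(2)
X = rng.multivariate_normal(np.zeros(2),
                            np.array([[1.0, 0.4], [0.4, 2.0]]),
                            size=500)
n = X.shape[0]
Xbar = X.mean(axis=0)
centered = X - Xbar

Sigma_ml = centered.T @ centered / n              # ML (biased) estimator
Sigma_unbiased = centered.T @ centered / (n - 1)  # unbiased estimator
print(np.allclose(Sigma_unbiased, np.cov(X, rowvar=False)))  # True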

Entropy

The differential entropy of the multivariate normal distribution is [1]

h\left(f\right) = -\int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} f(x) \ln f(x)\, dx = \frac{1}{2} \left( N + N \ln(2\pi) + \ln\left|\Sigma\right| \right) = \frac{1}{2} \ln\left\{ (2\pi e)^N \left|\Sigma\right| \right\}

where \left| \Sigma \right| is the determinant of the covariance matrix \Sigma.
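A direct NumPy transcription of the closed form (the helper name mvn_entropy is ours):

import numpy as np

def mvn_entropy(Sigma):
    """Differential entropy (in nats) of N(mu, Sigma); mu does not matter."""
    N = Sigma.shape[0]
    _, logdet = np.linalg.slogdet(Sigma)  # numerically stable log|Sigma|
    return 0.5 * (N * (1 + np.log(2 * np.pi)) + logdet)

# For N = 1, sigma = 1 this reduces to 0.5 * ln(2 pi e) ~ 1.4189.
print(mvn_entropy(np.eye(1)))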

Multivariate normality tests

Multivariate normality tests check a given set of data for similarity to the multivariate normal distribution. The null hypothesis is that the data come from a multivariate normal distribution; a sufficiently small p-value therefore indicates non-normal data. Multivariate normality tests include the Cox-Small test [2] and Smith and Jain's adaptation of the Friedman-Rafsky test [3].

Drawing values from the distribution

A widely used method for drawing a random vector X from the n-dimensional multivariate normal distribution with mean vector \mu and covariance matrix \Sigma (required to be symmetric and positive definite) works as follows:

  1. Compute the Cholesky decomposition (matrix square root) of \Sigma, that is, find the unique lower triangular matrix A such that A\,A^T = \Sigma.
  2. Let Z=(z_1,\dots,z_n)^T be a vector whose components are n independent standard normal variates (which can be generated, for example, by using the Box-Muller transform).
  3. Let X be \mu + A\,Z.
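The three steps translate directly into NumPy (the wrapper draw_mvn is our own naming):

import numpy as np

def draw_mvn(mu, Sigma, n, rng):
    """Draw n samples from N(mu, Sigma) via the Cholesky method above."""
    A = np.linalg.cholesky(Sigma)          # step 1: Sigma = A A^T, A lower triangular
    Z = rng.standard_normal((len(mu), n))  # step 2: independent standard normals
    return (mu[:, None] + A @ Z).T         # step 3: X = mu + A Z

rng = np.random.default_rng(3)
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.6], [0.6, 1.0]])
X = draw_mvn(mu, Sigma, 100_000, rng)
print(X.mean(axis=0))           # ~ mu
print(np.cov(X, rowvar=False))  # ~ Sigma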

References

  1. Ahmed, N. A., Gokhale, D. V. (May 1989). Entropy expressions and their estimators for multivariate distributions. IEEE Transactions on Information Theory 35 (3): 688–692.
  2. Cox, D. R., N. J. H. Small (August 1978). Testing multivariate normality. Biometrika 65 (2): 263–272.
  3. Smith, Stephen P., Anil K. Jain (September 1988). A test to determine the multivariate normality of a dataset. IEEE Transactions on Pattern Analysis and Machine Intelligence 10 (5): 757–761. DOI:10.1109/34.6789.
This page uses Creative Commons Licensed content from Wikipedia (view authors).
