In statistics and probability theory, the covariance matrix is a matrix of covariances between elements of a vector. It is the natural generalization to higher dimensions of the concept of the variance of a scalar-valued random variable.

Definition

If entries in the column vector

$$X = \begin{pmatrix} X_1 \\ \vdots \\ X_n \end{pmatrix}$$

are random variables, each with finite variance, then the covariance matrix Σ is the matrix whose (i, j) entry is the covariance

$$\Sigma_{ij} = \operatorname{cov}(X_i, X_j) = \mathrm{E}\big[(X_i - \mu_i)(X_j - \mu_j)\big],$$

where

$$\mu_i = \mathrm{E}(X_i)$$

is the expected value of the ith entry in the vector X. In other words, we have

$$\Sigma = \begin{pmatrix}
\mathrm{E}[(X_1-\mu_1)(X_1-\mu_1)] & \cdots & \mathrm{E}[(X_1-\mu_1)(X_n-\mu_n)] \\
\vdots & \ddots & \vdots \\
\mathrm{E}[(X_n-\mu_n)(X_1-\mu_1)] & \cdots & \mathrm{E}[(X_n-\mu_n)(X_n-\mu_n)]
\end{pmatrix}.$$
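
As a concrete illustration of the entrywise formula, here is a minimal sketch (assuming NumPy; the data and variable names are invented for the example) that estimates a covariance matrix from samples directly from the definition and compares it with `numpy.cov`:

```python
import numpy as np

rng = np.random.default_rng(0)
# 1000 samples of a 3-dimensional random vector X (illustrative data)
samples = rng.normal(size=(1000, 3)) @ np.array([[1.0, 0.0, 0.0],
                                                 [0.5, 1.0, 0.0],
                                                 [0.2, 0.3, 1.0]]).T

mu = samples.mean(axis=0)       # mu_i = E(X_i), estimated from the data
centered = samples - mu         # X_i - mu_i for every sample

# Sigma_ij = E[(X_i - mu_i)(X_j - mu_j)], estimated as an average over samples
sigma = centered.T @ centered / (len(samples) - 1)

# numpy.cov computes the same (unbiased) estimate; columns are variables here
print(np.allclose(sigma, np.cov(samples, rowvar=False)))  # True
```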

As a generalization of the variance

The definition above is equivalent to the matrix equality

$$\Sigma = \mathrm{E}\!\left[(X - \mathrm{E}[X])\,(X - \mathrm{E}[X])^{\mathsf T}\right].$$

This form can be seen as a generalization of the scalar-valued variance to higher dimensions. Recall that for a scalar-valued random variable X,

$$\sigma^2 = \operatorname{var}(X) = \mathrm{E}\!\left[(X - \mu)^2\right],$$

where

$$\mu = \mathrm{E}(X).$$

The matrix Σ is also often called the variance-covariance matrix, since the diagonal terms are in fact variances.

Conflicting nomenclatures and notations

Nomenclatures differ. Some statisticians, following the probabilist William Feller, call this matrix the variance of the random vector $X$, because it is the natural generalization to higher dimensions of the one-dimensional variance. Others call it the covariance matrix, because it is the matrix of covariances between the scalar components of the vector $X$. Thus

$$\operatorname{var}(X) = \operatorname{cov}(X) = \mathrm{E}\!\left[(X - \mathrm{E}[X])\,(X - \mathrm{E}[X])^{\mathsf T}\right].$$

However, the notation for the "cross-covariance" between two vectors is standard:

$$\operatorname{cov}(X, Y) = \mathrm{E}\!\left[(X - \mathrm{E}[X])\,(Y - \mathrm{E}[Y])^{\mathsf T}\right].$$

The $\operatorname{var}(X)$ notation is found in William Feller's two-volume book An Introduction to Probability Theory and Its Applications, but both forms are quite standard and there is no ambiguity between them.

Properties

For

$$\Sigma = \operatorname{var}(X) = \mathrm{E}\!\left[(X - \mathrm{E}[X])\,(X - \mathrm{E}[X])^{\mathsf T}\right] \quad\text{and}\quad \mu = \mathrm{E}(X),$$

the following basic properties apply:

  1. $\Sigma$ is positive semi-definite (and symmetric).

  2. $\operatorname{var}(\mathbf{A}X + \mathbf{a}) = \mathbf{A}\,\operatorname{var}(X)\,\mathbf{A}^{\mathsf T}$ and $\operatorname{cov}(\mathbf{A}X, \mathbf{B}Y) = \mathbf{A}\,\operatorname{cov}(X, Y)\,\mathbf{B}^{\mathsf T}$

  3. If $p = q$, then $\operatorname{var}(X + Y) = \operatorname{var}(X) + \operatorname{cov}(X, Y) + \operatorname{cov}(Y, X) + \operatorname{var}(Y)$

  4. If $X$ and $Y$ are independent, then $\operatorname{cov}(X, Y) = 0$

where $X$ is a random $p \times 1$ vector, $Y$ is a random $q \times 1$ vector, $\mathbf{a}$ is a constant vector, and $\mathbf{A}$ and $\mathbf{B}$ are constant matrices of conformable dimensions.
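
A quick numerical sanity check of the affine-transformation property is sketched below (a minimal sketch assuming NumPy; the particular matrices, vectors, and sample size are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(1)

# X: random 3-vector with a known covariance (illustrative choice)
L = np.array([[1.0, 0.0, 0.0],
              [0.4, 1.0, 0.0],
              [0.2, 0.5, 1.0]])
Sigma = L @ L.T                        # var(X)
X = rng.normal(size=(200_000, 3)) @ L.T

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])        # constant 2x3 matrix
a = np.array([5.0, -1.0])              # constant shift

# Property: var(AX + a) = A var(X) A^T  (the constant shift a drops out)
lhs = np.cov(X @ A.T + a, rowvar=False)
rhs = A @ Sigma @ A.T
print(np.max(np.abs(lhs - rhs)))       # small, of the order of the sampling error
```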

The covariance matrix, though simple, is a useful tool in many different areas. From it a transformation matrix can be derived that allows one to completely decorrelate the data or, from a different point of view, to find an optimal basis for representing the data in a compact way (see Rayleigh quotient for a formal proof and additional properties of covariance matrices). This is called principal components analysis (PCA) in statistics and the Karhunen-Loève transform (KL-transform) in image processing.
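
As a rough illustration of that decorrelation idea (a minimal sketch assuming NumPy; the data are invented), the eigenvectors of the covariance matrix give a rotation after which the transformed variables are uncorrelated:

```python
import numpy as np

rng = np.random.default_rng(2)

# Correlated 2-D data (illustrative)
data = rng.normal(size=(5_000, 2)) @ np.array([[2.0, 0.0],
                                               [1.2, 0.5]]).T

sigma = np.cov(data, rowvar=False)         # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(sigma)   # orthonormal eigenvectors (principal axes)

# Projecting the centered data onto the eigenvectors decorrelates it
decorrelated = (data - data.mean(axis=0)) @ eigvecs
print(np.round(np.cov(decorrelated, rowvar=False), 3))  # diagonal: the eigenvalues
```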

Which matrices are covariance matrices

From the identity

$$\operatorname{var}(\mathbf{a}^{\mathsf T} X) = \mathbf{a}^{\mathsf T}\, \operatorname{var}(X)\, \mathbf{a},$$

which holds for any constant vector $\mathbf{a}$, and the fact that the variance of any real-valued random variable is nonnegative, it follows immediately that only a nonnegative-definite matrix can be a covariance matrix. The converse question is whether every nonnegative-definite symmetric matrix is a covariance matrix. The answer is "yes". To see this, suppose $M$ is a $p \times p$ nonnegative-definite symmetric matrix. From the finite-dimensional case of the spectral theorem, it follows that $M$ has a nonnegative symmetric square root, which we may call $M^{1/2}$. Let $X$ be any $p \times 1$ column-vector-valued random variable whose covariance matrix is the $p \times p$ identity matrix. Then

$$\operatorname{var}\!\left(M^{1/2} X\right) = M^{1/2}\, \operatorname{var}(X)\, \big(M^{1/2}\big)^{\mathsf T} = M^{1/2}\, I\, M^{1/2} = M.$$
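
The same construction can be mimicked numerically. Below is a minimal sketch (assuming NumPy only, with an arbitrary example matrix M) that builds the symmetric square root via the spectral theorem and checks that multiplying an identity-covariance vector by it yields, up to sampling error, covariance M:

```python
import numpy as np

rng = np.random.default_rng(3)

# An arbitrary nonnegative-definite symmetric matrix M (illustrative)
B = rng.normal(size=(3, 3))
M = B @ B.T

# Symmetric nonnegative square root via the spectral theorem: M = Q diag(w) Q^T
w, Q = np.linalg.eigh(M)
M_half = Q @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ Q.T

# Z has (approximately) the identity matrix as its sample covariance
Z = rng.normal(size=(100_000, 3))

# var(M^{1/2} Z) = M^{1/2} I M^{1/2} = M, up to sampling error
print(np.round(np.cov(Z @ M_half.T, rowvar=False) - M, 2))
```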

Complex random vectors

The variance of a complex scalar-valued random variable $z$ with expected value $\mu$ is conventionally defined using complex conjugation:

$$\operatorname{var}(z) = \mathrm{E}\!\left[(z - \mu)\,\overline{(z - \mu)}\right],$$

where the complex conjugate of a complex number $z$ is denoted $\bar{z}$.

If $Z$ is a column vector of complex-valued random variables, then we take the conjugate transpose by both transposing and conjugating, getting a square matrix:

$$\operatorname{var}(Z) = \mathrm{E}\!\left[(Z - \mu)\,(Z - \mu)^{*}\right],$$

where $(Z - \mu)^{*}$ denotes the conjugate transpose, which is applicable to the scalar case since the transpose of a scalar is still a scalar.
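
For instance, a minimal sketch (assuming NumPy; the complex data are invented) of forming this matrix for a complex random vector and confirming that it is Hermitian:

```python
import numpy as np

rng = np.random.default_rng(4)

# A complex-valued random 2-vector, sampled many times (illustrative data)
Z = rng.normal(size=(50_000, 2)) + 1j * rng.normal(size=(50_000, 2))
Z[:, 1] += 0.5 * Z[:, 0]                  # introduce some correlation

mu = Z.mean(axis=0)
# (i, j) entry is the average of (Z_i - mu_i) * conj(Z_j - mu_j)
C = (Z - mu).T @ (Z - mu).conj() / (len(Z) - 1)

print(np.allclose(C, C.conj().T))         # Hermitian: equal to its conjugate transpose
print(np.all(np.diag(C).real >= 0))       # real, nonnegative variances on the diagonal
```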


Estimation

The derivation of the maximum-likelihood estimator of the covariance matrix of a multivariate normal distribution is perhaps surprisingly subtle. It involves the spectral theorem and the reason why it can be better to view a scalar as the trace of a 1 × 1 matrix than as a mere scalar. See estimation of covariance matrices.
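
The resulting estimator itself is simple even though its derivation is subtle: for samples from a multivariate normal distribution, the maximum-likelihood estimate is the sample covariance with a 1/n factor rather than 1/(n − 1). A minimal sketch (assuming NumPy; the distribution parameters are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.multivariate_normal(mean=[0.0, 1.0],
                            cov=[[2.0, 0.3], [0.3, 1.0]],
                            size=1_000)

xbar = x.mean(axis=0)
centered = x - xbar

# Maximum-likelihood estimate: divide by n (np.cov divides by n - 1 by default)
sigma_mle = centered.T @ centered / len(x)
sigma_unbiased = np.cov(x, rowvar=False)

print(np.round(sigma_mle, 3))
print(np.allclose(sigma_mle * len(x) / (len(x) - 1), sigma_unbiased))  # True
```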

This page uses Creative Commons Licensed content from Wikipedia.