In statistics and probability theory, the covariance matrix is the matrix of covariances between the elements of a random vector. It is the natural generalization to higher dimensions of the concept of the variance of a scalar-valued random variable.
If the entries in the column vector
$$X = \begin{bmatrix} X_1 \\ \vdots \\ X_n \end{bmatrix}$$
are random variables, each with finite variance, then the covariance matrix $\Sigma$ is the matrix whose $(i, j)$ entry is the covariance
$$\Sigma_{ij} = \operatorname{cov}(X_i, X_j) = \operatorname{E}\big[(X_i - \mu_i)(X_j - \mu_j)\big],$$
where $\mu_i = \operatorname{E}(X_i)$ is the expected value of the $i$th entry in the vector $X$. In other words, we have
$$\Sigma = \begin{bmatrix}
\operatorname{E}[(X_1 - \mu_1)(X_1 - \mu_1)] & \cdots & \operatorname{E}[(X_1 - \mu_1)(X_n - \mu_n)] \\
\vdots & \ddots & \vdots \\
\operatorname{E}[(X_n - \mu_n)(X_1 - \mu_1)] & \cdots & \operatorname{E}[(X_n - \mu_n)(X_n - \mu_n)]
\end{bmatrix}.$$
As a generalization of the variance
The definition above is equivalent to the matrix equality
$$\Sigma = \operatorname{E}\left[(X - \operatorname{E}[X])\,(X - \operatorname{E}[X])^{\top}\right].$$
This form can be seen as a generalization of the scalar-valued variance to higher dimensions. Recall that for a scalar-valued random variable $X$,
$$\sigma^2 = \operatorname{var}(X) = \operatorname{E}\big[(X - \operatorname{E}(X))^2\big] = \operatorname{E}\big[(X - \operatorname{E}(X)) \cdot (X - \operatorname{E}(X))\big].$$
The matrix is also often called the variance-covariance matrix since the diagonal terms are in fact variances.
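As a concrete illustration (a minimal sketch assuming NumPy; the data below are made up for the example), a covariance matrix can be computed from a sample directly from this definition, and its diagonal entries are the variances of the individual components:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1000 observations (rows) of a 3-dimensional random vector X.
X = rng.normal(size=(1000, 3)) @ np.array([[2.0, 0.0, 0.0],
                                           [0.5, 1.0, 0.0],
                                           [0.0, 0.3, 0.7]])

# Sample covariance matrix: the (i, j) entry estimates cov(X_i, X_j).
mu = X.mean(axis=0)
Sigma = (X - mu).T @ (X - mu) / (len(X) - 1)

# Same result as NumPy's built-in estimator.
assert np.allclose(Sigma, np.cov(X, rowvar=False))

# The diagonal entries are the (sample) variances of the components.
assert np.allclose(np.diag(Sigma), X.var(axis=0, ddof=1))
```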
Conflicting nomenclatures and notations
Nomenclatures differ. Some statisticians, following the probabilist William Feller, call this matrix the variance of the random vector $X$, because it is the natural generalization to higher dimensions of the 1-dimensional variance. Others call it the covariance matrix, because it is the matrix of covariances between the scalar components of the vector $X$. Thus
$$\operatorname{var}(X) = \operatorname{cov}(X) = \operatorname{E}\left[(X - \operatorname{E}[X])\,(X - \operatorname{E}[X])^{\top}\right].$$
However, the notation for the "cross-covariance" between two vectors is standard:
$$\operatorname{cov}(X, Y) = \operatorname{E}\left[(X - \operatorname{E}[X])\,(Y - \operatorname{E}[Y])^{\top}\right].$$
The $\operatorname{var}$ notation is found in William Feller's two-volume book An Introduction to Probability Theory and Its Applications, but both forms are quite standard and there is no ambiguity between them.
For $\Sigma = \operatorname{E}\big[(X - \operatorname{E}[X])(X - \operatorname{E}[X])^{\top}\big]$ and $\mu = \operatorname{E}(X)$, the following basic properties apply (two of them are checked numerically in the sketch after this list):
- $\Sigma = \operatorname{E}(X X^{\top}) - \mu \mu^{\top}$
- $\Sigma$ is positive semi-definite and symmetric
- $\operatorname{cov}(A X + a) = A \operatorname{cov}(X) A^{\top}$
- $\operatorname{cov}(X, Y) = \operatorname{cov}(Y, X)^{\top}$
- $\operatorname{cov}(X_1 + X_2, Y) = \operatorname{cov}(X_1, Y) + \operatorname{cov}(X_2, Y)$
- If p = q, then $\operatorname{var}(X + Y) = \operatorname{var}(X) + \operatorname{cov}(X, Y) + \operatorname{cov}(Y, X) + \operatorname{var}(Y)$
- $\operatorname{cov}(A X, B Y) = A \operatorname{cov}(X, Y) B^{\top}$
- If $X$ and $Y$ are independent, then $\operatorname{cov}(X, Y) = 0$
where $X$, $X_1$ and $X_2$ are random $p \times 1$ vectors, $Y$ is a random $q \times 1$ vector, $a$ is a constant vector, and $A$ and $B$ are constant matrices of conforming dimensions.
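As a sanity check (a minimal NumPy sketch, not part of the original article; the data and the helper `cov` are made up for illustration), the sample covariance obeys the same algebra, so two of the properties above can be verified numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 5000, 3

# Two correlated p-dimensional random vectors, n observations each.
Z = rng.normal(size=(n, 2 * p))
X, Y = Z[:, :p], Z[:, p:] + 0.5 * Z[:, :p]

def cov(A, B):
    """Sample cross-covariance matrix of two equal-length samples."""
    A = A - A.mean(axis=0)
    B = B - B.mean(axis=0)
    return A.T @ B / (len(A) - 1)

# Property: var(X + Y) = var(X) + cov(X, Y) + cov(Y, X) + var(Y)  (p = q).
lhs = cov(X + Y, X + Y)
rhs = cov(X, X) + cov(X, Y) + cov(Y, X) + cov(Y, Y)
assert np.allclose(lhs, rhs)

# Property: independent vectors have (approximately) zero cross-covariance.
W = rng.normal(size=(n, p))            # generated independently of X
assert np.allclose(cov(X, W), 0.0, atol=0.1)
```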
The covariance matrix, simple as it is, is a useful tool in many different areas. From it a transformation matrix can be derived that completely decorrelates the data or, from a different point of view, finds an optimal basis for representing the data in a compact way (see Rayleigh quotient for a formal proof and additional properties of covariance matrices). This is called principal components analysis (PCA) in statistics and the Karhunen-Loève transform (KL-transform) in image processing.
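The following is a minimal sketch of that idea, assuming NumPy and made-up two-dimensional data: the eigenvectors of the covariance matrix give an orthogonal basis in which the data are completely decorrelated, which is the core computation of PCA.

```python
import numpy as np

rng = np.random.default_rng(2)

# Correlated 2-dimensional data (rows = observations).
X = rng.normal(size=(2000, 2)) @ np.array([[3.0, 1.0],
                                           [0.0, 1.0]])
Sigma = np.cov(X, rowvar=False)

# Eigendecomposition of the symmetric covariance matrix.
eigvals, eigvecs = np.linalg.eigh(Sigma)

# Projecting the centered data onto the eigenvectors (the principal
# components) decorrelates it: the new covariance matrix is diagonal.
Y = (X - X.mean(axis=0)) @ eigvecs
assert np.allclose(np.cov(Y, rowvar=False), np.diag(eigvals))
```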
Which matrices are covariance matrices
From the identity
$$\operatorname{var}(a^{\top} X) = a^{\top} \operatorname{var}(X)\, a$$
(for any constant $p \times 1$ vector $a$) and the fact that the variance of any real-valued random variable is nonnegative, it follows immediately that only a nonnegative-definite matrix can be a covariance matrix. The converse question is whether every nonnegative-definite symmetric matrix is a covariance matrix. The answer is "yes". To see this, suppose M is a p×p nonnegative-definite symmetric matrix. From the finite-dimensional case of the spectral theorem, it follows that M has a nonnegative symmetric square root, which we may call $M^{1/2}$. Let $X$ be any p×1 column-vector-valued random variable whose covariance matrix is the p×p identity matrix. Then
$$\operatorname{var}(M^{1/2} X) = M^{1/2} \operatorname{var}(X)\, (M^{1/2})^{\top} = M^{1/2} I\, M^{1/2} = M.$$
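The same construction can be sketched numerically (NumPy assumed; the matrix M below is an arbitrary example, not from the original text): form the symmetric square root of a nonnegative-definite matrix from its spectral decomposition, apply it to a vector with identity covariance, and the resulting covariance is (approximately, for a finite sample) M.

```python
import numpy as np

rng = np.random.default_rng(3)

# An arbitrary nonnegative-definite symmetric matrix M = A A^T.
A = rng.normal(size=(3, 3))
M = A @ A.T

# Symmetric square root M^(1/2) via the spectral theorem; the clip guards
# against tiny negative eigenvalues caused by round-off.
eigvals, eigvecs = np.linalg.eigh(M)
M_sqrt = eigvecs @ np.diag(np.sqrt(np.clip(eigvals, 0.0, None))) @ eigvecs.T

# X has (approximately) the identity covariance matrix, so M^(1/2) X has
# covariance M^(1/2) I M^(1/2) = M.
X = rng.normal(size=(100_000, 3))
samples = X @ M_sqrt.T
print(np.cov(samples, rowvar=False))   # close to M
print(M)
```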
Complex random vectors
The variance of a complex scalar-valued random variable with expected value $\mu$ is conventionally defined using complex conjugation:
$$\operatorname{var}(z) = \operatorname{E}\big[(z - \mu)(z - \mu)^{*}\big],$$
where the complex conjugate of a complex number $z$ is denoted $z^{*}$.
If $Z$ is a column vector of complex-valued random variables, then we take the conjugate transpose by both transposing and conjugating, getting a square matrix:
$$\operatorname{E}\big[(Z - \mu)(Z - \mu)^{*}\big],$$
where $Z^{*}$ denotes the conjugate transpose, which is applicable to the scalar case since the transpose of a scalar is still a scalar.
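A brief numerical sketch of the complex case (NumPy assumed, with made-up data): forming the covariance matrix with the conjugate transpose yields a Hermitian matrix whose diagonal entries are real, nonnegative variances.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 10_000, 2

# Observations (rows) of a complex-valued random vector Z.
mix = np.array([[1.0, 0.4],
                [0.0, 0.8]])
Z = (rng.normal(size=(n, p)) + 1j * rng.normal(size=(n, p))) @ mix

# Sample version of E[(Z - mu)(Z - mu)*], using the conjugate transpose:
# the (i, j) entry is the average of (Z_i - mu_i) * conj(Z_j - mu_j).
Zc = Z - Z.mean(axis=0)
Sigma = Zc.T @ Zc.conj() / (n - 1)

# The result is Hermitian, with real nonnegative variances on the diagonal.
assert np.allclose(Sigma, Sigma.conj().T)
assert np.all(np.diag(Sigma).real >= 0)
```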
The derivation of the maximum-likelihood estimator of the covariance matrix of a multivariate normal distribution is perhaps surprisingly subtle. It involves the spectral theorem and the reason why it can be better to view a scalar as the trace of a 1 × 1 matrix than as a mere scalar. See estimation of covariance matrices.
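Although the derivation is subtle, the resulting estimator is simple: for n observations of a multivariate normal vector it is the average outer product of the centered observations, i.e. the sample covariance with a 1/n rather than 1/(n − 1) normalization. A minimal sketch with NumPy and simulated data (all values below are made up):

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 500, 3

# Simulated multivariate-normal data with a known covariance matrix.
true_cov = np.array([[2.0, 0.3, 0.0],
                     [0.3, 1.0, 0.2],
                     [0.0, 0.2, 0.5]])
X = rng.multivariate_normal(mean=np.zeros(p), cov=true_cov, size=n)

# Maximum-likelihood estimate: average outer product of centered samples.
Xc = X - X.mean(axis=0)
Sigma_mle = Xc.T @ Xc / n              # note 1/n, not 1/(n - 1)

# Relation to the unbiased sample covariance (1/(n - 1) normalization).
Sigma_unbiased = np.cov(X, rowvar=False)
assert np.allclose(Sigma_mle, Sigma_unbiased * (n - 1) / n)
```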
Further reading
- Covariance Matrix at MathWorld
- van Kampen, N. G. Stochastic Processes in Physics and Chemistry. New York: North-Holland, 1981.
See also
This page uses Creative Commons Licensed content from Wikipedia (view authors).