# Cramér-Rao inequality

In statistics, the **Cramér-Rao inequality**, named in honor of Harald Cramér and Calyampudi Radhakrishna Rao, expresses a lower bound on the variance of an unbiased statistical estimator in terms of the Fisher information.

It states that the reciprocal of the Fisher information, $I(\theta)$, of a parameter $\theta$ is a lower bound on the variance of an unbiased estimator $\hat\theta$ of the parameter:

$$\mathrm{var}(\hat\theta) \ge \frac{1}{I(\theta)}.$$

In some cases, no unbiased estimator exists that realizes the lower bound.

The Cramér-Rao inequality is also known as the **Cramér-Rao bound** (CRB) or **Cramér-Rao lower bound** (CRLB) because it puts a lower bound on the variance of an estimator of $\theta$.

## Example

Suppose $X$ is a normally distributed random variable with known mean $\mu$ and unknown variance $\sigma^2$. Consider the following statistic, computed from a sample of $n$ independent observations $X_1, \ldots, X_n$:

$$T = \frac{1}{n}\sum_{i=1}^n (X_i-\mu)^2.$$

Then $T$ is unbiased for $\sigma^2$, as $E(T) = \sigma^2$. What is the variance of $T$?

$$\mathrm{var}(T) = \frac{\mathrm{var}\left((X-\mu)^2\right)}{n} = \frac{1}{n}\left[E\left\{(X-\mu)^4\right\} - \left(E\left\{(X-\mu)^2\right\}\right)^2\right]$$

(the second equality follows directly from the definition of variance). The first term is the fourth moment about the mean and has value $3\sigma^4$; the second is the square of the variance, or $\sigma^4$. Thus

$$\mathrm{var}(T) = \frac{2\sigma^4}{n}.$$

Now, what is the Fisher information in the sample? Recall that the score $V$ is defined as

$$V = \frac{\partial}{\partial\sigma^2}\ln L(\sigma^2; X)$$

where $L$ is the likelihood function. Thus in this case,

$$V = \frac{\partial}{\partial\sigma^2}\ln\left[\frac{1}{\sqrt{2\pi\sigma^2}}\,e^{-(X-\mu)^2/(2\sigma^2)}\right] = \frac{(X-\mu)^2}{2\sigma^4} - \frac{1}{2\sigma^2}$$

where the second equality is from elementary calculus. Thus, the information in a single observation is just minus the expectation of the derivative of $V$, or

$$I = -E\left(\frac{\partial V}{\partial\sigma^2}\right) = -E\left(\frac{1}{2\sigma^4} - \frac{(X-\mu)^2}{\sigma^6}\right) = \frac{1}{2\sigma^4}.$$

Thus the information in a sample of $n$ independent observations is just $n$ times this, or $\frac{n}{2\sigma^4}$.

The Cramér-Rao inequality states that

$$\mathrm{var}(T) \ge \frac{1}{I},$$

which in this case reads

$$\frac{2\sigma^4}{n} \ge \frac{2\sigma^4}{n}.$$

In this case, the inequality is satisfied. In fact equality is achieved, showing that the estimator $T$ is efficient (see efficiency and estimator).
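As a quick numerical sanity check (a sketch added here, not part of the original article), a short Monte Carlo simulation can confirm that the variance of $T$ matches the bound $2\sigma^4/n$. The parameter values are arbitrary choices for illustration.

```python
import numpy as np

# Monte Carlo check of the example: T = (1/n) * sum((X_i - mu)^2), mean known.
# The derivation above gives var(T) = 2*sigma^4/n, which equals the CRLB.
rng = np.random.default_rng(0)
mu, sigma2, n, trials = 0.0, 4.0, 50, 20_000

X = rng.normal(mu, np.sqrt(sigma2), size=(trials, n))
T = ((X - mu) ** 2).mean(axis=1)       # one estimate of sigma^2 per simulated sample

var_T = T.var()                        # empirical variance of the estimator
crlb = 2 * sigma2**2 / n               # theoretical bound 2*sigma^4/n

print(f"empirical var(T) = {var_T:.4f}, CRLB = {crlb:.4f}")
```

With these settings the empirical variance lands close to the bound of $2\cdot 16/50 = 0.64$, as the equality case predicts.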

## Regularity conditions

This inequality relies on two weak regularity conditions on the probability density function, $f(x;\theta)$, and the estimator $T(X)$:

- The Fisher information is always defined; equivalently, for all $x$ such that $f(x;\theta) > 0$,
  $$\frac{\partial}{\partial\theta}\ln f(x;\theta)$$
  exists and is finite.
- The operations of integration with respect to $x$ and differentiation with respect to $\theta$ can be interchanged in the expectation of $T$; that is,
  $$\frac{\partial}{\partial\theta}\left[\int T(x)\,f(x;\theta)\,dx\right] = \int T(x)\left[\frac{\partial}{\partial\theta}f(x;\theta)\right]dx$$
  whenever the right-hand side is finite.

In some cases, a *biased* estimator can have both a variance and a mean squared error that are *below* the Cramér-Rao lower bound (the lower bound applies only to estimators that are unbiased). *See estimator bias.*

If the second regularity condition extends to the second derivative, then an alternative form of the Fisher information can be used,

$$I(\theta) = -E\left[\frac{\partial^2}{\partial\theta^2}\ln f(X;\theta)\right],$$

and yields a Cramér-Rao inequality

$$\mathrm{var}(\hat\theta) \ge \frac{1}{-E\left[\dfrac{\partial^2}{\partial\theta^2}\ln f(X;\theta)\right]}.$$

In some cases, it may be easier to take the expectation with respect to the second derivative than to take the expectation of the square of the first derivative.
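To illustrate that the two forms agree (a numerical sketch added here, reusing the normal-variance example above; the sample size is an arbitrary choice), one can estimate both $E[V^2]$ and $-E[\partial V/\partial\sigma^2]$ by simulation and compare them to the closed form $1/(2\sigma^4)$:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma2 = 0.0, 4.0
x = rng.normal(mu, np.sqrt(sigma2), size=200_000)

# Score of a single observation with respect to the parameter sigma^2
# (same expression as derived in the example above).
score = (x - mu) ** 2 / (2 * sigma2**2) - 1 / (2 * sigma2)
# Its derivative with respect to sigma^2.
dscore = 1 / (2 * sigma2**2) - (x - mu) ** 2 / sigma2**3

info_sq = (score**2).mean()    # E[V^2], square-of-first-derivative form
info_dd = -dscore.mean()       # -E[dV/d(sigma^2)], second-derivative form
exact = 1 / (2 * sigma2**2)    # closed form 1/(2*sigma^4)

print(f"E[V^2] = {info_sq:.5f}, -E[V'] = {info_dd:.5f}, exact = {exact:.5f}")
```

Both estimates converge to the same value, as the identity requires.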

## Multiple parameters

Extending the Cramér-Rao inequality to multiple parameters, define a parameter column vector

$$\boldsymbol\theta = [\theta_1, \theta_2, \ldots, \theta_d]^T \in \mathbb{R}^d$$

with probability density function (pdf), $f(x;\boldsymbol\theta)$, that satisfies the above two regularity conditions.

The Fisher information matrix is a $d \times d$ matrix with element $I_{m,k}$ defined as

$$I_{m,k} = E\left[\frac{\partial}{\partial\theta_m}\ln f(x;\boldsymbol\theta)\,\frac{\partial}{\partial\theta_k}\ln f(x;\boldsymbol\theta)\right];$$

then the Cramér-Rao inequality is

$$\mathrm{cov}_{\boldsymbol\theta}\left(T(X)\right) \ge \frac{\partial\boldsymbol\psi(\boldsymbol\theta)}{\partial\boldsymbol\theta}\,[I(\boldsymbol\theta)]^{-1}\left(\frac{\partial\boldsymbol\psi(\boldsymbol\theta)}{\partial\boldsymbol\theta}\right)^T$$

where $T(X) = [T_1(X), \ldots, T_d(X)]^T$ is an estimator vector and $\boldsymbol\psi(\boldsymbol\theta) = E[T(X)]$ is its expectation vector. The matrix inequality $A \ge B$ means that $A - B$ is a positive-semidefinite matrix.

If $T(X)$ is an unbiased estimator (i.e., $\boldsymbol\psi(\boldsymbol\theta) = \boldsymbol\theta$), then the Cramér-Rao inequality is

$$\mathrm{cov}_{\boldsymbol\theta}\left(T(X)\right) \ge [I(\boldsymbol\theta)]^{-1}.$$
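As a small worked illustration (added here; it assumes the well-known closed-form Fisher information for a normal model, which is not derived in this article), take $X \sim N(\mu, \sigma^2)$ with both parameters unknown. The per-observation information matrix is diagonal, $\mathrm{diag}(1/\sigma^2,\, 1/(2\sigma^4))$, and inverting the sample information gives the variance bounds for unbiased estimators of each parameter:

```python
import numpy as np

# X ~ N(mu, sigma2) with theta = (mu, sigma2) both unknown.
# Standard closed-form per-observation Fisher information matrix:
#   I(theta) = [[1/sigma2, 0], [0, 1/(2*sigma2^2)]]
sigma2, n = 4.0, 50
I1 = np.array([[1 / sigma2, 0.0],
               [0.0, 1 / (2 * sigma2**2)]])
I_n = n * I1                      # information in n i.i.d. observations
bound = np.linalg.inv(I_n)        # CRB: cov(theta_hat) >= I_n^{-1}

# Diagonal entries bound the variances of unbiased estimators of mu and sigma2.
print(bound[0, 0], bound[1, 1])   # sigma2/n and 2*sigma2^2/n
```

The second diagonal entry, $2\sigma^4/n$, matches the single-parameter bound derived in the example above.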

## Single-parameter proof

First, a more general version of the inequality will be proven; namely, that if the expectation of $T$ is denoted by $\psi(\theta)$, then for all $\theta$,

$$\mathrm{var}\left(t(X)\right) \ge \frac{[\psi'(\theta)]^2}{I(\theta)}.$$

The Cramér-Rao inequality will then follow as a consequence.

Let $X$ be a random variable with probability density function $f(x;\theta)$. Here $T = t(X)$ is a statistic, which is used as an estimator for $\psi(\theta)$. If $V$ is the score, i.e.

$$V = \frac{\partial}{\partial\theta}\ln f(X;\theta),$$

then the expectation of $V$, written $E(V)$, is zero. If we consider the covariance of $V$ and $T$, we have $\mathrm{cov}(V,T) = E(VT)$, because $E(V) = 0$. Expanding this expression we have

$$\mathrm{cov}(V,T) = E\left[T\cdot\frac{\partial}{\partial\theta}\ln f(X;\theta)\right].$$

This may be expanded using the chain rule

$$\frac{\partial}{\partial\theta}\ln f(X;\theta) = \frac{1}{f(X;\theta)}\,\frac{\partial f(X;\theta)}{\partial\theta},$$

and the definition of expectation gives, after cancelling $f(x;\theta)$,

$$\mathrm{cov}(V,T) = \int t(x)\left[\frac{\partial}{\partial\theta}f(x;\theta)\right]dx = \frac{\partial}{\partial\theta}\left[\int t(x)\,f(x;\theta)\,dx\right] = \psi'(\theta),$$

because the integration and differentiation operations commute (second condition).

The Cauchy-Schwarz inequality shows that

$$\sqrt{\mathrm{var}(T)\,\mathrm{var}(V)} \ge \left|\mathrm{cov}(V,T)\right| = \left|\psi'(\theta)\right|,$$

therefore, since $\mathrm{var}(V) = I(\theta)$ by the definition of Fisher information,

$$\mathrm{var}(T) \ge \frac{[\psi'(\theta)]^2}{\mathrm{var}(V)} = \frac{[\psi'(\theta)]^2}{I(\theta)}.$$

If $T$ is an unbiased estimator of $\theta$, that is, $\psi(\theta) = \theta$, then $\psi'(\theta) = 1$; the inequality then becomes

$$\mathrm{var}(T) \ge \frac{1}{I(\theta)}.$$

This is the Cramér-Rao inequality.

The efficiency of $T$ is defined as

$$e(T) = \frac{1/I(\theta)}{\mathrm{var}(T)},$$

or the minimum possible variance for an unbiased estimator divided by its actual variance. The Cramér-Rao lower bound thus gives $e(T) \le 1$.
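As an illustrative sketch (added here, not from the original article), efficiency can be compared across two unbiased estimators of a normal mean: the sample mean attains the bound $\sigma^2/n$, while the sample median does not (its asymptotic efficiency is the well-known value $2/\pi \approx 0.64$). The parameter values below are arbitrary choices.

```python
import numpy as np

# Efficiency e(T) = (1/I(theta)) / var(T) for two estimators of a normal mean.
# CRLB for the mean with known variance is sigma^2/n; the sample mean attains it.
rng = np.random.default_rng(3)
mu, sigma2, n, trials = 0.0, 1.0, 101, 20_000

X = rng.normal(mu, np.sqrt(sigma2), size=(trials, n))
crlb = sigma2 / n

eff_mean = crlb / X.mean(axis=1).var()        # should be close to 1 (efficient)
eff_median = crlb / np.median(X, axis=1).var()  # roughly 2/pi for large n

print(f"efficiency of mean   ~ {eff_mean:.3f}")
print(f"efficiency of median ~ {eff_median:.3f}")
```

Both efficiencies stay at or below 1, as the bound requires, and the gap between them shows how much information the median discards.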

## Multivariate normal distribution

For the case of a *d*-variate normal distribution

$$\boldsymbol{x} \sim N_d\left(\boldsymbol\mu(\boldsymbol\theta),\, C(\boldsymbol\theta)\right)$$

with probability density function

$$f(\boldsymbol{x};\boldsymbol\theta) = \frac{1}{\sqrt{(2\pi)^d\,|C|}}\exp\left(-\frac{1}{2}(\boldsymbol{x}-\boldsymbol\mu)^T C^{-1}(\boldsymbol{x}-\boldsymbol\mu)\right),$$

the Fisher information matrix has elements

$$I_{m,k} = \frac{\partial\boldsymbol\mu^T}{\partial\theta_m}\,C^{-1}\,\frac{\partial\boldsymbol\mu}{\partial\theta_k} + \frac{1}{2}\mathrm{tr}\left(C^{-1}\frac{\partial C}{\partial\theta_m}\,C^{-1}\frac{\partial C}{\partial\theta_k}\right)$$

where "tr" is the trace.

For example, let $w[n]$ be a sample of $N$ independent observations with unknown mean $\theta$ and known variance $\sigma^2$ (white Gaussian noise about the mean):

$$w[n] \sim N_N\left(\theta\boldsymbol{1},\, \sigma^2 I\right),$$

where

$$\boldsymbol{1} = [1, 1, \ldots, 1]^T$$

has $N$ (the number of independent observations) terms.

Then the Fisher information matrix is 1 × 1:

$$I(\theta) = \left(\frac{\partial\boldsymbol\mu}{\partial\theta}\right)^T C^{-1}\,\frac{\partial\boldsymbol\mu}{\partial\theta} = \sum_{i=1}^N \frac{1}{\sigma^2} = \frac{N}{\sigma^2},$$

and so the Cramér-Rao inequality is

$$\mathrm{var}(\hat\theta) \ge \frac{\sigma^2}{N}.$$
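The white-noise example above can be checked by simulation (a sketch added here, with arbitrary parameter values): the sample mean is an unbiased estimator of $\theta$, and its variance should match the bound $\sigma^2/N$.

```python
import numpy as np

# Check the white-noise example: the sample mean of w[n] is unbiased for theta
# and its variance should attain the bound sigma^2/N from I(theta) = N/sigma^2.
rng = np.random.default_rng(4)
theta, sigma2, N, trials = 1.5, 4.0, 100, 20_000

w = theta + rng.normal(0.0, np.sqrt(sigma2), size=(trials, N))
theta_hat = w.mean(axis=1)             # one sample-mean estimate per trial

var_hat = theta_hat.var()
crlb = sigma2 / N                      # Cramér-Rao lower bound

print(f"empirical var = {var_hat:.4f}, CRLB = {crlb:.4f}")
```

The sample mean attains the bound here, i.e., it is efficient for this model.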

## Further reading

- Kay, Steven M. (1993). *Fundamentals of Statistical Signal Processing, Volume I: Estimation Theory*, ch. 3. Prentice Hall. ISBN 0-13-345711-7.

This page uses Creative Commons Licensed content from Wikipedia (view authors).