# Cramér-Rao inequality


In statistics, the Cramér-Rao inequality, named in honor of Harald Cramér and Calyampudi Radhakrishna Rao, expresses a lower bound on the variance of an unbiased statistical estimator, based on Fisher information.

It states that the reciprocal of the Fisher information, $\mathcal{I}(\theta)$, of a parameter $\theta$, is a lower bound on the variance of an unbiased estimator of the parameter (denoted $\widehat{\theta}$).

$\mathrm{var} \left(\widehat{\theta}\right) \geq \frac{1}{\mathcal{I}(\theta)} = \frac{1} { \mathrm{E} \left[ \left( \frac{\partial}{\partial \theta} \log f(X;\theta) \right)^2 \right] }$

In some cases, no unbiased estimator exists that realizes the lower bound.

The Cramér-Rao inequality is also known as the Cramér-Rao bound (CRB) or Cramér-Rao lower bound (CRLB) because it places a lower bound on the variance of an estimator $\widehat{\theta}$.

## Example

Suppose $X_1, \dots, X_n$ are independent, normally distributed random variables with known mean $\mu$ and unknown variance $\sigma^2$. Consider the following statistic:

$T=\frac{1}{n}\sum_{i=1}^n\left(X_i-\mu\right)^2.$

Then T is unbiased for $\sigma^2$, as $E(T)=\sigma^2$. What is the variance of T?

$\mathrm{var}(T) = \frac{\mathrm{var}\left[(X-\mu)^2\right]}{n}=\frac{1}{n} \left[ E\left\{(X-\mu)^4\right\}-\left(E\left\{(X-\mu)^2\right\}\right)^2 \right]$

(the second equality follows directly from the definition of variance). The first term is the fourth moment about the mean and has value $3(\sigma^2)^2$; the second is the square of the variance, or $(\sigma^2)^2$. Thus

$\mathrm{var}(T)=\frac{2(\sigma^2)^2}{n}.$
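The variance formula above can be checked by simulation. The following is a minimal Monte Carlo sketch, assuming NumPy is available; the values of $\mu$, $\sigma^2$, $n$, and the number of trials are illustrative choices, not from the text.

```python
import numpy as np

# Monte Carlo check of var(T) = 2*(sigma^2)^2 / n for T = (1/n) * sum((X_i - mu)^2),
# with X_i ~ N(mu, sigma^2). All numeric parameters here are illustrative choices.
rng = np.random.default_rng(0)
mu, sigma2, n, trials = 0.0, 2.0, 50, 200_000

X = rng.normal(mu, np.sqrt(sigma2), size=(trials, n))
T = ((X - mu) ** 2).mean(axis=1)        # one realization of T per simulated sample

empirical_var = T.var()
theoretical_var = 2 * sigma2 ** 2 / n   # the 2*(sigma^2)^2 / n value derived above
print(empirical_var, theoretical_var)   # the two agree closely for this many trials
```

With enough trials the empirical variance settles to within a fraction of a percent of $2(\sigma^2)^2/n$.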

Now, what is the Fisher information in the sample? Recall that the score V is defined as

$V=\frac{\partial}{\partial\sigma^2}\log L(\sigma^2,X)$

where $L$ is the likelihood function. Thus in this case,

$V=\frac{\partial}{\partial\sigma^2}\log\left[\frac{1}{\sqrt{2\pi\sigma^2}}e^{-(X-\mu)^2/{2\sigma^2}}\right] =\frac{(X-\mu)^2}{2(\sigma^2)^2}-\frac{1}{2\sigma^2}$

where the second equality is from elementary calculus. Thus, the information in a single observation is just minus the expectation of the derivative of V, or

$I =-E\left(\frac{\partial V}{\partial\sigma^2}\right) =-E\left(-\frac{(X-\mu)^2}{(\sigma^2)^3}+\frac{1}{2(\sigma^2)^2}\right) =\frac{\sigma^2}{(\sigma^2)^3}-\frac{1}{2(\sigma^2)^2} =\frac{1}{2(\sigma^2)^2}.$

Thus the information in a sample of $n$ independent observations is just $n$ times this, or $\frac{n}{2(\sigma^2)^2}$.

The Cramér-Rao inequality states that

$\mathrm{var}(T)\geq\frac{1}{I}.$

In this case the bound is attained with equality: $\mathrm{var}(T) = \frac{2(\sigma^2)^2}{n} = \frac{1}{I}$, so the estimator $T$ is efficient (see efficiency and estimator).

## Regularity conditions

This inequality relies on two weak regularity conditions on the probability density function, $f(x; \theta)$, and the estimator $T(X)$:

• The Fisher information is always defined; equivalently, for all $x$ such that $f(x; \theta) > 0$,
$\frac{\partial}{\partial\theta} \ln f(x;\theta)$
is finite.
• The operations of integration with respect to x and differentiation with respect to $\theta$ can be interchanged in the expectation of $T$; that is,
$\frac{\partial}{\partial\theta} \left[ \int T(x) f(x;\theta) \,dx \right] = \int T(x) \left[ \frac{\partial}{\partial\theta} f(x;\theta) \right] \,dx$
whenever the right-hand side is finite.

In some cases, a biased estimator can have both a variance and a mean squared error that are below the Cramér-Rao lower bound (the lower bound applies only to estimators that are unbiased). See estimator bias.

If the second regularity condition extends to the second derivative, then an alternative form of Fisher information can be used and yields a new Cramér-Rao inequality

$\mathrm{var} \left(\widehat{\theta}\right) \geq \frac{1}{\mathcal{I}(\theta)} = \frac{1} { -\mathrm{E} \left[ \frac{d^2}{d\theta^2} \log f(X;\theta) \right] }$

In some cases, it may be easier to take the expectation with respect to the second derivative than to take the expectation of the square of the first derivative.
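The agreement of the two forms can be illustrated numerically for the normal-variance example above, where both equal $\frac{1}{2(\sigma^2)^2}$. This is a sketch assuming NumPy; the parameter values and sample size are arbitrary choices.

```python
import numpy as np

# Numerical check that the two forms of Fisher information agree for
# f(x; theta) = N(mu, theta) with theta = sigma^2:
#   E[(d/dtheta log f)^2]  and  -E[d^2/dtheta^2 log f]  both equal 1/(2 theta^2).
rng = np.random.default_rng(1)
mu, theta = 0.0, 1.5
x = rng.normal(mu, np.sqrt(theta), size=1_000_000)

# Score and its derivative, from the expressions derived in the Example section:
score = (x - mu) ** 2 / (2 * theta ** 2) - 1 / (2 * theta)        # d/dtheta log f
score_deriv = 1 / (2 * theta ** 2) - (x - mu) ** 2 / theta ** 3   # d^2/dtheta^2 log f

I_sq = (score ** 2).mean()   # first form: expected squared score
I_dd = -score_deriv.mean()   # second form: minus expected second derivative
print(I_sq, I_dd, 1 / (2 * theta ** 2))   # all three approximately equal
```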

## Multiple parameters

Extending the Cramér-Rao inequality to multiple parameters, define a parameter column vector

$\boldsymbol{\theta} = \left[ \theta_1, \theta_2, \dots, \theta_d \right]^T \in \mathbb{R}^d$

with probability density function (pdf), $f(x; \boldsymbol{\theta})$, that satisfies the above two regularity conditions.

The Fisher information matrix is a $d \times d$ matrix with element $\mathcal{I}_{m, k}$ defined as

$\mathcal{I}_{m, k} = \mathrm{E} \left[ \frac{\partial}{\partial \theta_m} \log f\left(x; \boldsymbol{\theta}\right) \frac{\partial}{\partial \theta_k} \log f\left(x; \boldsymbol{\theta}\right) \right]$

then the Cramér-Rao inequality is

$\mathrm{cov}_{\boldsymbol{\theta}}\left(\boldsymbol{T}(X)\right) \geq \frac {\partial \boldsymbol{\psi} \left(\boldsymbol{\theta}\right)} {\partial \boldsymbol{\theta}^T} \mathcal{I}\left(\boldsymbol{\theta}\right)^{-1} \frac {\partial \boldsymbol{\psi}\left(\boldsymbol{\theta}\right)^T} {\partial \boldsymbol{\theta}}$

where

• $\boldsymbol{T}(X) = \begin{bmatrix} T_1(X) & T_2(X) & \cdots & T_d(X) \end{bmatrix}^T$
• $\boldsymbol{\psi}\left(\boldsymbol{\theta}\right) = \mathrm{E}\left[\boldsymbol{T}(X)\right] = \begin{bmatrix} \psi_1\left(\boldsymbol{\theta}\right) & \psi_2\left(\boldsymbol{\theta}\right) & \cdots & \psi_d\left(\boldsymbol{\theta}\right) \end{bmatrix}^T$

• $\frac{\partial \boldsymbol{\psi}\left(\boldsymbol{\theta}\right)}{\partial \boldsymbol{\theta}^T} = \begin{bmatrix} \psi_1 \left(\boldsymbol{\theta}\right) \\ \psi_2 \left(\boldsymbol{\theta}\right) \\ \vdots \\ \psi_d \left(\boldsymbol{\theta}\right) \end{bmatrix} \begin{bmatrix} \frac{\partial}{\partial \theta_1} & \frac{\partial}{\partial \theta_2} & \cdots & \frac{\partial}{\partial \theta_d} \end{bmatrix} = \begin{bmatrix} \frac{\partial \psi_1 \left(\boldsymbol{\theta}\right)}{\partial \theta_1} & \frac{\partial \psi_1 \left(\boldsymbol{\theta}\right)}{\partial \theta_2} & \cdots & \frac{\partial \psi_1 \left(\boldsymbol{\theta}\right)}{\partial \theta_d} \\ \frac{\partial \psi_2 \left(\boldsymbol{\theta}\right)}{\partial \theta_1} & \frac{\partial \psi_2 \left(\boldsymbol{\theta}\right)}{\partial \theta_2} & \cdots & \frac{\partial \psi_2 \left(\boldsymbol{\theta}\right)}{\partial \theta_d} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial \psi_d \left(\boldsymbol{\theta}\right)}{\partial \theta_1} & \frac{\partial \psi_d \left(\boldsymbol{\theta}\right)}{\partial \theta_2} & \cdots & \frac{\partial \psi_d \left(\boldsymbol{\theta}\right)}{\partial \theta_d} \end{bmatrix}$

• $\frac{\partial \boldsymbol{\psi}\left(\boldsymbol{\theta}\right)^T}{\partial \boldsymbol{\theta}} = \begin{bmatrix} \frac{\partial}{\partial \theta_1} \\ \frac{\partial}{\partial \theta_2} \\ \vdots \\ \frac{\partial}{\partial \theta_d} \end{bmatrix} \begin{bmatrix} \psi_1 \left(\boldsymbol{\theta}\right) & \psi_2 \left(\boldsymbol{\theta}\right) & \cdots & \psi_d \left(\boldsymbol{\theta}\right) \end{bmatrix} = \begin{bmatrix} \frac{\partial \psi_1 \left(\boldsymbol{\theta}\right)}{\partial \theta_1} & \frac{\partial \psi_2 \left(\boldsymbol{\theta}\right)}{\partial \theta_1} & \cdots & \frac{\partial \psi_d \left(\boldsymbol{\theta}\right)}{\partial \theta_1} \\ \frac{\partial \psi_1 \left(\boldsymbol{\theta}\right)}{\partial \theta_2} & \frac{\partial \psi_2 \left(\boldsymbol{\theta}\right)}{\partial \theta_2} & \cdots & \frac{\partial \psi_d \left(\boldsymbol{\theta}\right)}{\partial \theta_2} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial \psi_1 \left(\boldsymbol{\theta}\right)}{\partial \theta_d} & \frac{\partial \psi_2 \left(\boldsymbol{\theta}\right)}{\partial \theta_d} & \cdots & \frac{\partial \psi_d \left(\boldsymbol{\theta}\right)}{\partial \theta_d} \end{bmatrix}$

And $\mathrm{cov}_{\boldsymbol{\theta}} \left( \boldsymbol{T}(X) \right)$ is a positive-semidefinite matrix, that is

$x^{T} \mathrm{cov}_{\boldsymbol{\theta}} \left( \boldsymbol{T}(X) \right) x \geq 0 \quad \forall x \in \mathbb{R}^d$

If $\boldsymbol{T}(X) = \begin{bmatrix} T_1(X) & T_2(X) & \cdots & T_d(X) \end{bmatrix}^T$ is an unbiased estimator (i.e., $\boldsymbol{\psi}\left(\boldsymbol{\theta}\right) = \boldsymbol{\theta}$) then the Cramér-Rao inequality is

$\mathrm{cov}_{\boldsymbol{\theta}}\left(\boldsymbol{T}(X)\right) \geq \mathcal{I}\left(\boldsymbol{\theta}\right)^{-1}$
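A standard textbook instance of the matrix form is $n$ i.i.d. observations from $N(\mu, \sigma^2)$ with both parameters unknown, whose Fisher information matrix is diagonal: $\mathrm{diag}\left(n/\sigma^2,\ n/(2\sigma^4)\right)$. The sketch below, assuming NumPy and illustrative parameter values, inverts this matrix to obtain the bound on the covariance of any unbiased estimator of $(\mu, \sigma^2)$.

```python
import numpy as np

# Fisher information matrix for n i.i.d. N(mu, sigma^2) observations with
# theta = (mu, sigma^2) both unknown (a standard textbook case), used to
# illustrate the matrix form of the Cramér-Rao bound.
n, sigma2 = 30, 2.0
I = np.array([[n / sigma2, 0.0],
              [0.0, n / (2 * sigma2 ** 2)]])

crlb = np.linalg.inv(I)   # any unbiased estimator's covariance dominates this
print(crlb)
# Diagonal entries: sigma^2/n bounds var of the mean estimate,
# 2*(sigma^2)^2/n bounds var of the variance estimate.
```

The second diagonal entry, $2(\sigma^2)^2/n$, matches the single-parameter bound derived in the Example section.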

## Single-parameter proof

First, a more general version of the inequality will be proven; namely, that if the expectation of $T$ is denoted by $\psi (\theta)$, then for all $\theta$

${\rm var}(T) \geq \frac{[\psi^\prime(\theta)]^2}{I(\theta)}$

The Cramér-Rao inequality will then follow as a consequence.

Let $X$ be a random variable with probability density function $f(x; \theta)$. Here $T = t(X)$ is a statistic, which is used as an estimator for $\theta$. If $V$ is the score, i.e.

$V = \frac{\partial}{\partial\theta} \ln f(X;\theta)$

then the expectation of $V$, written ${\rm E}(V)$, is zero. If we consider the covariance ${\rm cov}(V, T)$ of $V$ and $T$, we have ${\rm cov}(V, T) = {\rm E}(V T)$, because ${\rm E}(V) = 0$. Expanding this expression we have

${\rm cov}(V,T) = {\rm E} \left( T \cdot \frac{\partial}{\partial\theta} \ln f(X;\theta) \right)$

This may be expanded using the chain rule

$\frac{\partial}{\partial\theta} \ln Q = \frac{1}{Q}\frac{\partial Q}{\partial\theta}$

and the definition of expectation gives, after cancelling $f(x; \theta)$,

${\rm E} \left( T \cdot \frac{\partial}{\partial\theta} \ln f(X;\theta) \right) = \int t(x) \left[ \frac{\partial}{\partial\theta} f(x;\theta) \right] \, dx = \frac{\partial}{\partial\theta} \left[ \int t(x)f(x;\theta)\,dx \right] = \psi^\prime(\theta)$

because the integration and differentiation operations commute (second condition).

The Cauchy-Schwarz inequality shows that

$\sqrt{ {\rm var} (T) \, {\rm var} (V)} \geq \left| {\rm cov}(V,T) \right| = \left| \psi^\prime (\theta) \right|$

therefore

${\rm var\ } T \geq \frac{[\psi^\prime(\theta)]^2}{{\rm var} (V)} = \frac{[\psi^\prime(\theta)]^2}{I(\theta)} = \left[ \frac{\partial}{\partial\theta} {\rm E} (T) \right]^2 \frac{1}{I(\theta)}$

If $T$ is an unbiased estimator of $\theta$, that is, ${\rm E}(T) =\theta$, then $\psi'(\theta) = 1$; the inequality then becomes

${\rm var}(T) \geq \frac{1}{I(\theta)}$

This is the Cramér-Rao inequality.

The efficiency of $T$ is defined as

$e(T) = \frac{\frac{1}{I(\theta)}}{{\rm var}(T)}$

or the minimum possible variance for an unbiased estimator divided by its actual variance. The Cramér-Rao lower bound thus gives $e(T) \le 1$.
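As a concrete illustration of efficiency below 1, the sample median is an unbiased estimator of a normal mean whose large-sample variance is $\pi\sigma^2/(2n)$, giving asymptotic efficiency $2/\pi \approx 0.64$. The sketch below estimates this by Monte Carlo, assuming NumPy; the sample size and trial count are arbitrary choices.

```python
import numpy as np

# Efficiency of the sample median as an estimator of a normal mean:
# the CRLB for the mean is sigma^2/n, while the median's large-sample
# variance is pi*sigma^2/(2n), so e(median) should be near 2/pi ~ 0.64.
rng = np.random.default_rng(2)
mu, sigma2, n, trials = 0.0, 1.0, 101, 100_000

X = rng.normal(mu, np.sqrt(sigma2), size=(trials, n))
med = np.median(X, axis=1)

crlb = sigma2 / n
efficiency = crlb / med.var()   # e(T) = (1/I(theta)) / var(T)
print(efficiency)               # close to 2/pi for moderately large n
```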

## Multivariate normal distribution

For the case of a d-variate normal distribution

$\boldsymbol{x} \sim N_d \left( \boldsymbol{\mu} \left( \boldsymbol{\theta} \right) , C \left( \boldsymbol{\theta} \right) \right)$
$f\left( \boldsymbol{x}; \boldsymbol{\theta} \right) = \frac{1}{\sqrt{ (2\pi)^d \left| C \right| }} \exp \left( -\frac{1}{2} \left( \boldsymbol{x} - \boldsymbol{\mu} \right)^{T} C^{-1} \left( \boldsymbol{x} - \boldsymbol{\mu} \right) \right).$

The Fisher information matrix has elements

$\mathcal{I}_{m, k} = \frac{\partial \boldsymbol{\mu}^T}{\partial \theta_m} C^{-1} \frac{\partial \boldsymbol{\mu}}{\partial \theta_k} + \frac{1}{2} \mathrm{tr} \left( C^{-1} \frac{\partial C}{\partial \theta_m} C^{-1} \frac{\partial C}{\partial \theta_k} \right)$

where "tr" is the trace.

As an example, consider estimating an unknown constant level $\theta$ from a sample $\boldsymbol{x}$ of $N$ independent observations in white Gaussian noise of variance $\sigma^2$:

$\boldsymbol{x} \sim N_N \left(\boldsymbol{\mu}(\theta), \sigma^2 I \right),$

where every entry of the mean vector equals the unknown mean,

$\boldsymbol{\mu}(\theta)_i = \theta, \qquad i = 1, \dots, N.$

Then the Fisher information matrix is the 1 × 1 matrix

$\mathcal{I}(\theta) = \left(\frac{\partial\boldsymbol{\mu}(\theta)}{\partial\theta}\right)^T C^{-1}\left(\frac{\partial\boldsymbol{\mu}(\theta)}{\partial\theta}\right) = \sum_{i=1}^{N}\frac{1}{\sigma^2} = \frac{N}{\sigma^2},$

and so the Cramér-Rao inequality is

$\mathrm{var}\left(\widehat{\theta}\right) \geq \frac{\sigma^2}{N}.$
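The sample mean attains this bound: it is unbiased for $\theta$ with variance exactly $\sigma^2/N$. The sketch below verifies this by Monte Carlo, assuming NumPy; the numeric values are illustrative choices.

```python
import numpy as np

# Constant level in white Gaussian noise: observe x[n] = theta + w[n] with
# w[n] ~ N(0, sigma^2). The sample mean is unbiased with variance sigma^2/N,
# so it attains the Cramér-Rao bound sigma^2/N derived above.
rng = np.random.default_rng(3)
theta, sigma2, N, trials = 5.0, 2.0, 40, 100_000

x = theta + rng.normal(0.0, np.sqrt(sigma2), size=(trials, N))
theta_hat = x.mean(axis=1)          # sample-mean estimate per trial

print(theta_hat.var(), sigma2 / N)  # empirical variance vs CRLB sigma^2/N
```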