

In statistics, an estimator is a function of the known sample data that is used to estimate an unknown population parameter; an estimate is the result of applying that function to a particular set of data. Many different estimators are possible for any given parameter, and some criterion is used to choose among them, although it is often the case that no criterion clearly picks one estimator over another. To estimate a parameter of interest (e.g., a population mean, a binomial proportion, a difference between two population means, or a ratio of two population standard deviations), the usual procedure is as follows (steps 1–4 are sketched in code after the list):

1. Select a random sample from the population of interest.

2. Calculate the point estimate of the parameter.

3. Calculate a measure of the estimate's variability, such as its standard error.

4. Associate with the point estimate an interval estimate, often a confidence interval.
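As a minimal sketch of steps 1–4, assuming a normally distributed population and using the normal approximation for the interval (all numbers are illustrative):

```python
import math
import random
import statistics

# Illustrative setup (made-up numbers): a population with unknown mean.
random.seed(0)
population = [random.gauss(100, 15) for _ in range(100_000)]

# 1. Select a random sample from the population of interest.
sample = random.sample(population, 50)

# 2. Calculate the point estimate of the parameter (here, the sample mean).
point_estimate = statistics.mean(sample)

# 3. Calculate a measure of the estimate's variability: the standard error.
std_error = statistics.stdev(sample) / math.sqrt(len(sample))

# 4. Associate an interval estimate with the point estimate, here a 95%
#    confidence interval using the normal approximation (z = 1.96).
low, high = point_estimate - 1.96 * std_error, point_estimate + 1.96 * std_error

print(f"point estimate: {point_estimate:.2f}")
print(f"standard error: {std_error:.2f}")
print(f"95% CI: ({low:.2f}, {high:.2f})")
```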

There are two types of estimators: point estimators and interval estimators.

Point estimators

For a point estimator $\hat{\theta}$ of parameter $\theta$:

  1. The error of $\hat{\theta}$ is $e(x) = \hat{\theta}(x) - \theta$.
  2. The bias of $\hat{\theta}$ is defined as $B(\hat{\theta}) = \operatorname{E}(\hat{\theta}) - \theta$.
  3. $\hat{\theta}$ is an unbiased estimator of $\theta$ if and only if $B(\hat{\theta}) = 0$ for all $\theta$, or, equivalently, if and only if $\operatorname{E}(\hat{\theta}) = \theta$ for all $\theta$.
  4. The mean squared error of $\hat{\theta}$ is defined as $\operatorname{MSE}(\hat{\theta}) = \operatorname{E}\left[(\hat{\theta} - \theta)^2\right]$ and satisfies

$$\operatorname{MSE}(\hat{\theta}) = \operatorname{var}(\hat{\theta}) + \left(B(\hat{\theta})\right)^2,$$

i.e. mean squared error = variance + square of bias,

where $\operatorname{var}(X)$ is the variance of $X$ and $\operatorname{E}(X)$ is the expected value of $X$.
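A small simulation can make these definitions concrete. The sketch below (with made-up parameters) compares the variance estimator that divides by $n$ with the one that divides by $n-1$, and checks numerically that mean squared error equals variance plus squared bias:

```python
import random
import statistics

# Illustrative simulation: population is normal with sigma = 2, so the
# true variance is 4. Estimate it many times from samples of size n.
random.seed(1)
true_var = 4.0
n, trials = 10, 100_000

biased, unbiased = [], []
for _ in range(trials):
    sample = [random.gauss(0, 2) for _ in range(n)]
    m = sum(sample) / n
    ss = sum((x - m) ** 2 for x in sample)
    biased.append(ss / n)          # divides by n: biased downward
    unbiased.append(ss / (n - 1))  # divides by n - 1: unbiased

for name, est in [("1/n", biased), ("1/(n-1)", unbiased)]:
    bias = statistics.mean(est) - true_var
    var = statistics.variance(est)
    mse = statistics.mean([(e - true_var) ** 2 for e in est])
    # Up to simulation noise, mse should equal var + bias**2.
    print(f"{name:>8}: bias={bias:+.3f}  var={var:.3f}  mse={mse:.3f}  "
          f"var+bias^2={var + bias ** 2:.3f}")
```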

The standard deviation of an estimator of $\theta$ (the square root of its variance), or an estimate of that standard deviation, is called the standard error of $\hat{\theta}$.
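As a standard worked example, for the sample mean $\bar{X}$ of $n$ independent observations from a population with standard deviation $\sigma$:

$$\operatorname{SE}(\bar{X}) = \sqrt{\operatorname{var}(\bar{X})} = \frac{\sigma}{\sqrt{n}},$$

which in practice is estimated by $s/\sqrt{n}$, where $s$ is the sample standard deviation.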

Consistency

A consistent estimator is an estimator that converges in probability to the quantity being estimated as the sample size grows.

An estimator $\hat{\theta}_n$ (where $n$ is the sample size) is a consistent estimator for parameter $\theta$ if and only if, for all $\epsilon > 0$, no matter how small, we have

$$\lim_{n \to \infty} \Pr\left\{\left|\hat{\theta}_n - \theta\right| < \epsilon\right\} = 1.$$

It is called strongly consistent if it converges almost surely to the true value.
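A quick illustration of consistency, assuming i.i.d. draws from a standard normal population: as $n$ grows, the sample mean settles ever closer to the true mean of 0.

```python
import random

# Consistency in action: the sample mean of i.i.d. standard normal draws
# (true mean 0) concentrates around 0 as the sample size n grows.
random.seed(2)
for n in [10, 100, 1_000, 10_000, 100_000]:
    sample_mean = sum(random.gauss(0, 1) for _ in range(n)) / n
    print(f"n={n:>6}: sample mean = {sample_mean:+.4f}")
```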

Efficiency

The quality of an estimator is generally judged by its mean squared error.

However, one sometimes restricts attention to unbiased estimators and chooses the one with the lowest variance. Efficient estimators are those that attain the lowest possible variance among all unbiased estimators. In some cases, a biased estimator may have a uniformly smaller mean squared error than any unbiased estimator, so one should not make too much of this concept. For that and other reasons, it is sometimes preferable not to limit oneself to unbiased estimators; see bias (statistics). Concerning such "best unbiased estimators", see also the Cramér-Rao inequality, the Gauss-Markov theorem, the Lehmann-Scheffé theorem, and the Rao-Blackwell theorem.
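The sketch below illustrates relative efficiency under an assumed normal population: the sample mean and sample median both estimate the center without bias, but the mean has the smaller variance (the asymptotic variance ratio is $\pi/2 \approx 1.57$).

```python
import random
import statistics

# Both the sample mean and the sample median are unbiased for the center
# of a normal population, but the mean has the smaller variance.
random.seed(3)
n, trials = 101, 10_000
means, medians = [], []
for _ in range(trials):
    sample = [random.gauss(0, 1) for _ in range(n)]
    means.append(statistics.mean(sample))
    medians.append(statistics.median(sample))

var_mean = statistics.variance(means)
var_median = statistics.variance(medians)
print(f"var(mean)   = {var_mean:.5f}")
print(f"var(median) = {var_median:.5f}")
print(f"ratio       = {var_median / var_mean:.2f}  (asymptotic: pi/2 ~ 1.57)")
```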

Robustness

See: Robust estimator, Robust statistics
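A brief illustration of the idea, with invented data: a single gross outlier drags the sample mean far from the bulk of the data, while the sample median (a robust estimator of location) barely moves.

```python
import statistics

# Invented data: one gross outlier (e.g., a data-entry error) moves the
# sample mean far from the bulk of the data; the median barely changes.
data = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3, 10.1, 9.9]
contaminated = data + [1000.0]

print(f"clean:        mean={statistics.mean(data):.2f}  "
      f"median={statistics.median(data):.2f}")
print(f"contaminated: mean={statistics.mean(contaminated):.2f}  "
      f"median={statistics.median(contaminated):.2f}")
```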

Other properties

Often, estimators are subject to constraints (restricted estimators).
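One common kind of restriction, sketched below under an assumed variance-component setting (the function name is hypothetical): a variance is necessarily nonnegative, so an unrestricted estimate that comes out negative is truncated at zero.

```python
# Hypothetical sketch of a restricted estimator: a variance is necessarily
# nonnegative, so an unrestricted estimate that comes out negative (as can
# happen for variance components) is truncated at zero.
def restricted_variance_estimate(unrestricted: float) -> float:
    return max(0.0, unrestricted)

print(restricted_variance_estimate(-0.37))  # truncated to 0.0
print(restricted_variance_estimate(2.15))   # unchanged: 2.15
```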


This page uses Creative Commons Licensed content from Wikipedia.