The error of measurement refers to the observed differences in obtained scores that are due to chance variance.

The standard error of a method of measurement or estimation is the estimated standard deviation of the error in that method. Specifically, it estimates the standard deviation of the difference between the measured or estimated values and the true values. Notice that the true value of the standard deviation is usually unknown, and the use of the term standard error carries with it the idea that an estimate of this unknown quantity is being used. It also carries with it the idea that it measures not the standard deviation of the estimate itself but the standard deviation of the error in the estimate, and these are very different.

In applications where a standard error is used, it is desirable to take proper account of the fact that the standard error is only an estimate. Unfortunately this is not often possible, and it may then be better to use an approach that avoids the standard error, for example by using maximum likelihood or a more formal approach to deriving confidence intervals. One well-known case where a proper allowance can be made arises when Student's t-distribution is used to provide a confidence interval for an estimated mean or difference of means. In other cases, the standard error may usefully provide an indication of the size of the uncertainty, but its formal or semi-formal use to provide confidence intervals or tests should be avoided unless the sample size is at least moderately large; what counts as "large enough" depends on the particular quantities being analysed.
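As an illustration of the t-based case mentioned above, the following sketch builds a 95% confidence interval for a mean; the data values are made-up placeholders, and the use of NumPy and SciPy is simply one convenient choice rather than anything prescribed by this article.

  # Minimal sketch: a 95% confidence interval for a mean using Student's
  # t-distribution, which allows for the fact that the standard error is
  # itself only an estimate. The data below are illustrative placeholders.
  import numpy as np
  from scipy import stats

  data = np.array([4.2, 5.1, 4.8, 5.6, 4.9, 5.3, 4.7, 5.0])
  n = data.size
  mean = data.mean()
  sem = data.std(ddof=1) / np.sqrt(n)      # estimated standard error of the mean

  t_crit = stats.t.ppf(0.975, df=n - 1)    # t quantile replaces the normal 1.96
  lower, upper = mean - t_crit * sem, mean + t_crit * sem
  print(f"mean = {mean:.2f}, 95% CI = ({lower:.2f}, {upper:.2f})")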

Standard error of the mean

[Figure: Expected error in the mean of A for a sample of n data points with sample bias coefficient ρ. The unbiased standard error plots as the ρ = 0 line with log-log slope −½.]

The standard error of the mean (SEM), an unbiased estimate of expected error in the sample estimate of a population mean, is the sample estimate of the population standard deviation (sample standard deviation) divided by the square root of the sample size (assuming statistical independence of the values in the sample):

SEM = s / √n

where

s is the sample standard deviation (i.e. the sample based estimate of the standard deviation of the population), and
n is the size (number of items) of the sample.

A practical result: Decreasing the uncertainty in your mean value estimate by a factor of two requires that you acquire four times as many samples. Worse, decreasing standard error by a factor of ten requires a hundred times as many samples.
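A small sketch of both points, assuming draws from a hypothetical normal population (mean 100, standard deviation 15) purely for illustration:

  # Minimal sketch: standard error of the mean as s / sqrt(n), and the point
  # that four times as many samples only halve the standard error.
  import numpy as np

  rng = np.random.default_rng(0)

  def sem(sample):
      # sample standard deviation (ddof=1) divided by the square root of n
      return sample.std(ddof=1) / np.sqrt(sample.size)

  small = rng.normal(loc=100, scale=15, size=25)
  large = rng.normal(loc=100, scale=15, size=100)   # four times as many points

  print(f"n = 25:  SEM ≈ {sem(small):.2f}")
  print(f"n = 100: SEM ≈ {sem(large):.2f}  (roughly half the n = 25 value)")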

This estimate may be compared with the formula for the true standard deviation of the mean:

σ_x̄ = σ / √n

where

σ is the standard deviation of the population.

Note: Standard error may also be defined as the standard deviation of the residual error term. (Kenney and Keeping, p. 187; Zwillinger 1995, p. 626)
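The σ / √n formula can be checked by simulation: the sketch below draws many samples of size n from an arbitrary normal population with known σ (the population parameters here are assumptions for illustration only) and compares the empirical spread of the sample means with the theoretical value.

  # Minimal sketch: comparing the spread of simulated sample means with the
  # theoretical standard deviation of the mean, sigma / sqrt(n).
  import numpy as np

  rng = np.random.default_rng(1)
  sigma, n, reps = 15.0, 30, 20_000

  sample_means = rng.normal(loc=100, scale=sigma, size=(reps, n)).mean(axis=1)
  print(f"empirical SD of sample means: {sample_means.std(ddof=1):.3f}")
  print(f"theoretical sigma/sqrt(n):    {sigma / np.sqrt(n):.3f}")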

If values of the measured quantity A are not statistically independent but have been obtained from known locations in parameter space x, an unbiased estimate of error in the mean may be obtained by multiplying the standard error above by the square root of (1 + (n − 1)ρ)/(1 − ρ), where the sample bias coefficient ρ is the average of the autocorrelation coefficient ρ_AA[Δx] (a quantity between −1 and 1) over all sample point pairs.
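A minimal sketch of that correction, taking ρ as a given average autocorrelation (how ρ is estimated from the sample locations is not shown here; the data values are placeholders):

  # Minimal sketch: inflating the independent-sample standard error by the
  # factor sqrt((1 + (n - 1) * rho) / (1 - rho)) described above.
  import numpy as np

  def corrected_sem(sample, rho):
      # rho: average pairwise autocorrelation coefficient, -1 < rho < 1
      n = sample.size
      naive = sample.std(ddof=1) / np.sqrt(n)
      return naive * np.sqrt((1 + (n - 1) * rho) / (1 - rho))

  x = np.array([10.1, 10.3, 10.2, 10.6, 10.4, 10.5])
  print(corrected_sem(x, rho=0.0))   # reduces to the usual s / sqrt(n)
  print(corrected_sem(x, rho=0.3))   # wider error for positively correlated data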


Assumptions and usage

If the data are assumed to be normally distributed, quantiles of the normal distribution together with the sample mean and standard error can be used to calculate confidence intervals for the mean. The following expressions give the upper and lower 95% confidence limits, where x̄ is the sample mean, SE is the standard error of the mean, and 1.96 is the .975 quantile of the normal distribution:

Upper 95% Limit = x̄ + 1.96 × SE
Lower 95% Limit = x̄ − 1.96 × SE
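In code, with placeholder values standing in for the sample mean and its standard error, the limits above are simply:

  # Minimal sketch: normal-approximation 95% confidence limits. The mean and
  # standard error below are placeholders; in practice they come from the sample.
  x_bar = 50.0   # sample mean
  se = 2.5       # standard error of the mean
  z = 1.96       # .975 quantile of the standard normal distribution

  upper = x_bar + z * se
  lower = x_bar - z * se
  print(f"95% confidence limits: ({lower:.2f}, {upper:.2f})")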

In particular, the standard error of a sample statistic (such as the sample mean) is the estimated standard deviation of the error in the process by which it was generated. In other words, it is the standard deviation of the sampling distribution of the sample statistic. The notation for standard error can be any one of SE, SEM (for standard error of measurement or mean), or S_E.

Standard errors provide simple measures of uncertainty in a value and are often used because:

  • If the standard errors of several individual quantities are known, then the standard error of some function of the quantities can often be calculated easily;
  • Where the probability distribution of the value is known, it can be used to calculate an exact confidence interval;
  • Where the probability distribution is unknown, inequalities such as Chebyshev's or the Vysochanskiï-Petunin inequality can be used to calculate a conservative confidence interval (see the sketch after this list); and
  • As the sample size tends to infinity, the central limit theorem guarantees that the sampling distribution of the mean is asymptotically normal.
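As referenced in the list, Chebyshev's inequality alone gives a conservative interval with no distributional assumption. The sketch below applies it with the standard error standing in for the standard deviation of the sample statistic; the numerical inputs are placeholders.

  # Minimal sketch: a conservative confidence interval from Chebyshev's
  # inequality, P(|X - mu| >= k*sigma) <= 1/k^2. For at-least-95% coverage,
  # k = sqrt(1/0.05) ≈ 4.47, much wider than the normal-theory 1.96.
  import math

  def chebyshev_interval(mean, se, coverage=0.95):
      k = math.sqrt(1.0 / (1.0 - coverage))
      return mean - k * se, mean + k * se

  print(chebyshev_interval(50.0, 2.5))   # placeholder mean and standard error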

See also

  • Consistency (measurement)
  • Least squares
  • Observational error
  • Sample mean and sample covariance
  • Scoring (testing)

This page uses Creative Commons Licensed content from Wikipedia (view authors).