{{StatsPsy}}
 
'''Parametric statistics''' is a branch of [[statistics]] that assumes that the data have come from a type of [[probability distribution]] and makes [[inference]]s about the [[parameters]] of the distribution.<ref name="Geisser and Johnson">Seymour Geisser and Wesley M. Johnson, <cite>Modes of Parametric Statistical Inference</cite>, John Wiley & Sons (2006), ISBN 978-0471667261</ref> Most well-known elementary statistical methods are parametric.<ref name="Cox">D. R. Cox, <cite>Principles of Statistical Inference</cite>, Cambridge University Press (2006), ISBN 978-0521685672</ref> For details of particular tests, see [[Parametric statistical tests]].
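The two steps named here (assume a distribution family, then infer its parameters) can be sketched in a few lines of Python. The scores below are made-up illustrative numbers, not data from this article:

```python
import statistics

# Hypothetical sample (illustrative numbers only).
scores = [92, 105, 110, 98, 101, 94, 107, 99, 103, 96]

# Parametric step: assume the scores are drawn from a normal distribution,
# then infer that distribution's parameters from the data.
mu = statistics.mean(scores)       # estimated mean of the assumed normal
sigma = statistics.stdev(scores)   # estimated standard deviation

print(mu, sigma)
```

Everything inferred about the population is then summarized by the two fitted parameters (mean and standard deviation), which is exactly what makes the method parametric.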
   
Generally speaking, parametric methods make more assumptions than [[Non-parametric statistics|non-parametric methods]].<ref name="Corder and Foreman">Corder and Foreman, <cite>Nonparametric Statistics for Non-Statisticians: A Step-by-Step Approach</cite>, John Wiley & Sons (2009), ISBN 978-0-470-45461-9</ref> If those extra assumptions are correct, parametric methods can produce more accurate and precise estimates; they are said to have more [[statistical power]]. However, if the assumptions are incorrect, parametric methods can be very misleading, and for that reason they are often not considered [[Robust statistics|robust]]. On the other hand, parametric [[formula]]e are often simpler to write down and faster to compute. In some, but definitely not all, cases their simplicity makes up for their [[Robust statistics|non-robustness]], especially if care is taken to examine diagnostic statistics.<ref name="Freedman">David Freedman, <cite>Statistical Models: Theory and Practice</cite>, Cambridge University Press (2000), ISBN 978-0521671057</ref>
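A toy illustration of the robustness point (the numbers are invented for this sketch): under a normal model the sample mean is the natural location estimate, but a single gross outlier drags it far from the bulk of the data, while the median, a robust alternative, barely moves.

```python
import statistics

# Nine well-behaved observations plus one gross outlier (illustrative).
clean = [9, 10, 10, 10, 10, 10, 10, 10, 11]
contaminated = clean + [1000]

# The mean (the natural location estimate under a normal model) is pulled
# far from the bulk of the data by the single outlier ...
print(statistics.mean(contaminated))    # 109.0
# ... while the median is essentially unchanged.
print(statistics.median(contaminated))  # 10.0
```

If the normality assumption held, the mean would be the more efficient estimate; here its value describes none of the observations, which is the sense in which the parametric choice can mislead.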
   
Because parametric statistics require an assumed [[probability distribution]], they are not [[Non-parametric statistics|distribution-free]].<ref name="Hoaglin">David C. Hoaglin, Frederick Mosteller and John Tukey, <cite>Understanding Robust and Exploratory Data Analysis</cite>, Wiley-Interscience (2000), ISBN 978-0471384915</ref>
   
== Example ==

'''Parametric inferential statistical methods''' are mathematical procedures for [[statistical hypothesis testing]] which assume that the distributions of the variables being assessed belong to known parametrized families of [[probability distribution]]s. In that case, we speak of a parametric model.
Suppose we have a sample of 99 test scores with a mean of 100 and a standard deviation of 10. If we assume all 99 test scores are random samples from a [[normal distribution]], we predict there is a 1% chance that the 100th test score will be higher than 123.65 (that is, the mean plus 2.365 standard deviations), assuming that the 100th test score comes from the same distribution as the others. Distributions in the normal family all have the same shape and are ''parameterized'' by mean and standard deviation. That means that if you know the mean and standard deviation, and that the distribution is normal, you know the probability of any future observation. Parametric statistical methods are used to compute the 2.365 value above, given 99 [[Independence (probability theory)|independent]] observations from the same normal distribution.
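The 123.65 threshold in this example is just arithmetic on the quoted numbers; the 2.365 multiplier is taken from the text above, not derived here:

```python
mean = 100.0   # sample mean of the 99 scores
sd = 10.0      # sample standard deviation
k = 2.365      # multiplier quoted above for the upper 1% bound

threshold = mean + k * sd   # mean plus 2.365 standard deviations, about 123.65
```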
   
A [[Non-parametric statistics|non-parametric]] estimate of the same thing is the maximum of the first 99 scores. We don't need to assume anything about the distribution of test scores to reason that, before we gave the test, it was equally likely that the highest score would be any of the first 100. Thus there is a 1% chance that the 100th score is higher than all of the 99 that preceded it.

For example, [[analysis of variance]] assumes that the underlying distributions are [[normal distribution|normally]] distributed and that the [[Variance|variances]] of the distributions being compared are similar. The [[Pearson product-moment correlation coefficient]] assumes normality.

While parametric techniques are robust &#8211; that is, they often retain considerable [[statistical power|power]] to detect differences or similarities even when these assumptions are violated &#8211; some distributions violate the assumptions so markedly that a [[non-parametric statistics|non-parametric]] alternative is more likely to detect a difference or similarity.

== Parametric models ==

In [[statistics]], a '''parametric model''' is a parametrized family of [[probability distribution]]s, one of which is presumed to describe the way a population is distributed.

* For each [[real number]] &mu; and each positive number &sigma;<sup>2</sup> there is a [[normal distribution]] whose [[expected value]] is &mu; and whose [[variance]] is &sigma;<sup>2</sup>. Its probability density function is
::<math>\varphi_{\mu,\sigma^2}(x) = {1 \over \sigma}\cdot{1 \over \sqrt{2\pi}} \exp\left( {-1 \over 2} \left({x - \mu \over \sigma}\right)^2\right)</math>
:Thus the family of normal distributions is parametrized by the pair (&mu;, &sigma;<sup>2</sup>). This parametrized family is both an [[exponential family]] and a [[location-scale family]].

* For each positive real number &lambda; there is a [[Poisson distribution]] whose expected value is &lambda;. Its probability mass function is
::<math>f(x) = {\lambda^x e^{-\lambda} \over x!}\ \mathrm{for}\ x\in\{\,0,1,2,3,\dots\,\}.</math>
:Thus the family of Poisson distributions is parametrized by the positive number &lambda;. The family of Poisson distributions is an [[exponential family]].

== History ==

[[Statistician]] [[Jacob Wolfowitz]] coined the statistical term "parametric" in 1942, in order to define its opposite:

"Most of these developments have this feature in common, that the distribution functions of the various [[stochastic]] variables which enter into their problems are assumed to be of known functional form, and the theories of estimation and of testing hypotheses are theories of estimation of and of testing hypotheses about, one or more parameters. . ., the knowledge of which would completely determine the various distribution functions involved. We shall refer to this situation. . .as the parametric case, and denote the opposite case, where the functional forms of the distributions are unknown, as the non-parametric case."<ref name="Wolfowitz">J. Wolfowitz, <cite>Annals of Mathematical Statistics</cite> XIII, p. 264 (1942)</ref>

== See also ==
*[[Parametric equation]]
*[[Parametric model]]

== References ==
{{reflist|colwidth=30em}}

{{DEFAULTSORT:Parametric Statistics}}
[[Category:Parametric statistics| ]]
[[Category:Statistical inference]]

{{enWP|Parametric statistics}}

Latest revision as of 14:05, December 26, 2011





