Psychology Wiki

Goodness of fit

Revision as of 17:43, January 21, 2009 by Dr Joe Kiff (Talk | contribs)


The goodness of fit of a statistical model describes how well it fits a set of observations. Measures of goodness of fit typically summarize the discrepancy between observed values and the values expected under the model in question. Such measures can be used in statistical hypothesis testing, e.g. to test for normality of residuals, to test whether two samples are drawn from identical distributions (see Kolmogorov-Smirnov test), or whether outcome frequencies follow a specified distribution (see Pearson's chi-square test). In the analysis of variance, one of the components into which the variance is partitioned may be a lack-of-fit sum of squares.

Example

The chi-square statistic is a sum of differences between observed and expected outcome frequencies, each squared and divided by the expectation:

 \chi^2 = \sum {\frac{(O - E)^2}{E}}

where:

O = an observed frequency
E = an expected (theoretical) frequency, asserted by the null hypothesis

The resulting value can be compared to the chi-square distribution to determine the goodness of fit.

To determine the degrees of freedom of the chi-squared distribution, take the total number of observed frequencies and subtract one. For example, if there are eight observed frequencies, the statistic is compared to a chi-squared distribution with seven degrees of freedom.
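The calculation above can be sketched in a few lines of Python. The counts below are hypothetical (100 rolls of a die sorted into six categories); the expected frequencies follow from the fair-die null hypothesis:

```python
# Hypothetical data: 100 die rolls sorted into six outcome categories.
observed = [16, 18, 16, 14, 12, 24]
expected = [100 / 6] * 6              # fair-die expectation for each face

# Chi-square statistic: sum of (O - E)^2 / E over all categories.
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Degrees of freedom: number of categories minus one.
df = len(observed) - 1
```

The resulting value of chi2 would then be compared against the chi-squared distribution with df degrees of freedom to judge the fit.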

There is also a reduced chi-squared statistic, in which each term is weighted by the measurement error:

 \chi^2 = \sum {\frac{(O - E)^2}{\sigma^2}}

where \sigma^2 is the variance of the observation. [1]
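A minimal sketch of this weighted form, with made-up measurements and an assumed common measurement variance, differs from the previous statistic only in the denominator:

```python
# Hypothetical measurements with an assumed measurement variance sigma^2.
observed = [10.2, 9.7, 10.5]
expected = [10.0, 10.0, 10.0]
sigma2 = [0.25, 0.25, 0.25]           # assumed variance of each observation

# Each squared residual is divided by sigma^2 instead of the expectation.
chi2_w = sum((o - e) ** 2 / v for o, e, v in zip(observed, expected, sigma2))
```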

Binomial case

A binomial experiment is a sequence of n independent trials, each of which results in one of two outcomes, success or failure, with probability of success denoted by p. Provided that np_i ≫ 1 for every i (where i = 1, 2, ..., k), then

 \chi^2 = \sum_{i=1}^{k} {\frac{(N_i - np_i)^2}{np_i}} = \sum_{\mathrm{all\ cells}}^{} {\frac{(\mathrm{O} - \mathrm{E})^2}{\mathrm{E}}}.

This has approximately a chi-squared distribution with k − 1 df. The fact that df = k − 1 is a consequence of the restriction  \sum N_i=n. There are k observed cell counts; however, once any k − 1 of them are known, the remaining one is uniquely determined. In other words, only k − 1 cell counts are freely determined, so df = k − 1.
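The binomial case (k = 2 cells) can be sketched with hypothetical numbers. The trial count and success probability below are assumptions for illustration, not values from the text:

```python
# Hypothetical binomial experiment: n trials with assumed success probability p.
n, p = 50, 0.4
observed = [26, 24]                   # successes and failures; must sum to n
expected = [n * p, n * (1 - p)]       # cell expectations n*p_i

# Same chi-square form: sum of (N_i - n*p_i)^2 / (n*p_i) over the cells.
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# One cell count is fixed by the restriction sum(N_i) = n, so df = k - 1.
df = len(observed) - 1
```

With k = 2 the statistic is compared to a chi-squared distribution with a single degree of freedom.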


References

  1. Chi-Square Data Fitting
