# Goodness of fit


The **goodness of fit** of a statistical model describes how well it fits a set of observations. Measures of goodness of fit typically summarize the discrepancy between observed values and the values expected under the model in question. Such measures can be used in statistical hypothesis testing, e.g. to test for normality of residuals, to test whether two samples are drawn from identical distributions (see Kolmogorov-Smirnov test), or whether outcome frequencies follow a specified distribution (see Pearson's chi-square test). In the analysis of variance, one of the components into which the variance is partitioned may be a lack-of-fit sum of squares.

## Example

The chi-square statistic is a sum of differences between observed and expected outcome frequencies, each squared and divided by the expectation:

$$\chi^2 = \sum_{i} \frac{(O_i - E_i)^2}{E_i}$$

where:

- *O*_{i} = an observed frequency
- *E*_{i} = an expected (theoretical) frequency, asserted by the null hypothesis

The resulting value can be compared to the chi-square distribution to determine the goodness of fit.

To determine the degrees of freedom of the chi-squared distribution, one takes the total number of observed frequencies and subtracts one. For example, if there are eight different frequencies, one would compare to a chi-squared distribution with seven degrees of freedom.
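As a sketch of the calculation above, the statistic and its degrees of freedom can be computed in plain Python; the eight-category counts below are made-up illustrative data, with a uniform distribution as the null hypothesis:

```python
# Hypothetical observed counts in 8 categories (illustrative data)
observed = [16, 18, 16, 14, 12, 12, 16, 16]

# Null hypothesis: all categories equally likely, so each expected
# frequency is the total count divided by the number of categories
total = sum(observed)
expected = [total / len(observed)] * len(observed)

# Chi-square statistic: sum of (O - E)^2 / E over all categories
stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Degrees of freedom: number of categories minus one
df = len(observed) - 1

print(stat, df)  # compare stat against a chi-squared with df = 7
```

The resulting statistic would then be compared against the chi-squared distribution with seven degrees of freedom to obtain a p-value.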

There is also a reduced chi-squared statistic, which weights each term by the measurement error:

$$\chi^2_{\text{red}} = \frac{1}{\nu} \sum_{i} \frac{(O_i - E_i)^2}{\sigma_i^2}$$

where $\nu$ is the number of degrees of freedom and $\sigma_i^2$ is the variance of observation *i*. ^{[1]}
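A minimal sketch of the error-weighted version, assuming hypothetical measurements, model predictions, and per-point standard errors, and one fitted model parameter:

```python
# Hypothetical data: measurements, model predictions, and the
# standard error (sigma) of each measurement (all illustrative)
observed = [2.1, 3.9, 6.2, 7.8]
expected = [2.0, 4.0, 6.0, 8.0]
sigma    = [0.2, 0.2, 0.3, 0.3]

# Weighted chi-square: each squared residual divided by the
# variance sigma_i^2 of that observation
chi2 = sum((o - e) ** 2 / s ** 2 for o, e, s in zip(observed, expected, sigma))

# Degrees of freedom: data points minus fitted parameters
n_params = 1  # assumed number of fitted model parameters
nu = len(observed) - n_params

chi2_red = chi2 / nu
print(chi2_red)
```

A reduced chi-squared near 1 indicates the model's residuals are consistent with the stated measurement errors; values much larger suggest a poor fit or underestimated errors.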

## Binomial case

A binomial experiment is a sequence of independent trials in which each trial can result in one of two outcomes, success or failure. There are *n* trials, each with probability of success *p*. Provided that *np*_{i} ≫ 1 for every *i* (where *i* = 1, 2, ..., *k*), then

$$\chi^2 = \sum_{i=1}^{k} \frac{(N_i - np_i)^2}{np_i}$$

where *N*_{i} is the observed count in cell *i*. This has approximately a chi-squared distribution with *k* − 1 df. The fact that df = *k* − 1 is a consequence of the restriction $\sum_{i=1}^{k} N_i = n$: there are *k* observed cell counts, but once any *k* − 1 are known, the remaining one is uniquely determined. In other words, only *k* − 1 cell counts are freely determined, thus df = *k* − 1.
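The binomial case can be sketched with the two cells (successes and failures) written out explicitly; the flip counts below are hypothetical:

```python
# Hypothetical binomial experiment: n coin flips, null hypothesis p = 0.5
n, p = 100, 0.5
successes = 58

# k = 2 cells: observed counts N_1 (successes) and N_2 (failures),
# with cell probabilities p and 1 - p under the null hypothesis
counts = [successes, n - successes]
probs = [p, 1 - p]

# Pearson statistic: sum of (N_i - n*p_i)^2 / (n*p_i) over the k cells
chi2 = sum((N - n * pi) ** 2 / (n * pi) for N, pi in zip(counts, probs))

# df = k - 1: once N_1 is known, N_2 = n - N_1 is determined
df = len(counts) - 1

print(chi2, df)
```

Note that the second cell count is fully determined by the first (the counts must sum to *n*), which is exactly why the statistic has *k* − 1 = 1 degree of freedom here.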