# Fisher's exact test



**Fisher's exact test** is a nonparametric statistical test used in the analysis of categorical data when sample sizes are small. It is named after its inventor, R. A. Fisher, and is one of a class of exact tests.

The test is used to examine the significance of the association between two variables in a 2 × 2 contingency table. With large samples, a chi-squared test can be used in this situation. However, the chi-squared test is not suitable when the expected value in any cell of the table is below 5 and there is only one degree of freedom: the sampling distribution of the test statistic is only approximately equal to the theoretical chi-squared distribution, and the approximation is inadequate under these conditions (which arise when sample sizes are small, or the data are very unequally distributed among the cells of the table). The Fisher test is, as its name states, exact, and it can therefore be used regardless of the sample characteristics. It becomes difficult to calculate by hand with large samples or well-balanced tables, but fortunately these are exactly the conditions under which the chi-squared test is appropriate.

The need for the Fisher test arises when we have data that are divided into two categories in two separate ways. For example, a sample of teenagers might be divided into male and female on the one hand, and those that are and are not currently dieting on the other. We hypothesise, perhaps, that the proportion of dieting individuals is higher among the women than among the men, and we want to test whether any difference of proportions that we observe is significant. The data might look like this:

|             | men | women | total |
|-------------|-----|-------|-------|
| dieting     | 1   | 9     | 10    |
| not dieting | 11  | 3     | 14    |
| totals      | 12  | 12    | 24    |

These data would not be suitable for analysis by a chi-squared test, because the expected values in the table are all below 10, and in a 2 × 2 contingency table, the number of degrees of freedom is always 1.
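As a quick check of that claim, the expected count in each cell under independence is the row total times the column total divided by the grand total; a minimal sketch for this example:

```python
# Expected cell counts for the dieting example, assuming independence:
# expected = row_total * column_total / grand_total
row_totals = {"dieting": 10, "not dieting": 14}
col_totals = {"men": 12, "women": 12}
n = 24

expected = {
    (r, c): rt * ct / n
    for r, rt in row_totals.items()
    for c, ct in col_totals.items()
}
print(expected)  # dieting cells: 5.0; not-dieting cells: 7.0
```

All four expected counts (5 and 7) fall below 10, which is why the chi-squared approximation is not trusted here.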

The question we ask about these data is: knowing that 10 of these 24 teenagers are dieters, what is the probability that these 10 dieters would be so unevenly distributed between the girls and the boys? If we were to choose 10 of the teenagers at random, what is the probability that 9 of them would be among the 12 girls, and only 1 from among the 12 boys?

Before we proceed with the Fisher test, we first introduce some notation. We represent the cells by the letters *a, b, c* and *d*, call the totals across rows and columns *marginal totals*, and represent the grand total by *n*. So the table now looks like this:

|             | men   | women | total |
|-------------|-------|-------|-------|
| dieting     | a     | b     | a + b |
| not dieting | c     | d     | c + d |
| totals      | a + c | b + d | n     |

Fisher showed that the probability of obtaining any such set of values is given by the hypergeometric distribution:

$$ p = \frac{(a+b)!\,(c+d)!\,(a+c)!\,(b+d)!}{a!\,b!\,c!\,d!\,n!} $$

where the symbol ! indicates the factorial, i.e. the product of all the positive integers up to the number whose factorial is required.
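The formula can be evaluated directly for the observed table; a minimal sketch using exact rational arithmetic:

```python
from math import factorial
from fractions import Fraction

# Exact probability of the observed table under the hypergeometric formula:
# p = (a+b)! (c+d)! (a+c)! (b+d)! / (a! b! c! d! n!)
a, b, c, d = 1, 9, 11, 3  # observed cell counts from the dieting example
n = a + b + c + d

p = Fraction(
    factorial(a + b) * factorial(c + d) * factorial(a + c) * factorial(b + d),
    factorial(a) * factorial(b) * factorial(c) * factorial(d) * factorial(n),
)
print(float(p))  # ≈ 0.00135
```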

This formula gives the exact probability of observing this particular arrangement of the data, assuming the given marginal totals, under the null hypothesis that the odds ratio for dieting between men and women equals 1 in the population from which our sample was drawn. Fisher showed that we need consider only cases where the marginal totals are the same as in the observed table. In the example, there are 11 such cases. Of these, only one is more extreme in the same direction as our data; it looks like this:

|             | men | women | total |
|-------------|-----|-------|-------|
| dieting     | 0   | 10    | 10    |
| not dieting | 12  | 2     | 14    |
| totals      | 12  | 12    | 24    |

In order to calculate the significance of the observed data, i.e. the total probability of observing data as extreme or more extreme if the null hypothesis is true, we have to calculate the *p* values for both these tables and add them together. This gives a one-tailed test; for a two-tailed test we must also consider tables that are equally extreme but in the opposite direction. Unlike with most statistical tests, the two-tailed significance level is not always exactly twice the one-tailed significance level. In the example above, the one-tailed significance level is 0.0014, and the two-tailed significance level is twice this, since the problem is symmetrical (the same number of boys as girls). Because the calculation of Fisher's exact test involves permuting the observed cell frequencies, it is referred to as a permutation test, one of a broad class of such tests.
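The procedure above can be sketched by enumerating all 11 tables that share the observed margins (indexed by the count of dieting men, 0 through 10) and summing the relevant hypergeometric probabilities:

```python
from math import comb

# Enumerate all 11 tables with the observed margins; a = number of dieting men.
N, K, n_draw = 24, 10, 12  # grand total, total dieters, total men
total = comb(N, n_draw)
probs = {a: comb(K, a) * comb(N - K, n_draw - a) / total for a in range(11)}

p_observed = probs[1]
# One-tailed: tables as extreme or more extreme in the same direction (a <= 1).
one_tailed = sum(p for a, p in probs.items() if a <= 1)
# Two-tailed: tables no more probable than the observed one, in either direction.
two_tailed = sum(p for p in probs.values() if p <= p_observed)

print(round(one_tailed, 4), round(two_tailed, 4))  # 0.0014 0.0028
```

Because the margins here are symmetrical, the two-tailed value comes out as exactly twice the one-tailed value, matching the text.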

Calculating significance values for the Fisher exact test with a calculator is slow and requires care, because the factorial terms quickly become very large, and with larger samples the number of possible tables more extreme than the one observed quickly becomes substantial. Even for small samples (which, fortunately, is where the test is usually needed), the calculations are tedious, but published tables are available. These tables are bulky, because the grand total and two of the four cell sizes have to be specified; given these, the table gives the critical value of the third cell size for specified significance levels. The observed table may have to be re-arranged (for example, by swapping the rows or the columns) to make it compatible with the way the significance levels are tabulated. Most modern statistical packages will calculate the significance of Fisher tests, in some cases even where the chi-squared approximation would also be acceptable.
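For example, SciPy (one such package) provides `scipy.stats.fisher_exact`; a minimal sketch for the dieting table, assuming SciPy is installed:

```python
from scipy.stats import fisher_exact

# Rows: dieting / not dieting; columns: men / women.
table = [[1, 9], [11, 3]]
odds_ratio, p_two_tailed = fisher_exact(table, alternative="two-sided")
# One-tailed in the hypothesised direction: odds ratio < 1 (men diet less).
_, p_one_tailed = fisher_exact(table, alternative="less")

print(round(p_one_tailed, 4), round(p_two_tailed, 4))  # 0.0014 0.0028
```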

This page uses Creative Commons Licensed content from Wikipedia (view authors).