Multiple testing

In statistics, multiple testing refers to the increased risk of Type I errors (false positives) that arises when statistical tests are applied repeatedly, for example when performing multiple comparisons to test null hypotheses stating that the means of several disjoint populations are equal to each other (homogeneous).

Intuitively, even if a particular outcome of an experiment is very unlikely to happen, the fact that the experiment is repeated many times increases the probability that the outcome appears at least once. As an example, if a coin is tossed 10 times and lands on tails all 10 times, this will usually be considered evidence that the coin is biased, because the probability of observing such a series with a fair coin is very low (2^−10 ≈ 10^−3). However, if the same series of ten tails in a row appears somewhere within 10,000 tosses of the same coin, it is more likely to be seen as a random fluctuation in the long series of tosses.
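This intuition is easy to check by simulation. The following Python sketch (our own illustration; the helper name is made up for this example) estimates the probability that a run of 10 consecutive tails appears somewhere in 10,000 tosses of a fair coin; the estimate typically comes out above 0.99, so such a run is all but guaranteed to occur at least once.

    import random

    def has_run_of_tails(n_tosses, run_length=10):
        """Simulate n_tosses fair-coin flips; return True if a run of
        run_length consecutive tails occurs anywhere in the sequence."""
        streak = 0
        for _ in range(n_tosses):
            streak = streak + 1 if random.random() < 0.5 else 0
            if streak >= run_length:
                return True
        return False

    trials = 2000
    hits = sum(has_run_of_tails(10000) for _ in range(trials))
    print("P(run of 10 tails in 10,000 tosses) ~ %.2f" % (hits / trials))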

Experimentwise significance level

If the significance level for a single test is α, the probability of at least one false positive across the whole experiment (the experimentwise significance level) grows rapidly as the number of tests increases. More precisely, assuming all tests are independent, if n tests are performed, the experimentwise significance level is 1 − (1 − α)^n.
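As a quick illustration of this formula (a minimal Python sketch; the function name is ours), tabulating the experimentwise level for a per-test α of 0.05 shows how fast it grows:

    def familywise_error_rate(alpha, n):
        """Probability of at least one false positive among n independent
        tests, each performed at significance level alpha."""
        return 1 - (1 - alpha) ** n

    for n in (1, 5, 10, 20, 100):
        print("n = %3d: experimentwise level = %.3f"
              % (n, familywise_error_rate(0.05, n)))
    # n = 1 gives 0.050, n = 10 gives 0.401, n = 100 gives 0.994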

Thus, in order to retain the same overall rate of false positives in a series of multiple tests, the standard for each individual test must be more stringent. Intuitively, dividing the allowable error rate (alpha) for each comparison by the number of comparisons yields an overall alpha that does not exceed the desired limit, and this can be proved mathematically (this is the Bonferroni correction). For instance, to obtain the usual overall alpha of 0.05 with ten tests, requiring an alpha of 0.005 for each individual test guarantees that the overall alpha does not exceed 0.05.
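A minimal sketch of this correction in Python (the helper name is hypothetical; real analyses would typically use a statistics library): each p-value is compared against α/n instead of α.

    def bonferroni_reject(p_values, alpha=0.05):
        """Reject a null hypothesis only when its p-value is below
        alpha / n; this keeps the probability of any false positive
        (the experimentwise level) at or below alpha."""
        n = len(p_values)
        return [p < alpha / n for p in p_values]

    # Ten tests at an overall alpha of 0.05: each needs p < 0.005.
    print(bonferroni_reject([0.001, 0.004, 0.02, 0.03, 0.2,
                             0.3, 0.4, 0.5, 0.7, 0.9]))
    # -> [True, True, False, False, False,
    #     False, False, False, False, False]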

However, this technique can be conservative (depending on the correlation structure of the tests): the true overall alpha may end up significantly less than 0.05, which raises the rate of false negatives, so that an unnecessarily high percentage of genuine differences in the data goes undetected. This can have important real-world consequences; for instance, it may result in failure to approve a drug that is in fact superior to existing drugs, thereby both depriving patients of an improved therapy and causing the drug company to lose its substantial investment in research and development. For this reason, a great deal of attention has been paid to developing better techniques for multiple testing, so that the overall rate of false positives can be controlled without unnecessarily inflating the rate of false negatives.

Large-scale multiple testing

For large-scale multiple testing (for example, as is very common in genomics when technologies such as DNA microarrays are used), one can instead control the false discovery rate (FDR), defined as the expected proportion of false positives among all tests declared significant (Benjamini & Hochberg, 1995).
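The Benjamini–Hochberg step-up procedure from the reference below is the standard way to control the FDR. A minimal Python sketch of the procedure (our own implementation, for illustration only): sort the p-values, find the largest rank k with p_(k) ≤ (k/m)·q, and reject the k hypotheses with the smallest p-values.

    def benjamini_hochberg(p_values, q=0.05):
        """Benjamini-Hochberg step-up procedure controlling the FDR at
        level q: reject the k hypotheses with the smallest p-values,
        where k is the largest rank with p_(k) <= (k / m) * q."""
        m = len(p_values)
        order = sorted(range(m), key=lambda i: p_values[i])
        k_max = 0
        for rank, i in enumerate(order, start=1):
            if p_values[i] <= rank / m * q:
                k_max = rank
        reject = [False] * m
        for rank, i in enumerate(order, start=1):
            reject[i] = rank <= k_max
        return reject

    print(benjamini_hochberg([0.001, 0.012, 0.025, 0.041, 0.20]))
    # -> [True, True, True, False, False]; a Bonferroni threshold of
    # 0.05 / 5 = 0.01 would reject only the first hypothesis.

Because the rejection threshold grows with rank, the procedure is less conservative than a Bonferroni-style correction while still bounding the expected proportion of false discoveries.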

Bibliography

  • Benjamini, Y., & Hochberg, Y. (1995). Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society, Series B, 57(1), 289–300.
  • Storey, J. D., & Tibshirani, R. (2003). Statistical significance for genome-wide studies. Proceedings of the National Academy of Sciences, 100, 9440–9445.