{{StatsPsy}}

In [[statistics]], the '''Mann–Whitney ''U''''' test (also called the '''Mann–Whitney–Wilcoxon''' ('''MWW'''), '''Wilcoxon rank-sum test''', or '''Wilcoxon–Mann–Whitney''' test) is a [[non-parametric statistics|non-parametric]] test for assessing whether two independent [[sampling (statistics)|samples]] of observations have equally large values. It is one of the best-known non-parametric significance tests. It was proposed initially by [[Frank Wilcoxon]] in 1945, for equal sample sizes, and extended to arbitrary sample sizes and in other ways by [http://www.math.ohio-state.edu/history/biographies/mann/ H. B. Mann] and [[D. R. Whitney]] (1947). MWW is virtually identical to performing an ordinary parametric two-sample [[t test|''t'' test]] on the data after ranking over the combined samples.
==Assumptions and formal statement of hypotheses==
Although Mann and Whitney (1947) developed the MWW test under the assumption of continuous responses with the [[alternative hypothesis]] being that one distribution is [[stochastic dominance|stochastically greater]] than the other, there are many other ways to formulate the [[null hypothesis|null]] and alternative hypotheses such that the MWW test will give a valid test.<ref>{{cite journal | last=Fay | first=MP | coauthors=Proschan, MA | journal=Statistics Surveys | year=2010 | pages=1&ndash;39 | volume=4 | url=http://www.i-journals.org/ss/viewarticle.php?id=51 | title=Wilcoxon&ndash;Mann&ndash;Whitney or t-test? On assumptions for hypothesis tests and multiple interpretations of decision rules}}</ref>

A very general formulation is to assume that:
# All the observations from both groups are [[statistical independence|independent]] of each other,
# The responses are [[ordinal measurement|ordinal]] or continuous measurements (i.e. one can at least say, of any two observations, which is the greater),
# Under the null hypothesis, the probability of an observation from one population (''X'') exceeding an observation from the second population (''Y'') equals the probability of an observation from ''Y'' exceeding an observation from ''X''; that is, there is a symmetry between populations with respect to the probability of randomly drawing a larger observation.
# Under the alternative hypothesis, the probability of an observation from one population (''X'') exceeding an observation from the second population (''Y'') (after correcting for ties) is not equal to 0.5. The alternative may also be stated as a one-sided test, for example: P(''X''&nbsp;>&nbsp;''Y'')&nbsp;+&nbsp;0.5&nbsp;P(''X''&nbsp;=&nbsp;''Y'')&nbsp;>&nbsp;0.5.

If we add stricter assumptions than those above, namely that the responses are continuous and the alternative is a location shift (i.e. ''F''<sub>1</sub>(''x'') =&nbsp;''F''<sub>2</sub>(''x''&nbsp;+&nbsp;''δ'')), then we can interpret a significant MWW test as showing a significant difference in medians. Under this location shift assumption, we can also interpret the MWW as assessing whether the [[Hodges&ndash;Lehmann estimate]] of the difference in central tendency between the two populations is zero. The [[Hodges&ndash;Lehmann estimate]] for this two-sample problem is the [[median]] of all possible differences between an observation in the first sample and an observation in the second sample.

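The Hodges&ndash;Lehmann estimate is straightforward to compute directly from this definition. The following short Python sketch is our own illustration (the function name and the sample values are invented for the example):

<syntaxhighlight lang="python">
import numpy as np

def hodges_lehmann(sample1, sample2):
    """Two-sample Hodges-Lehmann estimate: the median of all
    pairwise differences between the two samples."""
    x = np.asarray(sample1, dtype=float)
    y = np.asarray(sample2, dtype=float)
    diffs = x[:, None] - y[None, :]  # all n1*n2 differences x_i - y_j
    return np.median(diffs)

# Invented illustrative data:
print(hodges_lehmann([1.1, 2.3, 3.5], [0.9, 2.0, 4.2]))  # 0.2
</syntaxhighlight>
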
The general null hypothesis of symmetry between the populations with respect to obtaining a larger observation is sometimes stated more narrowly as both populations having exactly the same distribution. However, such a specific formulation of the MWW test is not consistent with the original formulation of Mann and Whitney (1947), and it leads to problems in interpreting the test's results when the two distributions have different variances: for example, the test will never reject the null hypothesis if both populations are normally distributed with the same mean but different variances. In fact, if we formulate the null hypothesis as ''X'' and ''Y'' having the same distribution, the alternative hypothesis must be that the distributions of ''X'' and ''Y'' are the same except for a shift in location; otherwise the test may have little power (or no power at all) to reject the null hypothesis.

==Calculations==
The test involves the calculation of a [[statistic]], usually called ''U'', whose distribution under the [[null hypothesis]] is known. In the case of small samples, the distribution is tabulated, but for sample sizes above&nbsp;~20 there is a good approximation using the [[normal distribution]]. Some books tabulate statistics equivalent to ''U'', such as the sum of ranks in one of the samples, rather than ''U'' itself.
   
 
The ''U'' test is included in most modern [[List of statistical packages|statistical packages]]. It is also easily calculated by hand, especially for small samples. There are two ways of doing this.
For small samples a direct method is recommended. It is very quick, and gives an insight into the meaning of the ''U'' statistic; a short code sketch illustrating this direct method follows the list below.
# Choose the sample for which the ranks seem to be smaller (the only reason to do this is to make computation easier). Call this "sample 1", and call the other sample "sample 2".
# Taking each observation in sample 1, count the number of observations in sample 2 that are smaller than it (count a half for any that are equal to it).
# The total of these counts is ''U''.
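A minimal Python sketch of this direct method (our own illustrative code, not taken from any package) counts the smaller sample-2 observations for each sample-1 observation, with ties counted as one half:

<syntaxhighlight lang="python">
def u_direct(sample1, sample2):
    """Direct-method Mann-Whitney U: for each observation in sample1,
    count the sample2 observations that are smaller than it,
    counting a half for each tie."""
    u = 0.0
    for x in sample1:
        for y in sample2:
            if y < x:
                u += 1.0
            elif y == x:
                u += 0.5
    return u

# Invented illustrative samples:
print(u_direct([6, 7, 9], [5, 8, 8]))  # 1 + 1 + 3 = 5.0
</syntaxhighlight>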
   
For larger samples, a formula can be used:
# Arrange all the observations into a single ranked series. That is, rank all the observations without regard to which sample they are in.
# Add up the ranks for the observations which came from sample 1. The sum of ranks in sample 2 follows by calculation, since the sum of all the ranks equals ''N''(''N''&nbsp;+&nbsp;1)/2, where ''N'' is the total number of observations.
# ''U'' is then given by:
:::<math>U_1=R_1 - {n_1(n_1+1) \over 2} \,\!</math>

::where ''n''<sub>1</sub> is the sample size for sample 1, and ''R''<sub>1</sub> is the sum of the ranks in sample 1.

::Note that there is no specification as to which sample is considered sample 1. An equally valid formula for ''U'' is

:::<math>U_2=R_2 - {n_2(n_2+1) \over 2}. \,\!</math>

::The smaller value of ''U''<sub>1</sub> and ''U''<sub>2</sub> is the one used when consulting significance tables. The sum of the two values is given by

:::<math>U_1 + U_2 = R_1 - {n_1(n_1+1) \over 2} + R_2 - {n_2(n_2+1) \over 2}. \,\!</math>

::Knowing that ''R''<sub>1</sub>&nbsp;+&nbsp;''R''<sub>2</sub> = ''N''(''N''&nbsp;+&nbsp;1)/2 and ''N'' = ''n''<sub>1</sub>&nbsp;+&nbsp;''n''<sub>2</sub>, and doing some algebra, we find that the sum is

:::<math>U_1 + U_2 = n_1 n_2. \,\!</math>

The maximum value of ''U'' is the product of the two sample sizes; when one ''U'' attains this maximum, the "other" ''U'' is 0. The Mann–Whitney ''U'' is equivalent to the area under the [[receiver operating characteristic]] curve (AUC), which can be readily calculated as

::<math>AUC_1 = {U_1 \over n_1 n_2}.</math>
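The rank-sum formula and the AUC equivalence can be illustrated with a short Python sketch (our own code; <code>scipy.stats.rankdata</code> assigns average ranks to ties, matching the half-counting convention above):

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import rankdata

def u_from_ranks(sample1, sample2):
    """Mann-Whitney U via the rank-sum formula U1 = R1 - n1(n1+1)/2,
    ranking over the combined samples."""
    x = np.asarray(sample1, dtype=float)
    y = np.asarray(sample2, dtype=float)
    n1, n2 = len(x), len(y)
    ranks = rankdata(np.concatenate([x, y]))  # average ranks for ties
    r1 = ranks[:n1].sum()
    u1 = r1 - n1 * (n1 + 1) / 2.0
    u2 = n1 * n2 - u1        # since U1 + U2 = n1 * n2
    auc1 = u1 / (n1 * n2)    # area under the ROC curve for sample 1
    return u1, u2, auc1

# The same invented samples as in the direct method above:
print(u_from_ranks([6, 7, 9], [5, 8, 8]))  # (5.0, 4.0, 0.555...)
</syntaxhighlight>
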
==Examples==
===Illustration of calculation methods===
   
 
Suppose that [[Aesop]] is dissatisfied with his [[The Tortoise and the Hare|classic experiment]] in which one [[tortoise]] was found to beat one [[hare]] in a race, and decides to carry out a significance test to discover whether the results could be extended to tortoises and hares in general. He collects a sample of 6 tortoises and 6 hares, and makes them all run his race. The order in which they reach the finishing post (their rank order, from first to last) is as follows, writing T for a tortoise and H for a hare:
:T H H H H H T T T T T H
What is the value of ''U''?
* Using the direct method, we take each tortoise in turn, and count the number of hares it is beaten by (lower rank), getting 0, 5, 5, 5, 5, 5, which means ''U'' = 25. Alternatively, we could take each hare in turn, and count the number of tortoises it is beaten by. In this case, we get 1, 1, 1, 1, 1, 6. So ''U'' = 6 + 1 + 1 + 1 + 1 + 1 = 11. Note that the sum of these two values for ''U'' is 36, which is 6&nbsp;&times;&nbsp;6.
 
* Using the indirect method:
 
:: the sum of the ranks achieved by the tortoises is 1 + 7 + 8 + 9 + 10 + 11 = 46.
 
:: Therefore ''U'' = 46&nbsp;&minus;&nbsp;(6&times;7)/2 = 46 &minus; 21 = 25.
:: the sum of the ranks achieved by the hares is 2 + 3 + 4 + 5 + 6 + 12 = 32, leading to ''U'' = 32&nbsp;&minus;&nbsp;21 = 11.
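These hand calculations are easy to check in a few lines of Python (our own sketch; the finishing positions serve directly as the ranks):

<syntaxhighlight lang="python">
n1 = n2 = 6
tortoise_ranks = [1, 7, 8, 9, 10, 11]
hare_ranks     = [2, 3, 4, 5, 6, 12]

r_tortoise = sum(tortoise_ranks)                 # 46
u_tortoise = r_tortoise - n1 * (n1 + 1) // 2     # 46 - 21 = 25
u_hare     = n1 * n2 - u_tortoise                # 36 - 25 = 11
print(u_tortoise, u_hare)                        # 25 11
</syntaxhighlight>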
   
===Illustration of object of test===
A second example illustrates the point that the Mann–Whitney does not test for equality of medians. Consider another hare and tortoise race, with 19 participants of each species, in which the outcomes are as follows:
   
:H H H H H H H H H T T T T T T T T T T H H H H H H H H H H T T T T T T T T T

The median tortoise here comes in at position 19, and thus actually beats the median hare, which comes in at position 20. However, the value of ''U'' (for hares) is 100 (using the quick method of calculation described above, we see that each of 10 hares is beaten by 10 tortoises so ''U'' = 10&nbsp;&times;&nbsp;10). Consulting tables, or using the approximation below, shows that this ''U'' value gives significant evidence that hares tend to do better than tortoises (''p''&nbsp;<&nbsp;0.05, two-tailed). Obviously this is an extreme distribution that would be spotted easily, but in a larger sample something similar could happen without it being so apparent. Notice that the problem here is not that the two distributions of ranks have different [[variance]]s; they are mirror images of each other, so their variances are the same, but they have very different [[skewness]].
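These numbers can be reproduced from the finishing positions with a short Python sketch (our own illustration):

<syntaxhighlight lang="python">
import numpy as np

# Finishing positions implied by the sequence above:
# 9 hares, 10 tortoises, 10 hares, 9 tortoises.
hares     = list(range(1, 10))  + list(range(20, 30))   # 19 hares
tortoises = list(range(10, 20)) + list(range(30, 39))   # 19 tortoises

print(np.median(tortoises), np.median(hares))  # 19.0 20.0

# U for hares: for each hare, count the tortoises that beat it.
u_hares = sum(1 for h in hares for t in tortoises if t < h)
print(u_hares)  # 100
</syntaxhighlight>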
==Normal approximation==
For large samples, ''U'' is approximately [[normal distribution|normally distributed]]. In that case, the standardized value

:<math>z = \frac{ U - m_U }{ \sigma_U }, \, </math>

where ''m''<sub>''U''</sub> and ''σ''<sub>''U''</sub> are the mean and standard deviation of ''U'' under the null hypothesis, is approximately a standard normal deviate whose significance can be checked in tables of the normal distribution. ''m''<sub>''U''</sub> and ''σ''<sub>''U''</sub> are given by

:<math>m_U = \frac{n_1 n_2}{2}, \, </math>

:<math>\sigma_U=\sqrt{n_1 n_2 (n_1 + n_2 + 1) \over 12}. \, </math>

The formula for the standard deviation is more complicated in the presence of tied ranks; the full formula is given in the textbooks referenced below. However, if the number of ties is small (and especially if there are no large tie bands), ties can be ignored when doing calculations by hand. Computer statistical packages will use the correctly adjusted formula as a matter of routine.

Note that since ''U''<sub>1</sub> + ''U''<sub>2</sub> = ''n''<sub>1</sub>&nbsp;''n''<sub>2</sub>, the mean ''n''<sub>1</sub>&nbsp;''n''<sub>2</sub>/2 used in the normal approximation is the mean of the two values of ''U''. Therefore, the absolute value of the ''z'' statistic calculated will be the same whichever value of ''U'' is used.

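A minimal Python sketch of this approximation (our own code; it omits the tie correction just discussed):

<syntaxhighlight lang="python">
import math
from scipy.stats import norm

def u_normal_approx(u, n1, n2):
    """z statistic and two-tailed p-value for U via the normal
    approximation, without the tie correction."""
    m_u = n1 * n2 / 2.0
    sigma_u = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - m_u) / sigma_u
    p = 2 * norm.sf(abs(z))  # two-tailed
    return z, p

# The 19-versus-19 race above, with U = 100:
print(u_normal_approx(100, 19, 19))  # z is about -2.35, p about 0.019
</syntaxhighlight>

This agrees with the ''p''&nbsp;<&nbsp;0.05 conclusion drawn for that example.
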
==Relation to other tests==
===Comparison to Student's t-test===
The ''U'' test is useful in the same situations as the [[Student's_t-test#Independent_two-sample_t-test|independent samples]] [[Student's t-test|Student's ''t''-test]], and the question arises of which should be preferred.

;Ordinal data: ''U'' remains the logical choice when the data are ordinal but not interval scaled, so that the spacing between adjacent values cannot be assumed to be constant.
;Robustness: As it compares the sums of ranks,<ref name="Motulsky 2007">H. Motulsky, ''Statistics Guide'', GraphPad Software, 2007, p. 123.</ref> the Mann–Whitney test is less likely than the ''t''-test to spuriously indicate significance because of the presence of [[outlier]]s; i.e. Mann–Whitney is more [[Robust statistics|robust]].
;Efficiency: When normality holds, MWW has an (asymptotic) [[Efficiency (statistics)|efficiency]] of <math>3/\pi</math>, or about 0.95, when compared to the ''t'' test.<ref name="Lehmann 1999">E.L. Lehmann, ''Elements of Large Sample Theory'', Springer, 1999, p. 176.</ref> For distributions sufficiently far from normal and for sufficiently large sample sizes, the MWW can be considerably more efficient than the ''t'' test.<ref name="Conover 1980">W. J. Conover, ''Practical Nonparametric Statistics'', 2nd edition, John Wiley & Sons, 1980, pp. 225&ndash;226.</ref>

Overall, the robustness makes the MWW more widely applicable than the ''t'' test, and for large samples from the normal distribution, the efficiency loss compared to the ''t'' test is only 5%, so one can recommend MWW as the default test for comparing interval or ordinal measurements ''with similar distributions''.

The relation between [[efficiency (statistics)|efficiency]] and [[statistical power|power]] in concrete situations is not trivial, though. For small sample sizes, one should investigate the power of the MWW versus the ''t'' test.

===Different distributions===
If one is only interested in stochastic ordering of the two populations (i.e., the concordance probability P(''Y''&nbsp;>&nbsp;''X'')), the Wilcoxon–Mann–Whitney test can be used even if the shapes of the distributions are different. The concordance probability is exactly equal to the area under the [[receiver operating characteristic]] curve (AUC) that is often used in this context.
If one desires a simple shift interpretation, the ''U'' test should ''not'' be used when the distributions of the two samples are very different, as it can give erroneously significant results.
====Alternatives====
In that situation, the [[Student's_t-test#Unequal_sample_sizes.2C_unequal_variance|unequal variances]] version of the ''t'' test is likely to give more reliable results, but only ''if normality holds''.

Alternatively, some authors (e.g. Conover) suggest transforming the data to ranks (if they are not already ranks) and then performing the ''t'' test on the transformed data, the version of the ''t'' test used depending on whether or not the population variances are suspected to be different; a short sketch of this rank-transform approach follows. Rank transformations do not preserve variances, so it is difficult to see how this would help.

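A sketch of the rank-transform approach on our own invented data (<code>equal_var=False</code> requests the unequal-variances ''t'' test in recent SciPy releases; older releases may lack this keyword):

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import rankdata, ttest_ind

x = [1.1, 2.3, 3.5, 4.0]
y = [0.9, 2.0, 4.2, 5.1]

# Rank over the combined samples, then split the ranks back apart.
ranks = rankdata(np.concatenate([x, y]))
rx, ry = ranks[:len(x)], ranks[len(x):]

# t test on the ranks; equal_var=False gives the unequal-variances version.
print(ttest_ind(rx, ry, equal_var=False))
</syntaxhighlight>
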
The [[Brown–Forsythe test]] has been suggested as an appropriate non-parametric equivalent to the [[F test]] for equal variances.

===Kendall's τ===
 
The ''U'' test is related to a number of other non-parametric statistical procedures. For example, it is equivalent to [[Kendall tau rank correlation coefficient|Kendall's τ]] correlation coefficient if one of the variables is binary (that is, it can only take two values).

===''ρ'' statistic===
A statistic called ''ρ'' that is linearly related to ''U'' and widely used in studies of categorization ([[discrimination learning]] involving [[concept]]s) is calculated by dividing ''U'' by its maximum value for the given sample sizes, which is simply ''n''<sub>1</sub>&nbsp;&times;&nbsp;''n''<sub>2</sub>. ''ρ'' is thus a non-parametric measure of the overlap between two distributions; it can take values between 0 and 1, and it is an estimate of P(''Y''&nbsp;>&nbsp;''X'')&nbsp;+&nbsp;0.5&nbsp;P(''Y''&nbsp;=&nbsp;''X''), where ''X'' and ''Y'' are randomly chosen observations from the two distributions. Both extreme values represent complete separation of the distributions, while a ''ρ'' of 0.5 represents complete overlap. This statistic was first proposed by [[Richard Herrnstein]] (see Herrnstein et al., 1976). The usefulness of the ''ρ'' statistic can be seen in the case of the odd example used above, where two distributions that were significantly different on a ''U''-test nonetheless had nearly identical medians: the ''ρ'' value in this case is approximately 0.723 in favour of the hares, correctly reflecting the fact that even though the median tortoise beat the median hare, the hares collectively did better than the tortoises collectively.

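For the 19-versus-19 race in that example, the hares win 19&nbsp;&times;&nbsp;19&nbsp;&minus;&nbsp;100&nbsp;=&nbsp;261 of the 361 hare&ndash;tortoise pairs, so

:<math>\rho = \frac{361 - 100}{361} \approx 0.723,</math>

in agreement with the value quoted above.
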
==Example statement of results==
In reporting the results of a Mann–Whitney test, it is important to state:
*A measure of the central tendencies of the two groups (means or medians; since the Mann–Whitney is an ordinal test, medians are usually recommended)
*The value of ''U''
*The sample sizes
*The significance level.

In practice some of this information may already have been supplied, and common sense should be used in deciding whether to repeat it. A typical report might run:

:"Median latencies in groups E and C were 153 and 247 ms; the distributions in the two groups differed significantly (Mann–Whitney ''U''&nbsp;=&nbsp;10.5, ''n''<sub>1</sub>&nbsp;=&nbsp;''n''<sub>2</sub>&nbsp;=&nbsp;8, ''P''&nbsp;<&nbsp;0.05, two-tailed)."

A statement that does full justice to the statistical status of the test might run:

:"Outcomes of the two treatments were compared using the Wilcoxon–Mann–Whitney two-sample rank-sum test. The treatment effect (difference between treatments) was quantified using the Hodges–Lehmann (HL) estimator, which is consistent with the Wilcoxon test (ref. 5 below). This estimator (HLΔ) is the median of all possible differences in outcomes between a subject in group B and a subject in group A. A non-parametric 0.95 confidence interval for HLΔ accompanies these estimates, as does ''ρ'', an estimate of the probability that a randomly chosen subject from population B has a higher weight than a randomly chosen subject from population A. The median [quartiles] weights for subjects on treatments A and B respectively are 147 [121, 177] and 151 [130, 180] kg. Treatment A decreased weight by HLΔ = 5 kg (0.95 CL [2, 9] kg, 2''P'' = 0.02, ''ρ'' = 0.58)."

However, it would be rare to find so extended a report in a document whose major topic was not statistical inference.

==Implementations==
* [http://faculty.vassar.edu/lowry/utest.html Online implementation] using JavaScript
* [http://www.alglib.net/statistics/hypothesistesting/mannwhitneyu.php ALGLIB] includes an implementation of the Mann–Whitney ''U'' test in C++, C#, Delphi, Visual Basic, etc.
* [[R (programming language)|R]] includes an implementation of the test (there referred to as the Wilcoxon two-sample test) as <code>wilcox.test</code> (and, in cases of ties in the sample, <code>wilcox.exact</code> in the exactRankTests package, or the <code>exact=FALSE</code> option).
* [[Stata]] includes an implementation of the Wilcoxon–Mann–Whitney rank-sum test as the [http://www.stata.com/help.cgi?ranksum ranksum] command.
* SciPy provides an [http://www.scipy.org/doc/api_docs/SciPy.stats.stats.html#mannwhitneyu implementation] for Python.
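As an example of the SciPy implementation, a minimal call might look like the following sketch (the exact keyword arguments and the convention for which ''U'' is returned vary between SciPy versions, so consult the documentation of the installed version):

<syntaxhighlight lang="python">
from scipy.stats import mannwhitneyu

tortoises = [1, 7, 8, 9, 10, 11]
hares     = [2, 3, 4, 5, 6, 12]

# Older SciPy releases return the smaller U with a one-sided p-value;
# newer ones accept alternative="two-sided".
u, p = mannwhitneyu(tortoises, hares)
print(u, p)
</syntaxhighlight>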
 
==See also==
*[[Kolmogorov–Smirnov test]]
*[[Wilcoxon signed-rank test]]
*[[Kruskal&ndash;Wallis one-way analysis of variance]]

==Notes==
{{Reflist}}
   
 
==References==
* Conover, W. J. (1980). ''Practical Nonparametric Statistics'' (2nd ed.).
* Corder, G. W., & Foreman, D. I. (2009). ''Nonparametric Statistics for Non-Statisticians: A Step-by-Step Approach''.
* Herrnstein, R. J., Loveland, D. H., & Cable, C. (1976). Natural concepts in pigeons. ''Journal of Experimental Psychology: Animal Behavior Processes, 2'', 285&ndash;302.
* Hollander, M., & Wolfe, D. A. (1999). ''Nonparametric Statistical Methods'' (2nd ed.).
* Lehmann, E. L. (1975). ''Nonparametrics: Statistical Methods Based on Ranks''.
* [http://www.math.ohio-state.edu/history/biographies/mann/ Mann, H. B.], & Whitney, D. R. (1947). On a test of whether one of two random variables is stochastically larger than the other. ''Annals of Mathematical Statistics, 18'', 50&ndash;60 ([http://projecteuclid.org/DPubS/Repository/1.0/Disseminate?handle=euclid.aoms/1177730491&view=body&content-type=pdf_1 pdf]).
* Wilcoxon, F. (1945). Individual comparisons by ranking methods. ''Biometrics Bulletin, 1'', 80&ndash;83.
   
 
==External links==
*Table of critical values of ''U'' [http://math.usask.ca/~laverty/S245/Tables/wmw.pdf (pdf)]
*Discussion and table of critical values for the original Wilcoxon rank-sum test, which uses a slightly different test statistic ([http://www.stat.auckland.ac.nz/~wild/ChanceEnc/Ch10.wilcoxon.pdf pdf])
*[http://faculty.vassar.edu/lowry/utest.html Interactive calculator] for ''U'' and its significance
*[http://www.math.ohio-state.edu/history/biographies/mann/ Mann, Henry Berthold] (biography at [[Ohio State University]])
 
   
[[Category:Statistics]]
[[Category:Statistical tests]]
[[Category:Non-parametric statistics]]

<!--
[[de:Mann-Whitney-U-Test]]
[[es:Prueba de Mann-Whitney]]
[[fa:آزمون مان-ویتنی]]
[[it:Test di Wilcoxon-Mann-Whitney]]
[[ja:マン・ホイットニーのU検定]]
[[nl:Wilcoxon]]
[[pl:Test Manna-Whitneya-Wilcoxona]]
[[ru:U-критерий Манна-Уитни]]
[[uk:U-критерій Манна-Уітні]]
{{statistics}}
-->
   
