Bimodal distribution

In statistics, a bimodal distribution is a continuous probability distribution with two different modes. These appear as distinct peaks (local maxima) in the probability density function, as shown in Figure 1.


When the two modes are unequal the larger mode is known as the major mode and the other as the minor mode. The least frequent value between the modes is known as the antimode. The difference between the major and minor modes is known as the amplitude. In time series the major mode is called the acrophase and the antimode the batiphase.


Examples of variables with bimodal distributions include the time between eruptions of certain geysers, the color of galaxies, the size of worker weaver ants, the age of incidence of Hodgkin's lymphoma, the speed of inactivation of the drug isoniazid in US adults, the absolute magnitude of novae, and the circadian activity patterns of those crepuscular animals that are active both in morning and evening twilight.

Important bimodal distributions include the arcsine distribution and the beta distribution (the latter when both of its shape parameters are less than one).

Mixture distributions

Main article: Mixture distribution

A bimodal distribution most commonly arises as a mixture of two different unimodal distributions (i.e. distributions having only one mode). In other words, the bimodally distributed random variable X is defined as  Y with probability  \alpha or  Z with probability  (1-\alpha), where Y and Z are unimodal random variables and 0 < \alpha < 1 is a mixture coefficient. For example, the bimodal distribution of sizes of weaver ant workers shown in Figure 2 arises due to the existence of two distinct classes of workers, namely major workers and minor workers.[1] In this case, Y would be the size of a random major worker, Z the size of a random minor worker, and α the proportion of worker weaver ants that are major workers.
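This two-component mechanism can be sketched in a few lines of Python. The component means, standard deviations, and the 0.4 mixing proportion below are hypothetical values loosely inspired by the ant example, not measurements:

```python
import random

def sample_mixture(alpha, sample_y, sample_z, n):
    """Draw n values from X = Y with probability alpha, Z with probability 1 - alpha."""
    return [sample_y() if random.random() < alpha else sample_z()
            for _ in range(n)]

random.seed(0)
# Hypothetical sizes (mm): "major" workers ~ N(6, 0.5), "minor" workers ~ N(3, 0.4)
sizes = sample_mixture(0.4,
                       lambda: random.gauss(6.0, 0.5),
                       lambda: random.gauss(3.0, 0.4),
                       10_000)
```

A histogram of `sizes` would show two distinct peaks, one per component, with the overall mean (about 0.4·6 + 0.6·3 = 4.2) falling near the antimode between them.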

A mixture of two normal distributions has five parameters to estimate: the two means, the two variances and the mixing parameter. A mixture of two normal distributions with equal standard deviations is bimodal only if their means differ by at least twice the common standard deviation.[2] Estimation of the parameters is simplified if the variances can be assumed to be equal (the homoscedastic case).
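The two-standard-deviations separation rule can be checked numerically. The sketch below (grid range and resolution are arbitrary choices) counts the local maxima of the density of an equal-weight, equal-variance normal mixture:

```python
import math

def mixture_pdf(x, mu1, mu2, sigma, p=0.5):
    """Density of a two-component normal mixture with common sigma."""
    phi = lambda z: math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)
    return (p * phi((x - mu1) / sigma) + (1 - p) * phi((x - mu2) / sigma)) / sigma

def count_modes(mu1, mu2, sigma, p=0.5, steps=2000):
    """Count strict local maxima of the mixture density on a fine grid."""
    lo = min(mu1, mu2) - 4 * sigma
    hi = max(mu1, mu2) + 4 * sigma
    xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    ys = [mixture_pdf(x, mu1, mu2, sigma, p) for x in xs]
    return sum(1 for i in range(1, steps)
               if ys[i] > ys[i - 1] and ys[i] > ys[i + 1])
```

With means 1.5 standard deviations apart the density has a single peak; at 3 standard deviations apart it has two, in line with the stated threshold of 2.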

Mixtures of other distributions require additional parameters to be estimated.

A mixture of two unimodal distributions with differing means is not necessarily bimodal. The combined distribution of heights of men and women is sometimes used as an example of a bimodal distribution, but in fact the difference in mean heights of men and women is too small relative to their standard deviations to produce bimodality.[2]

Bimodal distributions have the peculiar property that, unlike unimodal distributions, the mean may be a more robust sample estimator than the median.[3] This is clearly the case when the distribution is U shaped like the arcsine distribution. It may not be true when the distribution has one or more long tails.

Moments of mixtures

Consider a mixture of two distributions

 f( x ) = p g_1( x ) + ( 1 - p ) g_2( x )

where g_i is a probability density function and p is the mixing parameter.

The moments of f(x) are[4]

 \mu = p \mu_1 + ( 1 - p ) \mu_2
 \nu_2 = p[ \sigma_1^2 + \delta_1^2 ] + ( 1 - p )[ \sigma_2^2 + \delta_2^2 ]
 \nu_3 = p [ S_1 \sigma_1^3 + 3 \delta_1 \sigma_1^2 + \delta_1^3 ] + ( 1 - p )[ S_2 \sigma_2^3 + 3 \delta_2 \sigma_2^2 + \delta_2^3 ]
 \nu_4 = p[ K_1 \sigma_1^4 + 4 S_1 \delta_1 \sigma_1^3 + 6 \delta_1^2 \sigma_1^2 + \delta_1^4 ] + ( 1 - p )[ K_2 \sigma_2^4 + 4 S_2 \delta_2 \sigma_2^3 + 6 \delta_2^2 \sigma_2^2 + \delta_2^4 ]


where

 \mu = \int{ x f( x ) dx }
 \delta_i = \mu_i - \mu
 \nu_r = \int{ ( x - \mu )^r f( x ) dx }

and S_i and K_i are the skewness and kurtosis of the i-th distribution.
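As a check on these formulas, the sketch below evaluates them for an equal-weight mixture of two unit-variance normals (for which each component has S_i = 0 and K_i = 3); the parameter values are illustrative only:

```python
def mixture_moments(p, mu1, s1, S1, K1, mu2, s2, S2, K2):
    """Mean and central moments nu_2..nu_4 of f = p*g1 + (1-p)*g2,
    given each component's mean, sd, skewness S and kurtosis K."""
    mu = p * mu1 + (1 - p) * mu2
    d1, d2 = mu1 - mu, mu2 - mu
    v2 = p * (s1**2 + d1**2) + (1 - p) * (s2**2 + d2**2)
    v3 = (p * (S1 * s1**3 + 3 * d1 * s1**2 + d1**3)
          + (1 - p) * (S2 * s2**3 + 3 * d2 * s2**2 + d2**3))
    v4 = (p * (K1 * s1**4 + 4 * S1 * d1 * s1**3 + 6 * d1**2 * s1**2 + d1**4)
          + (1 - p) * (K2 * s2**4 + 4 * S2 * d2 * s2**3 + 6 * d2**2 * s2**2 + d2**4))
    return mu, v2, v3, v4

# Equal-weight mix of N(-1, 1) and N(1, 1): mu = 0, nu_2 = 2, nu_3 = 0, nu_4 = 10
mu, v2, v3, v4 = mixture_moments(0.5, -1.0, 1.0, 0.0, 3.0, 1.0, 1.0, 0.0, 3.0)
```

Note that the resulting standardised kurtosis, ν₄/ν₂² = 10/4 = 2.5, is below 3, consistent with the necessary condition for bimodality of a symmetric distribution discussed later.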


Multimodal distribution

More generally, a multimodal distribution is a continuous probability distribution with two or more modes, as illustrated in Figure 3.

Summary statistics

Bimodal distributions are a commonly used example of how summary statistics such as the mean, median, and standard deviation can be deceptive when used on an arbitrary distribution. For example, in the distribution in Figure 1, the mean and median would be about zero, even though zero is not a typical value. The standard deviation is also larger than that of either component normal distribution.
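A quick simulation illustrates the point. For an equal mix of N(−3, 1) and N(3, 1) (arbitrary illustrative parameters), each component contributes variance σ² + δ² = 1 + 9 = 10, so the overall standard deviation is about √10 ≈ 3.16, even though each component has standard deviation 1:

```python
import random
import statistics

random.seed(1)
# Equal-weight mix of N(-3, 1) and N(3, 1): symmetric, antimode at 0
x = [random.gauss(-3.0, 1.0) if random.random() < 0.5 else random.gauss(3.0, 1.0)
     for _ in range(20_000)]

m = statistics.mean(x)    # near 0, yet 0 is an atypical value of x
sd = statistics.pstdev(x) # near sqrt(10) ~ 3.16, far above either component's 1
```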

Ashman's D

A statistic that may be useful is Ashman's D:[5]

 D = 2^\frac{ 1 }{ 2 } \frac{ | \mu_1 - \mu_2 | }{ \sqrt{ ( \sigma_1^2 + \sigma_2^2 ) } }

where μ1, μ2 are the means and σ1, σ2 are the standard deviations.

For a mixture of two normal distributions, D > 2 is required for a clean separation of the distributions.
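The statistic translates directly to code; this is a minimal sketch of the formula above:

```python
import math

def ashman_d(mu1, mu2, s1, s2):
    """Ashman's D: mean separation scaled by the root-mean-square of the sds."""
    return math.sqrt(2.0) * abs(mu1 - mu2) / math.sqrt(s1**2 + s2**2)

# Two unit-sd normals with means 2 apart sit exactly at the D = 2 threshold.
```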

Bimodality index

The bimodality index assumes that the distribution is a sum of two normal distributions with equal variances but differing means.[6] It is defined as follows:

 \delta = \frac{ \mu_1 - \mu_2 }{ \sigma }

where μ1, μ2 are the means and σ is the common standard deviation.

 BI = \delta \sqrt{ p( 1 - p ) }

where p is the mixing parameter.
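The two definitions combine into a one-line function. This sketch takes the absolute mean difference, which assumes the components may be given in either order:

```python
import math

def bimodality_index(mu1, mu2, sigma, p):
    """BI of Wang et al.: delta * sqrt(p * (1 - p)),
    with delta = |mu1 - mu2| / sigma (common sd)."""
    delta = abs(mu1 - mu2) / sigma
    return delta * math.sqrt(p * (1 - p))

# Means 2 sd apart in an equal mix: delta = 2, sqrt(0.25) = 0.5, BI = 1.0
```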

Bimodality coefficient

Sarle's bimodality coefficient b is[7]

 b = \frac{ \gamma^2 + 1 }{ \kappa }

where γ is the skewness and κ is the kurtosis. The kurtosis is here defined to be the standardised fourth moment around the mean. The value of b lies between 0 and 1.[8]

The formula for a finite sample is[9]

 b = \frac{ g^2 + 1 }{ k + 3 ( 1 - \frac{ ( n - 1 )^2 }{ ( n - 2 )( n - 3 ) } ) }

where n is the number of items in the sample, g is the sample skewness and k is the sample kurtosis.

The value of b for the uniform distribution is 5/9. This is also its value for the exponential distribution. Values greater than 5/9 may indicate a bimodal or multimodal distribution. The maximum value (1.0) is reached only by a Bernoulli distribution with only two distinct values or the sum of two different Dirac delta functions.
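The finite-sample formula can be coded directly. Conventions for the sample skewness and kurtosis vary; this sketch uses simple population-style standardised moments, with k taken as the excess kurtosis (which reproduces the 5/9 value for a uniform sample in the large-n limit):

```python
import statistics

def sarle_b(x):
    """Finite-sample Sarle's bimodality coefficient (moment conventions assumed)."""
    n = len(x)
    m = statistics.fmean(x)
    s = statistics.pstdev(x)  # population sd, for standardised moments
    g = sum((v - m)**3 for v in x) / (n * s**3)      # sample skewness
    k = sum((v - m)**4 for v in x) / (n * s**4) - 3  # sample excess kurtosis
    return (g * g + 1) / (k + 3 * (n - 1)**2 / ((n - 2) * (n - 3)))
```

A two-point sample (the most extreme bimodal case) scores well above 5/9, while a peaked symmetric sample scores below it.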

The distribution of this statistic is unknown. It is related to a statistic proposed earlier by Pearson: the difference between the kurtosis and the square of the skewness (vide infra).

Statistical tests

Unimodal vs bimodal distribution

A necessary but not sufficient condition for a symmetrical distribution to be bimodal is that the kurtosis be less than three.[10][11] Here the kurtosis is defined to be the standardised fourth moment around the mean. The reference given prefers to use the excess kurtosis, i.e. the kurtosis less 3.

Pearson in 1894 was the first to devise a procedure to test whether a distribution could be resolved into two normal distributions.[12] This method required the solution of a ninth-order polynomial. In a subsequent paper Pearson reported that for any distribution skewness^2 + 1 ≤ kurtosis.[8] Later Pearson showed that[13]

 b_2 - b_1 \ge 1

where b2 is the kurtosis and b1 is the square of the skewness. Equality holds only for the two point Bernoulli distribution or the sum of two different Dirac delta functions. These are the most extreme cases of bimodality possible. The kurtosis in both these cases is 1. Since they are both symmetrical their skewness is 0 and the difference is 1.
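Pearson's inequality is easy to verify empirically, since it also holds for the moments of any finite sample. This sketch computes b2 − b1 from raw data:

```python
def pearson_gap(x):
    """b2 - b1 = kurtosis - squared skewness; >= 1 for any distribution,
    with equality only for two-point distributions."""
    n = len(x)
    m = sum(x) / n
    v = sum((t - m)**2 for t in x) / n
    b1 = (sum((t - m)**3 for t in x) / n)**2 / v**3  # squared skewness
    b2 = sum((t - m)**4 for t in x) / n / v**2       # kurtosis
    return b2 - b1

# A two-point sample attains the bound of exactly 1.
```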

Baker proposed a transformation to convert a bimodal to a unimodal distribution.[14]

Haldane suggested a test based on second central differences.[15]

To test whether a univariate distribution is unimodal or bimodal, Larkin introduced a test based on the F test.[16] Later Bennett instead used a G test.[17]

Tokeshi proposed another test for bimodality.[18][19]

General tests

To test if a distribution is other than unimodal, several additional tests have been devised: the bandwidth test,[20] the dip test,[21] the excess mass test,[22] the MAP test,[23] the mode-existence test,[24] the runt test,[25][26] the span test,[27] and the saddle test.

References

  1. Weber, NA (1946). Dimorphism in the African Oecophylla worker and an anomaly (Hym.: Formicidae). Annals of the Entomological Society of America 39: 7–10.
  2. (2002). Is Human Height Bimodal?. The American Statistician 56 (3): 223–229.
  3. Mosteller F, Tukey JW (1977) Data analysis and regression: a second course in statistics. Reading, Mass, Addison-Wesley Pub Co
  4. Kim T-H, White H (2003) On more robust estimation of skewness and kurtosis: Simulation and application to the S & P 500 index
  5. Ashman KM, Bird CM, Zepf SE (1994) Astronomical J 108: 2348
  6. Wang J, Wen S, Symmans WF, Pusztai L, Coombes KR (2009) The bimodality index: a criterion for discovering and ranking bimodal signatures from cancer gene expression profiling data. Cancer Inform 7:199-216
  7. Ellison AM (1987) Effect of seed dimorphism on the density-dependent dynamics of experimental populations of Atriplex triangularis (Chenopodiaceae). Am J Botany 74(8): 1280-1288
  8. Pearson K (1916) Mathematical contributions to the theory of evolution, XIX: Second supplement to a memoir on skew variation. Phil Trans Roy Soc London. Series A 216 (538–548): 429–457. Bibcode 1916RSPTA.216..429P. doi:10.1098/rsta.1916.0009. JSTOR 91092
  9. Hellwig B, Hengstler JG, Schmidt M, Gehrmann MC, Schormann W, Rahnenführer J (2010) Comparison of scores for bimodality of gene expression distributions and genome-wide evaluation of the prognostic relevance of high scoring genes. BMC Bioinformatics 11:276
  10. Gnedin OY (2010) Quantifying Bimodality.
  11. Muratov AL, Gnedin OY (2010) Modeling the metallicity distribution of globular clusters. Ap J (submitted) arXiv:1002.1325
  12. Pearson K (1894) Contributions to the mathematical theory of evolution: On the dissection of asymmetrical frequency-curves. Phil Trans Roy Soc Series A, Part 1, 185: 71-90
  13. Pearson K (1929) Editorial note. Biometrika 21: 370-375
  14. Baker GA (1930) Transformations of bimodal distributions. Ann Math Stat 1 (4) 334-344
  15. Haldane JBS (1951) Simple tests for bimodality and bitangentiality. Ann Eugenics 16 (1) 359–364 DOI: 10.1111/j.1469-1809.1951.tb02488.x
  16. Larkin RP (1979) An algorithm for assessing bimodality vs. unimodality in a univariate distribution. Behavior Research Methods 11 (4) 467-468 DOI: 10.3758/BF03205709
  17. Bennett SC (1992) Sexual dimorphism of Pteranodon and other pterosaurs, with comments on cranial crests. J Vert Paleont 12 (4) 422-434
  18. Tokeshi M (1992) Dynamics and distribution in animal communities; theory and analysis. Researches in Population Ecology 34:249–273
  19. Barreto S, Borges PAV, Guo Q (2003) A typing error in Tokeshi’s test of bimodality. Global Ecology & Biogeography 12: 173–174
  20. Silverman BW (1981) Using kernel density estimates to investigate multimodality. J Roy Statist Soc Ser B 43:97-99
  21. Hartigan JA, Hartigan PM (1985) The dip test of unimodality. Ann Statist 13 (1) 70-84
  22. Mueller DW, Sawitzki G (1991) Excess mass estimates and tests for multimodality. JASA 86, 738 -746
  23. Rozál GPM Hartigan JA (1994) The MAP test for multimodality. J Classification 11 (1) 5-36 DOI: 10.1007/BF01201021
  24. Minnotte MC (1997) Nonparametric testing of the existence of modes. Ann Statist 25 (4) 1646-1660
  25. Hartigan JA, Mohanty S (1992) The RUNT test for multimodality. J Classification 9: 63-70
  26. Andrushkiw RI, Klyushin DD, Petunin YI (2008) Theory Stoch Processes 14 (1) 1-6
  27. Hartigan JA (1988) The span test of multimodality
This page uses Creative Commons Licensed content from Wikipedia (view authors).
