Uniform distribution (continuous)


Uniform

[Figure: probability density function of the uniform distribution, using the maximum convention at the transition points]
[Figure: cumulative distribution function of the uniform distribution]

Parameters: a, b \in (-\infty,\infty)
Support: a \le x \le b
pdf: \frac{1}{b-a} for a < x < b; 0 for x < a or x > b
cdf: 0 for x < a; \frac{x-a}{b-a} for a \le x < b; 1 for x \ge b
Mean: \frac{a+b}{2}
Median: \frac{a+b}{2}
Mode: any value in [a,b]
Variance: \frac{(b-a)^2}{12}
Skewness: 0
Excess kurtosis: -\frac{6}{5}
Entropy: \ln(b-a)
mgf: \frac{e^{tb}-e^{ta}}{t(b-a)}
Characteristic function: \frac{e^{itb}-e^{ita}}{it(b-a)}

In mathematics, the continuous uniform distribution is a family of probability distributions such that all intervals of the same length within the distribution's support are equally probable.

The continuous uniform distribution is a generalization of the rectangle function because of the shape of its probability density function. It is parametrised by the smallest and largest values that the uniformly-distributed random variable can take, a and b. The probability density function of the uniform distribution is thus:


  f(x)=\begin{cases}
  \frac{1}{b - a} & \text{for } a < x < b, \\
  0 & \text{for } x < a \text{ or } x > b, \\
  \text{see below} & \text{for } x = a \text{ or } x = b.
  \end{cases}

The values at the two boundaries a and b are usually unimportant because they do not alter the values of the integrals of f(x) dx over any interval, nor of x f(x) dx or the like. Sometimes they are chosen to be zero, and sometimes chosen to be 1/(b − a); the latter is appropriate in the context of estimation by the method of maximum likelihood. In the context of Fourier analysis, one may take the value of f(a) or f(b) to be 1/(2(b − a)), since then the inverse transform of many integral transforms of this uniform function will yield back the function itself, rather than a function which is equal "almost everywhere", i.e. except at a finite number of points. This choice is also consistent with the sign function, which has no such ambiguity.

The uniform distribution is normalized:

\int_{-\infty}^\infty f(x)\,dx=1.
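
As an illustrative check, a minimal Python sketch (assuming numpy and scipy are available; the endpoints a = 2 and b = 6 are arbitrary choices) integrates this density numerically:

    import numpy as np
    from scipy import integrate

    a, b = 2.0, 6.0  # illustrative endpoints

    def pdf(x):
        # density is 1/(b - a) on (a, b) and zero elsewhere
        return np.where((x > a) & (x < b), 1.0 / (b - a), 0.0)

    # integrate over a finite interval containing the support;
    # the points argument flags where the integrand is non-smooth
    total, _ = integrate.quad(pdf, a - 1.0, b + 1.0, points=[a, b])
    print(total)  # 1.0 up to quadrature error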

The cumulative distribution function

The cumulative distribution function is:


  F(x)=\begin{cases}
  0 & \text{for } x < a \\
  \frac{x-a}{b-a} & \text{for } a \le x < b \\
  1 & \text{for } x \ge b
  \end{cases}
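
A sketch of this piecewise CDF in Python, checked against scipy's uniform distribution (scipy parametrises it by loc = a and scale = b − a; the endpoints below are illustrative):

    import numpy as np
    from scipy import stats

    a, b = 2.0, 6.0  # illustrative endpoints

    def cdf(x):
        # 0 below a, a linear ramp on [a, b), and 1 at and above b
        return np.clip((x - a) / (b - a), 0.0, 1.0)

    x = np.linspace(a - 2.0, b + 2.0, 17)
    print(np.allclose(cdf(x), stats.uniform(loc=a, scale=b - a).cdf(x)))  # True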

The moment-generating function

The moment-generating function is


M(t) = E\left(e^{tX}\right) = \frac{e^{tb}-e^{ta}}{t(b-a)}

from which we may calculate the raw moments m_k:

m_1=\frac{a+b}{2},
m_2=\frac{a^2+ab+b^2}{3},
m_k=\frac{1}{k+1}\sum_{i=0}^k a^ib^{k-i}.

For a random variable following this distribution, the expected value is then m_1 = (a + b)/2 and the variance is m_2 − m_1^2 = (b − a)^2/12.
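
For example, with the illustrative endpoints a = 2 and b = 6, a short Python sketch of the raw-moment formula confirms both identities:

    a, b = 2.0, 6.0  # illustrative endpoints

    def raw_moment(k):
        # m_k = (1 / (k + 1)) * sum_{i=0}^{k} a^i b^(k - i)
        return sum(a**i * b**(k - i) for i in range(k + 1)) / (k + 1)

    m1, m2 = raw_moment(1), raw_moment(2)
    print(m1, (a + b) / 2)              # both 4.0
    print(m2 - m1**2, (b - a)**2 / 12)  # both 1.333...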

This distribution can be generalized to more complicated sets than intervals. If S is a Borel set of positive, finite measure, the uniform probability distribution on S can be specified by saying that the pdf is zero outside S and constantly equal to 1/K on S, where K is the Lebesgue measure of S.

For n ≥ 2, the nth cumulant of the uniform distribution on the interval [0, 1] is B_n/n, where B_n is the nth Bernoulli number.
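
This identity can be sketched with the sympy library (an assumption here, not part of the original text; the series cutoff of 8 is arbitrary) by expanding the cumulant generating function log M(t):

    import sympy as sp

    t = sp.symbols('t')
    M = (sp.exp(t) - 1) / t  # mgf of the uniform distribution on [0, 1]
    # cumulant generating function, expanded as a power series in t
    K = sp.series(sp.log(M), t, 0, 8).removeO()
    for n in range(2, 8):
        kappa = K.coeff(t, n) * sp.factorial(n)  # nth cumulant
        print(n, kappa, sp.bernoulli(n) / n)     # the last two columns agree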

Standard uniform

Setting a = 0 and b = 1, the resulting distribution is called the standard uniform distribution.

One interesting property of the standard uniform distribution is that if U1 is uniformly distributed,

U_1 \sim \mathrm{UNIFORM}(0,1)

then so is 1-U1:

1 - U_1 \sim \mathrm{UNIFORM}(0,1)
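
A quick simulation illustrating this symmetry (assuming numpy and scipy; the sample size and seed are arbitrary choices):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    u = rng.uniform(size=10_000)
    # both u and 1 - u pass a Kolmogorov-Smirnov test against Uniform(0, 1)
    print(stats.kstest(u, 'uniform').pvalue)
    print(stats.kstest(1 - u, 'uniform').pvalue)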

Relationship to other functions

As long as the same conventions are followed at the transition points, the probability density function may also be expressed in terms of the Heaviside step function:

f(x)=\frac{H(x-a)-H(x-b)}{b-a}

and in terms of the rectangle function

f(x)=\frac{1}{b-a}\,\textrm{rect}\left(\frac{x-\left(\frac{a+b}{2}\right)}{b-a}\right)

The sign function has no ambiguity at its transition point. Using the half-maximum convention at the transition points, the uniform distribution may also be expressed in terms of the sign function:

f(x)=\frac{\textrm{sgn}(x-a)-\textrm{sgn}(x-b)}{2(b-a)}
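
The following Python sketch (assuming numpy; the endpoints a = 2, b = 6 and the grid are illustrative) checks that the Heaviside form with the half-maximum convention H(0) = 1/2 agrees with the sign-function form, including at the transition points:

    import numpy as np

    a, b = 2.0, 6.0  # illustrative endpoints
    x = np.linspace(0.0, 8.0, 9)  # grid that includes x = a and x = b

    # Heaviside form with the half-maximum convention, H(0) = 1/2
    f_heaviside = (np.heaviside(x - a, 0.5) - np.heaviside(x - b, 0.5)) / (b - a)
    # sign-function form, which has the half-maximum convention built in
    f_sign = (np.sign(x - a) - np.sign(x - b)) / (2 * (b - a))

    print(np.allclose(f_heaviside, f_sign))  # True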

Sampling from a uniform distribution

When working with probability, it is often useful to run experiments such as computational simulations. Many programming languages have the ability to generate pseudo-random numbers which are effectively distributed according to the standard uniform distribution.

If u is a value sampled from the standard uniform distribution, then the value a + (b − a)u follows the uniform distribution parametrised by a and b, as described above. Other transformations can be used to generate other statistical distributions from the uniform distribution (see the uses below).
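
A minimal Python sketch of this scale-and-shift transform (the helper name uniform_ab and the endpoints are illustrative, not from the original text):

    import random

    def uniform_ab(a, b):
        # scale and shift a standard-uniform sample onto [a, b]
        return a + (b - a) * random.random()

    print([round(uniform_ab(2.0, 6.0), 3) for _ in range(5)])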

Order statistics

Let X1, ..., Xn be an i.i.d. sample from the uniform distribution on the interval [0, 1]. Let X(k) (with the subscript k enclosed in parentheses) be the kth order statistic from this sample. Then the probability distribution of X(k) is a Beta distribution with parameters k and n − k + 1. The expected value is

\operatorname{E}(X_{(k)}) = {k \over n+1}.

This fact is relevant to Q-Q plots.
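
A Monte Carlo sketch of this result (assuming numpy; the values n = 10, k = 3, the number of replications, and the seed are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 10, 3
    # draw many samples of size n from Uniform(0, 1) and sort each one
    samples = np.sort(rng.uniform(size=(50_000, n)), axis=1)
    kth = samples[:, k - 1]  # kth order statistic of each sample
    print(kth.mean(), k / (n + 1))  # both approximately 0.2727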

Uses of the uniform distribution

In statistics, when a p-value is used as a test statistic for a simple null hypothesis, and the distribution of the test statistic is continuous, then the p-value is uniformly distributed between 0 and 1 if the null hypothesis is true.
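
This can be illustrated by simulation (assuming numpy and scipy; the one-sample t-test, the sample size, and the seed are illustrative choices, not prescribed by the text):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # simulate many datasets under a true null hypothesis (mean 0) and
    # collect the p-value of a one-sample t-test from each dataset
    pvals = [stats.ttest_1samp(rng.normal(0.0, 1.0, size=30), 0.0).pvalue
             for _ in range(2000)]
    # under the null, the p-values should themselves be Uniform(0, 1)
    print(stats.kstest(pvals, 'uniform'))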

Although the uniform distribution is not commonly found in nature, it is particularly useful for sampling from arbitrary distributions.

A general method is the inverse transform sampling method, which uses the cumulative distribution function (CDF) of the target random variable. This method is very useful in theoretical work. Since simulations using this method require inverting the CDF of the target variable, alternative methods have been devised for the cases where the CDF is not known in closed form. One such method is rejection sampling.
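
For instance, the exponential distribution has CDF F(x) = 1 − e^(−λx), whose closed-form inverse is F⁻¹(u) = −ln(1 − u)/λ. A minimal Python sketch of inverse transform sampling for this case (the rate λ = 2, the sample size, and the seed are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(0)
    u = rng.uniform(size=100_000)

    # applying the inverse exponential CDF to uniform samples
    # yields exponential samples
    lam = 2.0
    x = -np.log(1.0 - u) / lam  # u is in [0, 1), so 1 - u is in (0, 1]
    print(x.mean())  # approximately 1 / lam = 0.5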

The normal distribution is an important example where the inverse transform method is not efficient. However, there is an exact method, the Box-Muller transformation, which uses the inverse transform to convert two independent uniform random variables into two independent normally distributed random variables.
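
A minimal Python sketch of the Box-Muller transformation (assuming numpy; the seed and sample size are arbitrary):

    import numpy as np

    def box_muller(u1, u2):
        # map two independent Uniform(0,1) samples to two independent
        # standard normal samples
        r = np.sqrt(-2.0 * np.log(u1))
        theta = 2.0 * np.pi * u2
        return r * np.cos(theta), r * np.sin(theta)

    rng = np.random.default_rng(0)
    n = 100_000
    u1 = 1.0 - rng.uniform(size=n)  # shift to (0, 1] so log(u1) is finite
    u2 = rng.uniform(size=n)
    z1, z2 = box_muller(u1, u2)
    print(z1.mean(), z1.std())  # approximately 0 and 1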

This page uses Creative Commons Licensed content from Wikipedia.
