
In probability theory and statistics, the Laplace distribution is a continuous probability distribution named after Pierre-Simon Laplace. It is also sometimes called the double exponential distribution, because it can be thought of as two exponential distributions (with an additional location parameter) spliced together back-to-back, but the term double exponential distribution is also sometimes used to refer to the Gumbel distribution. The difference between two independent identically distributed exponential random variables is governed by a Laplace distribution, as is a Brownian motion evaluated at an exponentially distributed random time. Increments of Laplace motion or a variance gamma process evaluated over the time scale also have a Laplace distribution.

Characterization

Probability density function

A random variable has a Laplace(μ, b) distribution if its probability density function is

f(x \mid \mu, b) = \frac{1}{2b} \exp\left( -\frac{|x - \mu|}{b} \right).

Here, μ is a location parameter and b > 0, which is sometimes referred to as the diversity, is a scale parameter. If μ = 0 and b = 1, the positive half-line is exactly an exponential distribution scaled by 1/2.

The probability density function of the Laplace distribution is also reminiscent of the normal distribution; however, whereas the normal distribution is expressed in terms of the squared difference from the mean μ, the Laplace density is expressed in terms of the absolute difference from the mean. Consequently, the Laplace distribution has fatter tails than the normal distribution.
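To make the tail comparison concrete, below is a minimal sketch (assuming NumPy and SciPy are available; the parameter values and evaluation points are arbitrary) that evaluates a Laplace density and a normal density matched in mean and variance:

```python
import numpy as np
from scipy import stats

mu, b = 0.0, 1.0
sigma = np.sqrt(2.0) * b            # normal with the same variance, 2*b**2

x = np.array([0.0, 1.0, 3.0, 6.0])  # points at which to compare the densities

laplace_pdf = stats.laplace.pdf(x, loc=mu, scale=b)   # (1/(2b)) * exp(-|x - mu|/b)
normal_pdf = stats.norm.pdf(x, loc=mu, scale=sigma)

print(laplace_pdf)
print(normal_pdf)
# Far from the mean the Laplace density is larger, illustrating its fatter tails.
print(laplace_pdf[-1] > normal_pdf[-1])   # True
```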

Cumulative distribution function

The Laplace distribution is easy to integrate (if one distinguishes two symmetric cases) due to the use of the absolute value function. Its cumulative distribution function is as follows:

F(x) = \begin{cases} \frac{1}{2} \exp\left( \frac{x - \mu}{b} \right) & \text{if } x \le \mu \\ 1 - \frac{1}{2} \exp\left( -\frac{x - \mu}{b} \right) & \text{if } x \ge \mu \end{cases}
     = \frac{1}{2} + \frac{1}{2} \operatorname{sgn}(x - \mu) \left( 1 - \exp\left( -\frac{|x - \mu|}{b} \right) \right).

The inverse cumulative distribution function is given by

F^{-1}(p) = \mu - b \, \operatorname{sgn}(p - 0.5) \, \ln(1 - 2|p - 0.5|), \qquad p \in (0, 1).
Generating random variables according to the Laplace distribution

Given a random variable U drawn from the uniform distribution in the interval (−1/2, 1/2], the random variable

X = \mu - b \, \operatorname{sgn}(U) \, \ln(1 - 2|U|)
has a Laplace distribution with parameters μ and b. This follows from the inverse cumulative distribution function given above.

A Laplace(0, b) variate can also be generated as the difference of two i.i.d. Exponential(1/b) random variables. Equivalently, a Laplace(0, 1) random variable can be generated as the logarithm of the ratio of two iid uniform random variables.
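Both recipes are straightforward to implement. The sketch below (assuming NumPy; the parameter values and seed are arbitrary) draws Laplace(μ, b) samples first by the inverse-CDF transform above and then as a shifted difference of two Exponential(1/b) variables; NumPy's built-in rng.laplace could be used as a cross-check.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, b, n = 2.0, 1.5, 100_000

# Method 1: inverse CDF applied to U ~ Uniform(-1/2, 1/2]
u = rng.uniform(-0.5, 0.5, size=n)
x_inv = mu - b * np.sign(u) * np.log1p(-2.0 * np.abs(u))   # log1p(-2|u|) = ln(1 - 2|u|)

# Method 2: mu plus the difference of two i.i.d. Exponential(1/b) variables
#           (rate 1/b corresponds to scale b in NumPy's parameterisation)
x_diff = mu + rng.exponential(scale=b, size=n) - rng.exponential(scale=b, size=n)

# Both samples should have mean close to mu and variance close to 2*b**2 = 4.5
print(x_inv.mean(), x_inv.var())
print(x_diff.mean(), x_diff.var())
```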

Parameter estimation

Given N independent and identically distributed samples x1, x2, ..., xN, the maximum likelihood estimator

\hat{\mu}

of μ is the sample median,[1] and the maximum likelihood estimator of b is

\hat{b} = \frac{1}{N} \sum_{i=1}^{N} |x_i - \hat{\mu}|

(revealing a link between the Laplace distribution and least absolute deviations).
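As a quick illustration, the sketch below (assuming NumPy; the synthetic data and true parameter values are made up) computes both estimators from a simulated sample. The median arises because maximising the likelihood is equivalent to minimising the sum of absolute deviations Σ |xi − μ|.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.laplace(loc=3.0, scale=2.0, size=10_000)   # synthetic Laplace(3, 2) sample

mu_hat = np.median(data)                        # maximum likelihood estimate of mu
b_hat = np.mean(np.abs(data - mu_hat))          # maximum likelihood estimate of b

print(mu_hat, b_hat)   # should be close to 3.0 and 2.0
```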

Moments

The mean, median and mode of a Laplace(μ, b) distribution are all equal to μ, the variance is 2b², the skewness is 0, and the excess kurtosis is 3. The moment generating function is

M(t) = \frac{e^{\mu t}}{1 - b^2 t^2}, \qquad |t| < 1/b,

and the characteristic function is

\varphi(t) = \frac{e^{i \mu t}}{1 + b^2 t^2}.

Related distributions

  • If X ~ Laplace(μ, b) then kX + c ~ Laplace(kμ + c, |k|b).
  • If X ~ Laplace(0, b) then |X| ~ Exponential(1/b).
  • If X, Y ~ Exponential(λ) are independent then X − Y ~ Laplace(0, 1/λ).
  • If X ~ Laplace(μ, b) then |X − μ| ~ Exponential(1/b).
  • If X ~ Laplace(μ, b) then X ~ EPD(μ, b, 0) (exponential power distribution).
  • If X1, ..., X4 ~ N(0, 1) are independent then X1X2 − X3X4 ~ Laplace(0, 1).
  • If X1, ..., Xn ~ Laplace(μ, b) are independent then (2/b) Σi |Xi − μ| ~ χ²(2n) (chi-squared distribution with 2n degrees of freedom).
  • If X, Y ~ Laplace(μ, b) are independent then |X − μ| / |Y − μ| ~ F(2, 2) (F-distribution).
  • If X, Y ~ U(0, 1) are independent then log(X/Y) ~ Laplace(0, 1).
  • If X ~ Exponential(λ) and Y ~ Bernoulli(0.5) independent of X, then X(2Y − 1) ~ Laplace(0, 1/λ).
  • If X ~ Exponential(λ) and Y ~ Exponential(ν) independent of X, then λX − νY ~ Laplace(0, 1).
  • If V ~ Exponential(1) and Z ~ N(0, 1) independent of V, then μ + b√(2V) Z ~ Laplace(μ, b).
  • If X ~ GeometricStable(2, 0, λ, 0) then X ~ Laplace(0, λ).
  • The Laplace distribution is a limiting case of the hyperbolic distribution.
  • If X|Y ~ Normal(μ, σ = Y) with Y ~ Rayleigh(b) then X ~ Laplace(μ, b).
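Several of the relations above are easy to verify by simulation. The sketch below (assuming NumPy and SciPy; sample size and seed are arbitrary) checks two of them against a Laplace(0, 1) reference with a Kolmogorov–Smirnov test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 50_000

# X1*X2 - X3*X4 with X_i ~ N(0, 1) i.i.d. should be Laplace(0, 1)
z = rng.standard_normal((4, n))
prod_diff = z[0] * z[1] - z[2] * z[3]

# log(X/Y) with X, Y ~ U(0, 1) i.i.d. should also be Laplace(0, 1)
u, v = rng.uniform(size=(2, n))
log_ratio = np.log(u / v)

for sample in (prod_diff, log_ratio):
    ks = stats.kstest(sample, 'laplace', args=(0, 1))
    print(ks.pvalue)   # typically large, i.e. no evidence against Laplace(0, 1)
```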

Relation to the exponential distribution

A Laplace random variable can be represented as the difference of two iid exponential random variables.[2] One way to show this is by using the characteristic function approach: the characteristic function of a sum of independent random variables is the product of their individual characteristic functions, and a characteristic function uniquely determines the distribution.

Consider two i.i.d. random variables X, Y ~ Exponential(λ). The characteristic functions of X and −Y are

\varphi_X(t) = \frac{\lambda}{\lambda - i t} \qquad \text{and} \qquad \varphi_{-Y}(t) = \frac{\lambda}{\lambda + i t},

respectively. Multiplying these characteristic functions (equivalent to taking the characteristic function of the sum X + (−Y)) gives

\varphi_{X - Y}(t) = \frac{\lambda}{\lambda - i t} \cdot \frac{\lambda}{\lambda + i t} = \frac{\lambda^2}{\lambda^2 + t^2}.

This is the same as the characteristic function of Z ~ Laplace(0, 1/λ), which is

\varphi_Z(t) = \frac{1}{1 + t^2 / \lambda^2} = \frac{\lambda^2}{\lambda^2 + t^2}.
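This calculation can also be checked numerically. The sketch below (assuming NumPy; λ and the grid of t values are arbitrary) compares the product of the two exponential characteristic functions with the Laplace(0, 1/λ) characteristic function:

```python
import numpy as np

lam = 2.5
t = np.linspace(-10.0, 10.0, 2001)

phi_X = lam / (lam - 1j * t)       # characteristic function of X ~ Exponential(lam)
phi_negY = lam / (lam + 1j * t)    # characteristic function of -Y
phi_diff = phi_X * phi_negY        # characteristic function of X - Y

phi_laplace = 1.0 / (1.0 + (t / lam) ** 2)   # Laplace(0, b) with b = 1/lam

print(np.allclose(phi_diff, phi_laplace))    # True
```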

Sargan distributions

Sargan distributions are a system of distributions of which the Laplace distribution is a core member. A pth order Sargan distribution has density[3][4]

f_p(x) = \frac{1}{2} \alpha \exp(-\alpha |x|) \, \frac{1 + \sum_{j=1}^{p} \beta_j \alpha^j |x|^j}{1 + \sum_{j=1}^{p} j! \, \beta_j}

for parameters α ≥ 0, βj ≥ 0. The Laplace distribution results for p = 0.

Applications

The Laplacian distribution has been used in speech recognition to model priors on DFT coefficients.[5]

The addition of noise drawn from a Laplace distribution, with scale parameter calibrated to the sensitivity of the query function, to the output of a statistical database query is the most common means of providing differential privacy in statistical databases.
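As a concrete illustration, below is a minimal sketch of the Laplace mechanism (not a production implementation; the counting query, its sensitivity of 1, and the ε value are hypothetical choices for the example):

```python
import numpy as np

rng = np.random.default_rng(3)

def laplace_mechanism(true_answer, sensitivity, epsilon):
    """Return a noisy answer with Laplace(0, sensitivity/epsilon) noise added."""
    return true_answer + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Hypothetical counting query: adding or removing one record changes the count
# by at most 1, so the sensitivity is 1.
true_count = 842
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(noisy_count)
```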

History

This distribution is often referred to as Laplace's first law of errors. He published it in 1774 when he noted that the frequency of an error could be expressed as an exponential function of its magnitude once its sign was disregarded.[6][7]

Laplace in 1778 published his second law of errors wherein he noted that the frequency of an error was proportional to the exponential of the square of its magnitude. This was subsequently rediscovered by Gauss (possibly in 1795) and it is now best known as the Normal distribution.

Keynes published a paper in 1911 based on his earlier thesis wherein he showed that the Laplace distribution minimised the absolute deviation from the median.[8]

See also

References

  1. Robert M. Norton (May 1984). The Double Exponential Distribution: Using Calculus to Find a Maximum Likelihood Estimator. The American Statistician 38 (2): 135–136.
  2. Kotz, S., Kozubowski, T.J., Podgórski, K. (2001) The Laplace Distribution and Generalizations: A Revisit with Applications to Communications, Economics, Engineering and Finance, p. 23 (Proposition 2.2.2, Equation 2.2.8), Birkhäuser.
  3. Everitt, B.S. (2002) The Cambridge Dictionary of Statistics, CUP. ISBN 0-521-81099-X
  4. Johnson, N.L., Kotz S., Balakrishnan, N. (1994) Continuous Univariate Distributions, Wiley. ISBN 0-471-58495-9. p. 60
  5. Eltoft, T., Kim, T., Lee, T.-W. (2006). On the multivariate Laplace distribution. IEEE Signal Processing Letters 13 (5): 300–303.
  6. Laplace, P-S. (1774). Mémoire sur la probabilité des causes par les évènements. Mémoires de l’Academie Royale des Sciences Presentés par Divers Savan, 6, 621–656
  7. Wilson EB (1923) First and second laws of error. JASA 18, 143
  8. Keynes JM (1911) The principal averages and the laws of error which lead to them. J Roy Stat Soc, 74, 322–331



This page uses Creative Commons Licensed content from Wikipedia (view authors).