{{StatsPsy}}
'''Probability''' is a way of expressing knowledge or belief that an [[Event (probability theory)|event]] will occur or has occurred. In [[mathematics]] the concept has been given an exact meaning in [[probability theory]], which is used extensively in such [[areas of study]] as [[mathematics]], [[statistics]], [[finance]], [[gambling]], [[science]], and [[philosophy]] to draw conclusions about the likelihood of potential events and the underlying mechanics of [[complex systems]].
==Interpretations==
{{main|Probability interpretations}}

The word ''probability'' does not have a consistent direct [[definition]]. In fact, there are two broad categories of '''probability interpretations''', whose adherents hold different (and sometimes conflicting) views about the fundamental nature of probability:
#[[Frequentists]] talk about probabilities only when dealing with [[experiments]] that are [[random]] and [[well-defined]]. The probability of a random event denotes the ''relative frequency of occurrence'' of an experiment's outcome when the experiment is repeated. Frequentists consider probability to be the relative frequency "in the long run" of outcomes; a simulation of this reading is sketched after this list.<ref>The Logic of Statistical Inference, Ian Hacking, 1965</ref>
#[[Bayesian probability|Bayesians]], however, assign probabilities to any [[Statement (logic)|statement]] whatsoever, even when no random process is involved. Probability, for a Bayesian, is a way to represent an [[Individual|individual's]] ''degree of belief'' in a statement, given the [[evidence]].
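The frequentist reading lends itself to a short simulation, as promised above. The following is a minimal sketch in Python; the fair coin, the function name, and the trial counts are illustrative assumptions, not part of any cited treatment.

<source lang="python">
import random

def relative_frequency(trials):
    """Estimate P(heads) as the fraction of heads in `trials` fair coin flips."""
    heads = sum(1 for _ in range(trials) if random.random() < 0.5)
    return heads / trials

# The relative frequency settles toward 0.5 as the number of trials grows,
# which is the frequentist "long run" reading of probability.
for n in (10, 1000, 100000):
    print(n, relative_frequency(n))
</source>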
   
==Etymology==
The word ''probability'' [[Derivation (linguistics)|derives]] from ''[[:wiktionary:probity|probity]]'', a measure of the [[authority]] of a [[witness]] in a [[legal case]] in [[Europe]], often correlated with the witness's [[nobility]]. In a sense, this differs greatly from the modern meaning of ''probability'', which, in contrast, is used as a measure of the weight of [[empirical evidence]], and is arrived at from [[inductive reasoning]] and [[statistical inference]].<ref>The Emergence of Probability: A Philosophical Study of Early Ideas about Probability, Induction and Statistical Inference, Ian Hacking, Cambridge University Press, 2006, ISBN 0521685575</ref><ref>The Cambridge History of Seventeenth-century Philosophy, Daniel Garber, 2003</ref>
==History==
{{See|History of probability}} {{See|History of statistics}}

The scientific study of probability is a modern development. [[Gambling]] shows that there has been an interest in quantifying the ideas of probability for millennia, but exact mathematical descriptions of use in those problems arose only much later.
According to Richard Jeffrey, "Before the middle of the seventeenth century, the term 'probable' (Latin ''probabilis'') meant ''approvable'', and was applied in that sense, univocally, to opinion and to action. A probable action or opinion was one such as sensible people would undertake or hold, in the circumstances."<ref name="Jeffrey">Jeffrey, R.C., ''Probability and the Art of Judgment,'' Cambridge University Press. (1992). pp. 54-55. ISBN 0-521-39459-7</ref> However, in legal contexts especially, 'probable' could also apply to propositions for which there was good evidence.<ref name="Franklin">Franklin, J., ''The Science of Conjecture: Evidence and Probability Before Pascal,'' Johns Hopkins University Press. (2001). pp. 22, 113, 127</ref>
Aside from some elementary considerations made by [[Girolamo Cardano]] in the 16th century, the doctrine of probabilities dates to the correspondence of [[Pierre de Fermat]] and [[Blaise Pascal]] (1654). [[Christiaan Huygens]] (1657) gave the earliest known scientific treatment of the subject. [[Jakob Bernoulli|Jakob Bernoulli's]] ''[[Ars Conjectandi]]'' (posthumous, 1713) and [[Abraham de Moivre|Abraham de Moivre's]] ''[[Doctrine of Chances]]'' (1718) treated the subject as a branch of mathematics. See [[Ian Hacking|Ian Hacking's]] ''The Emergence of Probability'' and [[James Franklin (philosopher)|James Franklin's]] ''The Science of Conjecture'' for histories of the early development of the very concept of mathematical probability.
The theory of errors may be traced back to [[Roger Cotes|Roger Cotes's]] ''Opera Miscellanea'' (posthumous, 1722), but a memoir prepared by [[Thomas Simpson]] in 1755 (printed 1756) first applied the theory to the discussion of errors of observation. The reprint (1757) of this memoir lays down the axioms that positive and negative errors are equally probable, and that there are certain assignable limits within which all errors may be supposed to fall; continuous errors are discussed and a probability curve is given.
[[Pierre-Simon Laplace]] (1774) made the first attempt to deduce a rule for the combination of observations from the principles of the theory of probabilities. He represented the law of probability of errors by a curve <math>y = \phi(x)</math>, <math>x</math> being any error and <math>y</math> its probability, and laid down three properties of this curve:
# it is symmetric as to the <math>y</math>-axis;
# the <math>x</math>-axis is an [[asymptote]], the probability of the error <math>\infty</math> being 0;
# the area enclosed is 1, it being certain that an error exists.
He also gave (1781) a formula for the law of facility of error (a term due to Lagrange, 1774), but one which led to unmanageable equations. [[Daniel Bernoulli]] (1778) introduced the principle of the maximum product of the probabilities of a system of concurrent errors.
   
The [[method of least squares]] is due to [[Adrien-Marie Legendre]] (1805), who introduced it in his ''Nouvelles méthodes pour la détermination des orbites des comètes'' (''New Methods for Determining the Orbits of Comets''). In ignorance of Legendre's contribution, an Irish-American writer, [[Robert Adrain]], editor of "The Analyst" (1808), first deduced the law of facility of error,
:<math>\phi(x) = ce^{-h^2 x^2},</math>
<math>h</math> being a constant depending on precision of observation, and <math>c</math> a scale factor ensuring that the area under the curve equals 1. He gave two proofs, the second being essentially the same as [[John Herschel|John Herschel's]] (1850). [[Carl Friedrich Gauss|Gauss]] gave the first proof which seems to have been known in Europe (the third after Adrain's) in 1809. Further proofs were given by Laplace (1810, 1812), Gauss (1823), [[James Ivory (mathematician)|James Ivory]] (1825, 1826), Hagen (1837), [[Friedrich Bessel]] (1838), [[W. F. Donkin]] (1844, 1856), and [[Morgan Crofton]] (1870). Other contributors were Ellis (1844), [[Augustus De Morgan|De Morgan]] (1864), [[James Whitbread Lee Glaisher|Glaisher]] (1872), and [[Giovanni Schiaparelli]] (1875). Peters's (1856) formula for <math>r</math>, the probable error of a single observation, is well known.
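As a check on the formula, the constant <math>c</math> is forced by the requirement of unit area. Using the Gaussian integral <math>\textstyle\int_{-\infty}^{\infty} e^{-h^2 x^2}\,dx = \sqrt{\pi}/h</math>,

:<math>c\int_{-\infty}^{\infty} e^{-h^2 x^2}\,dx = \frac{c\sqrt{\pi}}{h} = 1 \quad\Longrightarrow\quad c = \frac{h}{\sqrt{\pi}},</math>

so Adrain's law of facility of error is what is now called the [[normal distribution]] with standard deviation <math>\sigma = 1/(h\sqrt{2})</math>: the larger the precision constant <math>h</math>, the more tightly the errors concentrate around zero.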
In the nineteenth century authors on the general theory included [[Laplace]], [[Sylvestre Lacroix]] (1816), Littrow (1833), [[Adolphe Quetelet]] (1853), [[Richard Dedekind]] (1860), Helmert (1872), [[Hermann Laurent]] (1873), Liagre, Didion, and [[Karl Pearson]]. [[Augustus De Morgan]] and [[George Boole]] improved the exposition of the theory.
On the geometric side (see [[integral geometry]]) contributors to ''[[The Educational Times]]'' were influential (Miller, Crofton, McColl, Wolstenholme, Watson, and [[Artemas Martin]]).
==Mathematical treatment==
{{See|Probability theory}}
In mathematics, the probability of an [[Event (probability theory)|event]] ''A'' is represented by a real number in the range from 0 to 1 and written as P(''A''), p(''A'') or Pr(''A'').<ref>Olofsson, Peter. (2005) Page 8.</ref> An impossible event has a probability of 0, and a certain event has a probability of 1. However, the converses are not always true: probability 0 events are not always impossible, nor probability 1 events certain. The rather subtle distinction between "certain" and "probability 1" is treated at greater length in the article on "[[almost surely]]".
   
The ''opposite'' or ''complement'' of an event ''A'' is the event [not ''A''] (that is, the event of ''A'' not occurring); its probability is given by {{nowrap|1= P(not ''A'') = 1 - P(''A'')}}.<ref>Olofsson, page 9</ref> As an example, the chance of not rolling a six on a six-sided die is {{nowrap|1 - (chance of rolling a six)}} = <math>{1} - \tfrac{1}{6} = \tfrac{5}{6}</math>. See [[Complementary event]] for a more complete treatment.
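Such complement calculations are easy to verify by exhaustive enumeration. A minimal sketch in Python, with equally likely die faces assumed and an illustrative helper <code>prob</code>:

<source lang="python">
from fractions import Fraction

outcomes = {1, 2, 3, 4, 5, 6}   # faces of a fair six-sided die

def prob(event):
    """P(event) by counting equally likely outcomes."""
    return Fraction(len(event & outcomes), len(outcomes))

six = {6}
# Complement rule: P(not A) = 1 - P(A)
assert prob(outcomes - six) == 1 - prob(six) == Fraction(5, 6)
</source>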
If both the events ''A'' and ''B'' occur on a single performance of an experiment, this is called the intersection or [[Joint distribution|joint probability]] of ''A'' and ''B'', denoted as <math>P(A \cap B)</math>.
If two events, ''A'' and ''B'', are [[Statistical independence|independent]], then the joint probability is
:<math>P(A \mbox{ and }B) = P(A \cap B) = P(A) P(B),\,</math>
for example, if two coins are flipped, the chance of both being heads is <math>\tfrac{1}{2}\times\tfrac{1}{2} = \tfrac{1}{4}.</math><ref>Olofsson, page 35.</ref>
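The product rule can also be checked by simulation. A minimal sketch, assuming two independent fair coins (the function name is illustrative):

<source lang="python">
import random

def estimate_both_heads(trials=100000):
    """Estimate P(two independent fair flips both land heads)."""
    hits = sum(1 for _ in range(trials)
               if random.random() < 0.5 and random.random() < 0.5)
    return hits / trials

# Settles near 1/2 * 1/2 = 1/4 as the number of trials grows.
print(estimate_both_heads())
</source>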
   
If either event ''A'' or event ''B'' or both events occur on a single performance of an experiment, this is called the union of the events ''A'' and ''B'', denoted as <math>P(A \cup B)</math>.
If two events are [[Mutually exclusive events|mutually exclusive]] then the probability of either occurring is
:<math>P(A\mbox{ or }B) = P(A \cup B)= P(A) + P(B).</math>
For example, the chance of rolling a 1 or 2 on a six-sided die is <math>P(1\mbox{ or }2) = P(1) + P(2) = \tfrac{1}{6} + \tfrac{1}{6} = \tfrac{1}{3}.</math>
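Since the two faces cannot occur together, the addition rule reduces to counting. A brief sketch reusing the illustrative counting helper from the complement example:

<source lang="python">
from fractions import Fraction

outcomes = {1, 2, 3, 4, 5, 6}

def prob(event):
    return Fraction(len(event & outcomes), len(outcomes))

# Mutually exclusive union: P(1 or 2) = P(1) + P(2)
assert prob({1, 2}) == prob({1}) + prob({2}) == Fraction(1, 3)
</source>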
If the events are not mutually exclusive then
:<math>\mathrm{P}\left(A \hbox{ or } B\right)=\mathrm{P}\left(A\right)+\mathrm{P}\left(B\right)-\mathrm{P}\left(A \mbox{ and } B\right).</math>
For example, when drawing a single card at random from a regular deck of cards, the chance of getting a heart or a face card (J, Q, K) (or one that is both) is <math>\tfrac{13}{52} + \tfrac{12}{52} - \tfrac{3}{52} = \tfrac{11}{26}</math>, because, of the 52 cards in a deck, 13 are hearts, 12 are face cards, and 3 are both: the 3 cards that are both are included in each of the 13 hearts and the 12 face cards, but should only be counted once.
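The same inclusion-exclusion count can be reproduced programmatically. A minimal sketch that builds the 52-card deck as rank/suit pairs (the representation is an assumption for illustration):

<source lang="python">
from fractions import Fraction
from itertools import product

ranks = ['A', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K']
suits = ['hearts', 'diamonds', 'clubs', 'spades']
deck = set(product(ranks, suits))      # 52 equally likely cards

hearts = {c for c in deck if c[1] == 'hearts'}
faces = {c for c in deck if c[0] in ('J', 'Q', 'K')}

# Inclusion-exclusion: P(A or B) = P(A) + P(B) - P(A and B)
union = Fraction(len(hearts | faces), len(deck))
assert union == Fraction(13, 52) + Fraction(12, 52) - Fraction(3, 52)
assert union == Fraction(11, 26)
</source>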
''[[Conditional probability]]'' is the probability of some event ''A'', given the occurrence of some other event ''B''.
Conditional probability is written ''P''(''A''|''B''), and is read "the probability of ''A'', given ''B''". It is defined by
:<math>P(A \mid B) = \frac{P(A \cap B)}{P(B)}.\,</math><ref>Olofsson, page 29.</ref>
If <math>P(B)=0</math> then <math>P(A \mid B)</math> is [[defined and undefined|undefined]].
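With equally likely outcomes, conditional probability also reduces to counting. A minimal sketch with one die roll (the event names are illustrative):

<source lang="python">
from fractions import Fraction

outcomes = {1, 2, 3, 4, 5, 6}

def prob(event):
    return Fraction(len(event & outcomes), len(outcomes))

even = {2, 4, 6}
over_three = {4, 5, 6}

# P(even | over three) = P(even and over three) / P(over three) = (2/6)/(3/6)
assert prob(even & over_three) / prob(over_three) == Fraction(2, 3)
</source>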
   
{| class="wikitable"
|+Summary of probabilities
|-
!Event!!Probability
|-
|align=center|A||<math>P(A)\in[0,1]\,</math>
|-
|align=center|not A||<math>P(A')=1-P(A)\,</math>
|-
|align=center|A or B
|<math>\begin{align}
P(A\cup B) & = P(A)+P(B)-P(A\cap B) \\
& = P(A)+P(B) \qquad\mbox{if A and B are mutually exclusive}\\
\end{align}</math>
|-
|align=center|A and B
|<math>\begin{align}
P(A\cap B) & = P(A|B)P(B) \\
& = P(A)P(B) \qquad\mbox{if A and B are independent}\\
\end{align}</math>
|-
|align=center|A given B
|<math>P(A \mid B) = \frac{P(A \cap B)}{P(B)}\,</math>
|}
   
==Theory==
{{main|Probability theory}}
Like other [[theory|theories]], the [[probability theory|theory of probability]] is a representation of probabilistic concepts in formal terms—that is, in terms that can be considered separately from their meaning. These formal terms are manipulated by the rules of mathematics and logic, and any results are then interpreted or translated back into the problem domain.
   
There have been at least two successful attempts to formalize probability, namely the [[Kolmogorov]] formulation and the [[Richard Threlkeld Cox|Cox]] formulation. In Kolmogorov's formulation (see [[probability space]]), [[Set (mathematics)|sets]] are interpreted as [[Event (probability theory)|event]]s and probability itself as a [[Measure (mathematics)|measure]] on a class of sets. In [[Cox's theorem]], probability is taken as a primitive (that is, not further analyzed) and the emphasis is on constructing a consistent assignment of probability values to propositions. In both cases, the [[probability axioms|laws of probability]] are the same, except for technical details.
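The Kolmogorov picture can be made concrete with a finite example. The sketch below models a probability space as weighted sample points with a measure on subsets; it is an illustration of the axioms only, not a fragment of any standard library:

<source lang="python">
from fractions import Fraction

# A finite probability space: non-negative weights that sum to 1.
weights = {face: Fraction(1, 6) for face in range(1, 7)}
assert sum(weights.values()) == 1

def measure(event):
    """Probability measure: additive over the sample points of an event."""
    return sum(weights[w] for w in event)

# The axioms in miniature: P(empty set) = 0, P(whole space) = 1,
# and additivity over disjoint events.
assert measure(set()) == 0
assert measure(set(weights)) == 1
assert measure({1, 2}) == measure({1}) + measure({2})
</source>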
There are other methods for quantifying uncertainty, such as the [[Dempster-Shafer theory]] or [[possibility theory]], but those are essentially different and not compatible with the laws of probability as they are usually understood.
   
== Applications ==
Two major applications of probability theory in everyday life are in [[risk]] assessment and in trade on [[commodity markets]]. Governments typically apply probabilistic methods in [[environmental regulation]] where it is called "[[pathway analysis]]", often [[measuring well-being]] using methods that are [[stochastic]] in nature, and choosing projects to undertake based on statistical analyses of their probable effect on the population as a whole.
A good example is the effect of the perceived probability of any widespread Middle East conflict on oil prices - which have ripple effects in the economy as a whole. An assessment by a commodity trader that a war is more likely vs. less likely sends prices up or down, and signals other traders of that opinion. Accordingly, the probabilities are not assessed independently nor necessarily very rationally. The theory of [[behavioral finance]] emerged to describe the effect of such [[groupthink]] on pricing, on policy, and on peace and conflict.
   
It can reasonably be said that the discovery of rigorous methods to assess and combine probability assessments has had a profound effect on modern society. Accordingly, it may be of some importance to most citizens to understand how odds and probability assessments are made, and how they contribute to reputations and to decisions, especially in a [[democracy]].
Another significant application of probability theory in everyday life is [[Reliability theory of aging and longevity|reliability]]. Many consumer products, such as [[automobiles]] and consumer electronics, utilize [[reliability theory]] in the design of the product in order to reduce the probability of failure. The probability of failure may be closely associated with the product's [[warranty]].
== Relation to randomness ==
{{main|Randomness}}
In a [[determinism|deterministic]] universe, based on [[Newtonian mechanics|Newtonian]] concepts, there is no probability if all conditions are known. In the case of a roulette wheel, if the force of the hand and the period of that force are known, then the number on which the ball will stop would be a certainty. Of course, this also assumes knowledge of the inertia and friction of the wheel, and the weight, smoothness and roundness of the ball, variations in hand speed during the turning, and so forth. A probabilistic description can thus be more useful than Newtonian mechanics for analyzing the pattern of outcomes of repeated rolls of a roulette wheel. Physicists face the same situation in the [[kinetic theory]] of gases, where the system, while deterministic ''in principle'', is so complex (with the number of molecules typically on the order of the [[Avogadro constant]], 6.02·10<sup>23</sup>) that only a statistical description of its properties is feasible.
A revolutionary discovery of 20th-century [[physics]] was the random character of all physical processes that occur at sub-atomic scales and are governed by the laws of [[quantum mechanics]]. The [[wave function]] itself evolves deterministically as long as no observation is made, but, according to the prevailing [[Copenhagen interpretation]], the randomness caused by the [[wave function collapse|collapse of the wave function]] upon observation is fundamental. This means that [[probability theory]] is required to describe nature. Others never came to terms with the loss of determinism. [[Albert Einstein]] famously [[:de:Albert Einstein#Quellenangaben und Anmerkungen|remarked]] in a letter to [[Max Born]]: ''Jedenfalls bin ich überzeugt, daß der Alte nicht würfelt.'' ("I am convinced that God does not play dice.") Although alternative viewpoints exist, such as that of [[quantum decoherence]] being the cause of an ''apparent'' random collapse, at present there is a firm consensus among [[physicist|physicists]] that probability theory is necessary to describe quantum phenomena.{{Fact|date=February 2008}}
== See also ==
{{main|Outline of probability}}
{{multicol}}
* [[Black Swan theory]]
* [[Calculus of predispositions]]
* [[Chance (Fortune)]]
* [[Chaos theory]]
* [[Class membership probabilities]]
* [[Decision theory]]
* [[Equiprobable]]
* [[Fuzzy measure theory]]
* [[Game theory]]
* [[Gaming mathematics]]
* [[Hypothesis testing]]
{{multicol-break}}
* [[Information theory]]
* [[List of publications in statistics#Probability|Important publications in probability]]
* [[List of scientific journals in probability]]
* [[List of statistical topics]]
* [[Intrinsic random event]]
* [[Measure theory]]
* [[Negative probability]]
{{multicol-break}}
* [[Predictability (measurement)]]
* [[Probabilistic argumentation]]
* [[Probabilistic logic]]
* [[Probability axioms]]
* [[Probability density function]]
* [[Probability distribution]]
* [[Probability learning]]
* [[Probability judgement]]
* [[Random fields]]
* [[Random variable]]
* [[Response probability]]
* [[Risk]]
* [[Risk analysis]]
* [[Risk assessment]]
* [[Statistical probability]]
* [[Statistics]]
* [[Stochastic process]]
{{multicol-end}}
 
   
== Notes ==
<references/>

== References ==
* [[Olav Kallenberg|Kallenberg, O.]] (2005) ''Probabilistic Symmetries and Invariance Principles''. Springer-Verlag, New York. 510 pp. ISBN 0-387-25115-4
* Kallenberg, O. (2002) ''Foundations of Modern Probability,'' 2nd ed. Springer Series in Statistics. 650 pp. ISBN 0-387-95313-2
* Olofsson, Peter (2005) ''Probability, Statistics, and Stochastic Processes'', Wiley-Interscience. 504 pp. ISBN 0-471-67969-0

== Quotations ==
* [[Damon Runyon]], "It may be that the race is not always to the swift, nor the battle to the strong - but that is the way to bet."
* [[Pierre-Simon Laplace]], "It is remarkable that a science which began with the consideration of games of chance should have become the most important object of human knowledge." ''Théorie Analytique des Probabilités,'' 1812.
* [[Richard von Mises]], "The unlimited extension of the validity of the exact sciences was a characteristic feature of the exaggerated rationalism of the eighteenth century" (in reference to Laplace). ''Probability, Statistics, and Truth,'' p. 9. Dover edition, 1981 (republication of second English edition, 1957).
== External links ==
{{no footnotes|date=September 2008}}
{{wikibooks|Probability}}
*[http://wiki.stat.ucla.edu/socr/index.php/EBook Probability and Statistics EBook]
*[[Edwin Thompson Jaynes]]. ''Probability Theory: The Logic of Science''. Preprint: Washington University, (1996). — [http://omega.albany.edu:8008/JaynesBook.html HTML index with links to PostScript files] and [http://bayes.wustl.edu/etj/prob/book.pdf PDF] (first three chapters)
*[http://www.economics.soton.ac.uk/staff/aldrich/Figures.htm People from the History of Probability and Statistics (Univ. of Southampton)]
*[http://www.economics.soton.ac.uk/staff/aldrich/Probability%20Earliest%20Uses.htm Probability and Statistics on the Earliest Uses Pages (Univ. of Southampton)]
*[http://jeff560.tripod.com/stat.html Earliest Uses of Symbols in Probability and Statistics] on [http://jeff560.tripod.com/mathsym.html Earliest Uses of Various Mathematical Symbols]
*[http://www.celiagreen.com/charlesmccreery/statistics/bayestutorial.pdf A tutorial on probability and Bayes' theorem devised for first-year Oxford University students]
*[http://ubu.com/historical/young/index.html pdf file of An Anthology of Chance Operations (1963)] at [[UbuWeb]]
*[http://probability.infarom.ro Probability Theory Guide for Non-Mathematicians]
*[http://www.bbc.co.uk/raw/money/express_unit_risk/ Understanding Risk and Probability] with BBC raw

{{Mathematics-footer}}
{{Statistics|hide}}

[[Category:Probability and statistics]]
[[Category:Probability| ]]
 
[[Category:Applied mathematics]]
 
 
[[Category:Decision theory]]
 
 
   
<!--
 
[[ar:احتمال]]
[[bs:Vjerovatnoća]]
[[bg:Вероятност]]
[[ca:Probabilitat]]
[[cs:Pravděpodobnost]]
[[de:Wahrscheinlichkeit]]
[[et:Tõenäosus]]
[[es:Probabilidad]]
[[eo:Probablo]]
[[fa:احتمالات]]
[[fr:Probabilité]]
[[ko:확률]]
[[hi:प्रायिकता]]
[[io:Probableso]]
[[it:Probabilità]]
[[he:הסתברות]]
[[la:Probabilitas]]
[[lv:Varbūtība]]
[[mt:Probabbiltà]]
[[nl:Kans (statistiek)]]
[[ja:確率]]
[[ka:ალბათობა]]
[[no:Sannsynlighet]]
[[pl:Prawdopodobieństwo]]
[[simple:Probability]]
[[sk:Pravdepodobnosť]]
[[sl:Verjetnost]]
[[sr:Вероватноћа]]
[[su:Probabilitas]]
[[fi:Todennäköisyys]]
[[sv:Sannolikhet]]
[[ta:நிகழ்தகவு]]
[[th:ความน่าจะเป็น]]
[[vi:Xác suất]]
[[tr:Olasılık]]
[[uk:Ймовірність]]
[[ur:احتمال (عام)]]
[[vec:Probabiłità]]
[[zh:概率]]
[[zh-yue:或然率]]
-->
 
{{enWP|probability}}
