# Classical interpretation of probability


The classical definition of probability is identified with the works of Pierre-Simon Laplace. As stated in his Théorie analytique des probabilités,

The probability of an event is the ratio of the number of cases favorable to it, to the number of all cases possible when nothing leads us to expect that any one of these cases should occur more than any other, which renders them, for us, equally possible.

This definition is essentially a consequence of the principle of indifference. If elementary events are assigned equal probabilities, then the probability of a disjunction of elementary events is just the number of events in the disjunction divided by the total number of elementary events.
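Under the principle of indifference, computing a probability therefore reduces to counting cases. A minimal sketch in Python (the die example is illustrative, not drawn from the text above):

```python
from fractions import Fraction

def classical_probability(favorable, possible):
    """Ratio of favorable cases to all equally possible cases."""
    return Fraction(len(favorable), len(possible))

# Elementary events for one fair die: the six faces.
die = {1, 2, 3, 4, 5, 6}

# The disjunction "the roll is even" contains three elementary events.
even = {2, 4, 6}

print(classical_probability(even, die))  # 1/2
```

Using `Fraction` keeps the result as an exact ratio, mirroring Laplace's "fraction whose numerator is the number of favorable cases."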

The classical definition of probability was called into question by several writers of the nineteenth century, including John Venn and George Boole. The frequentist definition of probability became widely accepted as a result of their criticism, and especially through the works of R.A. Fisher. The classical definition enjoyed a revival of sorts due to the general interest in Bayesian probability, in which the classical definition is seen as a special case of the more general Bayesian definition.

## History

As a mathematical subject, the theory of probability arose very late compared to, say, geometry, despite prehistoric evidence of people playing with dice in cultures all over the world. In fact, we know the exact year of its birth: in 1654 Blaise Pascal corresponded with his father's friend Pierre de Fermat about two problems concerning games of chance, which he had heard earlier the same year from the Chevalier de Méré, whom Pascal had accompanied on a trip.

One problem was the so-called problem of points, already a classic at the time (treated by Luca Pacioli as early as 1494), dealing with the question of how to divide the stakes fairly when a game is interrupted halfway through. The other concerned a mathematical rule of thumb that seemed to fail when a game of dice was extended from one die to two. This second problem, or paradox, was de Méré's own discovery and showed, according to him, how dangerous it was to apply mathematics to reality. During the trip they also discussed other mathematical and philosophical issues and paradoxes, which de Méré felt strengthened his general philosophical view.
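The dice problem is traditionally reported in the following form, which is the standard historical reconstruction rather than something spelled out in the text above: de Méré's rule of thumb suggested that if betting on at least one six in four throws of a die is favorable, then betting on at least one double-six in 4 × 6 = 24 throws of two dice should be equally favorable. Classical counting shows otherwise:

```python
from fractions import Fraction

# P(at least one six in 4 throws of one die) = 1 - (5/6)^4
p_one_die = 1 - Fraction(5, 6) ** 4

# P(at least one double-six in 24 throws of two dice) = 1 - (35/36)^24
p_two_dice = 1 - Fraction(35, 36) ** 24

print(float(p_one_die))   # ≈ 0.5177, a favorable bet
print(float(p_two_dice))  # ≈ 0.4914, an unfavorable bet
```

The first bet wins slightly more often than it loses, while the second loses slightly more often than it wins, contrary to the proportional rule of thumb.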

Pascal, being a mathematician, was provoked by de Méré's philosophical view of mathematics as something beautiful, perfect, and flawless, but poorly connected to the imperfect world we call reality, and therefore often useless in practice. Pascal became determined to prove de Méré wrong by solving both problems within pure mathematics, and he did. When he learned that Fermat, already recognized as a distinguished mathematician, had arrived at the same answers by other methods, he was convinced they had solved the problems conclusively. This correspondence circulated among other scholars of the time and marks the point at which mathematicians in general began to take an interest in problems arising from games of chance.

This does not mean, however, that Pascal and Fermat had a clear concept of probability, nor that they made the first correct calculations concerning games of chance. No clear distinction was yet drawn between probabilities and expected values, for example. The first person to see the need for a clear definition of what we mean by probability was Laplace. As late as 1814 he stated:

The theory of chance consists in reducing all the events of the same kind to a certain number of cases equally possible, that is to say, to such as we may be equally undecided about in regard to their existence, and in determining the number of cases favorable to the event whose probability is sought. The ratio of this number to that of all the cases possible is the measure of this probability, which is thus simply a fraction whose numerator is the number of favorable cases and whose denominator is the number of all the cases possible.

This is, finally, the classical interpretation of probability explicitly stated.