Information entropy



[Figure: Binary entropy plot. Entropy of a Bernoulli trial as a function of success probability.]

Entropy is a concept in thermodynamics (see thermodynamic entropy), statistical mechanics and information theory. The concepts of information and entropy have deep links with one another, although it took many years for the development of the theories of statistical mechanics and information theory to make this apparent. This article is about information entropy, the information-theoretic formulation of entropy. Information entropy is occasionally called Shannon's entropy in honor of Claude E. Shannon.

Introduction

The concept of entropy in information theory describes how much randomness (or, alternatively, 'uncertainty') there is in a signal or random event. An alternative way to look at this is to ask how much information is carried by the signal.

For example, consider some English text, encoded as a string of letters, spaces, and punctuation (so our signal is a string of characters). Since some characters are quite rare (e.g. 'z') while others are very common (e.g. 'e'), the string of characters is not as random as it could be. On the other hand, we cannot predict exactly what the next character will be: it is, to some degree, 'random'. Entropy is a measure of this randomness, suggested by Shannon in his 1948 paper "A Mathematical Theory of Communication".

Shannon offers a definition of entropy which satisfies the assumptions that:

  • The measure should be continuous - i.e. changing the value of one of the probabilities by a very small amount should only change the entropy by a small amount.
  • If all the outcomes (letters in the example above) are equally likely then increasing the number of letters should always increase the entropy.
  • We should be able to make the choice (in our example of a letter) in two steps, in which case the entropy of the final result should be a weighted sum of the entropies of the two steps.

(Note: Shannon and Weaver refer to Tolman (1938), who in turn credits Pauli (1933) with the definition of entropy that Shannon uses. Elsewhere in the statistical mechanics literature, von Neumann is credited with having derived the same form of entropy in 1927, and it was von Neumann who favoured re-using the existing term 'entropy'.)

Formal definitions

Claude E. Shannon defines entropy in terms of a discrete random event x, with possible states (or outcomes) 1..n, as:

H(x)=\sum_{i=1}^np(i)\log_2 \left(\frac{1}{p(i)}\right)=-\sum_{i=1}^np(i)\log_2 p(i).\,\!

That is, the entropy of the event x is the sum, over all possible outcomes i of x, of the product of the probability of outcome i and the log of the inverse of the probability of i (this log of the inverse probability is also called i's surprisal, so the entropy of x is the expected value of its outcomes' surprisal). We can also apply this to a general probability distribution, rather than a discrete-valued event.
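
This definition translates directly into code. The following Python sketch (the helper name shannon_entropy is chosen here purely for illustration) computes the entropy of a discrete distribution in bits:

  import math

  def shannon_entropy(probs):
      # Sum of p(i) * log2(1/p(i)); outcomes with p == 0 contribute nothing,
      # since x log x -> 0 as x -> 0.
      return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

  print(shannon_entropy([0.5, 0.5]))   # fair coin: 1.0 bit
  print(shannon_entropy([1.0]))        # certain outcome: 0.0 bits
  print(shannon_entropy([0.25] * 4))   # four equally likely outcomes: 2.0 bits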

Shannon shows that any definition of entropy satisfying his assumptions will be of the form:

-K\sum_{i=1}^np(i)\log p(i).\,\!

where K is a constant (and is really just a choice of measurement units).

Shannon defined a measure of entropy (H = − p1 log2 p1 − … − pn log2 pn) that, when applied to an information source, could determine the minimum channel capacity required to reliably transmit the source as encoded binary digits. The formula can be derived by calculating the mathematical expectation of the amount of information contained in a digit from the information source. Shannon's entropy measure came to be taken as a measure of the uncertainty about the realization of a random variable. It thus served as a proxy for the concept of information contained in a message, as opposed to the portion of the message that is strictly determined (hence predictable) by inherent structures, such as redundancy in the structure of languages or the statistical properties relating to the occurrence frequencies of letter or word pairs, triplets, etc. (see Markov chain).
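
As a concrete illustration of entropy as a lower bound on binary digits per symbol, the following Python sketch (a toy construction for illustration, not Shannon's own) builds Huffman code lengths for a short string and compares the average code length with the entropy of the symbol frequencies; the average length lies between H and H + 1:

  import heapq
  import math
  from collections import Counter

  def huffman_code_lengths(freqs):
      # Each heap entry is (weight, tiebreak, {symbol: depth}); the unique
      # tiebreak keeps tuples comparable when weights are equal.
      heap = [(w, i, {s: 0}) for i, (s, w) in enumerate(freqs.items())]
      heapq.heapify(heap)
      tiebreak = len(heap)
      while len(heap) > 1:
          w1, _, a = heapq.heappop(heap)
          w2, _, b = heapq.heappop(heap)
          merged = {s: d + 1 for s, d in {**a, **b}.items()}  # deepen both subtrees
          heapq.heappush(heap, (w1 + w2, tiebreak, merged))
          tiebreak += 1
      return heap[0][2]  # symbol -> code length in bits

  text = "abracadabra"
  counts = Counter(text)
  total = sum(counts.values())
  lengths = huffman_code_lengths(counts)
  avg_bits = sum(counts[s] * lengths[s] for s in counts) / total
  entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
  print(entropy, avg_bits)  # ~2.04 vs ~2.09 bits per symbol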

Shannon's definition of entropy is closely related to thermodynamic entropy as defined by physicists and many chemists. Boltzmann and Gibbs did considerable work on statistical thermodynamics, which became the inspiration for adopting the word entropy in information theory. There are relationships between thermodynamic and informational entropy. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon's information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information (needed to define the detailed microscopic state of the system) that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. (See article: MaxEnt thermodynamics). Similarly, Maxwell's demon reverses thermodynamic entropy with information; but if it is itself bound by the laws of thermodynamics, getting rid of that information exactly balances out the thermodynamic gain the demon would otherwise achieve.

It is important to remember that entropy is a quantity defined in the context of a probabilistic model for a data source. Independent fair coin flips have an entropy of 1 bit per flip. A source that always generates a long string of A's has an entropy of 0, since the next character will always be an 'A'.

The entropy rate of a data source is the average number of bits per symbol needed to encode it. Empirically, the entropy of English text is between 1.1 and 1.6 bits per character, though it clearly varies from one text source to another. Experiments with human predictors show an information rate of 1.1 or 1.6 bits per character, depending on the experimental setup; the PPM compression algorithm can achieve a compression rate of 1.5 bits per character.

From the preceding example, note the following points:

  1. The amount of entropy is not always an integer number of bits.
  2. Many data bits may not convey information. For example, data structures often store information redundantly, or have identical sections regardless of the information in the data structure.

Entropy effectively bounds the performance of the strongest lossless (or nearly lossless) compression possible, which can be realized in theory by using the typical set or in practice using Huffman, Lempel-Ziv or arithmetic coding. The performance of existing data compression algorithms is often used as a rough estimate of the entropy of a block of data.
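
For instance, the length of a general-purpose compressor's output gives a rough upper estimate of the entropy of a block of data, as in this Python sketch using the standard-library zlib module:

  import os
  import zlib

  def compressed_bits_per_byte(data):
      # DEFLATE output size per input byte: a rough upper estimate of entropy,
      # inflated slightly by the container's fixed overhead.
      return 8 * len(zlib.compress(data, 9)) / len(data)

  print(compressed_bits_per_byte(b"A" * 10000))       # predictable source: far below 1
  print(compressed_bits_per_byte(os.urandom(10000)))  # random bytes: about 8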

A common way to define entropy for text is based on the Markov model of text. For an order-0 source (each character is selected independently of the preceding characters), the binary entropy is:

H(\mathcal{S}) = - \sum p_i \log_2 p_i, \,\!

where pi is the probability of character i. For a first-order Markov source (one in which the probability of selecting a character depends only on the immediately preceding character), the entropy rate is:

H(\mathcal{S}) = - \sum_i p_i \sum_j  \  p_i (j) \log_2 p_i (j), \,\!

where i is a state (determined by the preceding character) and p_i(j) is the probability of j given i as the previous character.

For a second order Markov source, the entropy rate is

 H(\mathcal{S}) = -\sum_i p_i \sum_j p_i(j) \sum_k p_{i,j}(k)\ \log \  p_{i,j}(k). \,\!
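
These entropy rates can be estimated from a sample of text by counting character and bigram frequencies. The Python sketch below (helper names are illustrative) computes the order-0 entropy and the first-order entropy rate; conditioning on the preceding character lowers the estimate, as expected:

  import math
  from collections import Counter

  def order0_entropy(text):
      # H = -sum_i p_i log2 p_i over single-character frequencies.
      n = len(text)
      return -sum((c / n) * math.log2(c / n) for c in Counter(text).values())

  def order1_entropy_rate(text):
      # H = -sum_i p_i sum_j p_i(j) log2 p_i(j), where state i is the
      # preceding character and p_i(j) is estimated from bigram counts.
      pairs = Counter(zip(text, text[1:]))
      state_totals = Counter(text[:-1])
      n = len(text) - 1
      h = 0.0
      for (i, j), c in pairs.items():
          p_state = state_totals[i] / n   # p_i
          p_trans = c / state_totals[i]   # p_i(j)
          h -= p_state * p_trans * math.log2(p_trans)
      return h

  sample = "the theory of entropy measures the uncertainty of a source " * 20
  print(order0_entropy(sample))       # around 4 bits per character
  print(order1_entropy_rate(sample))  # lower: context reduces uncertainty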

In general the b-ary entropy of a source \mathcal{S} = (S,P) with source alphabet S = {a1, …, an} and discrete probability distribution P = {p1, …, pn}, where pi is the probability of ai (say pi = p(ai)), is defined by:

 H_b(\mathcal{S}) = - \sum_{i=1}^n p_i \log_b p_i \,\!

Note: the b in "b-ary entropy" is the number of different symbols of the "ideal alphabet" which is used as a standard yardstick to measure source alphabets. In information theory, two symbols are necessary and sufficient for an alphabet to encode information; therefore the default is to let b = 2 ("binary entropy"). Thus, the entropy of the source alphabet, with its given empirical probability distribution, is a number equal to the (possibly fractional) number of symbols of the "ideal alphabet", with an optimal probability distribution, needed to encode each symbol of the source alphabet. Also note that "optimal probability distribution" here means the uniform distribution: a source alphabet with n symbols has the highest possible entropy (for an alphabet with n symbols) when its probability distribution is uniform. This optimal entropy turns out to be  \log_b \, n .
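
A small Python sketch (the helper name is illustrative) of the b-ary entropy, confirming that the uniform distribution attains log_b n while a skewed one falls short:

  import math

  def b_ary_entropy(probs, b=2):
      # H_b(S) = -sum_i p_i log_b p_i; b = 2 gives the binary entropy.
      return -sum(p * math.log(p, b) for p in probs if p > 0)

  uniform = [0.25] * 4
  skewed = [0.7, 0.1, 0.1, 0.1]
  for b in (2, 4, 10):
      print(b, b_ary_entropy(uniform, b), math.log(4, b))  # equal: log_b 4
  print(b_ary_entropy(skewed))  # strictly less than log2 4 = 2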

Another way to define the entropy function H (not using the Markov model) is to prove that H is uniquely determined (as mentioned earlier, up to the choice of unit) if and only if H satisfies conditions 1) - 3):

1) H(p1, …, pn) is defined and continuous for all p1, …, pn with pi \in [0,1] for all i = 1, …, n and p1 + … + pn = 1. (Note that the function depends solely on the probability distribution, not on the alphabet.)

2) For all positive integers n, H satisfies


H\underbrace{\left(\frac{1}{n}, \ldots, \frac{1}{n}\right)}_{n\ \mathrm{arguments}} < H\underbrace{\left(\frac{1}{n+1}, \ldots, \frac{1}{n+1}\right).}_{n+1\ \mathrm{arguments}}

3) For positive integers bi where b1 + … + bk = n, H satisfies


H\underbrace{\left(\frac{1}{n}, \ldots, \frac{1}{n}\right)}_n = H\underbrace{\left(\frac{b_1}{n}, \ldots, \frac{b_k}{n}\right)}_k + \sum_{i=1}^k \frac{b_i}{n} H\underbrace{\left(\frac{1}{b_i}, \ldots, \frac{1}{b_i}\right)}_{b_i}.
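
These conditions can be checked numerically. The following Python sketch verifies condition 3) for n = 6 split into groups of sizes b1 = 2 and b2 = 4:

  import math

  def H(probs):
      # Shannon entropy in bits.
      return -sum(p * math.log2(p) for p in probs if p > 0)

  n, groups = 6, [2, 4]  # b_1 + b_2 = n
  lhs = H([1 / n] * n)   # H_n(1/n, ..., 1/n) = log2 6
  rhs = H([b / n for b in groups]) + sum((b / n) * H([1 / b] * b) for b in groups)
  print(lhs, rhs)  # both ~2.585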

Efficiency

A source alphabet encountered in practice will usually have a probability distribution which is less than optimal. If the source alphabet has n symbols, then it can be compared to an "optimized alphabet" with n symbols, whose probability distribution is uniform. The ratio of the entropy of the source alphabet to the entropy of its optimized version is the efficiency of the source alphabet, which can be expressed as a percentage.

This implies that the efficiency of a source alphabet with n symbols can be defined simply as its n-ary entropy.
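
In Python (helper name illustrative), the efficiency is simply the binary entropy divided by log2 n, expressed as a percentage:

  import math

  def efficiency(probs):
      # Entropy of the source divided by the entropy of a uniform alphabet
      # of the same size n; this equals the n-ary entropy of the source.
      n = len(probs)
      h = -sum(p * math.log2(p) for p in probs if p > 0)
      return 100 * h / math.log2(n)

  print(efficiency([0.25] * 4))            # 100.0: already optimal
  print(efficiency([0.7, 0.1, 0.1, 0.1]))  # about 67.8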

Derivation of Shannon's entropy

Since the entropy was given as a definition, it does not need to be derived. On the other hand, a "derivation" can be given which gives a sense of the motivation for the definition as well as the link to thermodynamic entropy.

Q. Given a roulette wheel with n pockets which are all equally likely to be landed on by the ball, what is the probability of obtaining a distribution (A1, A2, …, An), where Ai is the number of times pocket i was landed on and

 P = \sum_{i=1}^n A_i \,\!

is the total number of ball-landing events?

A. The probability is a multinomial distribution, viz.

 p = {\Omega \over T} = {P! \over A_1! \ A_2! \ A_3! \ \dots \ A_n!} \left(\frac1n\right)^P \,\!

where

 \Omega = {P! \over A_1! \ A_2! \ A_3! \ \dots \ A_n!} \,\!

is the number of possible combinations of outcomes (for the events) which fit the given distribution, and

 T = n^P \

is the number of all possible combinations of outcomes for the set of P events.

Q. And what is the entropy?

A. The entropy of the distribution is obtained from the logarithm of Ω:

 H = \log \Omega = \log \frac{P!}{A_1! \ A_2! \ A_3! \dots \ A_n!} \,\!
 = \log P! - \log A_1! - \log A_2! - \log A_3! - \dots - \log A_n! \
 = \sum_{i=1}^P \log i - \sum_{i=1}^{A_1} \log i - \sum_{i=1}^{A_2} \log i - \dots - \sum_{i=1}^{A_n} \log i \,\!

The summations can be approximated closely by being replaced with integrals:

 H = \int_1^P \log x \, dx - \int_1^{A_1} \log x \, dx - \int_1^{A_2} \log x \, dx - \dots - \int_1^{A_n} \log x \, dx. \,\!

The integral of the logarithm is

 \int \log x \, dx = x \log x - \int x \, {dx \over x} = x \log x - x. \,\!

So the entropy is

 H = (P \log P - P + 1) - (A_1 \log A_1 - A_1 + 1) - (A_2 \log A_2 - A_2 + 1) - \dots - (A_n \log A_n - A_n + 1)
 = (P \log P + 1) - (A_1 \log A_1 + 1) - (A_2 \log A_2 + 1) - \dots - (A_n  \log A_n + 1)
 = P \log P - \sum_{x=1}^n A_x \log A_x + (1 - n) \,\!

Change Ax to px = Ax/P and change P to 1 (in order to measure the "bias" or "unevenness" in the probability distribution of the pockets for a single event), then

 H = (1 - n) - \sum_{x=1}^n p_x \log p_x \,\!

and the term (1 − n) can be dropped since it is a constant, independent of the px distribution. The result is

 H = - \sum_{x=1}^n p_x \log p_x \,\!.

Thus, the Shannon entropy is a consequence of the equation

 H = \log \Omega \

which relates to Boltzmann's definition,

 \mathcal{S} = K \ln \Omega ,

of thermodynamic entropy.
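
The derivation can be checked numerically: as the number of events P grows, log Ω divided by P approaches the Shannon entropy of the pocket probabilities. A Python sketch, using lgamma to evaluate the factorials:

  import math

  def log2_multinomial(counts):
      # log2 of P! / (A_1! ... A_n!), computed via lgamma to avoid overflow.
      P = sum(counts)
      lg = math.lgamma(P + 1) - sum(math.lgamma(a + 1) for a in counts)
      return lg / math.log(2)

  probs = [0.5, 0.3, 0.2]
  H = -sum(p * math.log2(p) for p in probs)
  for P in (10, 100, 10000):
      counts = [round(p * P) for p in probs]
      print(P, log2_multinomial(counts) / P, H)  # ratio approaches H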

Properties of Shannon's information entropy

We write H(X) as Hn(p1,...,pn). The Shannon entropy satisfies the following properties:

  • For any n, Hn(p1,...,pn) is a continuous and symmetric function on variables p1, p2,...,pn.
  • An event of probability zero does not contribute to the entropy, i.e. for any n,
H_{n+1}(p_1,\ldots,p_n,0) = H_n(p_1,\ldots,p_n).
  • Entropy is maximized when the probability distribution is uniform. For all n,
H_n(p_1,\ldots,p_n) \leq H_n\Big(\frac{1}{n},\ldots,\frac{1}{n} \Big).

This follows from Jensen's inequality,

H(X) = E\Big[\log_b \Big( \frac{1}{p(X)}\Big) \Big] \leq \log_b \Big( E\Big[ \frac{1}{p(X)} \Big] \Big) = \log_b(n).
  • If p_{ij}, 1\leq i \leq m,  1\leq j \leq n are non-negative real numbers summing up to one, and q_i = \sum_{j=1}^n p_{ij}, then
H_{mn}(p_{11},\ldots, p_{mn}) = H_m(q_1,\ldots,q_m) + \sum_{i=1}^m q_i H_n\Big(\frac{p_{i1}}{q_i},\ldots, \frac{p_{in}}{q_i} \Big).

If we partition the mn outcomes of the random experiment into m groups with each group containing n elements, we can do the experiment in two steps: first, determine the group to which the actual outcome belongs; then, find the outcome within that group. The probability that you will observe group i is qi. The conditional probability distribution function for group i is (pi1/qi, ..., pin/qi). The entropy

H_n\Big(\frac{p_{i1}}{q_i},\ldots, \frac{p_{in}}{q_i} \Big)

is the entropy of the probability distribution conditioned on group i. This property means that the total information is the sum of the information gained in the first step, Hm(q1,...,qm), and a weighted sum of the entropies conditioned on each group.
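
This grouping property can also be verified numerically, as in the following Python sketch with m = 2 groups of n = 3 outcomes each:

  import math

  def H(probs):
      return -sum(p * math.log2(p) for p in probs if p > 0)

  # Joint probabilities p_ij, one row per group; they sum to one.
  p = [[0.10, 0.25, 0.05],
       [0.20, 0.10, 0.30]]
  q = [sum(row) for row in p]  # group probabilities q_i

  lhs = H([pij for row in p for pij in row])  # H_mn over all mn outcomes
  rhs = H(q) + sum(q[i] * H([pij / q[i] for pij in p[i]]) for i in range(len(p)))
  print(lhs, rhs)  # equal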

Khinchin in 1957 showed that the only function satisfying the above assumptions is of the form

H_n(p_1,\ldots,p_n) = -k \sum_{i=1}^n p_i \log p_i,

where k is a positive constant representing the desired unit of measurement.

Deriving continuous entropy from discrete entropy: the Boltzmann entropy

The Shannon entropy is restricted to finite sets. It seems that the formula

h[f] = -\int_{-\infty}^{\infty} f(x) \log f(x) dx, (*)

where f denotes a probability density function on the real line, is analogous to the Shannon entropy and could thus be viewed as an extension of the Shannon entropy to the domain of real numbers. Formula (*) is usually referred to as the Boltzmann entropy, continuous entropy, or differential entropy. Although the analogy between the two functions is suggestive, the following question must be asked: is the Boltzmann entropy a valid extension of the Shannon entropy? To answer this question, we must establish a connection between the two functions:

We wish to obtain a generally finite measure as the bin size goes to zero. In the discrete case, the bin size is the (implicit) width of each of the n (finite or infinite) bins whose probabilities are denoted by pn. As we generalize to the continuous domain, we must make this width explicit.

To do this, start with a continuous function f discretized into bins of size Δ. By the mean-value theorem, there exists a value xi in each bin such that

f(x_i) \Delta = \int_{i\Delta}^{(i+1)\Delta} f(x) dx

and thus the integral of the function f can be approximated (in the Riemannian sense) by

\int_{-\infty}^{\infty} f(x) dx = \lim_{\Delta \to 0} \sum_{i = -\infty}^{\infty} f(x_i) \Delta

where the limit Δ → 0 expresses the bin size going to zero.

We will denote

H^{\Delta} :=- \sum_{i=-\infty}^{\infty} \Delta f(x_i) \log \Delta f(x_i)

and expanding the logarithm, we have

H^{\Delta} = - \sum_{i=-\infty}^{\infty} \Delta f(x_i) \log \Delta f(x_i)
 = - \sum_{i=-\infty}^{\infty} \Delta f(x_i) \log f(x_i) -\sum_{i=-\infty}^{\infty} f(x_i) \Delta \log \Delta

As \Delta \to 0, we have

\sum_{i=-\infty}^{\infty} f(x_i) \Delta \to \int f(x) dx = 1

and so

\sum_{i=-\infty}^{\infty} \Delta f(x_i) \log f(x_i) \to \int f(x) \log f(x) dx.

But note that \log \Delta \to -\infty as \Delta \to 0, therefore we need a special definition of the differential or continuous entropy:

h[f] = \lim_{\Delta \to 0} \left[H^{\Delta} + \log \Delta\right] = -\int_{-\infty}^{\infty} f(x) \log f(x) dx,

which is, as said before, referred to as the Boltzmann entropy. This means that the Boltzmann entropy is not a limit of the Shannon entropy for n → ∞; consequently, it is not itself a measure of uncertainty or information.
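
The limit above can be observed numerically. For a standard Gaussian density the differential entropy is (1/2) log2(2 π e) ≈ 2.047 bits, and H^Δ + log2 Δ approaches it as the bin width Δ shrinks. A Python sketch, assuming a truncation of the real line to [-10, 10]:

  import math

  def gaussian_pdf(x):
      return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

  def h_delta_plus_log_delta(delta, span=10.0):
      # H^Delta + log2(Delta), with bin probabilities approximated as f(x_i) * Delta.
      h = 0.0
      x = -span
      while x < span:
          p = gaussian_pdf(x + delta / 2) * delta
          if p > 0:
              h -= p * math.log2(p)
          x += delta
      return h + math.log2(delta)

  exact = 0.5 * math.log2(2 * math.pi * math.e)
  for delta in (1.0, 0.1, 0.01):
      print(delta, h_delta_plus_log_delta(delta), exact)  # converges to ~2.047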

This article incorporates material from Shannon's entropy on PlanetMath, which is licensed under the GFDL.


This page uses Creative Commons Licensed content from Wikipedia (view authors).
