- The topic of this article is distinct from the topics of Library and information science and Information technology.
Information theory is a field of mathematics that considers three fundamental questions:
- Lossless data compression: How much can data be compressed so that another person can recover an identical copy of the uncompressed data?
- Lossy data compression: How much can data be compressed so that another person can recover an approximate copy of the uncompressed data?
- Channel capacity: How quickly can data be communicated to someone else through a noisy medium?
These somewhat abstract questions are answered quite rigorously by using mathematics introduced by Claude Shannon in 1948. His paper spawned the field of information theory, and the results have been crucial to the success of the Voyager missions to deep space, the invention of the CD, the feasibility of mobile phones, analysis of the code used by DNA, and numerous other fields.
Overview
Information theory is the mathematical theory of data communication and storage, generally considered to have been founded in 1948 by Claude E. Shannon. The central paradigm of classic information theory is the engineering problem of the transmission of information over a noisy channel. The most fundamental results of this theory are Shannon's source coding theorem, which establishes that on average the number of bits needed to represent the result of an uncertain event is given by the entropy; and Shannon's noisy-channel coding theorem, which states that reliable communication is possible over noisy channels provided that the rate of communication is below a certain threshold called the channel capacity. The channel capacity is achieved with appropriate encoding and decoding systems.
Information theory is closely associated with a collection of pure and applied disciplines that have been carried out under a variety of banners in different parts of the world over the past half century or more: adaptive systems, anticipatory systems, artificial intelligence, complex systems, complexity science, cybernetics, informatics, machine learning, along with systems sciences of many descriptions. Information theory is a broad and deep mathematical theory, with equally broad and deep applications, chief among them coding theory.
Coding theory is concerned with finding explicit methods, called codes, of increasing the efficiency and fidelity of data communication over a noisy channel, up close to the limit that Shannon proved is the best possible. These codes can be roughly subdivided into data compression codes and error-correction codes. It took many years to find the good codes whose existence Shannon proved. A third class of codes are cryptographic ciphers; concepts from coding theory and information theory are widely used in cryptography and cryptanalysis; see the article on the deciban for an interesting historical application.
Information theory is also used in information retrieval, intelligence gathering, gambling, statistics, and even musical composition.
Mathematical theory of information
- For a more thorough discussion of these basic equations, see Information entropy.
The abstract idea of what "information" really is must be made more concrete so that mathematicians can analyze it.
Self-information
Shannon defined a measure of information content called the self-information or surprisal of a message m:
- $ I(m) = - \log p(m),\, $
where $ p(m) = Pr(M=m) $ is the probability that message m is chosen from all possible choices in the message space $ M $.
This equation causes messages with lower probabilities to contribute more to the overall value of I(m). In other words, infrequently occurring messages carry more information. (This is a consequence of the property of logarithms that $ -\log p(m) $ is very large when $ p(m) $ is near 0 for unlikely messages and very small when $ p(m) $ is near 1 for almost certain messages.)
For example, if John says "See you later, honey" to his wife every morning before leaving for the office, that message holds little "content" or "value". But if he shouts "Get lost" at his wife one morning, then that message holds much more value or content (because, supposedly, the probability of him choosing that message is very low).
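The surprisal is simple to compute directly from this definition. Below is a minimal Python sketch (the function name `self_information` is ours, not from any standard library), measuring information in bits:

```python
import math

def self_information(p, base=2):
    """Surprisal I(m) = -log p(m); in bits when base = 2."""
    if not 0 < p <= 1:
        raise ValueError("p must lie in (0, 1]")
    return -math.log(p, base)

# An almost-certain message ("See you later, honey") carries little information,
routine = self_information(0.99)    # about 0.014 bits
# while an unlikely one ("Get lost") carries much more.
surprise = self_information(0.001)  # about 9.97 bits
```

A message with probability 1/2 carries exactly one bit, which is the sense in which the bit is the natural unit here.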
Entropy
The entropy of a discrete message space $ M $ is a measure of the amount of uncertainty one has about which message will be chosen. It is defined as the average self-information of a message $ m $ from that message space:
- $ H(M) = \mathbb{E} \{I(m)\} = \sum_{m \in M} p(m) I(m) = -\sum_{m \in M} p(m) \log p(m). $
The logarithm in the formula is usually taken to base 2, and entropy is measured in bits. An important property of entropy is that it is maximized when all the messages in the message space are equiprobable. In this case $ H(M) = \log |M| $.
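As a numeric sanity check, the entropy formula can be evaluated directly. This Python sketch (the helper name `entropy` is ours) confirms that the equiprobable case attains $ \log_2 |M| $:

```python
import math

def entropy(probs, base=2):
    """H(M) = -sum p(m) log p(m); terms with p = 0 contribute nothing."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

# Four equiprobable messages: H = log2(4) = 2 bits, the maximum possible.
uniform = entropy([0.25, 0.25, 0.25, 0.25])
# Any non-uniform distribution over the same message space has lower entropy.
skewed = entropy([0.7, 0.1, 0.1, 0.1])
```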
Joint entropy
The joint entropy of two discrete random variables $ X $ and $ Y $ is defined as the entropy of the joint distribution of $ X $ and $ Y $:
- $ H(X, Y) = \mathbb{E}_{X,Y} [-\log p(x,y)] = - \sum_{x, y} p(x, y) \log p(x, y) \, $
If $ X $ and $ Y $ are independent, then the joint entropy is simply the sum of their individual entropies.
(Note: The joint entropy is not to be confused with the cross entropy, despite similar notation.)
Conditional entropy (equivocation)
Given a particular value of a random variable $ Y $, the conditional entropy of $ X $ given $ Y=y $ is defined as:
- $ H(X|y) = \mathbb{E}_{{X|Y}} [-\log p(x|y)] = -\sum_{x \in X} p(x|y) \log p(x|y) $
where $ p(x|y) = \frac{p(x,y)}{p(y)} $ is the conditional probability of $ x $ given $ y $.
The conditional entropy of $ X $ given $ Y $, also called the equivocation of $ X $ about $ Y $, is then given by:
- $ H(X|Y) = \mathbb E_Y \{H(X|y)\} = -\sum_{y \in Y} p(y) \sum_{x \in X} p(x|y) \log p(x|y) = \sum_{x,y} p(x,y) \log \frac{p(y)}{p(x,y)}. $
A basic property of the conditional entropy is that:
- $ H(X|Y) = H(X,Y) - H(Y) .\, $
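This identity is easy to verify numerically. The following Python sketch uses a small made-up joint distribution (the numbers are purely illustrative) and checks $ H(X|Y) = H(X,Y) - H(Y) $:

```python
import math

def H(probs):
    """Entropy in bits of a collection of probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A hypothetical joint distribution p(x, y) over two binary variables.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}

# Marginal p(y).
p_y = {}
for (x, y), p in joint.items():
    p_y[y] = p_y.get(y, 0.0) + p

H_XY = H(joint.values())
H_Y = H(p_y.values())

# Conditional entropy computed directly from p(x|y) = p(x,y) / p(y);
# it agrees with H_XY - H_Y.
H_X_given_Y = -sum(p * math.log2(p / p_y[y]) for (x, y), p in joint.items())
```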
Mutual information (transinformation)
It turns out that one of the most useful and important measures of information is the mutual information, or transinformation. This is a measure of how much information can be obtained about one random variable by observing another. The transinformation of $ X $ relative to $ Y $ (which represents conceptually the amount of information about $ X $ that can be gained by observing $ Y $) is given by:
- $ I(X;Y) = \sum_{x,y} p(y)\, p(x|y) \log \frac{p(x|y)}{p(x)} = \sum_{x,y} p(x,y) \log \frac{p(x,y)}{p(x)\, p(y)}. $
A basic property of the transinformation is that:
- $ I(X;Y) = H(X) - H(X|Y)\, $
Mutual information is symmetric:
- $ I(X;Y) = I(Y;X) = H(X) + H(Y) - H(X,Y),\, $
Mutual information is closely related to the log-likelihood ratio test in the context of contingency tables and the multinomial distribution, and to Pearson's χ^{2} test: mutual information can be considered a statistic for assessing independence between a pair of variables, and it has a well-specified asymptotic distribution. Mutual information can also be expressed as a Kullback-Leibler divergence, measuring the divergence of the actual joint distribution from the product of the marginal distributions:
- $ I(X; Y) = D_{KL}\left(p(X,Y) \| p(X)p(Y)\right)\, $
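The Kullback-Leibler form gives a direct way to compute mutual information. This Python sketch (function name ours) builds the marginals from a joint distribution and evaluates the sum; two small test distributions illustrate the extremes:

```python
import math

def mutual_information(joint):
    """I(X;Y) = sum p(x,y) log2( p(x,y) / (p(x) p(y)) ),
    i.e. the KL divergence of the joint from the product of the marginals."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Independent fair bits: the joint equals the product, so I(X;Y) = 0.
indep = {(x, y): 0.25 for x in (0, 1) for y in (0, 1)}
# A perfect copy of a fair bit: observing Y reveals X fully, so I(X;Y) = 1 bit.
copy = {(0, 0): 0.5, (1, 1): 0.5}
```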
Continuous equivalents of entropy
- See main article: Differential entropy.
Shannon information is appropriate for measuring uncertainty over a discrete space. Its basic measures have been extended by analogy to continuous spaces. The sums can be replaced with integrals and densities are used in place of probability mass functions. By analogy with the discrete case, entropy, joint entropy, conditional entropy, and mutual information can be defined as follows:
- $ h(X) = -\int_X f(x) \log f(x) \,dx $
- $ h(X,Y) = -\int_Y \int_X f(x,y) \log f(x,y) \,dx \,dy $
- $ h(X|y) = -\int_X f(x|y) \log f(x|y) \,dx $
- $ h(X|Y) = -\int_Y \int_X f(x,y) \log \frac{f(x,y)}{f(y)} \,dx \,dy $
- $ I(X;Y) = \int_Y \int_X f(x,y) \log \frac{f(x,y)}{f(x)f(y)} \,dx \,dy $
where $ f(x,y) $ is the joint density function, $ f(x) $ and $ f(y) $ are the marginal distributions, and $ f(x|y) $ is the conditional distribution.
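As a check on these continuous definitions, differential entropy can be approximated by numerical integration. For a Gaussian with standard deviation σ, the closed form is $ h(X) = \tfrac{1}{2}\log_2(2\pi e \sigma^2) $; the Python sketch below reproduces it (the midpoint rule and the truncation at ±10σ are our illustrative choices):

```python
import math

def gaussian_pdf(x, sigma):
    return math.exp(-x * x / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def differential_entropy(pdf, lo, hi, n=100_000):
    """h(X) = -integral of f(x) log2 f(x) dx, via the midpoint rule."""
    dx = (hi - lo) / n
    total = 0.0
    for i in range(n):
        f = pdf(lo + (i + 0.5) * dx)
        if f > 0:
            total -= f * math.log2(f) * dx
    return total

sigma = 1.0
numeric = differential_entropy(lambda x: gaussian_pdf(x, sigma), -10, 10)
closed_form = 0.5 * math.log2(2 * math.pi * math.e * sigma ** 2)  # ~2.05 bits
```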
Channel capacity
Let us return for the time being to our consideration of the communications process over a discrete channel. At this time it will be helpful to have a simple model of the process:
                        o---------o
                        |  Noise  |
                        o---------o
                             |
                             V
o-------------o    X    o---------o    Y    o----------o
| Transmitter |-------->| Channel |-------->| Receiver |
o-------------o         o---------o         o----------o
Here X represents the space of messages transmitted, and Y the space of messages received during a unit time over our channel. Let $ p(y|x) $ be the conditional probability distribution function of Y given X. We will consider $ p(y|x) $ to be an inherent fixed property of our communications channel (representing the nature of the noise of our channel). Then the joint distribution of X and Y is completely determined by our channel and by our choice of $ f(x) $, the marginal distribution of messages we choose to send over the channel. Under these constraints, we would like to maximize the amount of information, or the signal, we can communicate over the channel. The appropriate measure for this is the transinformation, and this maximum transinformation is called the channel capacity and is given by:
- $ C = \max_f I(X;Y).\, $
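For a concrete channel, this maximization can be carried out by brute force. The sketch below treats the binary symmetric channel with crossover probability ε, for which the capacity is known in closed form as $ C = 1 - H_2(\varepsilon) $ (the grid search over input distributions is our illustrative choice):

```python
import math

def h2(p):
    """Binary entropy function, in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_mutual_information(q, eps):
    """I(X;Y) for a binary symmetric channel with crossover eps and
    input distribution P(X=1) = q, using I = H(Y) - H(Y|X)."""
    p1 = q * (1 - eps) + (1 - q) * eps  # P(Y=1)
    return h2(p1) - h2(eps)

eps = 0.1
# Maximize the transinformation over the input distribution.
capacity = max(bsc_mutual_information(q / 1000, eps) for q in range(1001))
# The maximum occurs at the uniform input, matching C = 1 - H2(eps).
```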
Source theory
Any process that generates successive messages can be considered a source of information. Sources can be classified in order of increasing generality as memoryless, ergodic, stationary, and stochastic (with each class strictly containing the previous one). The term "memoryless" as used here has a slightly different meaning than it normally does in probability theory. Here a memoryless source is defined as one that generates successive messages independently of one another and with a fixed probability distribution. However, the position of the first occurrence of a particular message or symbol in a sequence generated by a memoryless source is actually a memoryless random variable. The other terms have fairly standard definitions and are well studied in their own right outside information theory.
Rate
The rate of a source of information is (in the most general case) $ r=\mathbb E H(M_t|M_{t-1},M_{t-2},M_{t-3}, \cdots) $, the expected, or average, conditional entropy per message (i.e. per unit time) given all the previous messages generated. It is common in information theory to speak of the "rate" or "entropy" of a language. This is appropriate, for example, when the source of information is English prose. The rate of a memoryless source is simply $ H(M) $, since by definition there is no interdependence of the successive messages of a memoryless source. The rate of a source of information is related to its redundancy and how well it can be compressed.
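For a source with one step of memory, the rate can be computed from the transition probabilities and the stationary distribution. The Python sketch below uses a hypothetical two-state Markov source (the transition matrix is made up for illustration) and shows that its rate is below the entropy of its symbol frequencies, which is exactly the redundancy a compressor can exploit:

```python
import math

# Hypothetical two-state Markov source: P[i][j] = Pr(next = j | current = i).
P = [[0.9, 0.1],
     [0.4, 0.6]]

# Stationary distribution pi solves pi = pi P; for two states,
# pi_0 = P[1][0] / (P[0][1] + P[1][0]).
pi0 = P[1][0] / (P[0][1] + P[1][0])
pi = [pi0, 1 - pi0]

# Rate r = sum_i pi_i * H(next symbol | current = i), in bits per symbol.
rate = sum(pi[i] * -sum(p * math.log2(p) for p in P[i] if p > 0)
           for i in range(2))

# The marginal (memoryless) entropy of the same symbol frequencies is higher.
marginal_entropy = -sum(p * math.log2(p) for p in pi)
```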
Fundamental theorem
- See main article: Noisy channel coding theorem.
Statement (noisy-channel coding theorem)
- 1. For every discrete memoryless channel, the channel capacity
- $ C = \max_{P_X} \,I(X;Y) $
- has the following property. For any ε > 0 and R < C, for large enough N, there exists a code of length N and rate ≥ R and a decoding algorithm, such that the maximal probability of block error is ≤ ε.
- 2. If a probability of bit error p_{b} is acceptable, rates up to R(p_{b}) are achievable, where
- $ R(p_b) = \frac{C}{1-H_2(p_b)} . $
- 3. For any p_{b}, rates greater than R(p_{b}) are not achievable.
(MacKay (2003), p. 162; cf. Gallager (1968), ch. 5; Cover and Thomas (1991), p. 198; Shannon (1948), thm. 11)
Channel capacity of particular model channels
- A continuous-time analog communications channel subject to Gaussian noise — see Shannon-Hartley theorem.
Related concepts
Measure theory
Here is an interesting and illuminating connection between information theory and measure theory:
If we associate to arbitrary discrete random variables X and Y sets $ \tilde X $ and $ \tilde Y $, somehow representing the information borne by X and Y, respectively, such that:
- $ \mu(\tilde X \cap \tilde Y) = 0 $ whenever X and Y are independent, and
- $ \tilde X = \tilde Y $ whenever X and Y are such that either one is completely determined by the other (i.e. by a bijection);
where $ \mu $ is a measure over these sets, and we set:
- $ H(X) = \mu(\tilde X), $
- $ H(Y) = \mu(\tilde Y), $
- $ H(X,Y) = \mu(\tilde X \cup \tilde Y), $
- $ H(X|Y) = \mu(\tilde X \,\backslash\, \tilde Y), $
- $ I(X;Y) = \mu(\tilde X \cap \tilde Y); $
we find that Shannon's "measure" of information content satisfies all the postulates and basic properties of a formal measure over sets. This can be a handy mnemonic device in some situations. Certain extensions to the definitions of Shannon's basic measures of information are necessary to deal with the σ-algebra generated by the sets that would be associated to three or more arbitrary random variables. (See Reza pp. 106-108 for an informal but rather complete discussion.) Namely $ H(X,Y,Z,\cdots) $ needs to be defined in the obvious way as the entropy of a joint distribution, and an extended transinformation $ I(X;Y;Z;\cdots) $ defined in a suitable manner (left as an exercise for the ambitious reader) so that we can set:
- $ H(X,Y,Z,\cdots) = \mu(\tilde X \cup \tilde Y \cup \tilde Z \cup \cdots), $
- $ I(X;Y;Z;\cdots) = \mu(\tilde X \cap \tilde Y \cap \tilde Z \cap \cdots); $
in order to define the (signed) measure over the whole σ-algebra. (It is interesting to note that the mutual information of three or more random variables can be negative as well as positive: Let X and Y be two independent fair coin flips, and let Z be their exclusive or. Then $ I(X;Y;Z) = - 1 $ bit.)
This connection is important for two reasons: first, it reiterates and clarifies the fundamental properties of these basic concepts of information theory, and second, it justifies, in a certain formal sense, the practice of calling Shannon's entropy a "measure" of information.
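The XOR example above can be verified numerically using the inclusion-exclusion form $ I(X;Y;Z) = H(X)+H(Y)+H(Z)-H(X,Y)-H(X,Z)-H(Y,Z)+H(X,Y,Z) $, which mirrors the measure of a triple intersection. A Python sketch (helper names ours):

```python
import math

def H(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

# X and Y are independent fair coin flips; Z is their exclusive or.
joint = {(x, y, x ^ y): 0.25 for x in (0, 1) for y in (0, 1)}

def marginal_entropy(coords):
    """Entropy of the marginal over the given coordinate indices."""
    m = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in coords)
        m[key] = m.get(key, 0.0) + p
    return H(m.values())

# Inclusion-exclusion, mirroring mu of a triple intersection of sets.
I3 = (marginal_entropy([0]) + marginal_entropy([1]) + marginal_entropy([2])
      - marginal_entropy([0, 1]) - marginal_entropy([0, 2]) - marginal_entropy([1, 2])
      + marginal_entropy([0, 1, 2]))
# I3 comes out to -1 bit, as claimed.
```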
Kolmogorov complexity
A. N. Kolmogorov introduced an alternative information measure that is based on the length of the shortest algorithm to produce a message, called the Kolmogorov complexity. The practical usefulness of the Kolmogorov complexity, however, is somewhat limited by two issues:
- Due to the halting problem, it is in general not possible to actually calculate the Kolmogorov complexity of a given message.
- Due to an arbitrary choice of programming language involved, the Kolmogorov complexity is only defined up to an arbitrary additive constant.
These limitations tend to restrict the usefulness of the Kolmogorov complexity to proving asymptotic bounds, which is really more the domain of complexity theory. Nevertheless it is in a certain sense the "best" possible measure of the information content of a message, and it has the advantage of being independent of any prior probability distribution on the messages.
Applications
Coding theory
Coding theory is the most important and direct application of information theory. It can be subdivided into data compression theory and error correction theory. Using a statistical description for data, information theory quantifies the number of bits needed to describe the data. There are two formulations for the compression problem — in lossless data compression the data must be reconstructed exactly, whereas lossy data compression examines how many bits are needed to reconstruct the data to within a specified fidelity level. This fidelity level is measured by a function called a distortion function. In information theory this is called rate distortion theory. Both lossless and lossy source codes produce bits at the output which can be used as the inputs to the channel codes mentioned above.
The idea is to first compress the data, i.e. remove as much of its redundancy as possible, and then add just the right kind of redundancy (i.e. error correction) needed to transmit the data efficiently and faithfully across a noisy channel.
This division of coding theory into compression and transmission is justified by the information transmission theorems, or source-channel separation theorems that justify the use of bits as the universal currency for information in many contexts. However, these theorems only hold in the situation where one transmitting user wishes to communicate to one receiving user. In scenarios with more than one transmitter (the multiple-access channel), more than one receiver (the broadcast channel) or intermediary "helpers" (the relay channel), or more general networks, compression followed by transmission may no longer be optimal. Network information theory refers to these multi-agent communication models.
Detection and estimation theory
- Further information: Detection theory and Estimation theory
Gambling
Information theory is also important in gambling and (with some ethical reservations) investing. An important but simple relation exists between the amount of side information a gambler obtains and the expected exponential growth of his capital (Kelly). The so-called equation of ill-gotten gains can be expressed in logarithmic form as
- $ \mathbb E \log K_t = \log K_0 + \sum_{i=1}^t H_i $
for an optimal betting strategy, where $ K_0 $ is the initial capital, $ K_t $ is the capital after the tth bet, and $ H_i $ is the amount of side information obtained concerning the ith bet (in particular, the mutual information relative to the outcome of each bettable event). This equation applies in the absence of any transaction costs or minimum bets. When these constraints apply (as they invariably do in real life), another important gambling concept comes into play: the gambler (or unscrupulous investor) must face a certain probability of ultimate ruin. Note that even food, clothing, and shelter can be considered fixed transaction costs and thus contribute to the gambler's probability of ultimate ruin. That is why food is so cheap at casinos.
This equation was the first application of Shannon's theory of information outside its prevailing paradigm of data communications (Pierce). No one knows how much lucre has been gained by the use of this notorious equation since its discovery a half century ago.
The ill-gotten gains equation actually underlies much if not all of mathematical finance, although certainly, when there is money to be made, and eyebrows not to be raised, extreme discretion is employed in its use.
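The growth behind the Kelly result can be illustrated with a small simulation. For even-money bets that win with known probability p (side information worth 1 − H2(p) bits per bet), the optimal strategy stakes the fraction f* = 2p − 1 of capital each time; the Python sketch below (parameters are illustrative) shows the log-capital growing at roughly that rate:

```python
import math
import random

def h2(p):
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def kelly_log_growth(p, n_bets, seed=0):
    """Simulate Kelly betting (fraction f* = 2p - 1) on even-money bets
    that win with probability p; return log2 of final capital (K_0 = 1)."""
    rng = random.Random(seed)
    f = 2 * p - 1
    log_capital = 0.0
    for _ in range(n_bets):
        if rng.random() < p:
            log_capital += math.log2(1 + f)  # win: capital grows by (1 + f)
        else:
            log_capital += math.log2(1 - f)  # loss: capital shrinks by (1 - f)
    return log_capital

p = 0.6
growth_rate = kelly_log_growth(p, 100_000) / 100_000
# growth_rate is close to the theoretical 1 - H2(0.6), about 0.029 bits per bet.
```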
History
The decisive event which established the subject of information theory, and brought it to immediate worldwide attention, was the publication of Claude E. Shannon (1916–2001)'s classic paper "A Mathematical Theory of Communication" in the Bell System Technical Journal in July and October of 1948.
In this revolutionary and groundbreaking paper, work which Shannon had substantially completed at Bell Labs by the end of 1944, Shannon for the first time introduced the qualitative and quantitative model of communication as a statistical process, which underlies information theory; and with it the ideas of the information entropy and redundancy of a source, and their relevance through the source coding theorem; the mutual information, and the channel capacity of a noisy channel, as underwritten by the promise of perfect loss-free communication given by the noisy-channel coding theorem; the practical result of the Shannon-Hartley law for the channel capacity of a Gaussian channel; and of course the bit - a new common currency of information.
Before 1948
Quantitative ideas of information
The most direct antecedents of Shannon's work were two papers published in the 1920s by Harry Nyquist and Ralph Hartley, who were both still very much research leaders at Bell Labs when Shannon arrived there in the early 1940s.
Nyquist's 1924 paper, Certain Factors Affecting Telegraph Speed, is mostly concerned with detailed engineering aspects of telegraph signals. But a more theoretical section discusses quantifying "intelligence" and the "line speed" at which it can be transmitted by a communication system, giving the relation
- $ W = K \log m \, $
where W is the speed of transmission of intelligence, m is the number of different voltage levels to choose from at each time step, and K is a constant.
Hartley's 1928 paper, called simply Transmission of Information, went further by introducing the word information and making explicit the idea that information in this context was a measurable quantity, reflecting only the receiver's ability to distinguish that one sequence of symbols had been sent rather than any other, quite regardless of any associated meaning or other psychological or semantic aspect the symbols might represent. This amount of information he quantified as
- $ H = \log S^n \, $
where S was the number of possible symbols, and n the number of symbols in a transmission. The natural unit of information was therefore the decimal digit, much later renamed the hartley in his honour as a unit of information. The Hartley information, H_{0}, is still very much used as a quantity for the log of the total number of possibilities.
A similar unit of log_{10} probability, the ban, and its derived unit the deciban (one tenth of a ban), were introduced by Alan Turing in 1940 as part of the statistical analysis of the breaking of the German Second World War Enigma ciphers. The decibannage represented the reduction in (the logarithm of) the total number of possibilities (similar to the change in the Hartley information), and also the log-likelihood ratio (or change in the weight of evidence) that could be inferred for one hypothesis over another from a set of observations. The expected change in the weight of evidence is equivalent to what was later called the Kullback discrimination information.
But underlying these notions was still the idea of equal a priori probabilities, rather than the information content of events of unequal probability; nor was there yet any underlying picture of the communication of such varied outcomes.
Shannon's work drew on these earlier publications of Nyquist and Hartley, which he acknowledged at the beginning of his paper.
References
The classic paper
- Shannon, C.E. (1948), "A Mathematical Theory of Communication", Bell System Technical Journal, 27, pp. 379–423 & 623–656, July & October, 1948.
Other journal articles
- R.V.L. Hartley, "Transmission of Information," Bell System Technical Journal, July 1928
- J. L. Kelly, Jr., "New Interpretation of Information Rate," Bell System Technical Journal, Vol. 35, July 1956, pp. 917-26
- R. Landauer, "Information is Physical" Proc. Workshop on Physics and Computation PhysComp'92 (IEEE Comp. Sci.Press, Los Alamitos, 1993) pp. 1-4.
- R. Landauer, "Irreversibility and Heat Generation in the Computing Process" IBM J. Res. Develop. Vol. 5, No. 3, 1961
Textbooks on information theory
- Claude E. Shannon, Warren Weaver. The Mathematical Theory of Communication. Univ of Illinois Press, 1963. ISBN 0252725484
- Robert B. Ash. Information Theory. New York: Dover 1990. ISBN 0486665216
- Thomas M. Cover, Joy A. Thomas. Elements of Information Theory, 2nd Edition. New York: Wiley-Interscience, 2006. ISBN 0471241954
- Stanford Goldman. Information Theory. Mineola, N.Y.: Dover 2005 ISBN 0486442713
- Fazlollah M. Reza. An Introduction to Information Theory. New York: Dover 1994. ISBN 0486682102
- David J. C. MacKay. Information Theory, Inference, and Learning Algorithms Cambridge: Cambridge University Press, 2003. ISBN 0521642981
Other books
- James Bamford, The Puzzle Palace, Penguin Books, 1983. ISBN 0140067485
- Leon Brillouin, Science and Information Theory, Mineola, N.Y.: Dover, [1956, 1962] 2004. ISBN 0486439186
- W. B. Johnson and J. Lindenstrauss, editors, Handbook of the Geometry of Banach Spaces, Vol. 1. Amsterdam: Elsevier 2001. ISBN 0444828427
- A. I. Khinchin, Mathematical Foundations of Information Theory, New York: Dover, 1957. ISBN 0486604349
- H. S. Leff and A. F. Rex, Editors, Maxwell's Demon: Entropy, Information, Computing, Princeton University Press, Princeton, NJ (1990). ISBN 069108727X
External links
- Gibbs, M., "Quantum Information Theory", Eprint
- Schneider, T., "Information Theory Primer", Eprint
- On-line textbook: Information Theory, Inference, and Learning Algorithms, by David MacKay - gives an entertaining and thorough introduction to Shannon theory, including state-of-the-art methods from coding theory, such as arithmetic coding, low-density parity-check codes, and Turbo codes.
This page uses Creative Commons Licensed content from Wikipedia.