# Conditional independence


In probability theory, two events *A* and *B* are **conditionally independent** given a third event *C* precisely if the occurrence or non-occurrence of *A* *and* *B* are independent events in their conditional probability distribution given *C*. In other words,

$$\Pr(A \cap B \mid C) = \Pr(A \mid C)\,\Pr(B \mid C).$$

Or equivalently,

$$\Pr(A \mid B \cap C) = \Pr(A \mid C).$$

Two random variables *X* and *Y* are **conditionally independent** given an event *C* if they are independent in their conditional probability distribution given *C*. Two random variables *X* and *Y* are conditionally independent given a third random variable *W* if for any measurable set *S* of possible values of *W*, *X* and *Y* are conditionally independent given the event [*W* ∈ *S*].

Conditional independence of more than two events, or of more than two random variables, is defined analogously.
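The definition above can be checked exhaustively on a small discrete example. The following sketch (the two-coin setup and its probabilities are illustrative assumptions, not from the original text) picks one of two biased coins at random and tosses it twice: the two tosses are independent given which coin was chosen, but dependent unconditionally.

```python
# Hypothetical example: a coin is chosen at random, then tossed twice.
# C = "coin 1 was chosen", A = "first toss is heads", B = "second toss is heads".
heads = {1: 0.9, 2: 0.1}   # assumed P(heads) for each coin
p_coin = {1: 0.5, 2: 0.5}  # assumed P(choosing each coin)

def p(coin, t1, t2):
    # Joint probability of (coin, toss1, toss2); tosses are i.i.d. given the coin.
    q = heads[coin]
    return p_coin[coin] * (q if t1 else 1 - q) * (q if t2 else 1 - q)

outcomes = [(c, t1, t2) for c in (1, 2) for t1 in (0, 1) for t2 in (0, 1)]

def prob(pred):
    # Probability of the event described by the predicate `pred`.
    return sum(p(*o) for o in outcomes if pred(*o))

# Conditional independence given C:
pC    = prob(lambda c, t1, t2: c == 1)
pA_C  = prob(lambda c, t1, t2: c == 1 and t1) / pC
pB_C  = prob(lambda c, t1, t2: c == 1 and t2) / pC
pAB_C = prob(lambda c, t1, t2: c == 1 and t1 and t2) / pC
print(abs(pAB_C - pA_C * pB_C) < 1e-12)   # True: Pr(A∩B|C) = Pr(A|C)Pr(B|C)

# But unconditionally the two tosses are dependent:
pA  = prob(lambda c, t1, t2: bool(t1))
pB  = prob(lambda c, t1, t2: bool(t2))
pAB = prob(lambda c, t1, t2: bool(t1 and t2))
print(abs(pAB - pA * pB) < 1e-12)         # False: 0.41 vs. 0.25
```

Seeing the first toss come up heads makes coin 1 the more likely choice, which in turn raises the probability that the second toss is heads; conditioning on the coin removes exactly that coupling.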

## Uses in Bayesian statistics

Let *p* be the proportion of voters who will vote "yes" in an upcoming referendum. In taking an opinion poll, one chooses *n* voters randomly from the population. For *i* = 1, ..., *n*, let *X*_{i} = 1 if the *i*th chosen voter will vote "yes", and *X*_{i} = 0 otherwise.

In a frequentist approach to statistical inference one would not attribute any probability distribution to *p* (unless the probabilities could be somehow interpreted as relative frequencies of occurrence of some event or as proportions of some population) and one would say that *X*_{1}, ..., *X*_{n} are independent random variables.

By contrast, in a Bayesian approach to statistical inference, one would assign a probability distribution to *p* even in the absence of any such "frequency" interpretation, and one would construe the probabilities as degrees of belief that *p* is in any interval to which a probability is assigned. In that model, the random variables *X*_{1}, ..., *X*_{n} are *not* independent, but they are **conditionally independent** given the value of *p*. In particular, if a large number of the *X*s are observed to be equal to 1, that would imply a high conditional probability, given that observation, that *p* is near 1, and thus a high conditional probability, given that observation, that the *next* *X* to be observed will be equal to 1.
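A minimal simulation sketch of this polling model, assuming a uniform prior on *p* (the prior choice, sample sizes, and seed are illustrative assumptions): drawing *p* at random and then the voters' answers, one can estimate the conditional probability that the next voter says "yes" given that the first *n* all did.

```python
# Rejection-sampling sketch of the Bayesian polling model (uniform prior assumed).
import random
random.seed(0)

def draw_sample(n):
    # Draw p ~ Uniform(0, 1), then n polled voters plus one "next" voter,
    # each answering "yes" independently with probability p (given p).
    p = random.random()
    return [random.random() < p for _ in range(n + 1)]

n, hits, total = 10, 0, 0
for _ in range(200_000):
    xs = draw_sample(n)
    if all(xs[:n]):          # keep only runs where all n polled voters said "yes"
        total += 1
        hits += xs[n]        # did the (n+1)th voter also say "yes"?

print(hits / total)
# Estimate is close to (n + 1) / (n + 2) = 11/12 ≈ 0.917 (Laplace's rule of
# succession for a uniform prior), far above the unconditional P(X = 1) = 1/2:
# the answers are dependent, even though they are independent given p.
```

Observing the poll shifts belief about *p* toward 1, which is exactly the dependence among the *X*_{i} that conditioning on *p* removes.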


This page uses Creative Commons Licensed content from Wikipedia.