

In probability theory, it is almost a cliché to say that uncorrelatedness of two random variables does not entail independence. In some contexts, however, uncorrelatedness does imply at least pairwise independence, as when the random variables involved have Bernoulli distributions.
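The Bernoulli case can be checked directly; here is a standard one-line computation (not spelled out above). Since XY = 1 precisely when X = 1 and Y = 1,

\operatorname{cov}(X,Y)=\Pr(X=1\mbox{ and }Y=1)-\Pr(X=1)\Pr(Y=1),

so zero covariance forces Pr(X = 1 and Y = 1) = Pr(X = 1) Pr(Y = 1), and the remaining three cells of the joint distribution then factor by complementation, giving independence.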

It is sometimes mistakenly thought that one context in which uncorrelatedness implies independence is when the random variables involved are normally distributed. Here are the facts:

  • Suppose two random variables X and Y are jointly normally distributed. That is the same as saying that the random vector (X, Y) has a multivariate normal distribution. It means that the joint probability distribution of X and Y is such that for any two constant (i.e., non-random) scalars a and b, the random variable aX + bY is normally distributed. In that case, if X and Y are uncorrelated, i.e., their covariance cov(X, Y) is zero, then they are independent.
  • But it is possible for two random variables X and Y to be jointly distributed in such a way that each one alone is normally distributed and they are uncorrelated, yet they are not independent. Examples appear below.
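The first bullet can be illustrated numerically; the following is a minimal simulation sketch (the code is not from the source). Sampling a genuinely bivariate normal pair with zero covariance, a joint probability factors into the product of the marginals, as independence demands.

```python
import numpy as np

# Sketch: sample (X, Y) *jointly* normal with cov(X, Y) = 0 and check
# that a joint event factors into the product of its marginals,
# which is what independence requires.
rng = np.random.default_rng(0)
n = 1_000_000
mean = [0.0, 0.0]
cov = [[1.0, 0.0],      # zero off-diagonal entries: X and Y uncorrelated
       [0.0, 1.0]]
x, y = rng.multivariate_normal(mean, cov, size=n).T

p_joint = np.mean((x > 0) & (y > 0))         # Pr(X > 0 and Y > 0)
p_product = np.mean(x > 0) * np.mean(y > 0)  # Pr(X > 0) * Pr(Y > 0)
print(p_joint, p_product)                    # both near 0.25
```

Contrast this with the counterexamples below, where the zero-covariance check succeeds but the factorization fails.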


  • Suppose X has a normal distribution with expected value 0 and variance 1. Let W = 1 or −1, each with probability 1/2, and assume W is independent of X. Let Y = WX. Then X and Y are uncorrelated, since cov(X, Y) = E[WX²] = E[W] E[X²] = 0 by the independence of W and X. Both have the same standard normal distribution, because the symmetry of X means WX and X are identically distributed. Yet X and Y are not independent: the distribution of X + Y concentrates positive probability at 0, since Pr(X + Y = 0) = Pr(W = −1) = 1/2, so X + Y is not normally distributed and (X, Y) cannot be jointly normal.
  • Suppose X has a normal distribution with expected value 0 and variance 1. Let

Y=\left\{\begin{matrix} -X & \mbox{if}\ \left|X\right|<c \\ X & \mbox{if}\ \left|X\right|>c \end{matrix}\right.

where c is a positive number to be specified below. If c is very small, then the correlation corr(X, Y) is near 1; if c is very large, then corr(X, Y) is near −1. Since the correlation is a continuous function of c, the intermediate value theorem implies there is some particular value of c that makes the correlation 0. That value is approximately 1.54. In that case, X and Y are uncorrelated, but they are clearly not independent, since X completely determines Y.
To see that Y is normally distributed (indeed, that its distribution is the same as that of X), let us find its cumulative distribution function:

\Pr(Y \leq x) = \Pr(\{|X|<c\mbox{ and }-X \leq x\}\mbox{ or }\{|X|>c\mbox{ and }X \leq x\})
= \Pr(|X|<c\mbox{ and }-X \leq x) + \Pr(|X|>c\mbox{ and }X \leq x)
= \Pr(|X|<c\mbox{ and }X \leq x) + \Pr(|X|>c\mbox{ and }X \leq x)

(the last step uses the symmetry of the distribution of X together with the symmetry of the condition that |X| < c)

= \Pr(X \leq x).
Observe that the sum X + Y is nowhere near being normally distributed: it has a substantial probability of being equal to 0, namely Pr(X + Y = 0) = Pr(|X| < c) ≈ 0.88 for c ≈ 1.54, whereas the normal distribution, being a continuous distribution, has no discrete part, i.e., does not concentrate more than zero probability at any single point. Consequently X and Y are not jointly normally distributed, even though they are separately normally distributed.
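Both counterexamples are easy to simulate; the sketch below uses NumPy (the code and the reported sample estimates are illustrative, not from the source). In each case the correlation comes out near zero while X + Y carries an atom at 0, which rules out joint normality.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
x = rng.standard_normal(n)

# Example 1: Y = W X with W = +/-1 equiprobable, independent of X.
w = rng.choice([-1.0, 1.0], size=n)
y1 = w * x
corr1 = np.corrcoef(x, y1)[0, 1]   # near 0: uncorrelated
p_zero1 = np.mean(x + y1 == 0)     # near 1/2: X + Y has an atom at 0

# Example 2: Y = -X when |X| < c, Y = X otherwise, with c chosen
# (approximately 1.54) so that the correlation vanishes.
c = 1.54
y2 = np.where(np.abs(x) < c, -x, x)
corr2 = np.corrcoef(x, y2)[0, 1]   # near 0 by the choice of c
p_zero2 = np.mean(x + y2 == 0)     # near 0.88 = Pr(|X| < c)

print(corr1, p_zero1, corr2, p_zero2)
```

In both examples the comparison `x + y == 0` is exact in floating point, because the sum is literally −x + x whenever the sign flips.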
This page uses Creative Commons Licensed content from Wikipedia.
