Psychology Wiki

Berkson's paradox



Berkson's paradox or Berkson's fallacy is a result in conditional probability and statistics which is counterintuitive for some people, and hence a veridical paradox. It is a complicating factor arising in statistical tests of proportions. Specifically, it arises when there is an ascertainment bias inherent in a study design.

It is often described in the fields of medical statistics or biostatistics, as in the original description of the problem by Joseph Berkson.

Statement

The result is that two independent events become conditionally dependent (negatively dependent) given that at least one of them occurs. Symbolically:

if 0 < P(A) < 1 and 0 < P(B) < 1,
and P(A|B) = P(A), i.e. they are independent,
then P(A|B,C) < P(A|C) where C = A∪B (i.e. the event that A or B occurs).

In words, given two independent events, if you only consider outcomes where at least one occurs, then they become negatively dependent.

Explanation

The cause is that the conditional probability of event A occurring, given that it or B occurs, is inflated: it is higher than the unconditional probability, because we have excluded the cases where neither occurs:

P(A|A∪B) > P(A)
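This inflation can be checked exactly with Python's fractions module. The sketch below assumes, purely for illustration, that A and B are independent with P(A) = P(B) = 1/2:

```python
from fractions import Fraction

# Illustrative assumption: A and B are independent, each with probability 1/2.
p_a = Fraction(1, 2)
p_b = Fraction(1, 2)

# For independent events, P(A or B) = P(A) + P(B) - P(A)P(B).
p_union = p_a + p_b - p_a * p_b

# Since A implies (A or B), P(A | A or B) = P(A) / P(A or B).
p_a_given_union = p_a / p_union

print(p_a_given_union)  # 2/3, inflated above P(A) = 1/2
```

Changing `p_a` and `p_b` shows the same inflation for any probabilities strictly between 0 and 1.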

One can see this in tabular form as follows, where ~A means "not A"; conditioning on A∪B keeps every cell except ~A & ~B:

       A        ~A
B      A & B    ~A & B
~B     A & ~B   ~A & ~B

For instance, in a sample of 100 outcomes where A and B each occur independently half the time (so P(A) = P(B) = 1/2), one obtains:

       A    ~A
B      25   25
~B     25   25

So in 75 outcomes, either A or B occurs, of which 50 have A occurring, so

P(A|A∪B) = 50/75 = 2/3 > 1/2 = 50/100 = P(A).

Thus the probability of A is higher in the subset (of outcomes where it or B occurs), 2/3, than in the overall population, 1/2.
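The table's arithmetic can be replayed by enumerating the four cells directly. A minimal sketch, using the counts above:

```python
# Cell counts from the 2x2 table: (A occurs?, B occurs?) -> count out of 100.
counts = {(True, True): 25, (True, False): 25,
          (False, True): 25, (False, False): 25}

# Restrict to outcomes where at least one of A, B occurs.
subset = {cell: n for cell, n in counts.items() if cell[0] or cell[1]}
subset_total = sum(subset.values())  # 75 outcomes survive

# P(A | A or B): fraction of the subset in which A occurs.
p_a_subset = sum(n for (a, b), n in subset.items() if a) / subset_total

# P(A | B, A or B): within the subset, given B, how often does A occur?
p_a_given_b_subset = subset[(True, True)] / (
    subset[(True, True)] + subset[(False, True)])

print(p_a_subset)          # 2/3: A is more likely within the subset
print(p_a_given_b_subset)  # 1/2: but given B, A drops back to its overall rate
```

The drop from 2/3 to 1/2 is exactly the negative dependence the paradox describes.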

Berkson's paradox arises because the conditional probability of A given B within this subset equals the conditional probability of A given B in the overall population, while the unconditional probability of A within the subset is inflated relative to the overall population. Hence, within the subset, the presence of B decreases the conditional probability of A (back to its overall unconditional probability):

P(A|B, A∪B) = P(A|B) = P(A)
P(A|A∪B) > P(A).

Examples

A classic illustration involves a retrospective study examining a risk factor for a disease in a statistical sample from a hospital in-patient population. If a control group is also ascertained from the in-patient population, a difference in hospital admission rates for the case sample and control sample can result in a spurious association between the disease and the risk factor.

As another example, suppose a collector has 1000 postage stamps, of which 300 are pretty and 100 are rare, with 30 being both pretty and rare. 10% of all her stamps are rare and 10% of her pretty stamps are rare, so prettiness tells nothing about rarity. She puts the 370 stamps which are pretty or rare on display. Just over 27% of the stamps on display are rare, but still only 10% of the pretty stamps on display are rare (and 100% of the 70 not-pretty stamps on display are rare). If an observer only considers stamps on display, he will observe a spurious negative relationship between prettiness and rarity as a result of the selection bias (that is, not-prettiness strongly indicates rarity in the display, but not in the total collection).
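The stamp counts can be checked mechanically. A short sketch using the figures from the example:

```python
total, pretty, rare, both = 1000, 300, 100, 30

# In the whole collection, prettiness carries no information about rarity.
p_rare = rare / total                # 0.10 of all stamps are rare
p_rare_given_pretty = both / pretty  # 0.10 of pretty stamps are rare

# On display: every stamp that is pretty or rare (inclusion-exclusion).
display = pretty + rare - both       # 370 stamps
p_rare_on_display = rare / display   # just over 27% of the display is rare

# Not-pretty stamps on display must be rare (rarity is why they are there).
not_pretty_on_display = display - pretty  # 70 stamps, all rare

print(p_rare, p_rare_given_pretty, round(p_rare_on_display, 3))
```

Among pretty stamps on display, rarity is still 30/300 = 10%, so the observer's apparent "pretty stamps are less often rare" pattern exists only in the displayed subset.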



This page uses Creative Commons Licensed content from Wikipedia.
