{{ExpPsy}}

'''Confirmation bias''' (or '''myside bias'''<ref>[[David Perkins]], a professor at the Harvard Graduate School of Education, coined the term ''myside bias'' referring to a preference for "my" side of the issue under consideration. {{Harvnb|Baron|2000|p=195}}</ref>) is a tendency for people to prefer information that confirms their preconceptions or hypotheses, independently of whether they are true.<ref name="lewicka">{{cite book|last=Lewicka|first=Maria|editors=Mirosław Kofta, Gifford Weary, Grzegorz Sedek|title=Personal control in action: cognitive and motivational mechanisms|publisher=Springer|date=1998|isbn=9780306457203|oclc=39002877|chapter=Confirmation Bias: Cognitive Error or Adaptive Strategy of Action Control?|pages=233–255}}</ref><ref name="bensley">{{cite book|last=Bensley|first=D. Alan |title=Critical thinking in psychology: a unified skills approach|publisher=Brooks/Cole|date=1998|isbn=9780534256203|oclc=36949875|page=137}}</ref> People can reinforce their existing attitudes by selectively collecting new evidence, by interpreting evidence in a biased way or by selectively recalling information from memory.<ref name="oswald" /> Some psychologists use "confirmation bias" for any of these three [[cognitive bias]]es, while others restrict the term to selective collection of evidence, using ''assimilation bias'' for biased interpretation.<ref name="risen">{{cite book|last=Risen|first=Jane|coauthors=Thomas Gilovich|title=Critical Thinking in Psychology|editor=Robert J. Sternberg, Henry L. Roediger III, Diane F. Halpern|publisher=Cambridge University Press|date=2007|pages=110–130|chapter=Informal Logical Fallacies|isbn=9780521608343}}</ref><ref name="lewicka" />
 
   
People tend to test hypotheses in a one-sided way, focusing on one possibility and neglecting alternatives.<ref name="oswald" /><ref name="baron162" /> This strategy is not necessarily a bias, but combined with other effects it can reinforce existing beliefs.<ref name="klaymanha" /><ref name="oswald" /> The biases appear in particular for issues that are emotionally significant (including some personal and political topics) and for established beliefs that shape the individual's expectations.<ref name="oswald" /><ref name="taber_political" /><ref name="baron191" /> Biased search, interpretation and/or recall have been invoked to explain ''[[attitude polarization]]'' (when a disagreement becomes more extreme as the different parties are exposed to the same evidence), ''belief perseverance'' (when beliefs remain after the evidence for them is taken away)<ref name="shortcomings">{{cite book|last=Ross|first=Lee|coauthors=Craig A. Anderson|title=Judgment under uncertainty: Heuristics and biases|editor=Daniel Kahneman, Paul Slovic, Amos Tversky|publisher=Cambridge University Press|date=1982|pages=129–152|chapter=Shortcomings in the attribution process: On the origins and maintenance of erroneous social assessments|isbn=9780521284141}}</ref>, the ''irrational primacy effect'' (a stronger weighting for data encountered early in an arbitrary series)<ref name="lord1979" /> and ''[[illusory correlation]]'' (in which people falsely perceive an association between two events).<ref name="kunda127" />

Confirmation bias is an area of interest in the teaching of [[critical thinking]], as the skill is misused when rigorous critical scrutiny is applied to evidence challenging a preconceived idea but not to evidence supporting the same preconception.<ref>[http://www.philosophy.unimelb.edu.au/tgelder/papers/HeadsIWin.pdf Tim van Gelder, "Heads I win, tails you lose": A Foray Into the Psychology of Philosophy]</ref>
 
   
Confirmation biases are effects in information processing, distinct from the ''behavioral confirmation effect'' (also called ''[[self-fulfilling prophecy]]''), in which people's expectations influence their own behavior.<ref>{{cite book|last=Darley|first=John M.|coauthors=Paget H. Gross|title=Stereotypes and prejudice: essential readings|editor=Charles Stangor|date=2000|publisher=Psychology Press|page=212|chapter=A Hypothesis-Confirming Bias in Labelling Effects|isbn=9780863775895}}</ref> They can lead to disastrous decisions, especially in organizational, military and political contexts.<ref name="kida">{{cite book|last=Kida|first=Thomas|title=Don't Believe Everything You Think: The 6 Basic Mistakes We Make in Thinking|publisher=Prometheus Books|date=2006|pages=155–165|isbn=9781591024088}}</ref><ref name="sutherland" /> Confirmation biases contribute to [[overconfidence effect|overconfidence in personal beliefs]].<ref name="baron191">{{Harvnb|Baron|2000|p=191}}</ref>
== Naming ==
 
The effect is also known as '''belief bias''', '''belief preservation''', '''belief overkill''', '''hypothesis locking''', '''polarization effect''', the '''Tolstoy syndrome''', '''selective thinking''' and '''myside bias'''.
 
   
==Types==
===Biased search for information===
In studies of hypothesis-testing, people reject tests that are guaranteed to give a positive answer, in favor of more informative tests.<ref>{{cite journal|last=Devine|first=Patricia G.|coauthors=Edward R. Hirt, Elizabeth M. Gehrke|date=1990|title=Diagnostic and confirmation strategies in trait hypothesis testing|journal=Journal of Personality and Social Psychology|publisher=American Psychological Association|volume=58|issue=6 |pages=952–963|issn=1939-1315|doi=10.1037/0022-3514.58.6.952}}</ref><ref>{{cite journal|last=Trope|first=Yaacov|coauthors=Miriam Bassok|date=1982|title=Confirmatory and diagnosing strategies in social information gathering|journal=Journal of Personality and Social Psychology|publisher=American Psychological Association|volume=43|issue=1|pages=22–34|issn=1939-1315|doi=10.1037/0022-3514.43.1.22}}</ref> However, many experiments have found that people tend to test in a one-sided way, by searching for evidence consistent with their currently held hypothesis.<ref name="oswald" /><ref name="nickerson" /><ref name="kunda112" /> Rather than searching through all the relevant evidence, they frame questions in such a way that a "yes" answer supports their hypothesis and stop as soon as they find supporting information.<ref name="baron162" /> They look for the evidence that they would expect to see if their hypothesis was true, neglecting what would happen if it were false.<ref name="baron162">{{Harvnb|Baron|2000|pp=162–164}}</ref> For example, someone who is trying to identify a number using yes/no questions and suspects that the number is 3 would ask a question such as, "Is it an odd number?" People prefer this sort of question even when a negative test (such as, "Is it an even number?") would yield exactly the same information.
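
The claim that the two phrasings yield exactly the same information can be checked with a little arithmetic. The sketch below is only an illustration and is not part of the studies cited above; it assumes, arbitrarily, that the unknown number is equally likely to be anything from 1 to 10, and scores each yes/no question by the [[Entropy (information theory)|entropy]] of its answer. The "odd" and "even" phrasings come out identical.

<source lang="python">
# Illustrative only: assumes the unknown number is drawn uniformly from 1 to 10.
# Each yes/no question is scored by the entropy (in bits) of its answer.
from math import log2

numbers = range(1, 11)   # assumed search space

def expected_information(question):
    """Entropy of the yes/no answer, given the assumed uniform search space."""
    p_yes = sum(1 for n in numbers if question(n)) / len(numbers)
    return sum(-p * log2(p) for p in (p_yes, 1 - p_yes) if p > 0)

print(expected_information(lambda n: n % 2 == 1))   # "Is it an odd number?"  -> 1.0
print(expected_information(lambda n: n % 2 == 0))   # "Is it an even number?" -> 1.0
</source>
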
   
This preference for ''positive tests'' is not itself a bias, since positive tests can be highly informative.<ref name="klaymanha" /> However, in conjunction with other effects, this strategy can confirm existing beliefs or assumptions, independently of whether they are true.<ref name="oswald" />
 
In many real-world situations, evidence is complex and mixed. For example, many different ideas about someone's personality could be supported by looking at isolated things that he or she does.<ref name="kunda112" /> Thus any search for evidence in favor of a hypothesis is likely to succeed.<ref name="oswald" /> One illustration of this is the way the phrasing of a question can significantly change the answer.<ref name="kunda112">{{Harvnb|Kunda|1999|pp=112–115}}</ref> For example, people who are asked, "Are you happy with your social life?" report greater satisfaction than those asked, "Are you ''un''happy with your social life?"<ref>{{cite journal|last=Kunda|first=Ziva|coauthors=G. T. Fong, R. Sanitoso, E. Reber|date=1993|title=Directional questions direct self-conceptions|journal=Journal of Experimental Social Psychology|publisher=Society of Experimental Social Psychology|volume=29|pages=62–63|issn=0022-1031}} via {{Harvnb|Fine|2006|pp=63–65}}</ref>
 
Even a small change in the wording of a question can affect how someone searches through the available information, and hence, the conclusion they come to. This was shown in an experiment in which subjects read about a [[child custody]] case.<ref name="shafir" /> Of the two parents, Parent A was moderately suitable to be the guardian on a number of dimensions, while Parent B had a mix of salient positive qualities (such as a close relationship with the child) and negative qualities (including a job that would take him or her away for long periods). When the subjects were asked, "Which parent should have custody of the child?" they looked for positive attributes and a majority chose Parent B. However, when the question was, "Which parent should be denied custody of the child?" they looked for negative attributes, and this time a majority answered Parent B, implying that Parent A should have custody.<ref name="shafir">{{cite journal|last=Shafir|first=E.|date=1993|title=Choosing versus rejecting: why some options are both better and worse than others|journal=Memory and Cognition|volume=21|pages=546–556|pmid=8350746|issue=4}} via {{Harvnb|Fine|2006|pp=63–65}}</ref>

In a similar study, subjects had to rate another person on the [[Extraversion and introversion|introversion-extraversion]] personality dimension on the basis of an interview. They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the subjects chose questions that presumed introversion, such as, "What do you find unpleasant about noisy parties?" When the interviewee was described as extraverted, almost all the questions presumed extraversion, such as, "What would you do to liven up a dull party?" These loaded questions gave the interviewees little or no opportunity to falsify the hypothesis about them.<ref>{{cite journal|last=Snyder|first=Mark|coauthors=William B. Swann, Jr.|date=1978|title=Hypothesis-Testing Processes in Social Interaction|journal=Journal of Personality and Social Psychology|publisher=American Psychological Association|volume=36|issue=11|pages=1202–1212|doi=10.1037/0022-3514.36.11.1202}} via {{cite book|last=Poletiek|first=Fenna|title=Hypothesis-testing behaviour|publisher=Psychology Press|location=Hove, UK|date=2001|page=131|isbn=9781841691596}}</ref> However, a later experiment gave the subjects less presumptive questions to choose from, such as, "Do you shy away from social interactions?"<ref name="kunda117" /> Subjects preferred to ask the more informative questions, showing only a weak bias towards positive tests. This pattern, of a main preference for diagnostic tests and a weaker secondary preference for positive tests, has been replicated in other studies.<ref name="kunda117">{{Harvnb|Kunda|1999|pp=117–118}}</ref>
One particularly complex rule-discovery task used a computer simulation of a dynamic system.<ref name="mynatt1978">{{cite journal|last=Mynatt|first=Clifford R.|coauthors=Michael E. Doherty, Ryan D. Tweney|date=1978|title=Consequences of confirmation and disconfirmation in a simulated research environment|journal=Quarterly Journal of Experimental Psychology|volume=30|issue=3|pages=395–406|doi=10.1080/00335557843000007}}</ref> Objects on the computer screen moved according to specific laws, which the subjects had to find out. They could "fire" objects across the screen to test their hypotheses. Despite making many attempts, none of the subjects worked out the rules of the system. They typically sought to confirm rather than falsify their hypotheses, and were reluctant to consider alternatives. They tended to stick to hypotheses even after they had been falsified by the evidence. Some of the subjects were instructed in proper hypothesis-testing, but these instructions had almost no effect.<ref name="mynatt1978" />
===Biased interpretation===
{{Quote box
| quote = Smart people believe weird things because they are skilled at defending beliefs they arrived at for non-smart reasons.
| source = [[Michael Shermer]], quoted in Thomas Kida's ''Don't Believe Everything You Think'',<ref name="kida" /> p. 157
| width = 30%
| align = right
}}
Confirmation biases are not limited to the collection of evidence. Even if two individuals have the same information, the way they interpret it can be biased.
Charles Lord, [[Lee Ross]], and [[Mark Lepper]] ran an experiment with subjects who felt strongly about [[capital punishment]], with half in favor and half against.<ref name="lord1979" /> Each of these subjects read descriptions of two studies; one supporting and one undermining the effectiveness of the death penalty. After reading a quick description of each study, the subjects were asked whether their opinions had changed. They then read a much more detailed account of the study's procedure and had to rate how well-conducted and convincing that research was.<ref name="lord1979" /> In fact, the studies were fictional. Half the subjects were told that one kind of study supported the death penalty and the other undermined it, while for other subjects the conclusions were swapped.<ref name="lord1979" />

The subjects, whether proponents or opponents, reported shifting their attitudes slightly in the direction of the first study they read. Once they read the more detailed study, they almost all returned to their original belief regardless of the evidence provided, pointing to details that supported their viewpoint and disregarding anything contrary. The subjects described studies supporting their pre-existing view as superior to those that contradicted it, in a number of detailed and specific ways.<ref name="lord1979" /><ref name="vyse122">{{Harvnb|Vyse|1997|p=122}}</ref> Writing about a study that seemed to undermine the deterrence effect, a proponent of the death penalty wrote, "The research didn't cover a long enough period of time," while an opponent's comment on the same study said that "no strong evidence to contradict the researchers has been presented."<ref name="lord1979" /> The results illustrated that people set higher standards of evidence for hypotheses that go against their current expectations. This effect, known as ''disconfirmation bias'', has been supported by other experiments.<ref name="taber_political" />

Another study of biased interpretation took place during the [[United States presidential election, 2004|2004 US presidential election]], and involved subjects who described themselves as having strong emotions about the candidates.<ref name="westen2006">{{cite journal|last=Westen|first=Drew|coauthors=Pavel S. Blagov, Keith Harenski, Clint Kilts, Stephan Hamann|date=2006|title=Neural Bases of Motivated Reasoning: An fMRI Study of Emotional Constraints on Partisan Political Judgment in the 2004 U.S. Presidential Election|journal=Journal of Cognitive Neuroscience|publisher=Massachusetts Institute of Technology|volume=18|issue=11|pages=1947–1958|url=http://psychsystems.net/lab/06_Westen_fmri.pdf|accessdate=2009-08-14|doi=10.1162/jocn.2006.18.11.1947|pmid=17069484}}</ref> They were shown apparently contradictory pairs of statements, either from the Republican candidate [[George W. Bush]], the Democratic candidate [[John Kerry]] or from a politically neutral public figure such as [[Tom Hanks]]. They were also given further statements that made the apparent contradiction seem reasonable. From these three pieces of information, they had to decide whether or not the target individual's statements were inconsistent. There were strong differences in these evaluations, with subjects much more likely to interpret their opposing candidate as contradictory.

In this experiment, the subjects made their judgements while in a [[functional magnetic resonance imaging]] (fMRI) scanner, allowing the researchers to monitor their brain activity.<ref name="westen2006" /> As subjects evaluated contradictory statements by their favored candidate, centres of the brain involved in [[emotion]], including the [[orbitofrontal cortex]] and the [[anterior cingulate cortex]], were aroused, while the [[prefrontal cortex|dorsolateral prefrontal cortex]], associated with reasoning, was not. This pattern did not appear with the other targets. The experimenters interpreted this as showing that the differences in evaluation of the statements were not due to passive reasoning errors, but an active strategy by the subjects to reduce the [[cognitive dissonance]] of being confronted by their favored candidate's irrational or hypocritical behavior.

Biased interpretation is not restricted to emotionally significant topics. In another experiment, subjects were told a story about a theft. They had to rate the evidential importance of statements arguing either for or against a particular character being responsible. When they hypothesized that character's guilt, they rated statements supporting that hypothesis as more important than conflicting statements.<ref>{{cite journal|last=Gadenne|first=V.|coauthors=M. Oswald|date=1986|title=Entstehung und Veränderung von Bestätigungstendenzen beim Testen von Hypothesen [Formation and alteration of confirmatory tendencies during the testing of hypotheses]|journal=Zeitschrift für experimentelle und angewandte Psychologie|volume=33|pages=360–374}} via {{Harvnb|Oswald|Grosjean|2004|p=89}}</ref>
===Biased memory===
Even if someone has sought and interpreted evidence in a neutral manner, they may still remember it selectively to reinforce their expectations. This effect is called ''selective recall'', ''confirmatory memory'' or ''access-biased memory''.<ref>{{cite book|last=Hastie|first=Reid|coauthors=Bernadette Park|chapter=The Relationship Between Memory and Judgment Depends on Whether the Judgment Task is Memory-Based or On-Line|title=Social cognition: key readings|editor=David L. Hamilton|publisher=Psychology Press|location=New York|date=2005|page=394|isbn=0863775918}}</ref>
Existing psychological theories make conflicting predictions about selective recall. [[Schema (psychology)|Schema theory]] predicts that information matching prior expectations will be more easily stored and recalled.<ref name="oswald" /> Some alternative approaches say that surprising information stands out more and so is more memorable.<ref name="oswald" /> Predictions from both these theories have been confirmed in different experimental contexts, with no theory winning outright.<ref>{{cite journal|last=Stangor|first=Charles|coauthors=David McMillan|date=1992|title=Memory for expectancy-congruent and expectancy-incongruent information: A review of the social and social developmental literatures|journal=Psychological Bulletin|publisher=American Psychological Association|volume=111|issue=1|pages=42–61|doi=10.1037/0033-2909.111.1.42}}</ref>
In one study, subjects read a description of a woman, including both [[Extraversion and introversion|introverted and extraverted]] behaviors.<ref name="snydercantor" /> Then they had to recall examples of her introversion and extraversion. One group were told this was to assess the woman for a job as a librarian, while a second group were told it was for a job in real estate sales. There was a significant difference between what these two groups recalled, with the "librarian" group recalling more examples of introversion and the "sales" groups recalling more extroverted behavior.<ref name="snydercantor">{{cite journal|last=Snyder|first=M.|coauthors=N. Cantor|date=1979|title=Testing hypotheses about other people: the use of historical knowledge|journal=Journal of Experimental Social Psychology|volume=15|pages=330–342|doi=10.1016/0022-1031(79)90042-8}} via {{cite book|last=Goldacre|first=Ben|title=Bad Science|publisher=Fourth Estate|location=London|date=2008|page=231|isbn=9780007240197}}</ref>
A selective memory effect has also been shown in several experiments that manipulate the desirability of personality types.<ref>{{Harvnb|Kunda|1999|pp=225–232}}</ref><ref name="oswald" /> In one of these, a group of subjects were shown evidence that extraverted people are more successful than introverts. Another group were told the opposite. In a subsequent, apparently unrelated, study, they were asked to recall events from their lives in which they had been either introverted or extraverted. Each group of subjects provided more memories connecting themselves with the more desirable personality type, and recalled those memories more quickly.<ref>{{cite journal|last=Sanitioso|first=Rasyid|coauthors=Ziva Kunda, G. T. Fong|date=1990|title=Motivated recruitment of autobiographical memories|journal=Journal of Personality and Social Psychology|publisher=American Psychological Association|issn=0022-3514|volume=59|issue=2|pages=229–241|doi=10.1037/0022-3514.59.2.229|pmid=2213492}}</ref>
One study showed how selective memory can maintain belief in [[extrasensory perception]] (ESP).<ref name="russell_jones">{{cite journal|last=Russell|first=Dan|coauthors=Warren H. Jones|date=1980|title=When superstition fails: Reactions to disconfirmation of paranormal beliefs|journal=Personality and Social Psychology Bulletin|publisher=Society for Personality and Social Psychology|volume=6|issue=1|pages=83–88|issn=1552-7433|doi=10.1177/014616728061012}} via {{Harvnb|Vyse|1997|p=121}}</ref> Believers and disbelievers were each shown descriptions of ESP experiments. Half of each group were told that the experimental results supported the existence of ESP, while the others were told they did not. In a subsequent test, subjects recalled the material accurately, apart from believers who had read the non-supportive evidence. This group remembered significantly less information and some of them incorrectly remembered the results as supporting ESP.<ref name="russell_jones" />
==Related effects==
===Polarization of opinion===
{{main|Attitude polarization}}
When people with strongly opposing views interpret new information in a biased way, their views can move even further apart. This is called ''attitude polarization''.<ref name="kuhn_lao" /> One demonstration of this effect involved a series of colored balls being drawn from a "bingo basket". Subjects were told that the basket either contained 60% black and 40% red balls or 40% black and 60% red: their task was to decide which. When one of each color were drawn in succession, subjects usually became more confident in their hypotheses, even though those two observations give no evidence either way. This only happened when the subjects had to commit to their hypotheses, by stating them out loud after each draw.<ref>{{Harvnb|Baron|2000|p=201}}</ref>
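
The claim that one ball of each color gives no evidence either way can be verified with [[Bayes' theorem]]. The following sketch is an illustration rather than part of the original study; it uses the proportions given above and a neutral prior of 0.5.

<source lang="python">
# Illustrative check, not part of the original experiment.
# Hypothesis A: 60% black, 40% red.  Hypothesis B: 40% black, 60% red.
def posterior_a(prior_a, draws):
    """P(A | draws) by Bayes' rule."""
    like_a = like_b = 1.0
    for ball in draws:
        like_a *= 0.6 if ball == "black" else 0.4
        like_b *= 0.4 if ball == "black" else 0.6
    return prior_a * like_a / (prior_a * like_a + (1 - prior_a) * like_b)

print(posterior_a(0.5, ["black", "red"]))   # 0.5 -- the two draws cancel out exactly
</source>
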
A less abstract study was Lord, Ross and Lepper's experiment in which subjects with strong opinions about the death penalty read about experimental evidence. Twenty-three percent of the subjects reported that their views had become more extreme, and this self-reported shift [[correlation|correlated]] strongly with their initial attitudes.<ref name="lord1979">{{cite journal|last=Lord|first=Charles G.|coauthors=Lee Ross, Mark R. Lepper|date=1979|title=Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence|journal=Journal of Personality and Social Psychology|publisher=American Psychological Association|volume=37|issue=11|pages=2098–2109|issn=0022-3514|doi=10.1037/0022-3514.37.11.2098}}</ref> In several later experiments, subjects also reported their opinions becoming more extreme in response to ambiguous information. However, comparisons of their attitudes before and after the new evidence showed no significant change, suggesting that the self-reported changes might not be real.<ref>{{cite journal|last=Miller|first=A. G.|coauthors=J. W. McHoskey, C. M. Bane, T. G. Dowd|date=1993|title=The attitude polarization phenomenon: Role of response measure, attitude extremity, and behavioral consequences of reported attitude change |journal=Journal of Personality and Social Psychology|volume=64|pages=561–574|doi=10.1037/0022-3514.64.4.561}}</ref><ref name="taber_political">{{cite journal|last=Taber|first=Charles S.|coauthors=Milton Lodge|date=July 2006|title=Motivated Skepticism in the Evaluation of Political Beliefs|journal=American Journal of Political Science|publisher=Midwest Political Science Association|volume=50|issue=3|pages=755–769|issn=0092-5853|doi=10.1111/j.1540-5907.2006.00214.x}}</ref><ref name="kuhn_lao">{{cite journal|last=Kuhn|first=Deanna|coauthors=Joseph Lao|date=March 1996|title=Effects of Evidence on Attitudes: Is Polarization the Norm?|journal=Psychological Science|publisher=American Psychological Society|volume=7|issue=2|pages=115–120|doi=10.1111/j.1467-9280.1996.tb00340.x}}</ref> Based on these experiments, Deanna Kuhn and Joseph Lao concluded that polarization is a real phenomenon but far from inevitable, only happening in a small minority of cases. They found that it was prompted not only by considering mixed evidence, but by merely thinking about the topic.<ref name="kuhn_lao" />

Charles Taber and Milton Lodge argued that the Lord, Ross and Lepper result had been hard to replicate because the arguments used in later experiments were too abstract or confusing to evoke an emotional response. Their study used the emotionally-charged topics of [[gun control]] and [[affirmative action]].<ref name="taber_political" /> They measured the attitudes of their subjects towards these issues before and after reading arguments on each side of the debate. Two groups of subjects showed attitude polarization: those with strong prior opinions and those who were politically knowledgeable. In part of this study, subjects chose which information sources to read, from a list prepared by the experimenters. For example, they could read the [[National Rifle Association]]'s and the [[Brady Campaign|Brady Anti-Handgun Coalition]]'s arguments on gun control. Even when instructed to be even-handed, subjects were more likely to read arguments that supported their existing attitudes. This biased search for information correlated well with the polarization effect.<ref name="taber_political" />

===Persistence of discredited beliefs===
{{Quote box
| quote = [B]eliefs can survive potent logical or empirical challenges. They can survive and even be bolstered by evidence that most uncommitted observers would agree logically demands some weakening of such beliefs. They can even survive the total destruction of their original evidential bases.
| source = Lee Ross and Craig Anderson (1982). p.149<ref name="shortcomings" />
| width = 30%
| align = right
}}
Confirmation biases can be used to explain why some beliefs remain when the initial evidence for them is removed.<ref name="shortcomings" /> This ''belief perseverance'' effect has been shown by a series of experiments using what is called the ''debriefing paradigm'': subjects examine faked evidence for a hypothesis, their attitude change is measured, then they learn that the evidence was fictitious. Their attitudes are then measured once more to see if their belief returns to its previous level.<ref name="shortcomings" />
A typical finding is that at least some of the initial belief remains even after a full debrief.<ref name="kunda99">{{Harvnb|Kunda|1999|p=99}}</ref> In one experiment, subjects had to distinguish between real and fake suicide notes. They were given feedback at random, some being told they had done well on this task and some being told they were bad at it. Even after being fully debriefed, subjects were still influenced by the feedback. They still thought they were better or worse than average at that kind of task, depending on what they had initially been told.<ref>{{cite journal|last=Ross|first= Lee|coauthors=Mark R. Lepper, Michael Hubbard |title=Perseverance in self-perception and social perception: Biased attributional processes in the debriefing paradigm|journal= Journal of Personality and Social Psychology|volume=32|publisher=American Psychological Association |issn=0022-3514|pages= 880–892|issue=5|date=1975|doi=10.1037/0022-3514.32.5.880|pmid=1185517}} via {{Harvnb|Kunda|1999|p=99}}</ref>

In another study, subjects read job performance ratings of two firefighters, along with their responses to a [[risk aversion]] test.<ref name="shortcomings" /> These fictional data were arranged to show either a negative or positive [[correlation|association]] between risk-taking attitudes and job success.<ref name="socialperseverance" /> Even if these case studies had been true, they would have been scientifically poor evidence. However, the subjects found them subjectively persuasive.<ref name="socialperseverance">{{cite journal|title=Perseverance of Social Theories: The Role of Explanation in the Persistence of Discredited Information|first=Craig A.|last=Anderson |coauthors=Mark R. Lepper, Lee Ross|journal=Journal of Personality and Social Psychology|date=1980 |volume= 39 |issue=6|pages=1037–1049|publisher=American Psychological Association|issn=0022-3514|doi=10.1037/h0077720}}</ref> When the case studies were shown to be fictional, subjects' belief in a link diminished, but around half of the original effect remained.<ref name="shortcomings" /> The researchers conducted follow-up interviews to make sure the subjects had understood the debriefing and taken it seriously. Subjects seemed to trust the debriefing, but regarded the discredited information as irrelevant to their personal belief.<ref name="socialperseverance" />

===Preference for early information===
Many psychological experiments have found that information is weighted more strongly when it appears early in a series, even when the order is evidentially unimportant. For example, people form a more positive impression of someone described as, "intelligent, industrious, impulsive, critical, stubborn, envious," than when they are given the same words in reverse order.<ref name="baron197">{{Harvnb|Baron|2000|pp=197–200}}</ref> This ''irrational primacy effect'' is independent of the [[serial position effect|primacy effect in memory]] in which the earlier items in a series leave a stronger memory trace.<ref name="baron197" /> Biased interpretation offers an explanation for this effect: seeing the initial evidence, people form a working hypothesis that affects how they interpret the rest of the information.<ref name="nickerson" />
One demonstration of irrational primacy involved colored chips supposedly drawn from two urns. Subjects were told the color distributions of the urns, and had to estimate the probability of a chip being drawn from one of them.<ref name="baron197" /> In fact, the colors appeared in a pre-arranged order. The first thirty draws favored one urn and the next thirty favored the other.<ref name="nickerson" /> The series as a whole was neutral, so rationally, the two urns were equally likely. However, after sixty draws, subjects favored the urn suggested by the initial thirty.<ref name="baron197" /> Another experiment displayed a slide show of a single object, starting with just a blur and showing slightly better focus each time.<ref name="baron197" /> At each stage, subjects had to state their best guess of what the object was. Subjects whose early guesses were wrong persisted with those guesses, even when the pictures were so in focus that other people could clearly see what the objects were.<ref name="nickerson" />
===Illusory association between events===
{{main|Illusory correlation}}
''[[Illusory correlation]]'' is the tendency to see non-existent [[correlation]]s, in a set of data, that fit one's preconceptions.<ref name="fine">{{Harvnb|Fine|2006|pp=66–70}}</ref> This phenomenon was first demonstrated in a 1969 experiment involving the [[Rorschach inkblot test]]. The subjects in the experiment read a set of case studies, and reported that the [[homosexual]] men in the set were more likely to report seeing buttocks or anuses in the ambiguous figures. In fact the case studies were fictional and, in one version of the experiment, had been constructed so that the homosexual men were ''less'' likely to report such imagery.<ref name="fine" /> Another study recorded the symptoms experienced by [[arthritis|arthritic]] patients, along with weather conditions over a fifteen month period. Nearly all the patients reported that their pains were correlated with weather conditions, although the real correlation was zero.<ref>{{cite journal|last=Redelmeir|first=D. A.|coauthors=Amos Tversky|date=1996|title=On the belief that arthritis pain is related to the weather|journal=Proceedings of the National Academy of Science|volume=93|pages=2895–2896|doi=10.1073/pnas.93.7.2895}} via {{Harvnb|Kunda|1999|p=127}}</ref>
This effect is a kind of biased interpretation, in that objectively neutral or unfavorable evidence is interpreted to support existing beliefs. It is also related to biases in hypothesis-testing behavior.<ref name="kunda127">{{Harvnb|Kunda|1999|pp=127–130}}</ref> In judging whether two events (such as illness and bad weather) are correlated, people rely heavily on the number of ''positive-positive'' cases (in this example, instances of both pain and bad weather). They pay relatively little attention to the other kinds of observation (of no pain and/or good weather).<ref>{{cite book|last=Plous|first=Scott|title=The Psychology of Judgment and Decision Making|publisher=McGraw-Hill|date=1993|pages=162–164|isbn=9780070504776}}</ref> This parallels the reliance on positive tests in hypothesis testing.<ref name="kunda127" /> It may also reflect selective recall, in that people may have a sense that two events are correlated because it is easier to recall times when they happened together.<ref name="kunda127" />

{| class="wikitable" border="1" cellpadding="4" style="width:250px;text-align:center;margin: 1em auto 1em auto"
|+ Example
|-
! Days !! Rain !! No rain
|-
! Arthritis
| 14 || 6
|-
! No arthritis
| 7 || 2
|}

In the above fictional example, there is actually a slightly negative correlation between rain and arthritis symptoms, considering all four cells of the table. However, people are likely to focus on the relatively large number of ''positive-positive'' cases in the top-left cell (days with both rain and arthritic symptoms), and think they see a positive association.<ref>Adapted from {{Harvnb|Oswald|Grosjean|2004|p=103}}</ref>
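
The negative association can be checked directly from the four cells of the table. The following sketch is an illustration added for this table (it is not part of the cited source); it computes the conditional probabilities and the [[phi coefficient]].

<source lang="python">
# Illustrative check of the 2x2 table above.
from math import sqrt

rain_pain, dry_pain = 14, 6      # "Arthritis" row
rain_ok, dry_ok = 7, 2           # "No arthritis" row

print(rain_pain / (rain_pain + rain_ok))   # P(pain | rain)    = 14/21 ~ 0.67
print(dry_pain / (dry_pain + dry_ok))      # P(pain | no rain) = 6/8   = 0.75

phi = ((rain_pain * dry_ok - dry_pain * rain_ok) /
       sqrt((rain_pain + dry_pain) * (rain_ok + dry_ok) *
            (rain_pain + rain_ok) * (dry_pain + dry_ok)))
print(phi)                                 # ~ -0.08: a slightly negative correlation
</source>
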
   
==History==
[[Image:Francis Bacon.jpg|thumb|right|alt=Engraved head-and-shoulders portrait of Francis Bacon wearing a hat and ruff. |[[Francis Bacon]] wrote that biased assessment of evidence drove "all superstitions, whether in astrology, dreams, omens, divine judgments or the like."<ref name="bacon">Bacon, Francis (1620). ''Novum Organum''. reprinted in {{cite book|title=The English philosophers from Bacon to Mill|editor=E. A. Burtt|publisher=Random House|location=New York|date=1939|page=36}} via {{cite journal|last=Nickerson|first=Raymond S.|date=1998|title=Confirmation Bias; A Ubiquitous Phenomenon in Many Guises|journal=Review of General Psychology|publisher=Educational Publishing Foundation|volume=2|issue=2|pages=175–220|issn=1089-2680|doi=10.1037/1089-2680.2.2.175}}</ref>]]
   
===Informal observation===
Prior to the psychological research on confirmation bias, the phenomenon had been observed anecdotally by writers including [[Thucydides]] (c. 460 BC – c. 395 BC), [[Francis Bacon]] (1561-1626)<ref name="baron195">{{Harvnb|Baron|2000|pp=195–196}}</ref> and [[Leo Tolstoy]] (1828-1910).
   
Thucydides, in the ''[[History of the Peloponnesian War]]'', wrote:
 
   
{{quote|...it is a habit of mankind (...) to use sovereign reason to thrust aside what they do not fancy.<ref>Thucydides, Richard Crawley (trans) ''The History of the Peloponnesian War'' http://classics.mit.edu/Thucydides/pelopwar.mb.txt</ref>}}
==Political bias study==
 
In January 2006, [[Drew Westen]] and a team from [[Emory University]] announced at the annual Society for Personality and Social Psychology conference in Palm Springs, California the results of a study<ref>{{cite journal | last = Westen | first = Drew | authorlink = Drew Westen | coauthors = Kilts, C., Blagov, P., Harenski, K., and Hamann, S. | title = The neural basis of motivated reasoning: An fMRI study of emotional constraints on political judgment during the U.S. Presidential election of 2004. | journal = Journal of Cognitive Neuroscience. | volume = | issue = | pages = | date = 2006 | url = http://www.psychsystems.net/lab/type4.cfm?id=400&section=4&source=200&source2=1}}</ref> showing the brain activity for confirmation bias. Their results suggest the unconscious and emotion driven nature of this form of bias.
 
   
Bacon, in the ''[[Novum Organum]]'', wrote:
 
   
{{quote|The human understanding when it has once adopted an opinion (...)<!--"(either as being the received opinion or as being agreeable to itself)" omitted for space--> draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects or despises, or else by some distinction sets aside or rejects[.]<ref name="bacon" />}}

The alternative name "Tolstoy syndrome" comes from a passage by Leo Tolstoy:

{{quote|I know that most men, including those at ease with problems of the greatest complexity, can seldom accept the simplest and most obvious truth if it be such as would oblige them to admit the falsity of conclusions which they have proudly taught to others, and which they have woven, thread by thread, into the fabrics of their life.<ref>{{cite web|title=Glossary|work=Ask Dr. Stoll|url=http://askwaltstollmd.com/wwwboard/glossary/t.html|accessdate=2006-04-13}}</ref>}}

In a related passage, Tolstoy wrote:

{{quote|The most difficult subjects can be explained to the most slow-witted man if he has not formed any idea of them already; but the simplest thing cannot be made clear to the most intelligent man if he is firmly persuaded that he knows already, without a shadow of doubt, what is laid before him.}}

===Wason's research on hypothesis-testing===
The first paper to use the term "confirmation bias" was [[Peter Cathcart Wason|Peter Wason]]'s (1960) report of his rule-discovery experiment.<ref name="oswald" /> He challenged subjects to identify a rule applying to [[Tuple|triple]]s of numbers, starting from the information that (2,4,6) fits the rule. Subjects could generate their own triples and the experimenter told them whether or not each triple conformed to the rule.<ref name="wason1960" />
 
   
While the actual rule was simply "any ascending sequence", the subjects had a great deal of difficulty in arriving at it, often announcing rules that were far more specific, such as "the middle number is the average of the first and last".<ref name="wason1960">{{cite journal|last=Wason |first=Peter C. |date=1960 |title=On the failure to eliminate hypotheses in a conceptual task |journal=Quarterly Journal of Experimental Psychology |volume=12 |publisher=Psychology Press|pages=129–140}}</ref> The subjects seemed to test only positive examples—triples that obeyed their hypothesised rule. For example, if they thought the rule was, "Each number is two greater than its predecessor," they would offer a triple that fit this rule, such as (11,13,15) rather than a triple that violates it, such as (11,12,19).
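
The failure of positive testing in this task can be made concrete with a short simulation. The sketch below is an illustration of the logic described above, not Wason's procedure: the hidden rule is "any ascending sequence" and the subject's working hypothesis is "each number is two greater than its predecessor". Because every triple that fits the hypothesis also fits the hidden rule, positive tests can never falsify the hypothesis, whereas a triple the hypothesis rejects can.

<source lang="python">
# Illustration of positive versus negative testing in the 2-4-6 task (not Wason's procedure).
import random

true_rule = lambda t: t[0] < t[1] < t[2]                      # hidden rule: any ascending sequence
hypothesis = lambda t: t[1] == t[0] + 2 and t[2] == t[1] + 2  # subject's guess: steps of two

def falsified(triple):
    """The hypothesis is falsified when it disagrees with the experimenter's feedback."""
    return hypothesis(triple) != true_rule(triple)

# Positive tests: only propose triples the hypothesis accepts.
positive_tests = [(n, n + 2, n + 4) for n in random.sample(range(1, 100), 20)]
print(any(falsified(t) for t in positive_tests))   # False: the hypothesis survives every positive test

# A single triple that the hypothesis rejects exposes the mismatch.
print(falsified((11, 12, 19)))                     # True: the feedback is "fits", so the hypothesis is wrong
</source>
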
 
   
The ''normative'' theory (of how people ''ought'' to test hypotheses) used by Wason was [[falsificationism]], according to which a scientific test of a theory is a serious attempt to falsify it. Wason interpreted his results as showing a preference for confirmation over falsification, hence the term "confirmation bias".<ref name="oswald">{{Harvnb|Oswald|Grosjean|2004|pp=79–96}}</ref> He also used confirmation bias to explain the results of his [[Wason selection task|selection task]] experiment.<ref>{{cite journal|last=Wason |first= Peter C. |date=1968 |title=Reasoning about a rule |journal=Quarterly Journal of Experimental Psychology |publisher=Psychology Press|volume=20 |pages=273–281}}</ref> In this task, subjects are given partial information about a set of objects, and have to specify what further information they would need to tell whether or not a [[Material conditional|conditional rule]] ("If A, then B") applies. It has been found repeatedly that people perform badly on various forms of this test, in most cases ignoring information that could potentially refute the rule.<ref name="sutherland" /><ref>{{cite book|last=Barkow|first=Jerome H.|coauthors=Leda Cosmides, John Tooby|title=The adapted mind: evolutionary psychology and the generation of culture|publisher=Oxford University Press US|date=1995|pages=181–184|isbn=9780195101072}}</ref>
 
   
===Klayman and Ha's critique===
 
A 1987 paper by Klayman and Ha showed that the Wason experiments had demonstrated a ''positive test strategy'' rather than a true confirmation bias.<ref name="oswald" /> A positive test strategy is an example of a [[heuristic]]: a reasoning short-cut that is imperfect but easy to compute. Klayman and Ha used [[Bayesian probability]] and [[information theory]] as their normative standard of hypothesis-testing, rather than the falsificationism used by Wason. According to these ideas, scientific tests of a hypothesis aim to maximise the expected information content. This in turn depends on the initial probabilities of the hypotheses, so a positive test can either be highly informative or uninformative, depending on the likelihood of the different possible outcomes. Klayman and Ha argued that in most real situations, targets are specific and have a small initial probability. In this case, positive tests are usually more informative than negative tests.<ref name="klaymanha">{{cite journal|last=Klayman|first=Joshua|coauthors=Young-Won Ha|date=1987|title=Confirmation, Disconfirmation and Information in Hypothesis Testing|journal=Psychological Review|publisher=American Psychological Association|volume=94|issue=2|pages=211–228|issn=0033-295X|url=http://www.stats.org.uk/statistical-inference/KlaymanHa1987.pdf|accessdate=2009-08-14|doi=10.1037/0033-295X.94.2.211}}</ref> However, in Wason's rule discovery task the target rule was very broad, so positive tests are unlikely to yield informative answers. This interpretation was supported by a similar experiment that used the labels "DAX" and "MED" in place of "fits the rule" and "doesn't fit the rule". Subjects in this version of the experiment were much more successful at finding the correct rule.<ref>{{cite journal|last=Tweney|first=Ryan D.|coauthors=Michael E. Doherty, Winifred J. Worner, Daniel B. Pliske, Clifford R. Mynatt, Kimberly A. Gross, Daniel L. Arkkelin|date=1980|title=Strategies of rule discovery in an inference task|journal=The Quarterly Journal of Experimental Psychology|publisher=Psychology Press|volume=32|issue=1|pages=109–123|doi=10.1080/00335558008248237}} (Experiment IV)</ref><ref name="lewicka" />

{|
|-valign="top"
| [[Image:Klayman Ha1.svg|thumb|alt=Within the universe of all possible triples, those that fit the true rule are shown schematically as a circle. The hypothesised rule is a smaller circle enclosed within it. |If the true rule (T) encompasses the current hypothesis (H), then ''positive tests'' (examining an H to see if it is T) will not show that the hypothesis is false.]]
| [[Image:Klayman Ha2.svg|thumb|alt=Two overlapping circles represent the true rule and the hypothesized rule. Any observation falling in the non-overlapping parts of the circles shows that the two rules are not exactly the same. In other words, those observations falsify the hypothesis.|If the true rule (T) ''overlaps'' the current hypothesis (H), then either a negative test or a positive test can potentially falsify H.]]
| [[Image:Klayman ha3 annotations.svg|thumb|alt=The triples fitting the hypothesis are represented as a circle within the universe of all triples. The true rule is a smaller circle within this.|When the working hypothesis (H) includes the true rule (T) then positive tests are the ''only'' way to falsify H.]]
|}
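
The set relationships in the diagrams above can also be illustrated with a small sketch. The sets of integers below are arbitrary stand-ins for the space of all possible triples; they are not taken from Klayman and Ha's paper, and only the subset relationships matter.

<source lang="python">
# Arbitrary stand-in sets; only the subset relationships matter.
universe = set(range(1, 41))

def can_falsify(hypothesis, true_rule, test_cases):
    """A test case falsifies the hypothesis when it disagrees with the true rule about that case."""
    return any((x in hypothesis) != (x in true_rule) for x in test_cases)

# First diagram: the true rule encompasses the hypothesis (H inside T).
T = {x for x in universe if x % 2 == 0}   # "true rule": multiples of 2
H = {x for x in universe if x % 4 == 0}   # hypothesis: multiples of 4, a subset of T
print(can_falsify(H, T, H))               # False: positive tests (members of H) never falsify
print(can_falsify(H, T, universe - H))    # True: some negative tests do

# Third diagram: the hypothesis includes the true rule (T inside H).
T, H = H, T
print(can_falsify(H, T, H))               # True: only positive tests can falsify...
print(can_falsify(H, T, universe - H))    # False: ...negative tests never do
</source>
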
In light of this and other critiques, the focus of research moved away from confirmation versus falsification to examine whether people test hypotheses in an informative way, or an uninformative but positive way. The search for "true" confirmation bias led psychologists to look at a wider range of effects in how people process information.<ref name="oswald" />
   
==Explanations==
Confirmation biases are generally explained in terms of motivation, cognitive (information-processing) errors, or both. [[Ziva Kunda]] argued that these two factors work together, with motivation creating the bias and cognitive factors determining the size of the effect.<ref name="nickerson" />
   
Motivational explanations involve an effect of [[Desire (emotion)|desire]] on [[belief]], sometimes called ''[[wishful thinking]]''.<ref name="nickerson" /> It is known that people prefer pleasant thoughts over unpleasant ones in a number of ways: this is called the ''[[Pollyanna principle]]''.<ref>{{cite book|last=Matlin|first=Margaret W.|title=Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory|editor=Rüdiger F. Pohl|publisher=Psychology Press|location=Hove|date=2004|pages=255–272|chapter=Pollyanna Principle|isbn=9781841693514}}</ref> Applied to [[argument]]s or sources of [[evidence]], this could explain why desired conclusions are more likely to be believed true.<ref name="nickerson" /> According to experiments that manipulate the desirability of the conclusion, people apply a high evidential standard ("Must I believe this?") to unpalatable ideas and a low standard ("Can I believe this?") to preferred ideas.<ref>{{cite journal|last=Dawson|first=Erica |coauthors=Thomas Gilovich, Dennis T. Regan|date=October 2002|title=Motivated Reasoning and Performance on the Wason Selection Task|journal=Personality and Social Psychology Bulletin|publisher=Society for Personality and Social Psychology|volume=28|issue=10|pages=1379–1387|url=http://comp9.psych.cornell.edu/sec/pubPeople/tdg1/Dawson.Gilo.Regan.pdf|accessdate=2009-09-30|doi=10.1177/014616702236869}}</ref><ref>{{cite journal|last=Ditto|first=Peter H.|coauthors=David F. Lopez|date=1992|title=Motivated skepticism : use of differential decision criteria for preferred and nonpreferred conclusions|journal=Journal of personality and social psychology|publisher=American Psychological Association|volume=63|issue=4|pages=568–584|issn=0022-3514|doi=10.1037/0022-3514.63.4.568}}</ref> Although [[consistency]] is a desirable feature of attitudes, an excessive drive for consistency is another potential source of bias because it may prevent people from neutrally evaluating new, surprising information.<ref name="nickerson" />
 
   
Trope and Liberman use [[cost-benefit analysis]] to explain the motivational effect. Their theory assumes that people unconsciously weigh the costs of different kinds of error. For instance, someone who underestimates a friend's honesty might treat them suspiciously and so undermine the friendship. Overestimating the friend's honesty may also be costly, but less so. In this case, it would be rational to seek, evaluate or remember evidence of their honesty in a biased way.<ref>{{cite book|last=Trope|first=Y.|coauthors=A. Liberman|title=Social Psychology: Handbook of basic principles|editor=E. Tory Higgins, Arie W. Kruglanski|publisher=Guilford Press|location=New York|date=1996|chapter=Social hypothesis testing: cognitive and motivational mechanisms|isbn=9781572301009}} via {{Harvnb|Oswald|Grosjean|2004|pp=91–93}}</ref>
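
The idea can be illustrated with a small expected-cost calculation. The numbers below are invented for illustration and are not from Trope and Liberman: wrongly distrusting an honest friend is assumed to cost five times as much as wrongly trusting a dishonest one, so acting on the favorable belief minimises expected cost except when the probability of honesty is very low.

<source lang="python">
# Illustrative numbers only: the asymmetry between the two costs is the point.
COST_WRONGLY_DISTRUST = 10.0   # treating an honest friend with suspicion
COST_WRONGLY_TRUST = 2.0       # trusting a friend who is in fact dishonest

def expected_cost(treat_as_honest, p_honest):
    if treat_as_honest:
        return (1 - p_honest) * COST_WRONGLY_TRUST
    return p_honest * COST_WRONGLY_DISTRUST

for p in (0.1, 0.3, 0.7):
    best = min((True, False), key=lambda a: expected_cost(a, p))
    print(p, "treat as honest" if best else "treat with suspicion")
# 0.1 -> treat with suspicion; 0.3 -> treat as honest; 0.7 -> treat as honest
</source>
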
 
   
The information-processing explanations are based on limitations in people's ability to handle complex tasks, and the ''heuristics'' (information-processing shortcuts) that they use. For example, judgments of the reliability of evidence may be based on the ''[[availability heuristic]]'' (how readily a particular idea comes to mind). Another possibility is that people can only focus on one thought at a time, so find it difficult to test alternative hypotheses in parallel.<ref name="nickerson" /> Another heuristic is the ''positive test strategy'' identified by Klayman and Ha, according to which people test a hypothesis by examining cases when they expect a property or event to occur.<ref name="klaymanha" /> By using this heuristic, people avoid the difficult or impossible task of evaluating the informativeness of each possible question. However, the strategy is not universally reliable, so people can overlook challenges to their existing beliefs.
 
   
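The consequences of the positive test strategy can be illustrated with a short simulation (a minimal sketch; the rules, numbers and function names are chosen for this illustration and are not taken from Klayman and Ha or Wason). It uses a Wason-style rule-discovery task in which the true rule is simply "any ascending triple", while the working hypothesis is the narrower rule "each number is two greater than its predecessor": tests chosen because they fit the hypothesis can never reveal that it is wrong, whereas a single negative test can.

<syntaxhighlight lang="python">
import random

random.seed(1)

def true_rule(triple):
    """The actual rule: the three numbers simply ascend."""
    a, b, c = triple
    return a < b < c

def hypothesis(triple):
    """The working hypothesis: each number is two greater than its predecessor."""
    a, b, c = triple
    return b == a + 2 and c == b + 2

def falsified_by(tests):
    """A test falsifies the hypothesis when its predicted feedback differs from the actual feedback."""
    return any(hypothesis(t) != true_rule(t) for t in tests)

# Positive tests: triples chosen because they fit the hypothesis.
positive_tests = [(n, n + 2, n + 4) for n in random.sample(range(1, 100), 20)]

# Negative tests: ascending triples that violate the hypothesis.
negative_tests = [(n, n + 1, n + 5) for n in random.sample(range(1, 100), 20)]

print(falsified_by(positive_tests))  # False: positive tests never expose the error
print(falsified_by(negative_tests))  # True: one negative test is enough to reveal it
</syntaxhighlight>

Because the narrow hypothesis is a special case of the true rule, every positive test returns exactly the answer the hypothesis predicts, so a purely positive strategy leaves the mistaken rule intact.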
==Consequences==
===In physical and mental health===
Raymond Nickerson blames confirmation bias for the ineffective [[Medicine|medical procedures]] that were continued for centuries before the [[History of medicine|arrival of scientific medicine]].<ref name="nickerson" /> Medical authorities focused on positive instances (treatments followed by recovery) rather than looking for alternative explanations, such as that the disease had run its natural course. According to [[Ben Goldacre]], biased assimilation is a factor in the modern appeal of [[alternative medicine]], whose proponents are swayed by positive [[anecdotal evidence]] but treat scientific evidence hyper-critically.<ref>{{cite book|last=Goldacre|first=Ben|title=Bad Science|publisher=Fourth Estate|location=London|date=2008|page=233|isbn=9780007240197}}</ref>
   
[[Aaron T. Beck]] describes the role of this type of bias in depressive patients.<ref>{{cite book|last=Beck |first= Aaron T. |date=1976 |title=Cognitive therapy and the emotional disorders |location=New York |publisher=International Universities Press|isbn=9780823609901|oclc=2330993}}</ref> He argues that depressive patients maintain their depressive state because they fail to recognize information that might make them happier, and only focus on evidence showing that their lives are unfulfilling. According to Beck, an important step in the cognitive treatment of these individuals is to overcome this bias, and to search and recognize information about their lives more impartially. [[Jonathan Baron]] points out that some forms of [[psychopathology]], particularly [[delusion]], are defined by irrational maintenance of a belief.<ref name="baron195" />
 
   
===In politics and law===
[[Image:Witness impeachment.jpg|thumb|right|alt=A woman and a man reading a document in a courtroom|[[Mock trial]]s allow researchers to examine confirmation biases in a realistic setting]]
Nickerson also argues that reasoning in judicial and political contexts is sometimes subconsciously biased, favoring conclusions that judges, juries or governments have already committed to.<ref name="nickerson" /> Since the evidence in a jury trial can be complex, and jurors often form a decision about the outcome early on, it is reasonable to expect an attitude polarization effect. This prediction (that jurors will become more extreme in their views as they see more evidence) has been borne out in experiments with [[mock trial]]s.<ref>{{cite journal|last=Myers|first=D. G.|coauthors=H. Lamm|date=1976|title=The group polarization phenomenon|journal=Psychological Bulletin|volume=83|pages=602–627|doi=10.1037/0033-2909.83.4.602}} via {{cite journal|last=Nickerson|first=Raymond S.|date=1998|title=Confirmation Bias; A Ubiquitous Phenomenon in Many Guises|journal=Review of General Psychology|publisher=Educational Publishing Foundation|volume=2|issue=2|pages=193–194|issn=1089-2680}}</ref><ref name="halpern">{{cite book|last=Halpern|first=Diane F.|title=Critical thinking across the curriculum: a brief edition of thought and knowledge|publisher=Lawrence Erlbaum Associates|date=1987|page=194|isbn=9780805827316}}</ref>
   
Confirmation bias can be a factor in creating or extending [[conflict]]s, from emotionally-charged debates to [[war]]s, because each side may interpret the evidence to suggest that they are in a stronger position and will win.<ref name="baron195" /> On the other hand, confirmation bias can make people ignore or misinterpret the signs of an imminent conflict or other undesirable situation. For example, psychologists [[Stuart Sutherland]] and Thomas Kida have each argued that [[Husband E. Kimmel|US Admiral Husband E. Kimmel]]'s confirmation bias played a role in the success of the [[Attack on Pearl Harbor|Japanese attack on Pearl Harbor]].<ref name="kida" /><ref name="sutherland" />
 
   
A two-decade study of political [[Pundit (expert)|pundits]] by [[Philip E. Tetlock]] found they performed worse than chance when asked to make multiple-choice predictions. Tetlock divided the experts into "foxes" who maintained multiple hypotheses, and "hedgehogs" who were more dogmatic. He blamed the failure of the hedgehogs on confirmation bias; specifically, their inability to make use of new information that contradicted their existing theories.<ref>{{cite book|last=Tetlock|first=Philip E.|title=Expert Political Judgment: How Good Is It? How Can We Know?|publisher=Princeton University Press|location=Princeton, N.J.|date=2005|isbn=9780691123028|oclc=56825108 |pages=125–128}}</ref>
 
   
===In the paranormal===
One factor in the appeal of "readings" by [[psychics]] is that listeners apply a confirmation bias in fitting the psychic's statements to their own lives.<ref name="toolkit">{{cite book|last=Smith|first=Jonathan C.|title=Pseudoscience and Extraordinary Claims of the Paranormal: A Critical Thinker's Toolkit|publisher=John Wiley and Sons|date=2009|pages=149–151|isbn=9781405181228}}</ref> The technique of [[cold reading]] (giving a subjectively impressive reading without any prior information about the target) can be enhanced by making ambiguous statements and by "shotgunning" lots of statements so that the target has more opportunities to find a match.<ref name="toolkit" /> Investigator [[James Randi]] compared the transcript of a reading to the client's report of what the psychic had said, and found that the client showed a strong selective memory for the "hits".<ref>{{cite book|last=Randi|first=James|title=James Randi: psychic investigator|publisher=Boxtree|date=1991|isbn=9781852831448|pages=58–62}}</ref>
 
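The arithmetic behind "shotgunning" is simple (an illustrative calculation; the 10% hit rate and the twenty statements are assumed figures, not taken from the cold-reading literature). If each individual statement has only a one-in-ten chance of being counted as a hit, the probability that at least one of twenty statements hits is already

:<math>1 - (1 - 0.1)^{20} \approx 0.88.</math>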
Nickerson gives numerological [[pyramidology]] (the practice of finding meaning in the proportions of the Egyptian pyramids) as "a striking illustration" of confirmation bias in the real world.<ref name="nickerson">{{cite journal|last=Nickerson|first=Raymond S.|date=1998|title=Confirmation Bias; A Ubiquitous Phenomenon in Many Guises|journal=Review of General Psychology|publisher=Educational Publishing Foundation|volume=2|issue=2|pages=175–220|issn=1089-2680|doi=10.1037/1089-2680.2.2.175}}</ref> There are many different length measurements that can be made of, for example, the [[Great Pyramid of Giza]] and many ways to combine or manipulate them. Hence it is almost inevitable that people who look at these numbers selectively will find superficially impressive correspondences, for example with the dimensions of the Earth.<ref name="nickerson" />
 
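How easily such correspondences arise can be shown with a small simulation (an illustrative sketch only; the "measurements", target constants and 1% tolerance are arbitrary choices for this example rather than actual pyramid data).

<syntaxhighlight lang="python">
import itertools
import random

random.seed(0)

# Twenty arbitrary "measurements" standing in for lengths, heights, diagonals and so on.
measurements = [random.uniform(1.0, 1000.0) for _ in range(20)]

# A few "meaningful" constants to hunt for (pi, e, the golden ratio, days in a year, a lunar month).
targets = [3.14159, 2.71828, 1.61803, 365.242, 29.53]

def close(value, target, tolerance=0.01):
    """True if value falls within 1% of the target."""
    return abs(value - target) <= tolerance * target

hits = []
for a, b in itertools.combinations(measurements, 2):
    for value in (a / b, b / a, a * b, a + b, abs(a - b)):
        for t in targets:
            if close(value, t):
                hits.append((round(value, 3), t))

# Coincidental "matches" appear even though the inputs are random numbers.
print(len(hits), hits[:5])
</syntaxhighlight>

With twenty random numbers and only a handful of ways to combine them, several matches to "meaningful" constants typically appear by chance alone.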
===In scientific procedure===
A distinguishing feature of [[science|scientific thinking]] is the search for falsifying as well as confirming evidence.<ref name="nickerson" /> However, many times in the [[history of science]], scientists have resisted new discoveries by selectively interpreting or ignoring unfavorable data.<ref name="nickerson" />
 
In the context of scientific research, confirmation biases can lead to theories or research programmes persevering in the face of inadequate or even contradictory evidence,<ref name="sutherland">{{cite book|last=Sutherland|first=Stuart|title=Irrationality|edition=2nd|publisher=Pinter and Martin|location=London|date=2007|pages=95–103|isbn=9781905177073}}</ref><ref>{{cite book|last=Proctor|first=Robert W.|coauthors=E. John Capaldi|title=Why science matters: understanding the methods of psychological research|publisher=Wiley-Blackwell|date=2006|page=68|isbn=9781405130493}}</ref> with [[parapsychology]] being particularly affected.<ref>{{cite book |last=Sternberg |first=Robert J. |authorlink= |editor=Robert J. Sternberg, Henry L. Roediger, Diane F. Halpern |title=Critical Thinking in Psychology |year=2007|publisher=Cambridge University Press |isbn=0521608341 |page=292 |chapter=Critical Thinking in Psychology: It really is critical |quote=Some of the worst examples of confirmation bias are in research on parapsychology (...) Arguably, there is a whole field here with no powerful confirming data at all. But people want to believe, and so they find ways to believe.}}</ref> An experimenter's confirmation bias can potentially affect which data are reported. Data that conflict with the experimenter's expectations may be more likely to be discarded as the product of assumed experimental error (the so-called [[filedrawer effect]]). Although this tendency exists, scientific training teaches some ways in which to avoid bias.<ref name="shadish">{{cite book|last=Shadish|first=William R.|title=Critical Thinking in Psychology|editor=Robert J. Sternberg, Henry L. Roediger III, Diane F. Halpern|publisher=Cambridge University Press|date=2007|page=49|chapter=Critical Thinking in Quasi-Experimentation|isbn=9780521608343}}</ref> [[Design of experiments|Experimental designs]] involving [[Randomization#Randomized_experiments|randomization]] and [[double blind trials]], along with the social process of [[peer review]], mitigate the effect of individual scientists' bias.<ref name="shadish" /><ref>{{cite journal|last=Shermer|first=Michael|date=July 2006|title=The Political Brain|journal=Scientific American|url=http://www.scientificamerican.com/article.cfm?id=the-political-brain|accessdate=2009-08-14}}</ref>
 
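The distorting effect of quietly discarding unexpected observations can be demonstrated with a simple simulation (an illustrative sketch only; the zero true effect, the sample sizes and the discard threshold are arbitrary assumptions for this example).

<syntaxhighlight lang="python">
import random
import statistics

random.seed(42)

def run_experiment(n=30):
    """Measurements drawn from a distribution whose true mean effect is zero."""
    return [random.gauss(0.0, 1.0) for _ in range(n)]

def biased_report(data, expected_sign=+1):
    """Discard observations that conflict too strongly with the expected positive effect,
    treating them as presumed experimental error."""
    return [x for x in data if expected_sign * x > -1.0]

unbiased_means = []
biased_means = []
for _ in range(1000):
    data = run_experiment()
    unbiased_means.append(statistics.mean(data))
    biased_means.append(statistics.mean(biased_report(data)))

print(round(statistics.mean(unbiased_means), 3))  # close to the true effect of 0
print(round(statistics.mean(biased_means), 3))    # noticeably positive: a spurious "effect"
</syntaxhighlight>

Even though the true effect is zero, the filtered analysis reports a consistently positive one, which is the kind of error that blinding, pre-specified procedures and peer review are intended to catch.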
==See also==
{{Portalbox
|left=
|boxwidth=200px
|margin=0px
|name1=Psychology
|image1=Psi2.svg
|name2=Thinking
|image2=Leonardo self.jpg
}}
{{multicol}}
* [[Belief bias]]
* [[Choice-supportive bias]]
* [[Doublethink]]
* [[Experimenter's regress]]
* [[Forer effect]]
* [[Hostile media effect]]
* [[Informational listening]]
{{multicol-break}}
* [[List of cognitive biases]]
* [[List of memory biases]]
* [[Observer-expectancy effect]]
* [[Selective exposure theory]]
* [[Selection bias]]
* [[Semmelweis reflex]]
* [[Subjective validation]]
{{multicol-end}}
   
==Notes==
{{reflist|2}}

==References==
* {{cite book|last=Baron |first=Jonathan |year=2000 |title=Thinking and deciding| edition=3rd |location= New York|publisher= Cambridge University Press |isbn=0521650305|oclc=316403966|ref=harv}}
* {{cite book|last=Fine|first=Cordelia|title=A Mind of its Own: how your brain distorts and deceives|publisher=Icon books|location=Cambridge, UK|year=2006|isbn=1840466782|oclc=60668289|ref=harv}}
* {{cite book|last=Kunda|first=Ziva|title=Social Cognition: Making Sense of People|publisher=MIT Press|year=1999|isbn=9780262611435|oclc=40618974|ref=harv}}
* {{cite book|last=Oswald|first=Margit E.|first2=Stefan|last2=Grosjean|title=Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory|editor-last=Pohl|editor-first=Rüdiger F. |publisher=Psychology Press|location=Hove, UK|year=2004|chapter=Confirmation Bias|isbn=9781841693514|oclc=55124398|ref=harv}}
* {{cite book|last=Vyse|first=Stuart A.|title=Believing in magic: The psychology of superstition|year=1997|isbn=0195136349|location=New York |publisher=Oxford University Press|oclc=35025826|ref=harv}}

==Further reading==
* {{cite book|last=Bell|first=Robert|title=Impure Science: fraud, compromise, and political influence in scientific research|publisher=John Wiley & Sons|location=New York|date=1992|isbn=9780471529132|oclc=24913051}}
* {{cite book|last=Westen|first=Drew|title=The political brain: the role of emotion in deciding the fate of the nation|publisher=PublicAffairs|date=2007|isbn=9781586484255|oclc=86117725}}

==External links==
* [http://skepdic.com/confirmbias.html Skeptic's Dictionary: confirmation bias]
* [http://www.devpsy.org/teaching/method/confirmation_bias.html Teaching about confirmation bias]
* [http://www.cxoadvisory.com/gurus/Fisher/article/ Confirmation Bias as applied to financial theory]
* [http://hosted.xamai.ca/confbias/ Confirmation bias learning object (mathematically oriented)]
* [http://faculty.babson.edu/krollag/org_site/soc_psych/lord_death_pen.html Brief summary of the Lord, Ross and Lepper (1979) assimilation bias paper]
* [http://www.talkorigins.org/origins/postmonth/feb02.html "Morton's demon"], Usenet post by Glenn Morton, February 2, 2002<!-- cited on page 12 of Mark Isaak (2007). ''The counter-creationism handbook''. University of California Press. isbn 9780520249264 -->
{{Biases}}

[[Category:Cognitive biases]]
[[Category:Critical thinking]]
[[Category:Logical fallacies]]

<!--
[[de:Bestätigungsfehler]]
[[es:Sesgo de confirmación]]
[[fr:Biais de confirmation d'hypothèse]]
[[ko:확증편향]]
[[is:Staðfestingartilhneiging]]
[[he:הטיית אישור]]
[[ja:確証バイアス]]
[[pl:Efekt potwierdzania]]
[[sv:Konfirmeringsbias]]
-->

{{enWP|Confirmation bias}}

Revision as of 19:40, 25 February 2010

Assessment | Biopsychology | Comparative | Cognitive | Developmental | Language | Individual differences | Personality | Philosophy | Social |
Methods | Statistics | Clinical | Educational | Industrial | Professional items | World psychology |

Cognitive Psychology: Attention · Decision making · Learning · Judgement · Memory · Motivation · Perception · Reasoning · Thinking  - Cognitive processes Cognition - Outline Index


Confirmation bias (or myside bias[1]) is a tendency for people to prefer information that confirms their preconceptions or hypotheses, independently of whether they are true.[2][3] People can reinforce their existing attitudes by selectively collecting new evidence, by interpreting evidence in a biased way or by selectively recalling information from memory.[4] Some psychologists use "confirmation bias" for any of these three cognitive biases, while others restrict the term to selective collection of evidence, using assimilation bias for biased interpretation.[5][2]

People tend to test hypotheses in a one-sided way, focusing on one possibility and neglecting alternatives.[4][6] This strategy is not necessarily a bias, but combined with other effects it can reinforce existing beliefs.[7][4] The biases appear in particular for issues that are emotionally significant (including some personal and political topics) and for established beliefs that shape the individual's expectations.[4][8][9] Biased search, interpretation and/or recall have been invoked to explain attitude polarization (when a disagreement becomes more extreme as the different parties are exposed to the same evidence), belief perseverance (when beliefs remain after the evidence for them is taken away)[10], the irrational primacy effect (a stronger weighting for data encountered early in an arbitrary series)[11] and illusory correlation (in which people falsely perceive an association between two events).[12]

Confirmation biases are effects in information processing, distinct from the behavioral confirmation effect (also called self-fulfilling prophecy), in which people's expectations influence their own behavior.[13] They can lead to disastrous decisions, especially in organizational, military and political contexts.[14][15] Confirmation biases contribute to overconfidence in personal beliefs.[9]

Types

Biased search for information

In studies of hypothesis-testing, people reject tests that are guaranteed to give a positive answer, in favor of more informative tests.[16][17] However, many experiments have found that people tend to test in a one-sided way, by searching for evidence consistent with their currently held hypothesis.[4][18][19] Rather than searching through all the relevant evidence, they frame questions in such a way that a "yes" answer supports their hypothesis and stop as soon as they find supporting information.[6] They look for the evidence that they would expect to see if their hypothesis was true, neglecting what would happen if it were false.[6] For example, someone who is trying to identify a number using yes/no questions and suspects that the number is 3 would ask a question such as, "Is it an odd number?" People prefer this sort of question even when a negative test (such as, "Is it an even number?") would yield exactly the same information.

This preference for positive tests is not itself a bias, since positive tests can be highly informative.[7] However, in conjunction with other effects, this strategy can confirm existing beliefs or assumptions, independently of whether they are true.[4]

In many real-world situations, evidence is complex and mixed. For example, many different ideas about someone's personality could be supported by looking at isolated things that he or she does.[19] Thus any search for evidence in favor of a hypothesis is likely to succeed.[4] One illustration of this is the way the phrasing of a question can significantly change the answer.[19] For example, people who are asked, "Are you happy with your social life?" report greater satisfaction than those asked, "Are you unhappy with your social life?"[20]

Even a small change in the wording of a question can affect how someone searches through the available information, and hence, the conclusion they come to. This was shown in an experiment in which subjects read about a child custody case.[21] Of the two parents, Parent A was moderately suitable to be the guardian on a number of dimensions, while Parent B had a mix of salient positive qualities (such as a close relationship with the child) and negative qualities (including a job that would take him or her away for long periods). When the subjects were asked, "Which parent should have custody of the child?" they looked for positive attributes and a majority chose Parent B. However, when the question was, "Which parent should be denied custody of the child?" they looked for negative attributes, and this time a majority answered Parent B, implying that Parent A should have custody.[21]

In a similar study, subjects had to rate another person on the introversion-extraversion personality dimension on the basis of an interview. They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the subjects chose questions that presumed introversion, such as, "What do you find unpleasant about noisy parties?" When the interviewee was described as extraverted, almost all the questions presumed extraversion, such as, "What would you do to liven up a dull party?" These loaded questions gave the interviewees little or no opportunity to falsify the hypothesis about them.[22] However, a later experiment gave the subjects less presumptive questions to choose from, such as, "Do you shy away from social interactions?"[23] Subjects preferred to ask the more informative questions, showing only a weak bias towards positive tests. This pattern, of a main preference for diagnostic tests and a weaker secondary preference for positive tests, has been replicated in other studies.[23]

One particularly complex rule-discovery task used a computer simulation of a dynamic system.[24] Objects on the computer screen moved according to specific laws, which the subjects had to find out. They could "fire" objects across the screen to test their hypotheses. Despite making many attempts, none of the subjects worked out the rules of the system. They typically sought to confirm rather than falsify their hypotheses, and were reluctant to consider alternatives. They tended to stick to hypotheses even after they had been falsified by the evidence. Some of the subjects were instructed in proper hypothesis-testing, but these instructions had almost no effect.[24]

Biased interpretation

Smart people believe weird things because they are skilled at defending beliefs they arrived at for non-smart reasons.
Michael Shermer, quoted in Thomas Kida's Don't Believe Everything You Think,[14] p. 157

Confirmation biases are not limited to the collection of evidence. Even if two individuals have the same information, the way they interpret it can be biased.

Charles Lord, Lee Ross, and Mark Lepper ran an experiment with subjects who felt strongly about capital punishment, with half in favor and half against.[11] Each of these subjects read descriptions of two studies; one supporting and one undermining the effectiveness of the death penalty. After reading a quick description of each study, the subjects were asked whether their opinions had changed. They then read a much more detailed account of the study's procedure and had to rate how well-conducted and convincing that research was.[11] In fact, the studies were fictional. Half the subjects were told that one kind of study supported the death penalty and the other undermined it, while for other subjects the conclusions were swapped.[11]

The subjects, whether proponents and opponents, reported shifting their attitudes slightly in the direction of the first study they read. Once they read the more detailed study, they almost all returned to their original belief regardless of the evidence provided, pointing to details that supported their viewpoint and disregarding anything contrary. Students described studies supporting their pre-existing view as superior to those that contradicted it, in a number of detailed and specific ways.[11][25] Writing about a study that seemed to undermine the deterrence effect, a proponent of the death penalty wrote, "The research didn't cover a long enough period of time," while an opponent's comment on the same study said that, "no strong evidence to contradict the researchers has been presented."[11] The results illustrated that people set higher standards of evidence for hypotheses that go against their current expectations. This effect, known as disconfirmation bias, has been supported by other experiments.[8]

Another study of biased interpretation took place during the 2004 US presidential election, and involved subjects who described themselves as having strong emotions about the candidates.[26] They were shown apparently contradictory pairs of statements, either from the Republican candidate George W. Bush, the Democratic candidate John Kerry or from a politically neutral public figure such as Tom Hanks. They were also given further statements that made the apparent contradiction seem reasonable. From these three pieces of information, they had to decide whether or not the target individual's statements were inconsistent. There were strong differences in these evaluations, with subjects much more likely to interpret their opposing candidate as contradictory.

In this experiment, the subjects made their judgements while in an Magnetic Resonance Imaging (MRI) scanner, allowing the researchers to monitor their brain activity.[26] As subjects evaluated contradictory statements by their favored candidate, centres of the brain involved in emotion were aroused. This did not happen with the other targets. The experimenters interpreted this as showing that the differences in evaluation of the statements were not due to passive reasoning errors, but an active strategy by the subjects to reduce the cognitive dissonance of being confronted by their favored candidate's irrational or hypocritical behavior.

Biased interpretation is not restricted to emotionally significant topics. In another experiment, subjects were told a story about a theft. They had to rate the evidential importance of statements arguing either for or against a particular character being responsible. When they hypothesized that character's guilt, they rated statements supporting that hypothesis as more important than conflicting statements.[27]

Biased memory

Even if someone has sought and interpreted evidence in a neutral manner, they may still remember it selectively to reinforce their expectations. This effect is called selective recall, confirmatory memory or access-biased memory.[28]

Existing psychological theories make conflicting predictions about selective recall. Schema theory predicts that information matching prior expectations will be more easily stored and recalled.[4] Some alternative approaches say that surprising information stands out more and so is more memorable.[4] Predictions from both these theories have been confirmed in different experimental contexts, with no theory winning outright.[29]

In one study, subjects read a description of a woman, including both introverted and extraverted behaviors.[30] Then they had to recall examples of her introversion and extraversion. One group were told this was to assess the woman for a job as a librarian, while a second group were told it was for a job in real estate sales. There was a significant difference between what these two groups recalled, with the "librarian" group recalling more examples of introversion and the "sales" groups recalling more extroverted behavior.[30] A selective memory effect has also been shown in several experiments that manipulate the desirability of personality types.[31][4] In one of these, a group of subjects were shown evidence that extraverted people are more successful than introverts. Another group were told the opposite. In a subsequent, apparently unrelated, study, they were asked to recall events from their lives in which they had been either introverted or extraverted. Each group of subjects provided more memories connecting themselves with the more desirable personality type, and recalled those memories more quickly.[32]

One study showed how selective memory can maintain belief in extrasensory perception (ESP).[33] Believers and disbelievers were each shown descriptions of ESP experiments. Half of each group were told that the experimental results supported the existence of ESP, while the others were told they did not. In a subsequent test, subjects recalled the material accurately, apart from believers who had read the non-supportive evidence. This group remembered significantly less information and some of them incorrectly remembered the results as supporting ESP.[33]

Related effects

Polarization of opinion

Main article: Attitude polarization

When people with strongly opposing views interpret new information in a biased way, their views can move even further apart. This is called attitude polarization.[34] One demonstration of this effect involved a series of colored balls being drawn from a "bingo basket". Subjects were told that the basket either contained 60% black and 40% red balls or 40% black and 60% red: their task was to decide which. When one of each color were drawn in succession, subjects usually became more confident in their hypotheses, even though those two observations give no evidence either way. This only happened when the subjects had to commit to their hypotheses, by stating them out loud after each draw.[35]

A less abstract study was Lord, Ross and Lepper's experiment in which subjects with strong opinions about the death penalty read about experimental evidence. Twenty-three percent of the subjects reported that their views had become more extreme, and this self-reported shift correlated strongly with their initial attitudes.[11] In several later experiments, subjects also reported their opinions becoming more extreme in response to ambiguous information. However, comparisons of their attitudes before and after the new evidence showed no significant change, suggesting that the self-reported changes might not be real.[36][8][34] Based on these experiments, Deanna Kuhn and Joseph Lao concluded that polarization is a real phenomenon but far from inevitable, only happening in a small minority of cases. They found that it was prompted not only by considering mixed evidence, but by merely thinking about the topic.[34]

Charles Tabor and Milton Lodge argued that the Lee, Ross and Lepper result had been hard to replicate because the arguments used in later experiments were too abstract or confusing to evoke an emotional response. Their study used the emotionally-charged topics of gun control and affirmative action.[8] They measured the attitudes of their subjects towards these issues before and after reading arguments on each side of the debate. Two groups of subjects showed attitude polarization; those with strong prior opinions and those who were politically knowledgeable. In part of this study, subjects chose which information sources to read, from a list prepared by the experimenters. For example they could read the National Rifle Association's and the Brady Anti-Handgun Coalition's arguments on gun control. Even when instructed to be even-handed, subjects were more likely to read arguments that supported their existing attitudes. This biased search for information correlated well with the polarization effect.[8]

Persistence of discredited beliefs

[B]eliefs can survive potent logical or empirical challenges. They can survive and even be bolstered by evidence that most uncommitted observers would agree logically demands some weakening of such beliefs. They can even survive the total destruction of their original evidential bases.
—Lee Ross and Craig Anderson (1982). p.149[10]

Confirmation biases can be used to explain why some beliefs remain when the initial evidence for them is removed.[10] This belief perseverance effect has been shown by a series of experiments using what is called the debriefing paradigm: subjects examine faked evidence for a hypothesis, their attitude change is measured, then they learn that the evidence was fictitious. Their attitudes are then measured once more to see if their belief returns to its previous level.[10]

A typical finding is that at least some of the initial belief remains even after a full debrief.[37] In one experiment, subjects had to distinguish between real and fake suicide notes. They were given feedback at random, some being told they had done well on this task and some being told they were bad at it. Even after being fully debriefed, subjects were still influenced by the feedback. They still thought they were better or worse than average at that kind of task, depending on what they had initially been told.[38]

In another study, subjects read job performance ratings of two firefighters, along with their responses to a risk aversion test.[10] These fictional data were arranged to show either a negative or positive association between risk-taking attitudes and job success.[39] Even if these case studies had been true, they would have been scientifically poor evidence. However, the subjects found them subjectively persuasive.[39] When the case studies were shown to be fictional, subjects' belief in a link diminished, but around half of the original effect remained.[10] The researchers conducted follow-up interviews to make sure the subjects had understood the debriefing and taken it seriously. Subjects seemed to trust the debriefing, but regarded the discredited information as irrelevant to their personal belief.[39]

Preference for early information

Many psychological experiments have found that information is weighted more strongly when it appears early in a series, even when the order is evidentially unimportant. For example, people form a more positive impression of someone described as, "intelligent, industrious, impulsive, critical, stubborn, envious," than when they are given the same words in reverse order.[40] This irrational primacy effect is independent of the primacy effect in memory in which the earlier items in a series leave a stronger memory trace.[40] Biased interpretation offers an explanation for this effect: seeing the initial evidence, people form a working hypothesis that affects how they interpret the rest of the information.[18]

One demonstration of irrational primacy involved colored chips supposedly drawn from two urns. Subjects were told the color distributions of the urns, and had to estimate the probability of a chip being drawn from one of them.[40] In fact, the colors appeared in a pre-arranged order. The first thirty draws favored one urn and the next thirty favored the other.[18] The series as a whole was neutral, so rationally, the two urns were equally likely. However, after sixty draws, subjects favored the urn suggested by the initial thirty.[40] Another experiment displayed a slide show of a single object, starting with just a blur and showing slightly better focus each time.[40] At each stage, subjects had to state their best guess of what the object was. Subjects whose early guesses were wrong persisted with those guesses, even when the pictures were so in focus that other people could clearly see what the objects were.[18]

Illusory association between events

Main article: Illusory correlation

Illusory correlation is the tendency to see non-existent correlations, in a set of data, that fit one's preconceptions.[41] This phenomenon was first demonstrated in a 1969 experiment involving the Rorschach inkblot test. The subjects in the experiment read a set of case studies, and reported that the homosexual men in the set were more likely to report seeing buttocks or anuses in the ambiguous figures. In fact the case studies were fictional and, in one version of the experiment, had been constructed so that the homosexual men were less likely to report such imagery.[41] Another study recorded the symptoms experienced by arthritic patients, along with weather conditions over a fifteen month period. Nearly all the patients reported that their pains were correlated with weather conditions, although the real correlation was zero.[42]

This effect is a kind of biased interpretation, in that objectively neutral or unfavorable evidence is interpreted to support existing beliefs. It is also related to biases in hypothesis-testing behavior.[12] In judging whether two events (such as illness and bad weather) are correlated, people rely heavily on the number of positive-positive cases (in this example, instances of both pain and bad weather). They pay relatively little attention to the other kinds of observation (of no pain and/or good weather).[43] This parallels the reliance on positive tests in hypothesis testing.[12] It may also reflect selective recall, in that people may have a sense that two events are correlated because it is easier to recall times when they happened together.[12]

Example
Days Rain No rain
Arthritis 14 6
No arthritis 7 2

In the above fictional example, there is actually a slightly negative correlation between rain and arthritis symptoms, considering all four cells of the table. However, people are likely to focus on the relatively large number of positive-positive cases in the top-left cell (days with both rain and arthritic symptoms), and think they see a positive association.[44]

History

File:Francis Bacon.jpg

Francis Bacon wrote that biased assessment of evidence drove "all superstitions, whether in astrology, dreams, omens, divine judgments or the like."[45]

Informal observation

Prior to the psychological research on confirmation bias, the phenomenon had been observed anecdotally by writers including Thucydides (c. 460 BC – c. 395 BC), Francis Bacon (1561-1626)[46] and Leo Tolstoy (1828-1910).

Thucydides, in the History of the Peloponnesian War wrote,

...it is a habit of mankind (...) to use sovereign reason to thrust aside what they do not fancy.[47]

Bacon, in the Novum Organum wrote,

The human understanding when it has once adopted an opinion (...) draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects or despises, or else by some distinction sets aside or rejects[.][45]

Wason's research on hypothesis-testing

The first paper to use the term "confirmation bias" was Peter Wason's (1960) rule-discovery experiment.[4] He challenged subjects to identify a rule applying to triples of numbers, starting from the information that (2,4,6) fits the rule. Subjects could generate their own triples and the experimenter told them whether or not each triple conformed to the rule.[48]

While the actual rule was simply "any ascending sequence", the subjects had a great deal of difficulty in arriving at it, often announcing rules that were far more specific, such as "the middle number is the average of the first and last".[48] The subjects seemed to test only positive examples—triples that obeyed their hypothesised rule. For example, if they thought the rule was, "Each number is two greater than its predecessor," they would offer a triple that fit this rule, such as (11,13,15) rather than a triple that violates it, such as (11,12,19).

The normative theory (of how people ought to test hypotheses) used by Wason was falsificationism, according to which a scientific test of a theory is a serious attempt to falsify it. Wason interpreted his results as showing a preference for confirmation over falsification, hence the term "confirmation bias".[4] He also used confirmation bias to explain the results of his selection task experiment.[49] In this task, subjects are given partial information about a set of objects, and have to specify what further information they would need to tell whether or not a conditional rule ("If A, then B") applies. It has been found repeatedly that people perform badly on various forms of this test, in most cases ignoring information that could potentially refute the rule.[15][50]

Klayman and Ha's critique

A 1987 paper by Klayman and Ha showed that the Wason experiments had demonstrated a positive test strategy rather than a true confirmation bias.[4] A positive test strategy is an example of a heuristic: a reasoning short-cut that is imperfect but easy to compute. Klayman and Ha used Bayesian probability and information theory as their normative standard of hypothesis-testing, rather than the falsificationism used by Wason. According to these ideas, scientific tests of a hypothesis aim to maximise the expected information content. This in turn depends on the initial probabilities of the hypotheses, so a positive test can either be highly informative or uninformative, depending on the likelihood of the different possible outcomes. Klayman and Ha argued that in most real situations, targets are specific and have a small initial probability. In this case, positive tests are usually usually more informative than negative tests.[7] However, in Wason's rule discovery task the target rule was very broad, so positive tests are unlikely to yield informative answers. This interpretation was supported by a similar experiment that used the labels "DAX" and "MED" in place of "fits the rule" and "doesn't fit the rule". Subjects in this version of the experiment were much more successful at finding the correct rule.[51][2]

File:Klayman Ha1.svg

If the true rule (T) encompasses the current hypothesis (H), then positive tests (examining an H to see if it is T) will not show that the hypothesis is false.

File:Klayman Ha2.svg

If the true rule (T) overlaps the current hypothesis (H), then either a negative test or a positive test can potentially falsify H.

File:Klayman ha3 annotations.svg

When the working hypothesis (H) includes the true rule (T) then positive tests are the only way to falsify H.

In light of this and other critiques, the focus of research moved away from confirmation versus falsification to examine whether people test hypotheses in an informative way, or an uninformative but positive way. The search for "true" confirmation bias led psychologists to look at a wider range of effects in how people process information.[4]

Explanations

Confirmation biases are generally explained in terms of motivation and/or cognitive (information processing) errors. Ziva Kunda argues that these two effects work together, with motivation creating the bias, but cognitive factors determining the size of the effect.[18]

Motivational explanations involve an effect of desire on belief, sometimes called wishful thinking.[18] It is known that people prefer pleasant thoughts over unpleasant ones in a number of ways: this is called the Pollyanna principle.[52] Applied to arguments or sources of evidence, this could explain why desired conclusions are more likely to be believed true.[18] According to experiments that manipulate the desirability of the conclusion, people apply a high evidential standard ("Must I believe this?") to unpalatable ideas and a low standard ("Can I believe this?") to preferred ideas.[53][54] Although consistency is a desirable feature of attitudes, an excessive drive for consistency is another potential source of bias because it may prevent people from neutrally evaluating new, surprising information.[18]

Trope and Liberman use cost-benefit analysis to explain the motivational effect. Their theory assumes that people unconsciously weigh the costs of different kinds of error. For instance, someone who underestimates a friend's honesty might treat them suspiciously and so undermine the friendship. Overestimating the friend's honesty may also be costly, but less so. In this case, it would be rational to seek, evaluate or remember evidence of their honesty in a biased way.[55]

The information-processing explanations are based on limitations in people's ability to handle complex tasks, and the heuristics (information-processing shortcuts) that they use. For example, judgments of the reliability of evidence may be based on the availability heuristic (how readily a particular idea comes to mind). Another possibility is that people can only focus on one thought at a time, so find it difficult to test altenative hypotheses in parallel.[18] Another heuristic is the positive test strategy identified by Klayman and Ha, according to which people test a hypothesis by examining cases when they expect a property or event to occur.[7] By using this heuristic, people avoid the difficult or impossible task of evaluating the informativeness of each possible question. However, the strategy is not universally reliable, so people can overlooking challenges to their existing beliefs.

Consequences

In physical and mental health

Raymond Nickerson blames confirmation bias for the ineffective medical procedures that were continued for centuries before the arrival of scientific medicine.[18] Medical authorities focused on positive instances (treatments followed by recovery) rather than looking for alternative explanations, such as that the disease had run its natural course. According to Ben Goldacre, biased assimilation is a factor in the modern appeal of alternative medicine, whose proponents are swayed by positive anecdotal evidence but treat scientific evidence hyper-critically.[56]

Aaron T. Beck describes the role of this type of bias in depressive patients.[57] He argues that depressive patients maintain their depressive state because they fail to recognize information that might make them happier, and only focus on evidence showing that their lives are unfulfilling. According to Beck, an important step in the cognitive treatment of these individuals is to overcome this bias, and to search and recognize information about their lives more impartially. Jonathan Baron points out that some forms of psychopathology, particularly delusion, are defined by irrational maintenance of a belief.[46]

In politics and law

File:Witness impeachment.jpg

Mock trials allow researchers to examine confirmation biases in a realistic setting

Nickerson also argues that reasoning in judicial and political contexts is sometimes subconsciously biased, favoring conclusions that judges, juries or governments have already committed to.[18] Since the evidence in a jury trial can be complex, and jurors often form a decision about the outcome early on, it is reasonable to expect an attitude polarization effect. This prediction (that jurors will become more extreme in their views as they see more evidence) has been borne out in experiments with mock trials.[58][59]

Confirmation bias can be a factor in creating or extending conflicts, from emotionally-charged debates to wars, because each side may interpret the evidence to suggest that they are in a stronger position and will win.[46] On the other hand, confirmation bias can make people ignore or misinterpret the signs of an imminent conflict or other undesirable situation. For example, psychologists Stuart Sutherland and Thomas Kida have each argued that US Admiral Husband E. Kimmel's confirmation bias played a role in the success of the Japanese attack on Pearl Harbor.[14][15]

A two-decade study of political pundits by Philip E. Tetlock found they performed worse than chance when asked to make multiple-choice predictions. Tetlock divided the experts into "foxes" who maintained multiple hypotheses, and "hedgehogs" who were more dogmatic. He blamed the failure of the hedgehogs on confirmation bias; specifically, their inability to make use of new information that contradicted their existing theories.[60]

In the paranormal

One factor in the appeal of "readings" by psychics is that listeners apply a confirmation bias in fitting the psychic's statements to their own lives.[61] The technique of cold reading (giving a subjectively impressive reading without any prior information about the target) can be enhanced by making ambiguous statements and by "shotgunning" lots of statements so that the target has more opportunities to find a match.[61] Investigator James Randi compared the transcript of a reading to the client's report of what the psychic had said, and found that the client showed a strong selective memory for the "hits".[62]

Nickerson gives numerological pyramidology (the practice of finding meaning in the proportions of the Egyptian pyramids) as "a striking illustration" of confirmation bias in the real world.[18] There are many different length measurements that can be made of, for example, the Great Pyramid of Giza and many ways to combine or manipulate them. Hence it is almost inevitable that people who look at these numbers selectively will find superficially impressive correspondences, for example with the dimensions of the Earth.[18]

In scientific procedure

A distinguishing feature of scientific thinking is the search for falsifying as well as confirming evidence.[18] However, many times in the history of science, scientists have resisted new discoveries by selectively interpreting or ignoring unfavorable data.[18] In the context of scientific research, confirmation biases can lead to theories or research programmes persevering in the face of inadequate or even contradictory evidence,[15][63] with parapsychology being particularly affected.[64] An experimenter's confirmation bias can potentially affect which data are reported. Data that conflict with the experimenter's expectations may be more likely to be discarded as the product of assumed experimental error (the so-called filedrawer effect). Although this tendency exists, scientific training teaches some ways in which to avoid bias.[65] Experimental designs involving randomization and double blind trials, along with the social process of peer review, mitigate the effect of individual scientists' bias.[65][66]

See also

Template:Portalbox


Notes

  1. David Perkins, a geneticist, coined the term myside bias referring to a preference for "my" side of the issue under consideration. Baron 2000, p. 195
  2. 2.0 2.1 2.2 Lewicka, Maria (1998). "Confirmation Bias: Cognitive Error or Adaptive Strategy of Action Control?" Personal control in action: cognitive and motivational mechanisms, 233–255, Springer.
  3. Bensley, D. Alan (1998). Critical thinking in psychology: a unified skills approach, Brooks/Cole.
  4. 4.00 4.01 4.02 4.03 4.04 4.05 4.06 4.07 4.08 4.09 4.10 4.11 4.12 4.13 Oswald & Grosjean 2004, pp. 79–96
  5. Risen, Jane; Thomas Gilovich (2007). "Informal Logical Fallacies" Robert J. Sternberg, Henry L. Roediger III, Diane F. Halpern Critical Thinking in Psychology, 110–130, Cambridge University Press.
  6. 6.0 6.1 6.2 Baron 2000, pp. 162–164
  7. 7.0 7.1 7.2 7.3 Klayman, Joshua, Young-Won Ha (1987). Confirmation, Disconfirmation and Information in Hypothesis Testing. Psychological Review 94 (2): 211–228.
  8. 8.0 8.1 8.2 8.3 8.4 Taber, Charles S., Milton Lodge (July 2006). Motivated Skepticism in the Evaluation of Political Beliefs. American Journal of Political Science 50 (3): 755–769.
  9. 9.0 9.1 Baron 2000, p. 191
  10. 10.0 10.1 10.2 10.3 10.4 10.5 Ross, Lee; Craig A. Anderson (1982). "Shortcomings in the attribution process: On the origins and maintenance of erroneous social assessments" Daniel Kahneman, Paul Slovic, Amos Tversky Judgment under uncertainty: Heuristics and biases, 129–152, Cambridge University Press.
  11. 11.0 11.1 11.2 11.3 11.4 11.5 11.6 Lord, Charles G., Lee Ross, Mark R. Lepper (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology 37 (11): 2098–2109.
  12. 12.0 12.1 12.2 12.3 Kunda 1999, pp. 127–130
  13. Darley, John M.; Paget H. Gross (2000). "A Hypothesis-Confirming Bias in Labelling Effects" Charles Stangor Stereotypes and prejudice: essential readings, Psychology Press.
  14. 14.0 14.1 14.2 Kida, Thomas (2006). Don't Believe Everything You Think: The 6 Basic Mistakes We Make in Thinking, 155–165, Prometheus Books.
  15. 15.0 15.1 15.2 15.3 Sutherland, Stuart (2007). Irrationality, 2nd, 95–103, London: Pinter and Martin.
  16. Devine, Patricia G., Edward R. Hirt, Elizabeth M. Gehrke (1990). Diagnostic and confirmation strategies in trait hypothesis testing. Journal of Personality and Social Psychology 58 (6): 952–963.
  17. Trope, Yaacov, Miriam Bassok (1982). Confirmatory and diagnosing strategies in social information gathering. Journal of Personality and Social Psychology 43 (1): 22–34.
  18. 18.00 18.01 18.02 18.03 18.04 18.05 18.06 18.07 18.08 18.09 18.10 18.11 18.12 18.13 18.14 Nickerson, Raymond S. (1998). Confirmation Bias; A Ubiquitous Phenomenon in Many Guises. Review of General Psychology 2 (2): 175–220.
  19. 19.0 19.1 19.2 Kunda 1999, pp. 112–115
  20. Kunda, Ziva, G. T. Fong, R. Sanitioso, E. Reber (1993). Directional questions direct self-conceptions. Journal of Experimental Social Psychology 29: 62–63. via Fine 2006, pp. 63–65
  21. Shafir, E. (1993). Choosing versus rejecting: why some options are both better and worse than others. Memory and Cognition 21 (4): 546–556. via Fine 2006, pp. 63–65
  22. Snyder, Mark, William B. Swann, Jr. (1978). Hypothesis-Testing Processes in Social Interaction. Journal of Personality and Social Psychology 36 (11): 1202–1212. via Poletiek, Fenna (2001). Hypothesis-testing behaviour, Hove, UK: Psychology Press.
  23. Kunda 1999, pp. 117–118
  24. Mynatt, Clifford R., Michael E. Doherty, Ryan D. Tweney (1978). Consequences of confirmation and disconfirmation in a simulated research environment. Quarterly Journal of Experimental Psychology 30 (3): 395–406.
  25. Vyse 1997, p. 122
  26. Westen, Drew, Pavel S. Blagov, Keith Harenski, Clint Kilts, Stephan Hamann (2006). Neural Bases of Motivated Reasoning: An fMRI Study of Emotional Constraints on Partisan Political Judgment in the 2004 U.S. Presidential Election. Journal of Cognitive Neuroscience 18 (11): 1947–1958.
  27. Gadenne, V., M. Oswald (1986). Entstehung und Veränderung von Bestätigungstendenzen beim Testen von Hypothesen [Formation and alteration of confirmatory tendencies during the testing of hypotheses]. Zeitschrift für experimentelle und angewandte Psychologie 33: 360–374. via Oswald & Grosjean 2004, p. 89
  28. Hastie, Reid; Bernadette Park (2005). "The Relationship Between Memory and Judgment Depends on Whether the Judgment Task is Memory-Based or On-Line" David L. Hamilton Social cognition: key readings, New York: Psychology Press.
  29. Stangor, Charles, David McMillan (1992). Memory for expectancy-congruent and expectancy-incongruent information: A review of the social and social developmental literatures. Psychological Bulletin 111 (1): 42–61.
  30. Snyder, M., N. Cantor (1979). Testing hypotheses about other people: the use of historical knowledge. Journal of Experimental Social Psychology 15: 330–342. via Goldacre, Ben (2008). Bad Science, London: Fourth Estate.
  31. Kunda 1999, pp. 225–232
  32. Sanitioso, Rasyid, Ziva Kunda, G. T. Fong (1990). Motivated recruitment of autobiographical memories. Journal of Personality and Social Psychology 59 (2): 229–241.
  33. Russell, Dan, Warren H. Jones (1980). When superstition fails: Reactions to disconfirmation of paranormal beliefs. Personality and Social Psychology Bulletin 6 (1): 83–88. via Vyse 1997, p. 121
  34. Kuhn, Deanna, Joseph Lao (March 1996). Effects of Evidence on Attitudes: Is Polarization the Norm? Psychological Science 7 (2): 115–120.
  35. Baron 2000, p. 201
  36. Miller, A. G., J. W. McHoskey, C. M. Bane, T. G. Dowd (1993). The attitude polarization phenomenon: Role of response measure, attitude extremity, and behavioral consequences of reported attitude change. Journal of Personality and Social Psychology 64: 561–574.
  37. Kunda 1999, p. 99
  38. Ross, Lee, Mark R. Lepper, Michael Hubbard (1975). Perseverance in self-perception and social perception: Biased attributional processes in the debriefing paradigm. Journal of Personality and Social Psychology 32 (5): 880–892. via Kunda 1999, p. 99
  39. Anderson, Craig A., Mark R. Lepper, Lee Ross (1980). Perseverance of Social Theories: The Role of Explanation in the Persistence of Discredited Information. Journal of Personality and Social Psychology 39 (6): 1037–1049.
  40. Baron 2000, pp. 197–200
  41. Fine 2006, pp. 66–70
  42. Redelmeier, D. A., Amos Tversky (1996). On the belief that arthritis pain is related to the weather. Proceedings of the National Academy of Sciences 93: 2895–2896. via Kunda 1999, p. 127
  43. Plous, Scott (1993). The Psychology of Judgment and Decision Making, 162–164, McGraw-Hill.
  44. Adapted from Oswald & Grosjean 2004, p. 103
  45. Bacon, Francis (1620). Novum Organum. Reprinted in (1939) E. A. Burtt The English philosophers from Bacon to Mill, New York: Random House. via Nickerson, Raymond S. (1998). Confirmation Bias: A Ubiquitous Phenomenon in Many Guises. Review of General Psychology 2 (2): 175–220.
  46. Baron 2000, pp. 195–196
  47. Thucydides; Richard Crawley (trans.). The History of the Peloponnesian War. http://classics.mit.edu/Thucydides/pelopwar.mb.txt
  48. Wason, Peter C. (1960). On the failure to eliminate hypotheses in a conceptual task. Quarterly Journal of Experimental Psychology 12: 129–140.
  49. Wason, Peter C. (1968). Reasoning about a rule. Quarterly Journal of Experimental Psychology 20: 273–281.
  50. Barkow, Jerome H.; Leda Cosmides, John Tooby (1995). The adapted mind: evolutionary psychology and the generation of culture, 181–184, Oxford University Press US.
  51. Tweney, Ryan D., Michael E. Doherty, Winifred J. Worner, Daniel B. Pliske, Clifford R. Mynatt, Kimberly A. Gross, Daniel L. Arkkelin (1980). Strategies of rule discovery in an inference task. The Quarterly Journal of Experimental Psychology 32 (1): 109–123. (Experiment IV)
  52. Matlin, Margaret W. (2004). "Pollyanna Principle" Rüdiger F. Pohl Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory, 255–272, Hove: Psychology Press.
  53. Dawson, Erica, Thomas Gilovich, Dennis T. Regan (October 2002). Motivated Reasoning and Performance on the Wason Selection Task. Personality and Social Psychology Bulletin 28 (10): 1379–1387.
  54. Ditto, Peter H., David F. Lopez (1992). Motivated skepticism: use of differential decision criteria for preferred and nonpreferred conclusions. Journal of Personality and Social Psychology 63 (4): 568–584.
  55. Trope, Y.; A. Liberman (1996). "Social hypothesis testing: cognitive and motivational mechanisms" E. Tory Higgins, Arie W. Kruglanski Social Psychology: Handbook of basic principles, New York: Guilford Press. via Oswald & Grosjean 2004, pp. 91–93
  56. Goldacre, Ben (2008). Bad Science, London: Fourth Estate.
  57. Beck, Aaron T. (1976). Cognitive therapy and the emotional disorders, New York: International Universities Press.
  58. Myers, D. G., H. Lamm (1976). The group polarization phenomenon. Psychological Bulletin 83: 602–627. via Nickerson, Raymond S. (1998). Confirmation Bias: A Ubiquitous Phenomenon in Many Guises. Review of General Psychology 2 (2): 193–194.
  59. Halpern, Diane F. (1987). Critical thinking across the curriculum: a brief edition of thought and knowledge, Lawrence Erlbaum Associates.
  60. Tetlock, Philip E. (2005). Expert Political Judgment: How Good Is It? How Can We Know?, 125–128, Princeton, N.J.: Princeton University Press.
  61. Smith, Jonathan C. (2009). Pseudoscience and Extraordinary Claims of the Paranormal: A Critical Thinker's Toolkit, 149–151, John Wiley and Sons.
  62. Randi, James (1991). James Randi: psychic investigator, 58–62, Boxtree.
  63. Proctor, Robert W.; E. John Capaldi (2006). Why science matters: understanding the methods of psychological research, Wiley-Blackwell.
  64. Sternberg, Robert J. (2007). "Critical Thinking in Psychology: It really is critical" Robert J. Sternberg, Henry L. Roediger, Diane F. Halpern Critical Thinking in Psychology, Cambridge University Press. "Some of the worst examples of confirmation bias are in research on parapsychology (...) Arguably, there is a whole field here with no powerful confirming data at all. But people want to believe, and so they find ways to believe."
  65. Shadish, William R. (2007). "Critical Thinking in Quasi-Experimentation" Robert J. Sternberg, Henry L. Roediger III, Diane F. Halpern Critical Thinking in Psychology, Cambridge University Press.
  66. Shermer, Michael (July 2006). The Political Brain. Scientific American.

References

  • Baron, Jonathan (2000). Thinking and deciding, 3rd, New York: Cambridge University Press.
  • Fine, Cordelia (2006). A Mind of its Own: how your brain distorts and deceives, Cambridge, UK: Icon books.
  • Kunda, Ziva (1999). Social Cognition: Making Sense of People, MIT Press.
  • Oswald, Margit E.; Stefan Grosjean (2004). "Confirmation Bias" Rüdiger F. Pohl Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory, Hove, UK: Psychology Press.
  • Vyse, Stuart A. (1997). Believing in magic: The psychology of superstition, New York: Oxford University Press.

Further reading

  • Bell, Robert (1992). Impure Science: fraud, compromise, and political influence in scientific research, New York: John Wiley & Sons.
  • Westen, Drew (2007). The political brain: the role of emotion in deciding the fate of the nation, PublicAffairs.

This page uses Creative Commons Licensed content from Wikipedia.