Confirmation bias


Confirmation bias (or myside bias[1]) is a tendency for people to prefer information that confirms their preconceptions or hypotheses, independently of whether they are true.[2][3] People can reinforce their existing attitudes by selectively collecting new evidence, by interpreting evidence in a biased way or by selectively recalling information from memory.[4] Some psychologists use "confirmation bias" for any of these three cognitive biases, while others restrict the term to selective collection of evidence, using assimilation bias for biased interpretation.[5][2]

People tend to test hypotheses in a one-sided way, focusing on one possibility and neglecting alternatives.[4][6] This strategy is not necessarily a bias, but combined with other effects it can reinforce existing beliefs.[7][4] The biases appear in particular for issues that are emotionally significant (including some personal and political topics) and for established beliefs that shape the individual's expectations.[4][8][9] Biased search, interpretation and/or recall have been invoked to explain attitude polarization (when a disagreement becomes more extreme as the different parties are exposed to the same evidence), belief perseverance (when beliefs remain after the evidence for them is taken away),[10] the irrational primacy effect (a stronger weighting for data encountered early in an arbitrary series)[11] and illusory correlation (in which people falsely perceive an association between two events).[12]

Confirmation biases are effects in information processing, distinct from the behavioral confirmation effect (also called self-fulfilling prophecy), in which people's expectations influence their own behavior.[13] They can lead to disastrous decisions, especially in organizational, military and political contexts.[14][15] Confirmation biases contribute to overconfidence in personal beliefs.[9]

Types

Biased search for information

In studies of hypothesis-testing, people reject tests that are guaranteed to give a positive answer, in favor of more informative tests.[16][17] However, many experiments have found that people tend to test in a one-sided way, by searching for evidence consistent with their currently held hypothesis.[4][18][19] Rather than searching through all the relevant evidence, they frame questions in such a way that a "yes" answer supports their hypothesis and stop as soon as they find supporting information.[6] They look for the evidence that they would expect to see if their hypothesis were true, neglecting what would happen if it were false.[6] For example, someone who is trying to identify a number using yes/no questions and suspects that the number is 3 would ask a question such as, "Is it an odd number?" People prefer this sort of question even when a negative test (such as, "Is it an even number?") would yield exactly the same information.
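
As a minimal sketch of why the two questions are equally informative (the 1-to-10 candidate range is an assumption for illustration, not from the original studies):

```python
# Hypothetical setup: the secret number is known to be between 1 and 10,
# and the guesser suspects it is 3.
candidates = set(range(1, 11))

# If the secret number really is 3, a truthful answer to either question
# eliminates exactly the same possibilities:
after_positive_test = {n for n in candidates if n % 2 == 1}  # "Is it odd?"  -> "yes"
after_negative_test = {n for n in candidates if n % 2 != 0}  # "Is it even?" -> "no"

print(after_positive_test == after_negative_test)  # True: both leave {1, 3, 5, 7, 9}
```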

This preference for positive tests is not itself a bias, since positive tests can be highly informative.[7] However, in conjunction with other effects, this strategy can confirm existing beliefs or assumptions, independently of whether they are true.[4]

In many real-world situations, evidence is complex and mixed. For example, many different ideas about someone's personality could be supported by looking at isolated things that he or she does.[19] Thus any search for evidence in favor of a hypothesis is likely to succeed.[4] One illustration of this is the way the phrasing of a question can significantly change the answer.[19] For example, people who are asked, "Are you happy with your social life?" report greater satisfaction than those asked, "Are you unhappy with your social life?"[20]

Even a small change in the wording of a question can affect how someone searches through the available information, and hence, the conclusion they come to. This was shown in an experiment in which subjects read about a child custody case.[21] Of the two parents, Parent A was moderately suitable to be the guardian on a number of dimensions, while Parent B had a mix of salient positive qualities (such as a close relationship with the child) and negative qualities (including a job that would take him or her away for long periods). When the subjects were asked, "Which parent should have custody of the child?" they looked for positive attributes and a majority chose Parent B. However, when the question was, "Which parent should be denied custody of the child?" they looked for negative attributes, and this time a majority answered Parent B, implying that Parent A should have custody.[21]

In a similar study, subjects had to rate another person on the introversion-extraversion personality dimension on the basis of an interview. They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the subjects chose questions that presumed introversion, such as, "What do you find unpleasant about noisy parties?" When the interviewee was described as extraverted, almost all the questions presumed extraversion, such as, "What would you do to liven up a dull party?" These loaded questions gave the interviewees little or no opportunity to falsify the hypothesis about them.[22] However, a later experiment gave the subjects less presumptive questions to choose from, such as, "Do you shy away from social interactions?"[23] Subjects preferred to ask the more informative questions, showing only a weak bias towards positive tests. This pattern, of a main preference for diagnostic tests and a weaker secondary preference for positive tests, has been replicated in other studies.[23]

One particularly complex rule-discovery task used a computer simulation of a dynamic system.[24] Objects on the computer screen moved according to specific laws, which the subjects had to find out. They could "fire" objects across the screen to test their hypotheses. Despite making many attempts, none of the subjects worked out the rules of the system. They typically sought to confirm rather than falsify their hypotheses, and were reluctant to consider alternatives. They tended to stick to hypotheses even after they had been falsified by the evidence. Some of the subjects were instructed in proper hypothesis-testing, but these instructions had almost no effect.[24]

Biased interpretation

Smart people believe weird things because they are skilled at defending beliefs they arrived at for non-smart reasons.
—Michael Shermer, quoted in Thomas Kida's Don't Believe Everything You Think,[14] p. 157

Confirmation biases are not limited to the collection of evidence. Even if two individuals have the same information, the way they interpret it can be biased.

Charles Lord, Lee Ross, and Mark Lepper ran an experiment with subjects who felt strongly about capital punishment, with half in favor and half against.[11] Each of these subjects read descriptions of two studies: one supporting and one undermining the effectiveness of the death penalty. After reading a brief description of each study, the subjects were asked whether their opinions had changed. They then read a much more detailed account of each study's procedure and had to rate how well-conducted and convincing that research was.[11] In fact, the studies were fictional. Half the subjects were told that one kind of study supported the death penalty and the other undermined it, while for the other subjects the conclusions were swapped.[11]

The subjects, whether proponents or opponents, reported shifting their attitudes slightly in the direction of the first study they read. Once they read the more detailed accounts, they almost all returned to their original belief regardless of the evidence provided, pointing to details that supported their viewpoint and disregarding anything contrary. Subjects described studies supporting their pre-existing view as superior to those that contradicted it, in detailed and specific ways.[11][25] Writing about a study that seemed to undermine the deterrence effect, a proponent of the death penalty wrote, "The research didn't cover a long enough period of time," while an opponent's comment on the same study said, "No strong evidence to contradict the researchers has been presented."[11] The results illustrated that people set higher standards of evidence for hypotheses that go against their current expectations. This effect, known as disconfirmation bias, has been supported by other experiments.[8]

Another study of biased interpretation took place during the 2004 US presidential election, and involved subjects who described themselves as having strong emotions about the candidates.[26] They were shown apparently contradictory pairs of statements, either from the Republican candidate George W. Bush, the Democratic candidate John Kerry, or from a politically neutral public figure such as Tom Hanks. They were also given further statements that made the apparent contradiction seem reasonable. From these three pieces of information, they had to decide whether or not the target individual's statements were inconsistent. There were strong differences in these evaluations, with subjects much more likely to interpret statements from the opposing candidate as contradictory.

In this experiment, the subjects made their judgments while in a magnetic resonance imaging (MRI) scanner, allowing the researchers to monitor their brain activity.[26] As subjects evaluated contradictory statements by their favored candidate, emotional centers of the brain were aroused. This did not happen with the other targets. The experimenters interpreted this as showing that the differences in evaluation of the statements were not due to passive reasoning errors, but to an active strategy by the subjects to reduce the cognitive dissonance of being confronted by their favored candidate's irrational or hypocritical behavior.

Biased interpretation is not restricted to emotionally significant topics. In another experiment, subjects were told a story about a theft. They had to rate the evidential importance of statements arguing either for or against a particular character being responsible. When they hypothesized that character's guilt, they rated statements supporting that hypothesis as more important than conflicting statements.[27]

Biased memory

Even if someone has sought and interpreted evidence in a neutral manner, they may still remember it selectively to reinforce their expectations. This effect is called selective recall, confirmatory memory or access-biased memory.[28]

Existing psychological theories make conflicting predictions about selective recall. Schema theory predicts that information matching prior expectations will be more easily stored and recalled.[4] Some alternative approaches say that surprising information stands out more and so is more memorable.[4] Predictions from both these theories have been confirmed in different experimental contexts, with no theory winning outright.[29]

In one study, subjects read a description of a woman, including both introverted and extraverted behaviors.[30] Then they had to recall examples of her introversion and extraversion. One group was told this was to assess the woman for a job as a librarian, while a second group was told it was for a job in real estate sales. There was a significant difference between what these two groups recalled, with the "librarian" group recalling more examples of introversion and the "sales" group recalling more extraverted behavior.[30] A selective memory effect has also been shown in several experiments that manipulate the desirability of personality types.[31][4] In one of these, a group of subjects was shown evidence that extraverted people are more successful than introverts. Another group was told the opposite. In a subsequent, apparently unrelated, study, they were asked to recall events from their lives in which they had been either introverted or extraverted. Each group of subjects provided more memories connecting themselves with the more desirable personality type, and recalled those memories more quickly.[32]

One study showed how selective memory can maintain belief in extrasensory perception (ESP).[33] Believers and disbelievers were each shown descriptions of ESP experiments. Half of each group were told that the experimental results supported the existence of ESP, while the others were told they did not. In a subsequent test, subjects recalled the material accurately, apart from believers who had read the non-supportive evidence. This group remembered significantly less information and some of them incorrectly remembered the results as supporting ESP.[33]

Related effects

Polarization of opinion

Main article: Attitude polarization

When people with strongly opposing views interpret new information in a biased way, their views can move even further apart. This is called attitude polarization.[34] One demonstration of this effect involved a series of colored balls being drawn from a "bingo basket". Subjects were told that the basket contained either 60% black and 40% red balls or 40% black and 60% red: their task was to decide which. When one ball of each color was drawn in succession, subjects usually became more confident in their hypotheses, even though those two observations give no evidence either way. This only happened when the subjects had to commit to their hypotheses by stating them out loud after each draw.[35]
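
A short calculation shows why the paired draws carry no evidence, assuming the 60/40 compositions stated above (a sketch, not the published analysis):

```python
# Likelihood of drawing one black ball then one red ball under each hypothesis:
p_draws_given_mostly_black = 0.6 * 0.4
p_draws_given_mostly_red = 0.4 * 0.6  # identical

# Bayes' rule with a 50/50 prior therefore leaves the belief unchanged:
prior = 0.5
posterior = (p_draws_given_mostly_black * prior) / (
    p_draws_given_mostly_black * prior + p_draws_given_mostly_red * (1 - prior))
print(posterior)  # 0.5: rationally, confidence should not increase
```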

A less abstract study was Lord, Ross and Lepper's experiment in which subjects with strong opinions about the death penalty read about experimental evidence. Twenty-three percent of the subjects reported that their views had become more extreme, and this self-reported shift correlated strongly with their initial attitudes.[11] In several later experiments, subjects also reported their opinions becoming more extreme in response to ambiguous information. However, comparisons of their attitudes before and after the new evidence showed no significant change, suggesting that the self-reported changes might not be real.[36][8][34] Based on these experiments, Deanna Kuhn and Joseph Lao concluded that polarization is a real phenomenon but far from inevitable, only happening in a small minority of cases. They found that it was prompted not only by considering mixed evidence, but by merely thinking about the topic.[34]

Charles Taber and Milton Lodge argued that the Lord, Ross and Lepper result had been hard to replicate because the arguments used in later experiments were too abstract or confusing to evoke an emotional response. Their study used the emotionally charged topics of gun control and affirmative action.[8] They measured the attitudes of their subjects towards these issues before and after reading arguments on each side of the debate. Two groups of subjects showed attitude polarization: those with strong prior opinions and those who were politically knowledgeable. In part of this study, subjects chose which information sources to read from a list prepared by the experimenters. For example, they could read the National Rifle Association's and the Brady Anti-Handgun Coalition's arguments on gun control. Even when instructed to be even-handed, subjects were more likely to read arguments that supported their existing attitudes. This biased search for information correlated well with the polarization effect.[8]

Persistence of discredited beliefs

[B]eliefs can survive potent logical or empirical challenges. They can survive and even be bolstered by evidence that most uncommitted observers would agree logically demands some weakening of such beliefs. They can even survive the total destruction of their original evidential bases.
—Lee Ross and Craig Anderson (1982), p. 149[10]

Confirmation biases can be used to explain why some beliefs remain when the initial evidence for them is removed.[10] This belief perseverance effect has been shown by a series of experiments using what is called the debriefing paradigm: subjects examine faked evidence for a hypothesis, their attitude change is measured, then they learn that the evidence was fictitious. Their attitudes are then measured once more to see if their belief returns to its previous level.[10]

A typical finding is that at least some of the initial belief remains even after a full debrief.[37] In one experiment, subjects had to distinguish between real and fake suicide notes. They were given feedback at random, some being told they had done well on this task and some being told they were bad at it. Even after being fully debriefed, subjects were still influenced by the feedback. They still thought they were better or worse than average at that kind of task, depending on what they had initially been told.[38]

In another study, subjects read job performance ratings of two firefighters, along with their responses to a risk aversion test.[10] These fictional data were arranged to show either a negative or positive association between risk-taking attitudes and job success.[39] Even if these case studies had been true, they would have been scientifically poor evidence. However, the subjects found them subjectively persuasive.[39] When the case studies were shown to be fictional, subjects' belief in a link diminished, but around half of the original effect remained.[10] The researchers conducted follow-up interviews to make sure the subjects had understood the debriefing and taken it seriously. Subjects seemed to trust the debriefing, but regarded the discredited information as irrelevant to their personal belief.[39]

Preference for early information

Many psychological experiments have found that information is weighted more strongly when it appears early in a series, even when the order is evidentially unimportant. For example, people form a more positive impression of someone described as, "intelligent, industrious, impulsive, critical, stubborn, envious," than when they are given the same words in reverse order.[40] This irrational primacy effect is independent of the primacy effect in memory in which the earlier items in a series leave a stronger memory trace.[40] Biased interpretation offers an explanation for this effect: seeing the initial evidence, people form a working hypothesis that affects how they interpret the rest of the information.[18]

One demonstration of irrational primacy involved colored chips supposedly drawn from two urns. Subjects were told the color distributions of the urns, and had to estimate the probability of a chip being drawn from one of them.[40] In fact, the colors appeared in a pre-arranged order. The first thirty draws favored one urn and the next thirty favored the other.[18] The series as a whole was neutral, so rationally, the two urns were equally likely. However, after sixty draws, subjects favored the urn suggested by the initial thirty.[40] Another experiment displayed a slide show of a single object, starting with just a blur and showing slightly better focus each time.[40] At each stage, subjects had to state their best guess of what the object was. Subjects whose early guesses were wrong persisted with those guesses, even when the pictures were so in focus that other people could clearly see what the objects were.[18]
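
The rational benchmark for the urn task can be sketched as follows (the exact 60/40 urn compositions are assumed here for illustration). Because the likelihoods simply multiply, the order of the draws is irrelevant, and a balanced series returns the posterior to 50/50:

```python
from math import prod  # Python 3.8+

def posterior_urn_a(draws, p_red_a=0.6, p_red_b=0.4, prior_a=0.5):
    """Posterior probability of urn A after a sequence of 'red'/'blue' draws."""
    like_a = prod(p_red_a if d == "red" else 1 - p_red_a for d in draws)
    like_b = prod(p_red_b if d == "red" else 1 - p_red_b for d in draws)
    return like_a * prior_a / (like_a * prior_a + like_b * (1 - prior_a))

# Thirty draws favoring urn A, then thirty favoring urn B:
draws = ["red"] * 30 + ["blue"] * 30
print(posterior_urn_a(draws))  # 0.5: the series as a whole is neutral
```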

Illusory association between events

Main article: Illusory correlation

Illusory correlation is the tendency to see non-existent correlations in a set of data that fit one's preconceptions.[41] This phenomenon was first demonstrated in a 1969 experiment involving the Rorschach inkblot test. The subjects in the experiment read a set of case studies, and reported that the homosexual men in the set were more likely to report seeing buttocks or anuses in the ambiguous figures. In fact the case studies were fictional and, in one version of the experiment, had been constructed so that the homosexual men were less likely to report such imagery.[41] Another study recorded the symptoms experienced by arthritic patients, along with weather conditions, over a fifteen-month period. Nearly all the patients reported that their pains were correlated with weather conditions, although the real correlation was zero.[42]

This effect is a kind of biased interpretation, in that objectively neutral or unfavorable evidence is interpreted to support existing beliefs. It is also related to biases in hypothesis-testing behavior.[12] In judging whether two events (such as illness and bad weather) are correlated, people rely heavily on the number of positive-positive cases (in this example, instances of both pain and bad weather). They pay relatively little attention to the other kinds of observation (of no pain and/or good weather).[43] This parallels the reliance on positive tests in hypothesis testing.[12] It may also reflect selective recall, in that people may have a sense that two events are correlated because it is easier to recall times when they happened together.[12]

Example (cell counts are days)
                Rain   No rain
Arthritis         14         6
No arthritis       7         2

In the above fictional example, there is actually a slightly negative correlation between rain and arthritis symptoms, considering all four cells of the table. However, people are likely to focus on the relatively large number of positive-positive cases in the top-left cell (days with both rain and arthritic symptoms), and think they see a positive association.[44]
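
The claim can be checked directly by computing the phi coefficient of the table above:

```python
from math import sqrt

# Cell counts from the example table (days in each category).
a, b = 14, 6  # arthritis:    rain, no rain
c, d = 7, 2   # no arthritis: rain, no rain

phi = (a * d - b * c) / sqrt((a + b) * (c + d) * (a + c) * (b + d))
print(round(phi, 3))  # -0.081: a slight negative association, not a positive one
```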

History


Francis Bacon wrote that biased assessment of evidence drove "all superstitions, whether in astrology, dreams, omens, divine judgments or the like."[45]

Informal observation

Prior to the psychological research on confirmation bias, the phenomenon had been observed anecdotally by writers including Thucydides (c. 460 BC – c. 395 BC), Francis Bacon (1561–1626)[46] and Leo Tolstoy (1828–1910).

Thucydides, in the History of the Peloponnesian War, wrote:

...it is a habit of mankind (...) to use sovereign reason to thrust aside what they do not fancy.[47]

Bacon, in the Novum Organum, wrote:

The human understanding when it has once adopted an opinion (...) draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects or despises, or else by some distinction sets aside or rejects[.][45]

Wason's research on hypothesis-testing

The term "confirmation bias" was first used in connection with Peter Wason's (1960) rule-discovery experiment.[4] He challenged subjects to identify a rule applying to triples of numbers, starting from the information that (2,4,6) fits the rule. Subjects could generate their own triples and the experimenter told them whether or not each triple conformed to the rule.[48]

While the actual rule was simply "any ascending sequence", the subjects had a great deal of difficulty in arriving at it, often announcing rules that were far more specific, such as "the middle number is the average of the first and last".[48] The subjects seemed to test only positive examples: triples that obeyed their hypothesized rule. For example, if they thought the rule was, "Each number is two greater than its predecessor," they would offer a triple that fit this rule, such as (11,13,15), rather than a triple that violated it, such as (11,12,19).
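
A brief simulation makes the trap concrete (a sketch; the specific triples are chosen for illustration). Because the subject's narrow hypothesis is a special case of the true rule, every positive test comes back "yes" and can never expose the error:

```python
def true_rule(t):   # Wason's actual rule: any ascending sequence
    return t[0] < t[1] < t[2]

def hypothesis(t):  # the subject's narrower guess: steps of exactly two
    return t[1] == t[0] + 2 and t[2] == t[1] + 2

# Every triple that fits the hypothesis also fits the true rule,
# so positive tests always appear to confirm the guess:
print(all(true_rule((n, n + 2, n + 4)) for n in range(1, 100)))  # True

# Only a negative test is diagnostic: (11, 12, 19) violates the hypothesis,
# yet the experimenter would still answer "yes", falsifying the guess.
print(hypothesis((11, 12, 19)), true_rule((11, 12, 19)))  # False True
```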

The normative theory (of how people ought to test hypotheses) used by Wason was falsificationism, according to which a scientific test of a theory is a serious attempt to falsify it. Wason interpreted his results as showing a preference for confirmation over falsification, hence the term "confirmation bias".[4] He also used confirmation bias to explain the results of his selection task experiment.[49] In this task, subjects are given partial information about a set of objects, and have to specify what further information they would need to tell whether or not a conditional rule ("If A, then B") applies. It has been found repeatedly that people perform badly on various forms of this test, in most cases ignoring information that could potentially refute the rule.[15][50]

Klayman and Ha's critique

A 1987 paper by Klayman and Ha argued that the Wason experiments had actually demonstrated a positive test strategy rather than a true confirmation bias.[4] A positive test strategy is an example of a heuristic: a reasoning short-cut that is imperfect but easy to compute. Klayman and Ha used Bayesian probability and information theory as their normative standard of hypothesis-testing, rather than the falsificationism used by Wason. According to these ideas, scientific tests of a hypothesis aim to maximize the expected information content. This in turn depends on the initial probabilities of the hypotheses, so a positive test can be either highly informative or uninformative, depending on the likelihood of the different possible outcomes. Klayman and Ha argued that in most real situations, targets are specific and have a small initial probability. In this case, positive tests are usually more informative than negative tests.[7] However, in Wason's rule-discovery task the target rule was very broad, so positive tests were unlikely to yield informative answers. This interpretation was supported by a similar experiment that used the labels "DAX" and "MED" in place of "fits the rule" and "doesn't fit the rule". Subjects in this version of the experiment were much more successful at finding the correct rule.[51][2]


If the true rule (T) encompasses the current hypothesis (H), then positive tests (examining an H to see if it is T) will not show that the hypothesis is false.


If the true rule (T) overlaps the current hypothesis (H), then either a negative test or a positive test can potentially falsify H.


When the working hypothesis (H) includes the true rule (T) then positive tests are the only way to falsify H.
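
The same point can be put numerically (a hedged sketch using made-up sets, not an example from the paper): a positive test is informative exactly when a case endorsed by the hypothesis has a reasonable chance of falling outside the true rule.

```python
universe = range(1, 1001)
H = {n for n in universe if n % 4 == 0}          # working hypothesis: multiples of 4

broad_T = {n for n in universe if n % 2 == 0}    # broad true rule: H lies inside T
narrow_T = {n for n in universe if n % 10 == 0}  # specific true rule: partial overlap

def falsification_rate(T):
    """Share of positive tests (members of H) that would refute H."""
    return len(H - T) / len(H)

print(falsification_rate(broad_T))   # 0.0: positive tests can never refute H
print(falsification_rate(narrow_T))  # 0.8: most positive tests are diagnostic
```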

In light of this and other critiques, the focus of research moved away from confirmation versus falsification to examine whether people test hypotheses in an informative way, or an uninformative but positive way. The search for "true" confirmation bias led psychologists to look at a wider range of effects in how people process information.[4]

Explanations

Confirmation biases are generally explained in terms of motivation and/or cognitive (information processing) errors. Ziva Kunda argues that these two effects work together, with motivation creating the bias, but cognitive factors determining the size of the effect.[18]

Motivational explanations involve an effect of desire on belief, sometimes called wishful thinking.[18] It is known that people prefer pleasant thoughts over unpleasant ones in a number of ways: this is called the Pollyanna principle.[52] Applied to arguments or sources of evidence, this could explain why desired conclusions are more likely to be believed true.[18] According to experiments that manipulate the desirability of the conclusion, people apply a high evidential standard ("Must I believe this?") to unpalatable ideas and a low standard ("Can I believe this?") to preferred ideas.[53][54] Although consistency is a desirable feature of attitudes, an excessive drive for consistency is another potential source of bias because it may prevent people from neutrally evaluating new, surprising information.[18]

Trope and Liberman use cost-benefit analysis to explain the motivational effect. Their theory assumes that people unconsciously weigh the costs of different kinds of error. For instance, someone who underestimates a friend's honesty might treat them suspiciously and so undermine the friendship. Overestimating the friend's honesty may also be costly, but less so. In this case, it would be rational to seek, evaluate or remember evidence of their honesty in a biased way.[55]
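
Read as a standard expected-cost calculation (an illustrative formalization, not a formula from Trope and Liberman's paper), the asymmetry sets a lopsided threshold for belief:

```latex
% p   : probability the friend is honest
% C_t : cost of trusting a dishonest friend
% C_d : cost of distrusting an honest friend (assumed much larger)
\[
\text{trust} \iff (1 - p)\,C_t \;<\; p\,C_d
       \iff p \;>\; \frac{C_t}{C_t + C_d}
\]
% When C_d is much larger than C_t the threshold is near zero, so even
% weak evidence of honesty "rationally" licenses the trusting reading.
```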

The information-processing explanations are based on limitations in people's ability to handle complex tasks, and on the heuristics (information-processing shortcuts) that they use. For example, judgments of the reliability of evidence may be based on the availability heuristic (how readily a particular idea comes to mind). Another possibility is that people can only focus on one thought at a time, and so find it difficult to test alternative hypotheses in parallel.[18] Another heuristic is the positive test strategy identified by Klayman and Ha, according to which people test a hypothesis by examining cases where they expect a property or event to occur.[7] By using this heuristic, people avoid the difficult or impossible task of evaluating the informativeness of each possible question. However, the strategy is not universally reliable, so people can overlook challenges to their existing beliefs.

Consequences

In physical and mental health

Raymond Nickerson blames confirmation bias for the ineffective medical procedures that were used for centuries before the arrival of scientific medicine.[18] Medical authorities focused on positive instances (treatments followed by recovery) rather than looking for alternative explanations, such as that the disease had run its natural course. According to Ben Goldacre, biased assimilation is a factor in the modern appeal of alternative medicine, whose proponents are swayed by positive anecdotal evidence but treat scientific evidence hyper-critically.[56]

Aaron T. Beck describes the role of this type of bias in depressive patients.[57] He argues that depressive patients maintain their depressive state because they fail to recognize information that might make them happier, and only focus on evidence showing that their lives are unfulfilling. According to Beck, an important step in the cognitive treatment of these individuals is to overcome this bias, and to search and recognize information about their lives more impartially. Jonathan Baron points out that some forms of psychopathology, particularly delusion, are defined by irrational maintenance of a belief.[46]

In politics and law


Mock trials allow researchers to examine confirmation biases in a realistic setting.

Nickerson also argues that reasoning in judicial and political contexts is sometimes subconsciously biased, favoring conclusions that judges, juries or governments have already committed to.[18] Since the evidence in a jury trial can be complex, and jurors often form a decision about the outcome early on, it is reasonable to expect an attitude polarization effect. This prediction (that jurors will become more extreme in their views as they see more evidence) has been borne out in experiments with mock trials.[58][59]

Confirmation bias can be a factor in creating or extending conflicts, from emotionally-charged debates to wars, because each side may interpret the evidence to suggest that they are in a stronger position and will win.[46] On the other hand, confirmation bias can make people ignore or misinterpret the signs of an imminent conflict or other undesirable situation. For example, psychologists Stuart Sutherland and Thomas Kida have each argued that US Admiral Husband E. Kimmel's confirmation bias played a role in the success of the Japanese attack on Pearl Harbor.[14][15]

A two-decade study of political pundits by Philip E. Tetlock found they performed worse than chance when asked to make multiple-choice predictions. Tetlock divided the experts into "foxes" who maintained multiple hypotheses, and "hedgehogs" who were more dogmatic. He blamed the failure of the hedgehogs on confirmation bias; specifically, their inability to make use of new information that contradicted their existing theories.[60]

In the paranormal

One factor in the appeal of "readings" by psychics is that listeners apply a confirmation bias in fitting the psychic's statements to their own lives.[61] The technique of cold reading (giving a subjectively impressive reading without any prior information about the target) can be enhanced by making ambiguous statements and by "shotgunning" lots of statements so that the target has more opportunities to find a match.[61] Investigator James Randi compared the transcript of a reading to the client's report of what the psychic had said, and found that the client showed a strong selective memory for the "hits".[62]

Nickerson gives numerological pyramidology (the practice of finding meaning in the proportions of the Egyptian pyramids) as "a striking illustration" of confirmation bias in the real world.[18] There are many different length measurements that can be made of, for example, the Great Pyramid of Giza and many ways to combine or manipulate them. Hence it is almost inevitable that people who look at these numbers selectively will find superficially impressive correspondences, for example with the dimensions of the Earth.[18]
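
A toy simulation illustrates the underlying multiple-comparisons trap (all numbers invented; no real pyramid data involved): given enough measurements and enough ways to combine them, chance alone produces seemingly meaningful matches.

```python
import itertools
import random

random.seed(0)
measurements = [random.uniform(1, 1000) for _ in range(40)]  # 40 arbitrary "lengths"
target = 3.14159                                             # a "meaningful" constant

# Count ordered ratios that land within 1% of the target:
hits = sum(
    1
    for a, b in itertools.permutations(measurements, 2)
    if abs(a / b - target) / target < 0.01
)
print(hits)  # a handful of spurious "matches" among 1,560 ratios, by chance alone
```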

In scientific procedure

A distinguishing feature of scientific thinking is the search for falsifying as well as confirming evidence.[63] However, many times in the history of science, scientists have resisted new discoveries by selectively interpreting or ignoring unfavorable data.[63] Research has repeatedly shown that the assessment of the quality of scientific studies is particularly vulnerable to confirmation bias: scientists rate studies whose findings agree with their prior beliefs more favorably than studies whose findings conflict with them.[64][65][66] However, provided the research question is relevant, the experimental design adequate, and the data clearly and comprehensively described, the results should matter to the scientific community and should not be viewed prejudicially, regardless of whether they conform to current theoretical predictions.[66]

Confirmation bias may thus be especially harmful to objective evaluations of nonconforming results, since biased individuals may regard opposing evidence as weak in principle and give little serious thought to revising their beliefs.[65] Scientific innovators often meet with resistance from the scientific community, and research presenting controversial results frequently receives harsh peer review.[67]

In the context of scientific research, confirmation biases can sustain theories or research programs in the face of inadequate or even contradictory evidence;[15][68] the field of parapsychology has been particularly affected.[69]

An experimenter's confirmation bias can potentially affect which data are reported. Data that conflict with the experimenter's expectations may be more readily discarded as unreliable, producing the so-called file drawer effect. To combat this tendency, scientific training teaches ways to prevent bias.[70] For example, experimental design of randomized controlled trials (coupled with their systematic review) aims to minimize sources of bias.[70][71] The social process of peer review is thought to mitigate the effect of individual scientists' biases,[72] even though the peer review process itself may be susceptible to such biases.[66][73]

Notes

  1. David Perkins, a professor at the Harvard Graduate School of Education, coined the term myside bias, referring to a preference for "my" side of the issue under consideration. Baron 2000, p. 195
  2. Lewicka, Maria (1998). "Confirmation Bias: Cognitive Error or Adaptive Strategy of Action Control?" Personal control in action: cognitive and motivational mechanisms, 233–255, Springer.
  3. Bensley, D. Alan (1998). Critical thinking in psychology: a unified skills approach, Brooks/Cole.
  4. Oswald & Grosjean 2004, pp. 79–96
  5. Risen, Jane; Thomas Gilovich (2007). "Informal Logical Fallacies" Robert J. Sternberg, Henry L. Roediger III, Diane F. Halpern Critical Thinking in Psychology, 110–130, Cambridge University Press.
  6. Baron 2000, pp. 162–164
  7. Klayman, Joshua, Young-Won Ha (1987). Confirmation, Disconfirmation and Information in Hypothesis Testing. Psychological Review 94 (2): 211–228.
  8. Taber, Charles S., Milton Lodge (July 2006). Motivated Skepticism in the Evaluation of Political Beliefs. American Journal of Political Science 50 (3): 755–769.
  9. Baron 2000, p. 191
  10. Ross, Lee; Craig A. Anderson (1982). "Shortcomings in the attribution process: On the origins and maintenance of erroneous social assessments" Daniel Kahneman, Paul Slovic, Amos Tversky Judgment under uncertainty: Heuristics and biases, 129–152, Cambridge University Press.
  11. Lord, Charles G., Lee Ross, Mark R. Lepper (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology 37 (11): 2098–2109.
  12. Kunda 1999, pp. 127–130
  13. Darley, John M.; Paget H. Gross (2000). "A Hypothesis-Confirming Bias in Labelling Effects" Charles Stangor Stereotypes and prejudice: essential readings, Psychology Press.
  14. Kida, Thomas (2006). Don't Believe Everything You Think: The 6 Basic Mistakes We Make in Thinking, 155–165, Prometheus Books.
  15. Sutherland, Stuart (2007). Irrationality, 2nd ed., 95–103, London: Pinter and Martin. ISBN 978-1-905177-07-3
  16. Devine, Patricia G., Edward R. Hirt, Elizabeth M. Gehrke (1990). Diagnostic and confirmation strategies in trait hypothesis testing. Journal of Personality and Social Psychology 58 (6): 952–963.
  17. Trope, Yaacov, Miriam Bassok (1982). Confirmatory and diagnosing strategies in social information gathering. Journal of Personality and Social Psychology 43 (1): 22–34.
  18. Nickerson, Raymond S. (1998). Confirmation Bias: A Ubiquitous Phenomenon in Many Guises. Review of General Psychology 2 (2): 175–220.
  19. Kunda 1999, pp. 112–115
  20. Kunda, Ziva, G. T. Fong, R. Sanitoso, E. Reber (1993). Directional questions direct self-conceptions. Journal of Experimental Social Psychology 29: 62–63. via Fine 2006, pp. 63–65
  21. Shafir, E. (1993). Choosing versus rejecting: why some options are both better and worse than others. Memory and Cognition 21 (4): 546–556. via Fine 2006, pp. 63–65
  22. Snyder, Mark, William B. Swann, Jr. (1978). Hypothesis-Testing Processes in Social Interaction. Journal of Personality and Social Psychology 36 (11): 1202–1212. via Poletiek, Fenna (2001). Hypothesis-testing behaviour, Hove, UK: Psychology Press.
  23. Kunda 1999, pp. 117–118
  24. Mynatt, Clifford R., Michael E. Doherty, Ryan D. Tweney (1978). Consequences of confirmation and disconfirmation in a simulated research environment. Quarterly Journal of Experimental Psychology 30 (3): 395–406.
  25. Vyse 1997, p. 122
  26. Westen, Drew, Pavel S. Blagov, Keith Harenski, Clint Kilts, Stephan Hamann (2006). Neural Bases of Motivated Reasoning: An fMRI Study of Emotional Constraints on Partisan Political Judgment in the 2004 U.S. Presidential Election. Journal of Cognitive Neuroscience 18 (11): 1947–1958.
  27. Gadenne, V., M. Oswald (1986). Entstehung und Veränderung von Bestätigungstendenzen beim Testen von Hypothesen [Formation and alteration of confirmatory tendencies during the testing of hypotheses]. Zeitschrift für experimentelle und angewandte Psychologie 33: 360–374. via Oswald & Grosjean 2004, p. 89
  28. Hastie, Reid; Bernadette Park (2005). "The Relationship Between Memory and Judgment Depends on Whether the Judgment Task is Memory-Based or On-Line" David L. Hamilton Social cognition: key readings, New York: Psychology Press.
  29. Stangor, Charles, David McMillan (1992). Memory for expectancy-congruent and expectancy-incongruent information: A review of the social and social developmental literatures. Psychological Bulletin 111 (1): 42–61.
  30. Snyder, M., N. Cantor (1979). Testing hypotheses about other people: the use of historical knowledge. Journal of Experimental Social Psychology 15: 330–342. via Goldacre, Ben (2008). Bad Science, London: Fourth Estate.
  31. Kunda 1999, pp. 225–232
  32. Sanitioso, Rasyid, Ziva Kunda, G. T. Fong (1990). Motivated recruitment of autobiographical memories. Journal of Personality and Social Psychology 59 (2): 229–241.
  33. Russell, Dan, Warren H. Jones (1980). When superstition fails: Reactions to disconfirmation of paranormal beliefs. Personality and Social Psychology Bulletin 6 (1): 83–88. via Vyse 1997, p. 121
  34. Kuhn, Deanna, Joseph Lao (March 1996). Effects of Evidence on Attitudes: Is Polarization the Norm?. Psychological Science 7 (2): 115–120.
  35. Baron 2000, p. 201
  36. Miller, A. G., J. W. McHoskey, C. M. Bane, T. G. Dowd (1993). The attitude polarization phenomenon: Role of response measure, attitude extremity, and behavioral consequences of reported attitude change. Journal of Personality and Social Psychology 64: 561–574.
  37. Kunda 1999, p. 99
  38. Ross, Lee, Mark R. Lepper, Michael Hubbard (1975). Perseverance in self-perception and social perception: Biased attributional processes in the debriefing paradigm. Journal of Personality and Social Psychology 32 (5): 880–892. via Kunda 1999, p. 99
  39. Anderson, Craig A., Mark R. Lepper, Lee Ross (1980). Perseverance of Social Theories: The Role of Explanation in the Persistence of Discredited Information. Journal of Personality and Social Psychology 39 (6): 1037–1049.
  40. Baron 2000, pp. 197–200
  41. Fine 2006, pp. 66–70
  42. Redelmeier, D. A., Amos Tversky (1996). On the belief that arthritis pain is related to the weather. Proceedings of the National Academy of Sciences 93: 2895–2896. via Kunda 1999, p. 127
  43. Plous, Scott (1993). The Psychology of Judgment and Decision Making, 162–164, McGraw-Hill.
  44. Adapted from Oswald & Grosjean 2004, p. 103
  45. Bacon, Francis (1620). Novum Organum. reprinted in (1939) E. A. Burtt The English philosophers from Bacon to Mill, New York: Random House. via Nickerson 1998, pp. 175–220
  46. Baron 2000, pp. 195–196
  47. Thucydides, Richard Crawley (trans.), The History of the Peloponnesian War, http://classics.mit.edu/Thucydides/pelopwar.mb.txt
  48. Wason, Peter C. (1960). On the failure to eliminate hypotheses in a conceptual task. Quarterly Journal of Experimental Psychology 12: 129–140.
  49. Wason, Peter C. (1968). Reasoning about a rule. Quarterly Journal of Experimental Psychology 20: 273–281.
  50. Barkow, Jerome H.; Leda Cosmides, John Tooby (1995). The adapted mind: evolutionary psychology and the generation of culture, 181–184, Oxford University Press US.
  51. Tweney, Ryan D., Michael E. Doherty, Winifred J. Worner, Daniel B. Pliske, Clifford R. Mynatt, Kimberly A. Gross, Daniel L. Arkkelin (1980). Strategies of rule discovery in an inference task. The Quarterly Journal of Experimental Psychology 32 (1): 109–123. (Experiment IV)
  52. Matlin, Margaret W. (2004). "Pollyanna Principle" Rüdiger F. Pohl Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory, 255–272, Hove: Psychology Press.
  53. Dawson, Erica, Thomas Gilovich, Dennis T. Regan (October 2002). Motivated Reasoning and Performance on the Wason Selection Task. Personality and Social Psychology Bulletin 28 (10): 1379–1387.
  54. Ditto, Peter H., David F. Lopez (1992). Motivated skepticism: use of differential decision criteria for preferred and nonpreferred conclusions. Journal of Personality and Social Psychology 63 (4): 568–584.
  55. Trope, Y.; A. Liberman (1996). "Social hypothesis testing: cognitive and motivational mechanisms" E. Tory Higgins, Arie W. Kruglanski Social Psychology: Handbook of basic principles, New York: Guilford Press. via Oswald & Grosjean 2004, pp. 91–93
  56. Goldacre, Ben (2008). Bad Science, London: Fourth Estate.
  57. Beck, Aaron T. (1976). Cognitive therapy and the emotional disorders, New York: International Universities Press.
  58. Myers, D. G., H. Lamm (1976). The group polarization phenomenon. Psychological Bulletin 83: 602–627. via Nickerson, Raymond S. (1998). Confirmation Bias; A Ubiquitous Phenomenon in Many Guises. Review of General Psychology 2 (2): 193–194.
  59. Halpern, Diane F. (1987). Critical thinking across the curriculum: a brief edition of thought and knowledge, Lawrence Erlbaum Associates.
  60. Tetlock, Philip E. (2005). Expert Political Judgment: How Good Is It? How Can We Know?, 125–128, Princeton, N.J.: Princeton University Press.
  61. Smith, Jonathan C. (2009). Pseudoscience and Extraordinary Claims of the Paranormal: A Critical Thinker's Toolkit, 149–151, John Wiley and Sons.
  62. Randi, James (1991). James Randi: psychic investigator, 58–62, Boxtree.
  63. Nickerson 1998, pp. 192–194
  64. Hergovich, Andreas, Reinhard Schott, Christoph Burger (2010). Biased evaluation of abstracts depending on topic and conclusion: Further evidence of a confirmation bias within scientific psychology. Current Psychology 29 (3): 188–209.
  65. Koehler, Jonathan J. (1993). The influence of prior beliefs on scientific judgments of evidence quality. Organizational Behavior and Human Decision Processes 56: 28–55.
  66. Mahoney, Michael J. (1977). Publication prejudices: An experimental study of confirmatory bias in the peer review system. Cognitive Therapy and Research 1 (2): 161–175.
  67. Horrobin, David F. (1990). The philosophical basis of peer review and the suppression of innovation. Journal of the American Medical Association 263 (10): 1438–1441.
  68. Proctor, Robert W.; Capaldi, E. John (2006), Why science matters: understanding the methods of psychological research, Wiley-Blackwell, p. 68, ISBN 978-1-4051-3049-3
  69. Sternberg, Robert J. (2007), "Critical Thinking in Psychology: It really is critical", in Sternberg, Robert J.; Roediger III, Henry L.; Halpern, Diane F., Critical Thinking in Psychology, Cambridge University Press, p. 292, ISBN 0-521-60834-1. "Some of the worst examples of confirmation bias are in research on parapsychology ... Arguably, there is a whole field here with no powerful confirming data at all. But people want to believe, and so they find ways to believe."
  70. Shadish, William R. (2007), "Critical Thinking in Quasi-Experimentation", in Sternberg, Robert J.; Roediger III, Henry L.; Halpern, Diane F., Critical Thinking in Psychology, Cambridge University Press, p. 49, ISBN 978-0-521-60834-3
  71. PMID 11440947.
  72. Shermer, Michael (July 2006), "The Political Brain", Scientific American, ISSN 0036-8733, http://www.scientificamerican.com/article.cfm?id=the-political-brain, retrieved on 2009-08-14 
  73. PMID 21098355.

References

  • Baron, Jonathan (2000). Thinking and deciding, 3rd, New York: Cambridge University Press.
  • Fine, Cordelia (2006). A Mind of its Own: how your brain distorts and deceives, Cambridge, UK: Icon books.
  • Kunda, Ziva (1999). Social Cognition: Making Sense of People, MIT Press.
  • Oswald, Margit E.; Grosjean, Stefan (2004). "Confirmation Bias" in Rüdiger F. Pohl (ed.), Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory, 79–96, Hove, UK: Psychology Press.
  • Vyse, Stuart A. (1997). Believing in magic: The psychology of superstition, New York: Oxford University Press.

Further reading

  • Bell, Robert (1992). Impure Science: fraud, compromise, and political influence in scientific research, New York: John Wiley & Sons.
  • Westen, Drew (2007). The political brain: the role of emotion in deciding the fate of the nation, PublicAffairs.

This page uses Creative Commons Licensed content from Wikipedia.