Experimenter bias


Experimenter's bias is the phenomenon in experimental science by which the outcome of an experiment tends to be biased towards a result expected by the human experimenter. The inability of a human being to remain completely objective is the ultimate source of this bias. It occurs more often in the sociological and medical sciences, for which reason double-blind techniques are often employed to combat it. But experimenter's bias can also be found in some physical sciences, for example where the experimenter rounds off measurements.

David Sackett[1], in a useful review of biases in clinical studies, states that bias can occur at any of seven stages of research:

  1. in reading-up on the field,
  2. in specifying and selecting the study sample,
  3. in executing the experimental manoeuvre (or exposure),
  4. in measuring exposures and outcomes,
  5. in analyzing the data,
  6. in interpreting the analysis, and
  7. in publishing the results.

Classification of experimenter's biases

Modern electronic or computerized data acquisition techniques have greatly reduced the likelihood of such bias, but it can still be introduced by a poorly designed analysis technique. Experimenter's bias was not well recognized until the 1950s and 1960s, and then primarily in medical experiments and studies. Sackett (1979) catalogued 56 biases that can arise in sampling and measurement in clinical research, spanning the first six stages of research listed above:

  1. In reading-up the field
    1. the biases of rhetoric
    2. the all's well literature bias
    3. one-sided reference bias
    4. positive results bias
    5. hot stuff bias
  2. In specifying and selecting the study sample
    1. popularity bias
    2. centripetal bias
    3. referral filter bias
    4. diagnostic access bias
    5. diagnostic suspicion bias
    6. unmasking (detection signal) bias
    7. mimicry bias
    8. previous opinion bias
    9. wrong sample size bias
    10. admission rate (Berkson) bias
    11. prevalence-incidence (Neyman) bias
    12. diagnostic vogue bias
    13. diagnostic purity bias
    14. procedure selection bias
    15. missing clinical data bias
    16. non-contemporaneous control bias
    17. starting time bias
    18. unacceptable disease bias
    19. migrator bias
    20. membership bias
    21. non-respondent bias
    22. volunteer bias
  3. In executing the experimental manoeuvre (or exposure)
    1. contamination bias
    2. withdrawal bias
    3. compliance bias
    4. therapeutic personality bias
    5. bogus control bias
  4. In measuring exposures and outcomes
    1. insensitive measure bias
    2. underlying cause bias (rumination bias)
    3. end-digit preference bias
    4. apprehension bias
    5. unacceptability bias
    6. obsequiousness bias
    7. expectation bias
    8. substitution game
    9. family information bias
    10. exposure suspicion bias
    11. recall bias
    12. attention bias
    13. instrument bias
  5. In analyzing the data
    1. post-hoc significance bias
    2. data dredging bias (looking for the pony)
    3. scale degradation bias
    4. tidying-up bias
    5. repeated peeks bias
  6. In interpreting the analysis
    1. mistaken identity bias
    2. cognitive dissonance bias
    3. magnitude bias
    4. significance bias
    5. correlation bias
    6. under-exhaustion bias

The effects of bias on experiments in the physical sciences have not always been fully recognized.

Statistical background

In principle, if a measurement has a resolution of R and the experimenter averages N independent measurements, the average will have a resolution of R/\sqrt{N} (a consequence of the central limit theorem). This is an important experimental technique used to reduce the impact of randomness on an experiment's outcome. It requires, however, that the measurements be statistically independent, and there are several reasons why they may not be. If independence is not satisfied, then the average may not actually be a better statistic but may merely reflect the correlations among the individual measurements and their non-independent nature.
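The R/\sqrt{N} scaling can be checked with a short simulation. This is a minimal sketch with hypothetical numbers: each measurement is modelled as a true value plus independent Gaussian noise of standard deviation R, and the spread of many N-point averages is compared against R/\sqrt{N}.

```python
import random
import statistics

# Hypothetical setup: single-measurement noise std R, N independent
# measurements per average.
random.seed(0)
TRUE_VALUE = 10.0
R = 1.0        # resolution (noise std) of a single measurement
N = 100        # independent measurements per average

def one_average():
    # Average of N statistically independent measurements.
    return statistics.fmean(
        TRUE_VALUE + random.gauss(0.0, R) for _ in range(N)
    )

# The spread of many such averages approximates R / sqrt(N) = 0.1.
averages = [one_average() for _ in range(2000)]
print(round(statistics.stdev(averages), 3))   # close to 0.1
```

The observed standard deviation of the averages lands near 0.1, i.e. a tenfold improvement over a single measurement, exactly as the \sqrt{N} rule predicts for independent data.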

The most common cause of non-independence is systematic errors (errors affecting all measurements equally, causing the different measurements to be highly correlated, so the average is no better than any single measurement). Experimenter bias is another potential cause of non-independence.
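The failure mode described above can be made concrete. In this sketch (hypothetical numbers), every measurement within a run shares one calibration offset, the systematic error, plus its own independent noise; averaging within a run then cannot shrink the run-to-run spread below the systematic error.

```python
import random
import statistics

random.seed(1)
TRUE_VALUE = 10.0

def average_of_run(n, systematic):
    # Every measurement in this run shares the same calibration offset
    # (the systematic error), plus independent random noise of std 1.0.
    offset = random.gauss(0.0, systematic)
    return statistics.fmean(
        TRUE_VALUE + offset + random.gauss(0.0, 1.0) for _ in range(n)
    )

runs = [average_of_run(2000, systematic=0.5) for _ in range(200)]
# Averaging 2,000 points per run would suggest a resolution of about
# 1/sqrt(2000) ~ 0.02, but the observed run-to-run spread stays near
# the systematic error of 0.5.
print(round(statistics.stdev(runs), 2))
```

Because the offset is common to all 2,000 points in a run, the measurements are highly correlated and the average is no better than the systematic error, whatever N is.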

Biological and medical sciences

The complexity of living systems, and the ethical impossibility of performing fully controlled experiments on certain animal species and on humans, provide a rich, and difficult to control, source of experimental bias. Scientific knowledge about the phenomenon under study, together with the systematic elimination of probable causes of bias through the detection of confounding factors, is the only way to isolate true cause-and-effect relationships. It is also in epidemiology that experimenter bias has been studied more thoroughly than in other sciences.

Physical sciences

If the signal being measured is actually smaller than the rounding error and the data are over-averaged, a positive result for the measurement can be found in the data where none exists (i.e. a more precise experimental apparatus would conclusively show no such signal). If an experiment is searching for a sidereal variation of some measurement, if the measurement is rounded off by a human who knows the sidereal time of the measurement, and if hundreds of measurements are averaged to extract a "signal" smaller than the apparatus's actual resolution, then this "signal" can come from the non-random round-off rather than from the apparatus itself. In such cases a single-blind experimental protocol is required: if the human observer does not know the sidereal time of the measurements, then even though the round-off is non-random it cannot introduce a spurious sidereal variation.
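The spurious-signal mechanism can be simulated directly. This is a sketch under stated assumptions: the true quantity is constant (no sidereal variation at all), the instrument reads to the nearest whole unit, and a biased observer who knows the sidereal phase nudges borderline readings toward the "expected" variation before rounding; a blinded observer rounds the same readings without knowing the phase.

```python
import random
import statistics

random.seed(2)

def reading():
    # True value 0.5 plus small noise: raw values straddle the
    # rounding boundary, so round-off choices matter.
    return 0.5 + random.gauss(0.0, 0.1)

def biased_round(x, phase_is_day):
    # Observer knows the sidereal phase and nudges borderline values
    # in the expected direction before rounding to the nearest unit.
    nudge = 0.05 if phase_is_day else -0.05
    return round(x + nudge)

def blind_round(x):
    # Single-blind protocol: the rounder never sees the phase.
    return round(x)

day   = [biased_round(reading(), True)  for _ in range(5000)]
night = [biased_round(reading(), False) for _ in range(5000)]
print(statistics.fmean(day) - statistics.fmean(night))   # large fake "signal"

day_b   = [blind_round(reading()) for _ in range(5000)]
night_b = [blind_round(reading()) for _ in range(5000)]
print(statistics.fmean(day_b) - statistics.fmean(night_b))  # consistent with zero
```

Even though the true value never varies, the biased rounding manufactures a large averaged day-night difference, while the blinded rounding, despite being just as non-random, produces a difference consistent with zero, exactly the protection a single-blind protocol provides.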

Social sciences

The experimenter may introduce cognitive bias into a study in several ways. First, in what is called the observer-expectancy effect, the experimenter may subtly communicate their expectations for the outcome of the study to the participants, causing them to alter their behaviour to conform to those expectations. Second, after the data are collected, bias may be introduced during data interpretation and analysis.

Forensic sciences

Observer effects are rooted in the universal human tendency to interpret data in a manner consistent with one's expectations[2]. This tendency is particularly likely to distort the results of a scientific test when the underlying data are ambiguous and the scientist is exposed to domain-irrelevant information that engages emotions or desires[3]. Despite impressions to the contrary, forensic DNA analysts often must resolve ambiguities, particularly when interpreting difficult evidence samples such as those that contain mixtures of DNA from two or more individuals, degraded or inhibited DNA, or limited quantities of DNA template. The full potential of forensic DNA testing can only be realized if observer effects are minimized.[4]


References

  1. Sackett, D. L. (1979). Bias in analytic research. Journal of Chronic Diseases, 32, 51–63.
  2. Rosenthal, R. (1966). Experimenter Effects in Behavioral Research. New York: Appleton-Century-Crofts.
  3. Risinger, D. M., Saks, M. J., Thompson, W. C., & Rosenthal, R. (2002). California Law Review, January 2002.
  4. Krane, D., Ford, S., Gilder, J., Inman, K., Jamieson, A., Koppl, R., Kornfield, I., Risinger, D., Rudin, N., Taylor, M., & Thompson, W. C. (2008). Sequential unmasking: A means of minimizing observer effects in forensic DNA interpretation. Journal of Forensic Sciences, 53(4), 1006–1007.
This page uses Creative Commons licensed content from Wikipedia.
