 
{{ProfPsy}}
 
This article is about the peer review process for scholarly work. For general social approval by one's peers, see [[Peer evaluation]].
   
 
[[Image:ScientificReview.jpg|thumbnail|300px|A reviewer at the [[National Institutes of Health]] evaluates a grant proposal.]]
 
'''Peer review''' (known as '''refereeing''' in some [[academic]] fields) is a process of subjecting an author's [[Scholarly method|scholarly]] work or [[idea]]s to the scrutiny of others who are [[expert]]s in the field. It is used primarily by editors to select and to screen submitted [[Manuscript#Manuscripts today|manuscript]]s, and by funding agencies to decide the awarding of grants. The peer review process aims to make [[author]]s meet the standards of their discipline, and of science in general. Publications and awards that have not undergone peer review are likely to be regarded with suspicion by scholars and professionals in many fields. Even refereed journals, however, have been shown to contain errors, fraud and other flaws that undermine their claims to publish sound science.

In the case of manuscripts, the [[editor]] will pass manuscripts that are accepted for publication to a [[publisher]], who will be responsible for organizing [[redactory services]], printing and distribution of the publication. In specialist academic (scholarly) journals, the editor (or, increasingly, a group of editors) is normally a well-respected academic in the field, and edits the journal on behalf of a learned society or a commercial publisher. Some journals have professional editors employed by the publisher (e.g. ''[[Nature (journal)|Nature]]'') or the charity (e.g. ''[[Science (journal)|Science]]'') owning the journal. An editor is ultimately responsible for the quality and selection of manuscripts chosen to be published, usually basing the decision on peer review, although the authors are always responsible for the content of each manuscript. The editor does not revise and correct spelling, grammar and formatting; that process is carried out by a [[Copy Editing|copy editor]], although the editor controls the quality of the process.
   
 
==Reasons for peer review==
 
A rationale for peer review is that it is rare for an individual author or research team to spot every mistake or flaw in a complicated piece of work. This is not because deficiencies represent needles in a haystack, but because in a new and perhaps eclectic intellectual product, an opportunity for improvement may stand out only to someone with special expertise or experience. For both grant-funding and publication in a scholarly journal, it is also normally a requirement that the work is both novel and substantial. Showing work to others therefore increases the probability that weaknesses will be identified and, with advice and encouragement, fixed. The [[anonymity]] and [[independence]] of reviewers is intended to foster unvarnished criticism and to discourage [[cronyism]] in funding and publication decisions. However, US government guidelines governing peer review for federal regulatory agencies require that reviewer identity be disclosed under some circumstances.
   
In addition, since the reviewers are normally selected from experts in the fields discussed in the article, the process of peer review is considered critical to establishing a reliable body of research and knowledge. Scholars reading the published articles can only be expert in a limited area; they rely to some degree on the peer-review process to provide reliable and credible research that they can build upon for subsequent or related research. As a result, significant scandal ensues when an author is found to have falsified the research included in an article, as many other scholars, and the field of study itself, may have relied upon that research (see [[Peer review#Peer review and fraud|Peer review and fraud]] below).
   
 
==How it works==
 
In the case of proposed publications, an editor sends advance copies of an author's work or [[idea]]s to researchers or scholars who are [[expert]]s in the field (known as "referees" or "reviewers"), normally by e-mail or through a web-based manuscript processing system. Usually there are two or three referees. Each referee returns an evaluation of the work to the editor, including suggestions for improvement. Typically, most of the referees' comments are eventually seen by the author, although some comments may be designated as confidential to the editor; [[scientific journal]]s observe this convention universally. The editor, who is usually familiar with the field of the manuscript (although not in as much depth as the referees, who are specialists), then weighs the referees' comments, their own opinion of the manuscript, and the manuscript's fit with the scope of the journal (or the level and readership of the book), before passing a decision back to the author(s), usually together with the referees' comments.

Referees' evaluations usually include an explicit recommendation of what to do with the manuscript or proposal, often chosen from a menu provided by the journal or funding agency. Most recommendations are along the lines of the following:
   
 
* to unconditionally accept the manuscript or proposal,
* to accept it in the event that its authors improve it in certain ways,
* to reject it, but encourage revision and invite resubmission,
* to reject it outright.
   
During this process, the role of the referees is advisory, and the editor is under no formal obligation to accept the opinions of the referees. Furthermore, in scientific publication, the referees do not act as a group, do not communicate with each other, and typically are not aware of each other's identities. There is usually no requirement that the referees achieve [[consensus]]. Thus the group dynamics are substantially different from those of a [[jury]]. In situations where the referees disagree about the quality of a work, there are a number of strategies for reaching a decision.
   
 
When an editor receives very positive and very negative reviews for the same manuscript, the editor often will solicit one or more additional reviews as a tie-breaker. As another strategy in the case of ties, editors may invite authors to reply to a referee's [[criticism]]s and permit a compelling rebuttal to break the tie. If an editor does not feel confident to weigh the persuasiveness of a rebuttal, the editor may solicit a response from the referee who made the original criticism. In rare instances, an editor will convey communications back and forth between authors and a referee, in effect allowing them to debate a point. Even in these cases, however, editors do not allow referees to confer with each other, and the goal of the process is explicitly not to reach consensus or to convince anyone to change their opinions. Some medical journals, however (usually following the [[open access]] model), have begun posting on the Internet the pre-publication history of each individual article, from the original submission to reviewers' reports, authors' comments, and revised manuscripts.
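
The advisory flow described above (independent referee recommendations weighed by an editor, with an extra review solicited when opinions are strongly split) can be pictured with a short, purely illustrative sketch. The recommendation menu, the names and the simple tie-breaking rule below are hypothetical and do not describe any particular journal's manuscript-handling system.

<pre>
# Illustrative sketch only: hypothetical names and a simplified tie-breaking rule.
from enum import Enum

class Recommendation(Enum):
    ACCEPT = "accept unconditionally"
    ACCEPT_IF_IMPROVED = "accept if improved in certain ways"
    REJECT_RESUBMIT = "reject, but encourage revision and resubmission"
    REJECT = "reject outright"

def editorial_decision(reviews: list[Recommendation]) -> str:
    """The referees' role is advisory: they do not confer, and the editor
    simply weighs their independent recommendations."""
    favourable = sum(r in (Recommendation.ACCEPT, Recommendation.ACCEPT_IF_IMPROVED)
                     for r in reviews)
    unfavourable = len(reviews) - favourable
    if favourable == unfavourable:
        # Strongly split reviews: one common strategy is to solicit an
        # additional review (or an author rebuttal) as a tie-breaker.
        return "solicit an additional review"
    return "provisionally accept" if favourable > unfavourable else "reject"

print(editorial_decision([Recommendation.ACCEPT, Recommendation.REJECT]))
# -> solicit an additional review
print(editorial_decision([Recommendation.ACCEPT_IF_IMPROVED, Recommendation.ACCEPT]))
# -> provisionally accept
</pre>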
 
   
 
Traditionally reviewers would remain anonymous to the authors, but this is slowly changing. In some academic fields most journals now offer the reviewer the option of remaining anonymous or not, or a referee may opt to sign a review, thereby relinquishing anonymity. Published papers sometimes contain, in the acknowledgements section, thanks to anonymous or named referees who helped improve the paper.
 
   
 
==Recruiting referees==
 
   
 
At a journal or book publisher, the task of picking reviewers typically falls to an [[editing|editor]]. When a manuscript arrives, an editor solicits reviews from [[scholar]]s or other experts who may or may not have already expressed a willingness to referee for that [[journal]] or [[book division]]. Granting agencies typically recruit a [[panel]] or [[committee]] of reviewers in advance of the arrival of applications.
 
   
 
In some disciplines, such as computer science, there exist refereed venues (such as [[academic conference|conferences]] and workshops). To be admitted to speak, scholars and scientists must submit papers (generally short, often 15 pages or less) in advance. These papers are reviewed by a "program committee" (the equivalent of an editorial board), which generally requests input from referees. The hard deadlines set by the conferences tend to limit the options to either accepting or rejecting the paper.
 
   
 
Typically referees are not selected from among the authors' close [[Collegiality|colleague]]s, students, relatives, or friends. Referees are supposed to inform the editor of any [[conflict of interests|conflict of interest]] that might arise. Journals or individual editors often invite a manuscript's authors to name people whom they consider qualified to referee their work. Authors are sometimes also invited to name natural candidates who should be ''disqualified'', in which case they may be asked to provide justification (typically expressed in terms of conflict of interest). In some disciplines, scholars listed in an "acknowledgements" section are not allowed to serve as referees (hence the occasional practice of using this section to disqualify potentially negative reviewers).
 
   
 
Editors solicit author input in selecting referees because [[academia|academic]] writing typically is very specialized. Editors often oversee many specialities, and may not be experts in any of them, since editors may be full time professionals with no time for [[scholarly method|scholarship]]. But after an editor selects referees from the pool of candidates, the editor typically is obliged not to disclose the referees' identities to the authors, and in scientific journals, to each other. Policies on such matters differ between academic disciplines.
 
   
 
Recruiting [[referee]]s is a political art, because referees, and often editors, are usually not paid, and reviewing takes time away from the referee's main activities, such as his or her own research. To the would-be recruiter's advantage, most potential referees are [[author]]s themselves, or at least [[reader]]s, who know that the publication system requires that [[expert]]s donate their time. Referees also have the opportunity to prevent work that does not meet the standards of the field from being published, which is a position of some responsibility. Editors are at a special advantage in recruiting a [[scholar]] when they have overseen the publication of his or her work, or if the scholar is one who hopes to submit manuscripts to that editor's publication in the future. Granting agencies, similarly, tend to seek referees among their present or former grantees. Serving as a referee can even be a condition of a grant, or professional association membership.
 
 
 
   
Another difficulty that peer-review organizers face is that, with respect to some manuscripts or proposals, there may be few scholars who truly qualify as experts. Such a circumstance often frustrates the goals of reviewer anonymity and the avoidance of conflicts of interest. It also increases the chances that an organizer will not be able to recruit true experts – people who have themselves done work like that under review, and who can read between the lines. Low-prestige or local journals and granting agencies that award little money are especially handicapped with regard to recruiting experts.
   
Finally, [[anonymity]] adds to the difficulty in finding reviewers in another way. In scientific circles, [[credentials]] and [[reputation]] are important, and while being a referee for a prestigious journal is considered an honor, the anonymity restrictions make it impossible to publicly state that one was a referee for a particular article. However, credentials and reputation are principally established by publications, not by refereeing; and in some fields refereeing may not be anonymous.
   
The process of peer review does not end once a paper is published. After it has been put to press, and 'the ink is dry', peer review continues in [[journal club]]s. Here groups of colleagues review the published literature and discuss its value and implications. Journal clubs will often send letters to the editor of a journal, or correspond with the editor via an [http://www.JournalReview.org on-line journal club]. In this way, all 'peers' may offer review and critique of published literature.
   
 
==Different styles of review==
 
 
Often the decision of what counts as "good enough" falls entirely to the editor or organizer of the review. In other cases, referees will each be asked to make the call, with only general guidance from the coordinator on what stringency to apply.
 
   
Very general journals such as ''[[Science (journal)|Science]]'' and ''[[Nature (journal)|Nature]]'' have extremely stringent standards for publication, and will reject papers that report good-quality scientific work if the editors feel the work is not a breakthrough in the field. Such journals generally have a two-tier reviewing system. In the first stage, members of the editorial board verify that the paper's findings, if correct, would be ground-breaking enough to warrant publication in ''Science'' or ''Nature''. Most papers are rejected at this stage. Papers that do pass this 'pre-reviewing' are sent out for in-depth review to outside referees. Even after all reviewers recommend publication and all reviewer criticisms and suggestions for changes have been met, papers may still be returned to the authors for shortening to meet the journal's length limits. With the advent of electronic journal editions, overflow material may be stored in the journal's online Electronic Supporting Information archive.
   
A similar emphasis on novelty exists in general area journals such as the ''[[Journal of the American Chemical Society]]'' (''JACS''). However, these journals generally send out all papers (except blatantly inappropriate ones) for peer review by multiple reviewers. The reviewers are specifically queried not just on the scientific quality and correctness, but also on whether the findings are of interest to the general area readership (chemists of all disciplines, in the case of ''JACS'') or only to a specialist subgroup. In the latter case, the recommendation is usually for publication in a more specialized journal. The editor may offer the authors the option of having the manuscript and reviews forwarded to such a journal with the same publisher (in the example given, the ''Journal of Organic Chemistry'', ''Journal of Physical Chemistry'', ''Inorganic Chemistry'', ...). If the reviewer reports warrant such a decision (i.e., they boil down to "great work, but too specialized for ''JACS'': publish in ..."), the editor of that journal may accept the forwarded manuscript without further reviewing.
   
Some general area journals, such as ''[[Physical Review Letters]]'', have strict length limitations. Others, such as ''JACS'', have Letters and Full Papers sections: the Letters sections have strict length limits (two journal pages in the case of ''JACS'') and special novelty requirements. In contrast, online-only journals may have no space limitations [http://www.biomedcentral.com/info/authors/reasons].
   
More specialized scientific journals such as the aforementioned chemistry journals, ''[[Astrophysical Journal]]'', and the ''[[Physical Review]]'' series use peer review primarily to filter out obvious mistakes and incompetence, as well as plagiarism, overly derivative work, and straightforward applications of known methods. Different publication rates reflect these different criteria: ''Nature'' publishes about 5 percent of received papers, while ''Astrophysical Journal'' publishes about 70 percent. The different publication rates are also reflected in the size of the journals. ''[[PLoS ONE]]'' was launched by the [[Public Library of Science]] in 2006 with the aim to "concentrate on technical rather than subjective concerns", and to publish articles from across science, regardless of the field [http://www.plosone.org/static/information.action].
   
Screening by peers may be more or less [[laissez-faire]] depending on the discipline. [[Physicists]], for example, tend to think that decisions about the worthiness of an article are best left to the marketplace. Yet even within such a culture peer review serves to ensure high standards in what is published. Outright errors are detected and authors receive both edits and suggestions.
   
To preserve the integrity of the peer-review process, submitting authors may not be informed of who reviews their papers; sometimes, they might not even know the identity of the associate editor who is responsible for the paper. In many cases, alternatively called "masked" or "double-masked" review (or "blind" or "double-blind" review), the identity of the authors is concealed from the reviewers, lest the knowledge of authorship bias their review; in such cases, however, the associate editor responsible for the paper does know who the author is. Sometimes the scenario where the reviewers do know who the authors are is called "single-masked" to distinguish it from the "double-masked" process. In double-masked review, the authors are required to remove any reference that may point to them as the authors of the paper.
   
While the anonymity of reviewers is almost universally preserved, double-masked review (where authors are also anonymous to reviewers) is rarely employed. Critics of the double-masked process point out that, despite the extra editorial effort to ensure anonymity, the process often fails to do so, since certain approaches, methods, notations, etc., may point to a certain group of people in a research stream, and even to a particular person [http://blogs.nature.com/nn/actionpotential/2005/12/doubleblind_peer_review.html]. Proponents of double-masked review argue that it performs at least as well as the traditional process and that it generates a better perception of fairness and equality in global scientific funding and publishing.<ref>"[http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1201308 Peer Review—The Newcomers' Perspective]" (2004) PLoS Biol. 2005 September; 3(9): e326 doi: 10.1371/journal.pbio.0030326.</ref>

Proponents of the single-masked process argue that if the reviewers of a paper are unknown to each other, the associate editor responsible for the paper can easily verify the objectivity of the reviews. Single-masked review is thus strongly dependent upon the goodwill of the participants.
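
The practical difference between what the associate editor and the referees see under a double-masked process can be illustrated with a minimal sketch; the field names and the function below are invented for illustration and are not part of any real submission system.

<pre>
# Illustrative sketch only: invented field names, not any real journal's system.
def mask_for_referees(manuscript: dict) -> dict:
    """Return a copy of a manuscript record with author-identifying fields
    removed, as a double-masked journal might do before sending it to referees.
    The associate editor keeps the full, unmasked record."""
    identifying = {"authors", "affiliations", "acknowledgements", "cover_letter"}
    return {key: value for key, value in manuscript.items() if key not in identifying}

submission = {
    "title": "An example manuscript",
    "abstract": "A one-paragraph summary of the findings ...",
    "authors": ["A. Author", "B. Author"],
    "affiliations": ["Example University"],
    "acknowledgements": "We thank the Example Foundation ...",
}

print(mask_for_referees(submission))  # only the title and abstract remain
</pre>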
==Structure of a peer reviewed paper==
 
First is the ''abstract'', which is a one-paragraph summary of the findings of the study. Unlike the rest of the article, the abstract is often freely available and can be read in online databases such as [[Medline]]. The article itself starts with an ''introduction'' that describes earlier relevant research and explains the purpose of the current study. Next is a section called ''materials and methods'' (or something similar) that describes exactly how the study was conducted; the aim is that other researchers should be able to duplicate the study using this information and obtain the same results. The findings are described in the ''results'' section. Finally, there is a ''discussion'' (or ''conclusion'') section that interprets the results and may compare them with earlier findings.
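
Schematically, this conventional ordering can be summarised as follows. The sketch below merely restates the section structure described above; the class and field names are invented for illustration.

<pre>
# Illustrative sketch of the conventional article structure described above;
# the class and field names are invented, not a formal standard.
from dataclasses import dataclass

@dataclass
class PeerReviewedPaper:
    abstract: str               # one-paragraph summary, often freely readable online
    introduction: str           # earlier relevant research and the purpose of the study
    materials_and_methods: str  # enough detail for others to duplicate the study
    results: str                # the findings of the study
    discussion: str             # interpretation and comparison with earlier findings
</pre>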
 
   
 
==Criticisms of peer review==
 
 
One of the most common complaints about the peer review process is that it is slow, and that it typically takes several months or even several years in some fields for a submitted paper to appear in print. In practice, much of the communication about new results in some fields such as [[astronomy]] no longer takes place through peer reviewed papers, but rather through [[preprint]]s submitted onto electronic servers such as [[ArXiv.org e-print archive|arXiv.org]].
 
   
 
While passing the peer-review process is often considered in the [[scientific community]] to be a certification of validity, those who study the process itself often hold a far more skeptical view. Drummond Rennie, deputy editor of the ''[[Journal of the American Medical Association]]'', is an organizer of the International Congress on Peer Review and Biomedical Publication, which has been held every four years since 1986 [http://jama.ama-assn.org/cgi/content/full/289/11/1438]. He remarks, "There seems to be no study too fragmented, no hypothesis too trivial, no literature too biased or too egotistical, no design too warped, no methodology too bungled, no presentation of results too inaccurate, too obscure, and too contradictory, no analysis too self-serving, no argument too circular, no conclusions too trifling or too unjustified, and no grammar and syntax too offensive for a paper to end up in print." [http://www.aaskolnick.com/naswmav.htm]
 
   
===Allegations of bias and suppression===
 
   
 
In addition, some [[Science and technology studies|sociologists of science]] argue that peer review makes the ability to publish susceptible to control by [[elite]]s and to personal jealousy.<ref>"[http://www.eurekalert.org/pub_releases/2004-08/cu-bse081204.php British scientists exclude 'maverick' colleagues, says report]" (2004) EurekAlert Public release date: 16-Aug-2004</ref> The peer review process may [[Suppression of dissent|suppress dissent]] against "[[mainstream]]" theories.<ref>Brian Martin, "[http://www.uow.edu.au/arts/sts/bmartin/dissent/documents/ss/ Suppression Stories]" (1997) in ''Fund for Intellectual Dissent'' ISBN 0-646-30349-X</ref><ref>See also Juan Miguel Campanario, "[http://www2.uah.es/jmc/nobel.html Rejecting Nobel class articles and resisting Nobel class discoveries]", cited in ''Nature'', 16-Oct-2003, Vol 425, Issue 6959, p.645</ref><ref>Juan Miguel Campanario and Brian Martin, "[http://www.uow.edu.au/arts/sts/bmartin/pubs/04jse.html Challenging dominant physics paradigms]" (2004) ''[[Journal of Scientific Exploration]]'', vol. 18, no. 3, Fall 2004, pp. 421-438</ref> Reviewers tend to be especially critical of [[conclusion]]s that contradict their own [[view]]s, and lenient towards those that accord with them. At the same time, elite scientists are more likely than less established ones to be sought out as referees, particularly by high-prestige journals or [[publisher]]s. As a result, it has been argued, ideas that harmonize with the elite's are more likely to see print and to appear in premier journals than are iconoclastic or revolutionary ones, which accords with [[Thomas Kuhn]]'s well-known observations regarding [[The Structure of Scientific Revolutions|scientific revolutions]]. <ref>See also: Sophie Petit-Zeman, "[http://www.guardian.co.uk/print/0,3858,4583809-111019,00.html Trial by peers comes up short]" (2003) The Guardian, Thursday January 16, 2003</ref>
 
   
 
However, others have pointed out that there is a very large number of [[scientific journal]]s in which one can publish, making control of [[information]] difficult. In addition, the decision-making process of peer review, in which each referee gives their opinion separately and without consultation with the other referees, is intended to mitigate some of these problems. Others have pointed out that:

:"... peer review does not thwart new ideas. Journal editors and the 'scientific establishment' are not hostile to new discoveries. Science thrives on discovery and scientific journals compete to publish new breakthroughs"<ref>Ayala, F.J. "On the scientific method, its practice and pitfalls", (1994) ''[http://www.history-journals.de/index2.html History and Philosophy of Life Sciences]'' 16, 205-240.</ref>

Because the majority of peer review is conducted anonymously, and authors do not know the identity of their referees, some scientists believe that this anonymity has negative consequences, for example by providing an opportunity for settling old scores and burying rival research. Critics argue that an open peer review system, in which referees' identities are disclosed to researchers, would be preferable. A recent UK Parliamentary briefing paper listed arguments for and against an open system. Arguments against an open system include:

*Junior scientists may be unwilling to give an unfavourable review to a senior scientist.
*Referees are less likely to provide critical reviews.
*It may be difficult to recruit referees to an open system.

Arguments supporting an open system include:

*Reduces abuses of the system.
*Renders referees more accountable for their comments.
*Increases the credit given to referees.

==Peer review failures==
Peer review failures occur when a peer-reviewed article contains obvious fundamental errors that undermine at least one of its main conclusions. Peer review is not considered a failure in cases of deliberate fraud by authors. Letters to the editor that correct major errors in articles are a common indication of peer review failures. Many journals have no procedure to deal with peer review failures beyond publishing letters, and some do not even publish letters. The author of a disputed article is allowed a published reply to a critical letter. Neither the letter nor the reply is usually peer-reviewed, and typically the author rebuts the corrections, so readers are left to decide for themselves whether there was a peer review failure. However, the International Committee of Medical Journal Editors' [http://www.icmje.org/index.html#top Uniform Requirements for Manuscripts Submitted to Biomedical Journals] states that "if a fraudulent paper has been published, the journal must print a retraction" [http://www.icmje.org/#correct], and gives guidelines on investigating alleged fraud. Members of the UK-based [http://www.publicationethics.org.uk/ Committee on Publication Ethics] (COPE) have a duty to investigate allegations of [[Scientific misconduct|misconduct]] [http://www.publicationethics.org.uk/guidelines/code].
   
[http://www.thecre.com The Center for Regulatory Effectiveness] attempted to use the Letter-to-the-editor process when they found what they believed to be numerous factual errors in a [http://jama.ama-assn.org/cgi/content/extract/295/20/2407 Commentary] published by the ''Journal of the American Medical Association'' (''JAMA''), a prominent peer reviewed journal. The CRE sent ''JAMA'' a letter that purported to [http://www.thecre.com/pdf/CRE%20JAMA%20Response.pdf correct all of the factual errors]. ''JAMA'' refused to publish the letter as written. The ''JAMA'' editors insisted on changes based on length and content constraints. The CRE claimed that compliance with these constraints precluded correction of all factual errors. Consequently, the CRE withdrew its correction letter.
 
   
The factual errors claimed by CRE include the following. CRE claims the Commentary misstates the basis for the EU's ban on the herbicide atrazine, which it says was politics, not science. CRE claims the Commentary misstates the IARC classification of atrazine with regard to carcinogenicity. CRE claims the Commentary misstates that atrazine tests performed by Dr. Tyrone Hayes were accurate and reliable, when in fact Dr. Hayes' tests [http://thecre.com/pdf/20051222_hayes_white.pdf failed peer review]. CRE claims the Commentary misstates that the IQA has no legislative history, when in fact it has [http://www.thecre.com/quality/20041010_regweek.htm substantial legislative history]. CRE claims that the Commentary misrepresents several Data Quality Act requests for correction filed by CRE.
 
   
CRE claims that ''JAMA'''s use of non-peer reviewed Commentaries, when coupled with length and page constraints on correction letters, can cause the publication of biased and incorrect scientific information in peer reviewed journals.
 
An alternative method of dealing with peer review failures is correction via another peer-reviewed article. For example, a claim that the plant hormone, [[ethylene]], increased plant membrane permeability<ref>Poovaiah, B.W. 1979. Effects of inorganic cations on Ethephon-induced increases in membrane permeability. ''J. Amer. Soc. Hort. Sci.'' 104: 164-166.</ref> was shown to be an artifact caused by the low pH of the ethylene-releasing chemical, (2-chloroethyl)-phosphonic acid, employed.<ref>Reid, M.S., Paul, J.L. and Young, R.E. 1980. Effects of pH and ethephon on betacyanin leakage from beet root discs. ''Plant Physiology'' 66: 1015-1016. [http://www.plantphysiol.org/cgi/content/abstract/66/5/1015?maxtoshow=&HITS=10&hits=10&RESULTFORMAT=&author1=paul%2C+j&andorexactfulltext=and&searchid=1&FIRSTINDEX=0&sortspec=relevance&resourcetype=HWCIT]</ref> One disadvantage of this approach is that a reader who spots major flaws in an article may not have the time or resources to do the research and writing required for a peer-reviewed rebuttal article.

A famous peer review failure was the 1977 ''[[Science (journal)|Science]]'' article on the [[dodo]] and seed germination<ref>Temple, S.A. 1977. Plant-animal mutualism: Coevolution with dodo leads to near extinction of plant. ''Science'' 197: 885-886.</ref> that lacked the required control treatment for its main experiment, among other major flaws.<ref>Hershey, D.R. 2004. The widespread misconception that the tambalacoque or calvaria tree absolutely required the dodo bird for its seeds to germinate. ''Plant Science Bulletin'' 50: 105-108. [http://www.botany.org/PlantScienceBulletin/psb-2004-50-4.php#Dodo]</ref> Another glaring peer review failure involved a 1993 ''Bioscience'' article<ref>Allchin, D. 1993. Reassessing van Helmont, reassessing history. ''Bioscience: Journal of College Biology Teaching'' 19(2): 3-5. [http://papa.indstate.edu/amcbt/volume_19/v19-2p3-5.pdf]</ref> on [[Jean Baptist van Helmont]]. It had several major factual errors and no references for those supposed facts.<ref>Hershey, D.R. 2003. Misconceptions about Helmont's willow experiment. ''Plant Science Bulletin'' 49: 78-84. [http://www.botany.org/bsa/psb/2003/psb49-3.html#Misconceptions]</ref> ''Bioscience'' refused to publish a letter pointing out the factual errors and would not consider publishing a peer-reviewed article correcting the original article.

Acknowledged deviations from the idealized outcome of the peer review process are readily observable at both extremes: successful without peer review prior to publication on the one hand, and unsuccessful despite peer review on the other. Among the widely known examples of work later acknowledged to be successful without peer review prior to publication is [[Watson and Crick]]'s 1953 paper on the structure of DNA, published in ''[[Nature (journal)|Nature]]''.<ref>Watson, J.D. and Crick, F.H.C. 1953. A structure for Deoxyribose Nucleic Acid. ''Nature'' 171: 737-738. [http://www.nature.com/nature/dna50/watsoncrick.pdf]</ref> It also served as a rebuttal to a peer review failure.<ref>Pauling, L. and Corey, R. B. 1953. A proposed structure for the nucleic acids. ''Proc. Natl. Acad. Sci. U.S.A.'' 39(2): 84-97. [http://www.pubmedcentral.gov/articlerender.fcgi?tool=pmcentrez&artid=1063734]</ref> A widely known example of the other extreme is the [[Jacques Benveniste]] affair, where peer review was exercised prior to publication in the journal ''Nature'' and the published results could not be replicated by other researchers.

Because peer review has been a fixture of the scientific enterprise for a relatively short time, many of the major breakthroughs in the history of science were published without having undergone it, and some famous papers have been published without review even after peer review became common practice. The Watson and Crick paper itself was not sent out for peer review; John Maddox, the editor of ''Nature'', stated that "the Watson and Crick paper was not peer-reviewed by ''Nature''... the paper could not have been refereed: its correctness is self-evident. No referee working in the field (Linus Pauling?) could have kept his mouth shut once he saw the structure" (''Nature'' 426: 119 (2003)). The editors accepted the paper upon receipt of a "Publish" covering letter from the influential physicist [[William Lawrence Bragg]]. Another example is Abdus Salam's paper "Weak and electromagnetic interactions", which elucidated the unification of the weak nuclear force with the electromagnetic force into an electroweak force; it was originally published in ''Svartholm: Elementary Particle Theory, Proceedings Of The Nobel Symposium Held 1968 At Lerum, Sweden'' (Stockholm, 1968, 367–77).
  +
== Dynamic and open peer review ==
  +
It has been suggested that traditional anonymous peer review lacks accountability, can lead to abuse by reviewers, and may be biased and inconsistent [http://brain.oxfordjournals.org/cgi/content/full/123/9/1964], alongside other flaws [http://www.jisc.ac.uk/uploaded_documents/rowland.pdf] [http://www.the-scientist.com/article/display/23061/]. In response to these criticisms, other systems of peer review have been suggested.
  +
  +
In 1996, the ''[http://www-jime.open.ac.uk/ Journal of Interactive Media in Education]'' launched using open peer review [http://www-jime.open.ac.uk/about.html#lifecycle]. Reviewers' names are made public and they are therefore accountable for their review, but they also have their contribution acknowledged. Authors have the right of reply, and other researchers have the chance to comment prior to publication. In 1999, the ''[http://www.bmj.com/ [[British Medical Journal]]]'' moved to an open peer review system, revealing reviewers' identities to the authors [http://www.bmj.com/cgi/content/full/318/7175/4], and in 2000, the medical journals in the [[open access]] [http://www.biomedcentral.com/info/authors/bmcseries BMC series], published by [[BioMed Central]], launched using open peer review. As with the ''[[British Medical Journal|BMJ]]'', the reviewers' names are included on the peer review reports. In addition, if the article is published the reports are made available online as part of the 'pre-publication history'.
  +
  +
Several of the other journals published by the [http://www.bmjgroup.com/ BMJ group] allow optional open peer review [http://ard.bmj.com/ifora/peer_rev.dtl] [http://jme.bmj.com/ifora/peer_rev.dtl] [http://emj.bmj.com/ifora/peer_rev.dtl], as do ''[[PLoS Medicine]]'', published by the [http://www.plos.org/ [[Public Library of Science]]] [http://journals.plos.org/plosmedicine/reviewer_guidelines.php#anonymity], and the ''[http://www.jmir.org/ [[Journal of Medical Internet Research]]]''.
  +
  +
The evidence of the effect of open peer review upon the quality of reviews, the tone and the time spent on reviewing is mixed, although it does seem that under open peer review, more of those who are invited to review decline to do so [http://www.bmj.com/cgi/content/abstract/318/7175/23?ijkey=8feca9dda2f29a07ec06f70a661120c97578c339&keytype2=tf_ipsecsha] [http://bjp.rcpsych.org/cgi/content/abstract/176/1/47].
  +
  +
In June 2006, the high impact journal ''[[Nature (journal)|Nature]]'' launched an experiment in parallel open peer review - some articles that had been submitted to the regular anonymous process were also available online for open, identified public comment [http://blogs.nature.com/nature/peerreview/trial/]. The results were less than encouraging - only 5% of authors agreed to participate in the experiment, and only 54% of those articles received comments [http://www.nature.com/nature/peerreview/debate/nature05535.html] [http://www.nature.com/nature/journal/v444/n7122/full/444971b.html]. The editors have suggested that researchers may have been too busy to take part and were reluctant to make their names public. The knowledge that articles were simultaneously being subjected to anonymous peer review may also have affected the uptake.
  +
  +
In 2006, a group of UK academics launched the online journal ''[http://www.philica.com/ [[Philica]]]'', which tries to redress many of the problems of traditional peer review. Unlike in a normal journal, all articles submitted to ''Philica'' are published immediately and the review process takes place afterwards. Reviews are still anonymous, but instead of reviewers being chosen by an editor, any researcher who wishes to review an article can do so. Reviews are displayed at the end of each article, and so are used to give the reader criticism or guidance about the work, rather than to decide whether it is published or not. This means that reviewers cannot suppress ideas if they disagree with them. Readers use reviews to guide what they read, and particularly popular or unpopular work is easy to identify.
  +
  +
Another approach that is similar in spirit to ''Philica'' is that of a dynamical peer review site, [http://www.naboj.com Naboj]. Unlike ''Philica'', Naboj is not a full-fledged online journal, but rather it provides an opportunity for users to write peer reviews of [[preprints]] at [[ArXiv.org e-print archive|arXiv.org]]. The review system is modeled on [http://www.amazon.com [[Amazon.com|Amazon]]] and users have an opportunity to evaluate the reviews as well as the articles. That way, with a sufficient number of users and reviewers, there should be a convergence towards a higher quality review process. A site that is similar to Naboj, but applied to the biological and medical literature, is [http://www.journalreview.org/ JournalReview.org].
In February 2006, the journal ''[http://www.biology-direct.com Biology Direct]'' was launched by [[Eugene Koonin]], [http://www.princeton.edu/~lfl/research.html Laura Landweber], and [[David Lipman]], providing another alternative to the traditional model of peer review. If authors can find three members of the Editorial Board who will each return a report or will themselves solicit an external review, then the article will be published. As with ''[[Philica]]'', reviewers cannot suppress publication, but in contrast to ''Philica'', no reviews are anonymous and no article is published without being reviewed. Authors have the opportunity to withdraw their article, to revise it in response to the reviews, or to publish it without revision. If the authors proceed with publication of their article despite critical comments, readers can clearly see any negative comments along with the names of the reviewers [http://www.biology-direct.com/info/about/].
An extension of peer review beyond the date of publication is [[Open Peer Commentary]], whereby expert commentaries are solicited on published articles, and the authors are encouraged to respond. The ''[[British Medical Journal|BMJ]]'''s [http://www.bmj.com/cgi/eletters?lookup=by_date&days=1 Rapid Responses] allow ongoing debate and criticism following publication [http://www.bmj.com/cgi/content/full/324/7347/1171]. By 2005, the editors found it necessary to more rigorously enforce the criteria for acceptance of Rapid Responses, to weed out the "bores" [http://www.bmj.com/cgi/content/full/330/7503/1284].
 
==History of peer review==
 
Peer review has been a touchstone of modern scientific method only since the middle of the twentieth century.[http://www.designinference.com/documents/05.02.resp_to_wein.htm] Before then, its application was lax. For example, [[Albert Einstein]]'s revolutionary "Annus Mirabilis" papers in the [[1905]] volume of ''[[Annalen der Physik]]'' were not peer-reviewed. The journal's editor in chief (and father of quantum theory), [[Max Planck]], recognized the virtue of publishing such outlandish ideas and simply had the papers published; none of the papers were sent to reviewers. The decision to publish was made exclusively by either the editor in chief or the co-editor [[Wilhelm Wien]]&mdash;both certainly &lsquo;peers&rsquo; (who were later to win the [[Nobel prize]] in [[physics]]), but this does not meet the definition of "peer review" as it is currently understood. At the time there was a policy that allowed authors much latitude after their first publication. In a 2003 editorial in ''Nature'', it was stated that "in journals in those days, the burden of proof was generally on the opponents rather than the proponents of new ideas."<ref>Coping with peer rejection. ''Nature'' 425 (6959), 645 (16 Oct 2003). [http://dx.doi.org/10.1038/425645a doi:10.1038/425645a]</ref>
   
 
==Peer review and fraud==
Peer review, in scientific journals, assumes that the article reviewed has been honestly written, and the process is not designed to detect fraud. The reviewers usually do not have full access to the data from which the paper has been written and some elements have to be taken on trust. It is not usually practical for the reviewer to reproduce the author's work, unless the paper deals with purely theoretical problems which the reviewer can follow in a step-by-step manner.
   
The number and proportion of articles which are detected as fraudulent at the review stage is unknown. Some instances of outright [[scientific fraud]] and [[scientific misconduct]] have gone through review and were detected only after other groups tried and failed to replicate the published results. An example is the case of [[Jan Hendrik Schön]], in which a total of fifteen papers were accepted for publication in the top-ranked journals ''[[Nature (journal)|Nature]]'' and ''[[Science (journal)|Science]]'' following the usual peer review process. All fifteen were found to be fraudulent and were subsequently withdrawn; the fraud was detected not by peer review but only after publication, when other groups tried and failed to reproduce the results.
   
More recently, the Norwegian scientist [[Jon Sudbø]] published fraudulent articles in ''[[The Lancet]]''. He is currently under investigation.
 
   
Although it is often argued that fraud cannot be detected during peer review, the ''Journal of Cell Biology'' uses an [http://en.wikipedia.org/wiki/Scientific_misconduct#Photo_Manipulation image screening process] that it claims could have identified the apparently manipulated figures published in ''[[Science (journal)|Science]]'' by [[Woo-Suk Hwang]] [http://www.the-scientist.com/article/display/23156/].
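As a toy illustration of what automated figure screening might look for, the sketch below searches an image for exactly duplicated tiles, one crude indicator of copy-pasted regions. The two-by-two tile size and the toy pixel grid are assumptions for illustration; real screening, including the process described above, is carried out by trained staff and is considerably more sophisticated.

<pre>
# Crude illustration of duplicate-region detection in a figure. The tile
# size and the toy pixel grid are illustrative assumptions only.

from collections import defaultdict

def duplicate_tiles(image, tile=2):
    """Return groups of coordinates whose non-overlapping tiles are identical."""
    seen = defaultdict(list)
    for y in range(0, len(image) - tile + 1, tile):
        for x in range(0, len(image[0]) - tile + 1, tile):
            block = tuple(tuple(image[y + dy][x:x + tile]) for dy in range(tile))
            seen[block].append((y, x))
    return [coords for coords in seen.values() if len(coords) > 1]

figure = [
    [1, 1, 5, 6],
    [1, 1, 7, 8],
    [1, 1, 9, 3],
    [1, 1, 2, 4],
]
print(duplicate_tiles(figure))  # [[(0, 0), (2, 0)]]: the two left-hand tiles match
</pre>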
An example of what can happen in academic publishing without peer review is the publication by New York University physics professor Alan Sokal of [http://www.physics.nyu.edu/faculty/sokal/transgress_v2/transgress_v2_singlefile.html ''Transgressing the Boundaries: Toward a Transformative Hermeneutics of Quantum Gravity''] in the journal [http://muse.jhu.edu/journals/social_text/ ''Social Text'']. Sokal's submission was a deliberate hoax that became known as the [[Sokal Affair]].
 
===Peer review and plagiarism===
A few cases of plagiarism by historians have been widely publicized.<ref>Historians on the Hot Seat [http://hnn.us/articles/1081.html]</ref> A poll of 3,247 scientists funded by the U.S. [[National Institutes of Health]] found 0.3% admitted faking data, 1.4% admitted plagiarism, and 4.7% admitted to autoplagiarism.<ref>Weiss, Rick. 2005. Many scientists admit to misconduct: Degrees of deception vary in poll. Washington Post. June 9, 2005. page A03. [http://www.washingtonpost.com/wp-dyn/content/article/2005/06/08/AR2005060802385.html]</ref> Autoplagiarism involves an author republishing the same material or data without citing their earlier work. An author often uses autoplagiarism to pad their list of publications. Sometimes reviewers detect cases of likely plagiarism and bring them to the attention of the editor. Reviewers generally lack access to raw data, but do see the full text of the manuscript. Thus, they are in a better position to detect plagiarism or autoplagiarism of prose than fraudulent data.
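Automated text-similarity screening can support this kind of detection. The sketch below flags heavy word n-gram overlap between a submission and an earlier paper; the five-word shingle size, the 0.2 threshold and the sample sentences are assumptions chosen for illustration, not any journal's actual screening policy.

<pre>
# Minimal sketch of a text-overlap check for flagging possible reuse of
# prose. Shingle size, threshold and sample texts are illustrative only.

def shingles(text, n=5):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(submission, earlier, n=5):
    a, b = shingles(submission, n), shingles(earlier, n)
    return len(a & b) / len(a) if a else 0.0

earlier = ("the experiment was repeated three times and the mean response "
           "latency was recorded for every participant in each condition")
submission = ("as before the experiment was repeated three times and the mean "
              "response latency was recorded for every participant in each condition")

score = overlap(submission, earlier)
print(round(score, 2), "flag for the editor" if score > 0.2 else "no flag")
</pre>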
Although autoplagiarism is more common than plagiarism, journals and employers often do not punish authors for it. Autoplagiarism is nevertheless against the rules of most peer-reviewed journals, which usually require that only unpublished material be submitted.
===Abuse of inside information by reviewers===
A related form of professional misconduct that is sometimes reported is a reviewer using not-yet-published information from a manuscript or grant application for personal or professional gain. How often this happens is unknown, but the [[United States Office of Research Integrity]] has sanctioned reviewers who were caught exploiting knowledge they gained as reviewers.
   
 
==Peer review and software development==
 
{{main|Software peer review}}
Several kinds of peer review are used at various stages of the software development process, including requirements definition, preliminary design, detailed design, and coding. Some of the more formal and rigorous approaches are termed [[Software inspection|software inspection]]. In the [[open source]] movement, something like peer review has taken place in the engineering and evaluation of computer software. In this context, the rationale for peer review has its equivalent in [[Linus's law]], often phrased: "Given enough eyeballs, all bugs are shallow", meaning "If there are enough reviewers, all problems are easy to solve." [[Eric S. Raymond]] has written influentially about peer review in [[software development]], for example in the essay ''[[The Cathedral and the Bazaar]]''. The value of peer review here is largely that it identifies issues earlier than testing or users otherwise would, which minimizes the associated effort and cost.
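The intuition behind Linus's law can be made concrete with a toy probability model: if each of ''n'' reviewers independently catches a given defect with probability ''p'', the defect escapes review with probability (1 &minus; ''p'')<sup>''n''</sup>, which shrinks quickly as reviewers are added. The value of ''p'' in the sketch below is an assumption chosen purely for illustration, not an empirical figure.

<pre>
# Toy model of Linus's law: probability that a defect escapes review when
# each of n independent reviewers spots it with probability p. The value
# p = 0.4 is an illustrative assumption, not an empirical figure.

def escape_probability(p, n):
    return (1 - p) ** n

for n in (1, 2, 5, 10):
    print(n, "reviewers ->", round(escape_probability(0.4, n), 3))
# 1 -> 0.6, 2 -> 0.36, 5 -> 0.078, 10 -> 0.006: more eyeballs, shallower bugs
</pre>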
 
   
==Peer review of policy==
The technique of peer review is also used to improve government policy. In particular, the [[European Union]] uses it as a tool in the 'Open Method of Co-ordination' of policies in the fields of employment and social inclusion.
   
A programme of peer reviews in [http://www.almp.org active labour market policy] started in 1999, and was followed in 2004 by one in [http://www.peer-review-social-inclusion.net social inclusion]. Each programme sponsors about eight peer review meetings in each year, in which a 'host country' lays a given policy or initiative open to examination by half a dozen other countries and relevant European-level NGOs. These usually meet over two days and include visits to local sites where the policy can be seen in operation. The meeting is preceded by the compilation of an expert report on which participating 'peer countries' submit comments. The results are published on the web.
==U.S. government peer review policies==
Most federal regulatory agencies in the [[United States]] government must comply with specific peer review requirements before the agencies publicly disseminate certain scientific information. These requirements were published in a Peer Review Bulletin issued by the [[White House]] [[Office of Management and Budget]] ("OMB"), which establishes "government-wide standards concerning when peer review is required and, if required, what type of peer review processes are appropriate."
OMB's peer review Bulletin requires that US federal regulatory agencies submit all "influential scientific information" to peer review before the information is publicly disseminated. The Bulletin defines "scientific information" as:
:"factual inputs, data, models, analyses, technical information, or scientific assessments related to such disciplines as the behavioral and social sciences, public health and medical sciences, life and earth sciences, engineering, or physical sciences."
The OMB peer review Bulletin defines "influential scientific information" as
:"scientific information the agency reasonably can determine will have or does have a clear and substantial impact on important public policies or private sector decisions. In the term 'influential scientific information,' the term 'influential' should be interpreted consistently with [http://www.whitehouse.gov/omb/fedreg/reproducible2.pdf OMB's government-wide information quality guidelines] and the information quality guidelines of the agency."
As noted in the preceding quotation, the Peer Review Bulletin must be read in conjunction with "OMB's government-wide information quality guidelines and the information quality guidelines of the agency." These guidelines govern the quality of all information disseminated by most US government regulatory agencies and are required by a US statute enacted in 2001 called the [[Data Quality Act]] ("DQA"), also known as the Information Quality Act ("IQA"). OMB states that it prepared the peer review Bulletin pursuant to its authority under the DQA.
The peer review Bulletin provides detailed guidelines for peer review of influential scientific information. The Bulletin applies more stringent peer review requirements to "highly influential scientific assessments,"
:"which are a subset of influential scientific information. A scientific assessment is an evaluation of a body of scientific or technical knowledge that typically synthesizes multiple factual inputs, data, models, assumptions, and/or applies best professional judgment to bridge uncertainties in the available information."
While the peer review Bulletin's specific guidelines will not be discussed here in detail, one should note that the guidelines differ in several respects from traditional peer review practices at most journals. For example, the Bulletin requires public disclosure of peer reviewers' identities when they are reviewing highly influential scientific assessments. The Bulletin's summary of some of these requirements is set forth below:
:"In general, an agency conducting a peer review of a highly influential scientific assessment must ensure that the peer review process is transparent by making available to the public the written charge to the peer reviewers, the peer reviewers’ names, the peer reviewers’ report(s), and the agency’s response to the peer reviewers’ report(s). ... This Bulletin requires agencies to adopt or adapt the [http://www.nationalacademies.org/coi/index.html committee selection policies] employed by the [[National Academy of Sciences]](NAS)."
The peer review Bulletin specifically addresses the effect of publication in a refereed scientific journal, as well as the variations and limitations of peer review:
:"Publication in a refereed scientific journal may mean that adequate peer review has been performed. However, the intensity of peer review is highly variable across journals. There will be cases in which an agency determines that a more rigorous or transparent review process is necessary. For instance, an agency may determine a particular journal review process did not address questions (e.g., the extent of uncertainty inherent in a finding) that the agency determines should be addressed before disseminating that information. As such, prior "peer review and publication is not by itself sufficient grounds for determining that no further review is necessary." [Emphasis added]
==See also==
*[[Peer review system for the Psychology Wiki]]
**[[Peer review groups]]
**[[Psychology Wiki peer review manual]]
*[[Abstract management]]
*[[Adversarial review]]
*[[Article validation]]
*[[Code review]]
*[[Content management system on Psychology Wiki]]
*[[Journal club|Journal Club]]
*[[Objectivity (philosophy)|Objectivity]]
*[[Open Peer Commentary]]
*[[Publication bias]]
*[[Scholarly method]]
*[[Sokal affair]]
*[[Sternberg peer review controversy]]
*[[SWoRD System]] (Scaffolded Writing and Rewriting in the Discipline)
*[[Wikipedia:Peer review]]

==References==
<references/>
 
==External links==
*[http://www.JournalReview.org Post Publication Peer Review of all medical literature (via JournalReview.org)]
{{Spoken Wikipedia|Peer review.ogg|2005-04-02}}
*[http://www.retrovirology.com/content/3/1/55 Beyond Open Access: Open Discourse, the next great equalizer], ''Retrovirology'' 2006, 3:55
*[http://www.OpenConf.org OpenConf Peer-Review & Conference Management System]
*[http://www.nature.com/nature/peerreview/debate/ Nature peer review debate] June 2006
*[http://jama.ama-assn.org/cgi/content/full/289/11/1438 Fifth International Congress on Peer Review and Biomedical Publication: Call for Research]
*[http://www.ama-assn.org/public/peer/peerhome.htm Fifth International Congress on Peer Review and Biomedical Publication]
*[http://www.aaskolnick.com/naswmav.htm The Maharishi Caper: Or How to Hoodwink Top Medical Journals, The Newsletter of the National Association of Science Writers]
*[http://www.senseaboutscience.org.uk/pdf/PeerReview.pdf Peer review and the acceptance of new scientific ideas] (''Warning:'' 469 [[kilobyte|kB]] [[Portable Document Format|PDF]])
*[http://www.senseaboutscience.org.uk/peerreview/ Sense About Science: Peer Review] Features the [[Portable Document Format|PDF]] pamphlet "I don't know what to believe..."
*[http://jama.ama-assn.org/cgi/content/abstract/287/21/2786 "Measuring the quality of peer review"] ''Journal of the American Medical Association'' 287: 2786&ndash;2790 (2002).
*[http://www.jpgmonline.com/article.asp?issn=0022-3859;year=2001;volume=47;issue=3;spage=210;epage=4;aulast=Gitanjali Peer review – process, perspectives and the path ahead]
*[http://www.digibio.com/archive/SomethingRotten.htm Something Rotten at the Core of Science?]
*[http://post.queensu.ca/~forsdyke/peerrev1.htm Malice's Wonderland: Research Funding and Peer Review]
*[http://brain.oxfordjournals.org/cgi/content/full/123/9/1964 Is agreement between reviewers any greater than would be expected by chance alone?]
*[http://wikipediareview.com The Wikipedia Review]
*[http://www.uow.edu.au/arts/sts/bmartin/dissent/documents/ss/ss5.html Peer Review as Scholarly Conformity]
*[http://www.geosociety.org/science/csf/0407gt.htm Science and Politics: An Uneasy Mix]
*[http://slate.msn.com/id/2116244 The case against peer-review]
*[http://gv.agora.eu.org/article.php3?id_article=934 The Peer-Review Cartel] Rajiv Malhotra (Outlook India, 2004)
*[http://www.allmedmd.com Medical Peer Review]
*[http://naturalscience.com/ns/articles/01-02/ns_mh.html Peer review: the Holy Office of modern science]
*[http://www.mindfully.org/Reform/Suppression-Of-Dissent.htm Suppression of Dissent in Science]
*[http://www.iscid.org/boards/ubb-get_topic-f-10-t-000059.html Refereed Journals: Do They Insure Quality or Enforce Orthodoxy?] [[Frank J. Tipler]]
*[http://www.int-res.com/discussion-forums/meps-discussion-forum-2/ Peer-review system] Discussion forum
*[http://www.parliament.uk/post/pn182.pdf UK parliamentary briefing]
*[http://www.int-res.com/abstracts/meps/v192/p305-313/ The peer-review system: time for re-assessment?]
*[[Philip E. Bourne]], [[Alon Korngreen]], [http://dx.doi.org/10.1371/journal.pcbi.0020110 "Ten Simple Rules for Reviewers"], ''[[PLoS Computational Biology]]'', 2(9):e110, September 2006. General guidelines for reviewing.
*[[Stevan Harnad]]:
**2003: [http://www.ecs.soton.ac.uk/~harnad/Temp/peerev.pdf PostGutenberg Peer Review]
**2002: [http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/2341.html Self-Selected Vetting vs. Peer Review: Supplement or Substitute?]
**2001: [http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/1170.html A Note of Caution About "Reforming the System"]
**1999: [http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/0480.html Peer Review Reform Hypothesis-Testing]
**1998: [http://www.nature.com/nature/webmatters/invisible/invisible.html The Invisible Hand of Peer Review] [http://en.wikipedia.org/wiki/Nature_magazine Nature] version; [http://www.exploit-lib.org/issue5/peer-review/ Exploit Interactive] version
**1997: [http://eprints.ecs.soton.ac.uk/2633/ Learned Inquiry and the Net: The Role of Peer Review, Peer Commentary and Copyright]
**1996: [http://eprints.ecs.soton.ac.uk/2900/ Implementing Peer Review on the Net: Scientific Quality Control in Scholarly Electronic Journals]
**1985: [http://eprints.ecs.soton.ac.uk/3397/ Rational disagreement in peer review]
**1984: [http://eprints.ecs.soton.ac.uk/3395/ Commentaries, opinions and the growth of scientific knowledge]
**1982: [http://eprints.ecs.soton.ac.uk/3389/ Peer commentary on peer review: A case study in scientific quality control]
**1979: [http://eprints.ecs.soton.ac.uk/3387/ Creative disagreement]
**1978: [http://www.ecs.soton.ac.uk/%7Eharnad/Temp/Kata/bbs.editorial.html Behavioral and Brain Sciences (BBS) editorial]
*Ross, Paul F. (2007) [http://home.att.net/~pfrswr/valid_07.doc The validity of peer review of psychology manuscripts]

[[Category:Academic publishing]]
[[Category:Scientific method]]

[[de:Peer-Review]]
[[es:Revisión por pares]]
[[it:Revisione paritaria]]
[[he:ביקורת עמיתים]]
[[hu:Kollegiális lektorálás]]
[[nl:Collegiale toetsing]]
[[ja:査読]]
[[pl:Recenzja naukowa]]
[[simple:Peer review]]

{{enWP|Peer review}}
In 1996, the Journal of Interactive Media in Education launched using open peer review [19]. Reviewers' names are made public and they are therefore accountable for their review, but they also have their contribution acknowledged. Authors have the right of reply, and other researchers have the chance to comment prior to publication. In 1999, the British Medical Journal moved to an open peer review system, revealing reviewers' identities to the authors [20], and in 2000, the medical journals in the open access BMC series, published by BioMed Central, launched using open peer review. As with the BMJ, the reviewers' names are included on the peer review reports. In addition, if the article is published the reports are made available online as part of the 'pre-publication history'.

Several of the other journals published by the BMJ group allow optional open peer review [21] [22] [23], as do PLoS Medicine, published by the Public Library of Science [24], and the Journal of Medical Internet Research.

The evidence of the effect of open peer review upon the quality of reviews, the tone and the time spent on reviewing is mixed, although it does seem that under open peer review, more of those who are invited to review decline to do so [25] [26].

In June 2006, the high impact journal Nature launched an experiment in parallel open peer review - some articles that had been submitted to the regular anonymous process were also available online for open, identified public comment [27]. The results were less than encouraging - only 5% of authors agreed to participate in the experiment, and only 54% of those articles received comments [28] [29]. The editors have suggested that researchers may have been too busy to take part and were reluctant to make their names public. The knowledge that articles were simultaneously being subjected to anonymous peer review may also have affected the uptake.

In 2006, a group of UK academics launched the online journal Philica, which tries to redress many of the problems of traditional peer review. Unlike in a normal journal, all articles submitted to Philica are published immediately and the review process takes place afterwards. Reviews are still anonymous, but instead of reviewers being chosen by an editor, any researcher who wishes to review an article can do so. Reviews are displayed at the end of each article, and so are used to give the reader criticism or guidance about the work, rather than to decide whether it is published or not. This means that reviewers cannot suppress ideas if they disagree with them. Readers use reviews to guide what they read, and particularly popular or unpopular work is easy to identify.

Another approach that is similar in spirit to Philica is that of a dynamical peer review site, Naboj. Unlike Philica, Naboj is not a full-fledged online journal, but rather it provides an opportunity for users to write peer reviews of preprints at arXiv.org. The review system is modeled on Amazon and users have an opportunity to evaluate the reviews as well as the articles. That way, with a sufficient number of users and reviewers, there should be a convergence towards a higher quality review process. A site that is similar to Naboj, but applied to the biological and medical literature, is JournalReview.org.

In February 2006, the journal Biology Direct was launched by Eugene Koonin, Laura Landweber, and David Lipman, providing another alternative to the traditional model of peer review. If authors can find three members of the Editorial Board who will each return a report or will themselves solicit an external review, then the article will be published. As with Philica, reviewers cannot suppress publication, but in contrast to Philica, no reviews are anonymous and no article is published without being reviewed. Authors have the opportunity to withdraw their article, to revise it in response to the reviews, or to publish it without revision. If the authors proceed with publication of their article despite critical comments, readers can clearly see any negative comments along with the names of the reviewers [30].

An extension of peer review beyond the date of publication is Open Peer Commentary, whereby expert commentaries are solicited on published articles, and the authors are encouraged to respond. The BMJ's Rapid Responses allow ongoing debate and criticism following publication [31]. By 2005, the editors found it necessary to more rigorously enforce the criteria for acceptance of Rapid Responses, to weed out the "bores" [32].

History of peer review

Peer review has been a touchstone of modern scientific method only since in the middle of the twentieth century.[33] Before then, its application was lax. For example, Albert Einstein's revolutionary "Annus Mirabilis" papers in the 1905 issue of Annalen der Physik were not peer-reviewed. The journal's editor in chief (and father of quantum theory), Max Planck, recognized the virtue of publishing such outlandish ideas and simply had the papers published; none of the papers were sent to reviewers. The decision to publish was made exclusively by either the editor in chief, or the co-editor Wilhelm Wien—both certainly ‘peers’ (who were later to win the Nobel prize in physics), but this does not meet the definition of "peer review" as it is currently understood. At the time there was a policy that allowed authors much latitude after their first publication. In a recent editorial in Nature, it was stated that "in journals in those days, the burden of proof was generally on the opponents rather than the proponents of new ideas."[16]

Peer review and fraud

Peer review, in scientific journals, assumes that the article reviewed has been honestly written, and the process is not designed to detect fraud. The reviewers usually do not have full access to the data from which the paper has been written and some elements have to be taken on trust. It is not usually practical for the reviewer to reproduce the author's work, unless the paper deals with purely theoretical problems which the reviewer can follow in a step-by-step manner.

The number and proportion of articles which are detected as fraudulent at review stage is unknown. Some instances of outright scientific fraud and scientific misconduct have gone through review and were detected only after other groups tried and failed to replicate the published results. An example is the case of Jan Hendrik Schön, in which a total of fifteen papers were accepted for publication in the top ranked journals Nature and Science following the usual peer review process. All fifteen were found to be fraudulent and were subsequently withdrawn. The fraud was eventually detected, not by peer review, but after publication when other groups tried and failed to reproduce the results of the paper.

More recently the Norwegian scientist Jon Sudbø published fraudulent articles in The Lancet. He is currently under investigation.

Although it is often argued that fraud cannot be detected during peer review, the Journal of Cell Biology uses an image screening process that it claims could have identified the apparently manipulated figures published in Science by Woo-Suk Hwang [34].

Peer review and plagiarism

A few cases of plagiarism by historians have been widely publicized.[17] A poll of 3,247 scientists funded by the U.S. National Institutes of Health found 0.3% admitted faking data, 1.4% admitted plagiarism, and 4.7% admitted to autoplagiarism.[18] Autoplagiarism involves an author republishing the same material or data without citing their earlier work. An author often uses autoplagiarism to pad their list of publications. Sometimes reviewers detect cases of likely plagiarism and bring them to the attention of the editor. Reviewers generally lack access to raw data, but do see the full text of the manuscript. Thus, they are in a better position to detect plagiarism or autoplagiarism of prose than fraudulent data.

Although more common than plagiarism, journals and employers often do not punish authors for autoplagiarism. Autoplagiarism is against the rules of most peer-reviewed journals, which usually require that only unpublished material be submitted.

Abuse of inside information by reviewers

A related form of professional misconduct that is sometimes reported is a reviewer using not-yet-published information from a manuscript or grant application for personal or professional gain. How frequently this happens is unknown, but the United States Office of Research Integrity has sanctioned reviewers caught exploiting knowledge they gained in that role.

Peer review and software development

Main article: Software peer review

Peer review of policy

The technique of peer review is also used to improve government policy. In particular, the European Union uses it as a tool in the 'Open Method of Co-ordination' of policies in the fields of employment and social inclusion.

A programme of peer reviews in active labour market policy started in 1999 and was followed in 2004 by one in social inclusion. Each programme sponsors about eight peer review meetings a year, in which a 'host country' lays a given policy or initiative open to examination by half a dozen other countries and relevant European-level NGOs. The meetings usually run over two days and include visits to local sites where the policy can be seen in operation. Each meeting is preceded by the compilation of an expert report, on which the participating 'peer countries' submit comments. The results are published on the web.

U.S. government peer review policies

Most federal regulatory agencies in the United States government must comply with specific peer review requirements before they publicly disseminate certain scientific information. These requirements were published in a Peer Review Bulletin issued by the White House Office of Management and Budget ("OMB"), which establishes "government-wide standards concerning when peer review is required and, if required, what type of peer review processes are appropriate."

OMB’s Peer Review Bulletin requires that US federal regulatory agencies submit all "influential scientific information" to peer review before the information is publicly disseminated. The Bulletin defines "scientific information" as:

"factual inputs, data, models, analyses, technical information, or scientific assessments related to such disciplines as the behavioral and social sciences, public health and medical sciences, life and earth sciences, engineering, or physical sciences."

The Bulletin defines "influential scientific information" as:

"scientific information the agency reasonably can determine will have or does have a clear and substantial impact on important public policies or private sector decisions. In the term 'influential scientific information,' the term 'influential' should be interpreted consistently with OMB's government-wide information quality guidelines and the information quality guidelines of the agency."

As noted in the preceding quotation, the Peer Review Bulletin must be read in conjunction with "OMB's government-wide information quality guidelines and the information quality guidelines of the agency." These guidelines govern the quality of all information disseminated by most US government regulatory agencies. They are required by a US statute enacted in 2001, the Data Quality Act ("DQA"), also known as the Information Quality Act ("IQA"). OMB states that it prepared the Bulletin pursuant to its authority under the DQA.

The Peer Review Bulletin provides detailed guidelines for peer review of influential scientific information. It applies more stringent peer review requirements to "highly influential scientific assessments,"

"which are a subset of influential scientific information. A scientific assessment is an evaluation of a body of scientific or technical knowledge that typically synthesizes multiple factual inputs, data, models, assumptions, and/or applies best professional judgment to bridge uncertainties in the available information."

While the Bulletin's specific guidelines will not be discussed here in detail, they differ in several respects from traditional peer review practices at most journals. For example, the Bulletin requires public disclosure of peer reviewers' identities when they are reviewing highly influential scientific assessments. The Bulletin's summary of some of these requirements is set forth below:

"In general, an agency conducting a peer review of a highly influential scientific assessment must ensure that the peer review process is transparent by making available to the public the written charge to the peer reviewers, the peer reviewers’ names, the peer reviewers’ report(s), and the agency’s response to the peer reviewers’ report(s). ... This Bulletin requires agencies to adopt or adapt the committee selection policies employed by the National Academy of Sciences(NAS)."

The Peer Review Bulletin specifically addresses the effect of publication in a refereed scientific journal, as well as the variations and limitations of peer review:

"Publication in a refereed scientific journal may mean that adequate peer review has been performed. However, the intensity of peer review is highly variable across journals. There will be cases in which an agency determines that a more rigorous or transparent review process is necessary. For instance, an agency may determine a particular journal review process did not address questions (e.g., the extent of uncertainty inherent in a finding) that the agency determines should be addressed before disseminating that information. As such, prior "peer review and publication is not by itself sufficient grounds for determining that no further review is necessary." [Emphasis added]

See also

References

  1. "Peer Review—The Newcomers' Perspective" (2004) PLoS Biol. 2005 September; 3(9): e326 doi: 10.1371/journal.pbio.0030326.
  2. "British scientists exclude 'maverick' colleagues, says report" (2004) EurekAlert Public release date: 16-Aug-2004
  3. Brian Martin, "Suppression Stories" (1997) in Fund for Intellectual Dissent ISBN 0-646-30349-X
  4. See also Juan Miguel Campanario, "Rejecting Nobel class articles and resisting Nobel class discoveries", cited in Nature, 16-Oct-2003, Vol 425, Issue 6959, p.645
  5. Juan Miguel Campanario and Brian Martin, "Challenging dominant physics paradigms" (2004) Journal of Scientific Exploration, vol. 18, no. 3, Fall 2004, pp. 421-438
  6. See also: Sophie Petit-Zeman, "Trial by peers comes up short" (2003) The Guardian, Thursday January 16, 2003
  7. Ayala, F.J. "On the scientific methods, its practice and pitfalls", (1994) History and Philosophy of Life Sciences 16, 205-240.
  8. Poovaiah, B.W. 1979. Effects of inorganic cations on Ethephon-induced increases in membrane permeability. J. Amer. Soc. Hort. Sci. 104: 164-166.
  9. Reid, M.S., Paul, J.L. and Young, R.E. 1980. Effects of pH and ethephon on betacyanin leakage from beet root discs. Plant Physiology 66: 1015-1016. [1]
  10. Temple, S.A. 1977. Plant-animal mutualism: Coevolution with dodo leads to near extinction of plant. Science 197: 885-886.
  11. Hershey, D.R. 2004. The widespread misconception that the tambalacoque or calvaria tree absolutely required the dodo bird for its seeds to germinate Plant Science Bulletin 50: 105-108. [2]
  12. Allchin, D. 1993. Reassessing van Helmont, reassessing history. Bioscience: Journal of College Biology Teaching 19(2):3-5.[3]
  13. Hershey, D.R. 2003. Misconceptions about Helmont's willow experiment. Plant Science Bulletin 49:78-84. [4]
  14. Watson J.D. and Crick, F.H.C. 1953. A structure for Deoxyribose Nucleic Acid. Nature 171: 737-738. [5]
  15. Pauling, L. and Corey, R. B. 1953. A proposed structure for the nucleic acids. Proc Natl. Acad. Sci. U.S.A." 39(2): 84-97. [6]
  16. Coping with peer rejection. Nature 425 (6959), 645 (16 Oct 2003). doi:10.1038/425645a
  17. Historians on the Hot Seat [7]
  18. Weiss, Rick. 2005. Many scientists admit to misconduct: Degrees of deception vary in poll. Washington Post. June 9, 2005. page A03. [8]

External links