Peer Review is the Worst Form of Scientific Scrutiny
Except for the Alternative
One of the first questions most people ask about a study is, “Was it peer-reviewed?” For many non-scientists, if the answer is yes, the study is automatically “credible.” It then becomes confusing when others criticize a view based on a “peer-reviewed study,” since the holder believes the view has been validated by that study. But the question “Is it peer-reviewed?” is becoming more and more meaningless. First, very few “studies” are not peer-reviewed. Second, the quality of the peer review behind any individual study is a serious issue. The number of available journals, of all sizes and qualities, increases each year, and even the processes of the top journals in the world are in question; retractions are not uncommon.
The peer-review process has come under serious scrutiny. It is largely ineffective, very costly, and seriously slows scientific advancement. It is also potentially far more harmful to young scientists starting their careers, particularly given the rapid inflation of educational costs and living expenses in many major cities. An inability to publish enough work fast enough can seriously impact a young scientist’s ability to win grants to continue their research - and to continue working, and living. My views are not exactly controversial. A study from the 1980s found that when already published articles were resubmitted, with the authors’ names and institutions changed, to the very journals that had originally published them, roughly 90% were recommended for rejection. To clarify: the reviewers were not told the papers had already been approved and published in those same journals. Media articles detailing issues with peer review were published long ago in The Scientist and Vox. Even the Editors-in-Chief of some of the world’s most prestigious journals have voiced scathing indictments of the practice. Richard Horton, current Editor-in-Chief of The Lancet, said of the peer-review process,
"Unjust, unaccountable ... often insulting, usually ignorant, occasionally foolish, and frequently wrong."
While Richard Smith, the former editor of the BMJ, said,
“We have little or no evidence that peer review 'works,' but we have lots of evidence of its downside.”
So, what exactly are the issues with peer review? What are the issues with the entire scientific journal structure?
Time to Publish
Publication wait times can completely stall advancements. Due to the limited availability of reviewers, or “referees,” some journals can take months, or even years, to give initial feedback. Once feedback is given, and assuming changes are needed, it can take significantly longer for the referees to review those changes. Reviewing articles is often not a high priority for referees, as they are not paid to do it and typically have their own projects and priorities to consider. Once “accepted,” a paper may be “in press” for months before it is published. All of this is time during which other scientists in the field do not have access to the results, and during which the authors cannot feasibly ramp up their projects or request funding to expand their research. It is simply time wasted: months, or years, taken from each expert’s ability to expand their understanding of their field. One must also consider that with so much waiting, many become jaded. How much potential are we really losing? Likely more than is calculable.
Quality of the Reviewers
As mentioned, referees conduct this work for free. They are often overworked and cannot properly review each and every paper. This may be a contributing factor to the massive discrepancy in how strict referees are within a journal. Was a particular referee tired or overworked? Or was a new referee looking to make a point of how thorough they are? There is also the question of what happens after a referee reviews a paper and gives feedback: how thorough are they on the second draft? Do they remember their criticisms when there has been a significant time lapse? If they were not thorough on the first review, will they re-review points they had no issue with, or simply presume they did a stand-up job on the first go? Will they spend time evaluating the authors’ response to their objections, or just note that the objections have been acknowledged and responded to, passing the onus on to the editor? None of these questions is answered consistently throughout the process. Then there is the question of the reviewer’s competence. Do they understand the topic well enough to properly critique it? Or have the authors submitted to a journal that is relevant enough not to be suspicious, but off-topic enough that the editor, and potentially the referees, are liable to miss glaring flaws?
Like clinical trials themselves, there are different structures within the peer review process, such as single-blind, double-blind, triple-blind, and open review.
Double-Blind Reviews
Realistically, all journals should employ a double-blind, or preferably triple-blind, review process: neither do the papers’ authors know who the referees are, nor do the referees know who the authors are. As a solution to the problem of referees who are either corrupt or vicious, detailed later in this article, this would be the fairest arrangement, provided the current peer review process stands. Even then, journals should be opting for a “triple-blind” process, meaning the editor is also ‘blind’ to who the authors are.
Even double-blinding has its flaws: if the reviewer is sufficiently qualified, there is a good chance they will recognize the work without needing to be told who the authors are. This recognition can come from knowing what their colleagues have been working on, often for years, or even from recognizing the writing style of authors they know. Identifying the author effectively undoes the blinding, making many double-blind reviews as troublesome as single-blind ones.
Single-Blind Reviews
Single-blind reviews can lead to significant bias. When you recognize (or do not recognize) who the authors are, and, as importantly, what institution they are affiliated with, it becomes impossible to review the paper without bias. To paraphrase a former professor I am acquainted with, previously attached to a top-25 university as defined by the Shanghai University Ranking System:
“Most scientists can devote their lives to putting out solid, careful research and if they’re lucky they may get a paper into a journal like Nature. That is, unless they’re from somewhere like Harvard, in which case they can churn out a pile of crap in a half a day and have it published in Nature, or another top journal, in short order.”
The former professor is now a private researcher, having left academia over serious concerns about the speed of scientific advancement and about research integrity. To summarize: you go into academia for the pursuit of truth. If you are unconvinced that it is a path to truth, and are concerned about both the integrity of research and the ability to conduct it in a timely manner, why would you remain in public academia over the higher-paying private options?
Surely, the best, or most respected, journals must all use a double-blind review process then, right? Surely… not. Most top journals are single-blind. Most journals under the Nature Publishing Group do offer a double-blind process upon request, but the default is single-blind, and data suggests only 1 in 8 authors opted for the double-blind option in the two years after Nature rolled it out. The Lancet also opts for a single-blind policy, as does the New England Journal of Medicine. The BMJ defends its open review process, in which the authors know who the reviewers are and the reviewers know who the authors are, as superior.
When the editors know the authors and their institutions, bias will immediately set in. We are all human, and even the most reasonable and logical individuals will succumb to emotions and preformed biases not just occasionally, but often. How much harder is it for a bright scientist from a low-ranking institution to get published than a disinterested one from a top school? How much does this play a role in the perception of the quality of work? Surely, a paper published in a top journal by an author from a prestigious school must be of a higher calibre than one published in an average journal from an author whose affiliation is with an unremarkable institution… right? Or did this bias play into the very decision on where to publish, and if it is accepted in the first place?
The BMJ has a point when it defends its open-review process: it holds reviewers accountable to the authors for their work, which can mitigate the issues I will discuss shortly, such as reviewer corruption and viciousness of tone. That said, does the benefit outweigh the damage? How can reviewers properly critique a paper when they may personally know the person they are critiquing? Can they be as hard on them as they’d like? Or, alternatively, are they swayed by knowing that in short order the author they are reviewing may be reviewing them? If they give a glowing review of a colleague’s paper, that colleague is bound to reciprocate when they inevitably review the reviewer’s own work in the future. An opinion piece in Massive Science further details the issues with unblinded review, such as senior scientists spitefully attacking young scientists after negative reviews (even justly negative ones), and the role anonymity can play in mitigating racism. Open peer review has some benefits, but the downsides seem too great to overcome, almost defeating the entire purpose of peer review itself.
Reviewer Corruption & Bias
In perhaps the only way a reviewer can actually benefit from the unjust, slave-like institution of for-profit journals extracting free work from referees, many referees have taken it upon themselves to use the position to further their own careers. The method: requiring authors to cite papers, often superfluous ones, written by the referees themselves, effectively inflating their citation counts (and potentially leading to larger grants, more esteem, and promotions down the road). While not the norm, this practice is not entirely uncommon, either. In a survey published in Science in 2012, 1 in 5 academics stated that they had been asked, or required, to add a superfluous citation in order to have their work published. Additionally, Elsevier reported it is investigating hundreds of peer reviewers for manipulating citations. Punishment is difficult, as the added citation does not undermine the work of the actual authors, but as stated:
“One idea that Elsevier is considering is the retraction of individual references in studies, a move that would be unprecedented. Another option, Fennell says, might be to issue corrections. We’re still working out the best way forward.”
This punishment would be akin to simply taking back stolen property without any further penalty, and would not plausibly deter those who make this practice a habit. The “unprecedented” part is that most journals do not even have policies pertaining to this type of corruption. As noted in the previously mentioned piece from The Scientist:
“The literature is also full of reports highlighting reviewers' potential limitations and biases. An abstract presented at the 2005 Peer Review Congress, held in Chicago in September, suggested that reviewers were less likely to reject a paper if it cited their work, although the trend was not statistically significant. Another paper at the same meeting showed that many journals lack policies on reviewer conflicts of interest; less than half of 91 biomedical journals say they have a policy at all, and only three percent say they publish conflict disclosures from peer reviewers. Still another study demonstrated that only 37% of reviewers agreed on the manuscripts that should be published. Peer review is a "lottery to some extent," says Smith.”
Another major issue facing the peer review process is plain old viciousness. Reviewer viciousness may arise from jealousy, from the condescension that anonymity and a temporary position of authority permit, or perhaps from a deliberate design to stymie a competitor’s work. In a quite hilarious blog piece, the author (listed as Prof. Wilford C. Terrace) tells the story of a referee assigned to review a paper so close to his own team’s work, and so obvious in hindsight, that his team should have thought of it first. The referee at first makes excuses, then details all the ways the paper can be derailed, or delayed long enough for his own team to unethically “scoop” the idea and publish first (changed just enough, of course). The author notes that there are usually points that can quite easily delay an article for weeks or months. Sometimes an article is so polished and well done that this becomes troublesome, as quoted:
“Sometimes, however, more drastic measures are called for. Maybe you’re dealing with a third or fourth revision or one of those rare papers that is truly excellent and so thorough that only a fool would disagree with its conclusions. That’s when the Artistry is called for. You’ve only got one chance to derail this thing, so you’ll have to aim for strategic targets in a way that has a devastating impact on the paper, while seemingly going about the referee business as usual.”
Prof. Terrace goes on to list 22 different strategies that can each delay publication by at least six weeks, including tactics as asinine as arguing over comma placement. While the article is a parody, it may not be far from the truth; the researchers I have shared it with have all quipped that it is accurate. The notion of reviewer viciousness and/or incompetence seems all too common. Elsevier has a page dedicated to the top 10 ways to give a terrible review, with the aim of encouraging reviewers to give better reviews and help science advance. Another blog article opines that reviewers get stuck in a circular habit of nastiness when refereeing papers, a behaviour learned from their own experiences as young authors, attributing this to the fact that authors aren’t actually formally taught how to review articles. An opinion piece in The Chronicle of Higher Education also details how common vicious reviews are, similarly describing a never-ending loop of reviewers being nasty as payback to those who were nasty to them, amplified by anonymity.
Vicious feedback is neither constructive nor conducive to growth or improvement, which is the entire purpose of the peer-review process. Furthermore, editors need to take charge as invigilators of the process and task themselves with recognizing when referees are asking for unnecessary experiments or anything beyond the scope of the study. They also need to actively question not just the authors but the concerns raised by the referees, to ensure that peer review works as intended. Given all this, the current peer-review process should be questioned, and at a minimum editors and journals should adopt practices to publicly admonish reviewers found to be unfairly vicious (or caught being corrupt), banning them from refereeing future articles and sharing the reprimand with other journals. Of course, this would cause a massive reviewer shortage, an issue that will be addressed in the following parts of this series.
Is it Even Effective?
The entire purpose of peer review is to ensure that papers that should never see the light of day, due to fraud or serious errors, do not. The system can take solace in the fact that retractions, the practice of removing a paper’s published status, remain relatively rare at only 4 per 10,000 papers (up from 2 in 10,000 before the year 2000), even though the total volume of retractions has risen from about 100 per year before 2000 to 1,000 a year in 2014. As noted in the article, the number of journals reporting retractions has increased ten-fold, with the number of retractions per journal remaining consistent. This is good news, indicating that more journals are doing more to ensure papers are of high quality, even after publication.
How effective is the initial peer review process? In the previously linked article from The Scientist, it is noted that,
“An abundance of data from a range of journals suggests peer review does little to improve papers. In one 1998 experiment designed to test what peer review uncovers, researchers intentionally introduced eight errors into a research paper. More than 200 reviewers identified an average of only two errors. That same year, a paper in the Annals of Emergency Medicine showed that reviewers couldn't spot two-thirds of the major errors in a fake manuscript. In July 2005, an article in JAMA showed that among recent clinical research articles published in major journals, 16% of the reports showing an intervention was effective were contradicted by later findings, suggesting reviewers may have missed major flaws.”
If there is fraud, how likely is it to be caught during review, or afterwards? Fraud can be difficult to catch: if data is manipulated skillfully, or a mistake is made that is not obvious, it is unlikely to be detected. Finding errors in the data can be as time-consuming as conducting the original analysis, something the peer-review process does not typically involve. Further, unless subsequent articles directly challenge the results of a publication, there is little reason to ever re-analyse the data of an already published paper. Those who are skeptical are left without the tools to do the work, as in many journals the raw, or “source,” data is not available to the public post-publication, with little consistency between journals and fields of study. If questions of integrity arise, the journal must review them, which often does not happen.
All of this is on top of widespread reports of reviewers simply being lazy and ineffective. There is even a Twitter hashtag for “6-word reviews,” in which academics humorously (or depressingly?) post examples of six-word, or shorter, reviews they have received on their articles. F1000, a journal built on a fantastic new model, has a blog article touching on this hashtag, and on how they encourage better reviews that are accountable for improving science.