Supplementary Written Evidence Submitted by Retraction Watch and The Center for Scientific Integrity (RRE0097)
Founded in 2010, Retraction Watch1 is a blog devoted to covering retractions, corrections and other events in scholarly publishing. Its parent non-profit organization is The Center for Scientific Integrity. The work of Retraction Watch is cited frequently in the mainstream media and has been central to, or the basis of, scores of peer-reviewed studies of retractions and scientific misconduct.2-4 In 2018, Retraction Watch launched the world's most comprehensive database of retractions, the Retraction Watch Database (retractiondatabase.org), which is used by both publishers and reference managers to maintain the integrity of the scientific literature.5-7
We are submitting this evidence as longtime observers of scientific misconduct.
Executive Summary
● The apparent rate of research misconduct derived by counting retractions is likely a significant undercount of the true figure
● Use of the word “crisis” incorrectly implies that research misconduct is a new problem. It also tends to needlessly polarize.
● Whether a particular finding, paper or other material was peer-reviewed should not be treated as a binary measure of quality versus unreliability; peer review spans a wide range of competency, depth, and rigor.
● Authors with UK affiliations have retracted more than 1,100 papers as of December 1, 2021, according to the Retraction Watch Database.
● A growing group of “sleuths” has found thousands of problematic papers, most of which have yet to be corrected or retracted.
● Preprints are drawing more attention and broadening access to findings, which can allow for more transparent peer review. At the same time, mischaracterization of the status of preprinted research by the media and others can confuse the public about the scientific process.
● Multiple solutions will be necessary to address these problems.
The current environment
Since our previous testimony to this Committee in 2017,8 there have been a number of developments.
First, in 2018 the Retraction Watch Database was released to the public. As noted above, the Database is the most comprehensive collection of retractions and, with more than 32,000 retractions (and counting), contains three to five times as many entries as any other available database. Each entry is made manually and includes a reason for retraction, arrived at through careful analysis based on more than a decade of experience. The launch of the Database has made it possible for researchers, policymakers and others to develop a more nuanced understanding of the rate of retractions – which continues to rise – and the reasons for them.
[Figure: Retractions of a given year's publications as a percentage of papers published in science and engineering. Retraction data from the Retraction Watch Database; overall publication figures from the U.S. National Science Foundation.]
Consistent with this growth, retractions by authors at UK institutions have risen dramatically. In 2017, the number of retractions by authors affiliated with UK institutions was 147, as found using Clarivate's Web of Science.8 As of December 1, 2021, the Retraction Watch Database includes 1,144 retractions of articles with at least one UK-affiliated author. Just over half of those articles (574) list only authors from the UK.
Second, PubPeer.com has become a much more frequently used post-publication review platform.9 Launched in 2012, PubPeer allows comments – including anonymous comments – on the vast majority of the scientific literature, in a public way that journals have typically not made possible.10 Those comments have led to discussions with authors and to corrections and retractions, some of which credit PubPeer threads.11 Consistent with surveys of researchers,12,13 these comments have also made it clear that falsification of images is far more common than retraction data would suggest, and have provided evidence of how quickly – or slowly – journals retract problematic papers.14
Some of those who post comments on PubPeer are members of an unaffiliated group of researchers and others who find problems in the literature and try to make them public.15 Some, like Nick Brown, John Carlisle, and James Heathers, focus on statistical issues,16,17 while others, including Michael Dougherty, look for plagiarism.18,19 Elisabeth Bik has become quite well known for her work on image manipulation,20 and others, including Guillaume Cabanac, Cyril Labbé, and Alexander Magazinov, have found hundreds of cases of "tortured phrases" in the literature that strongly suggest the use of random paper generators.21 Jennifer Byrne, working with Labbé and others, has discovered hundreds of papers with genetic "typos" that can have serious effects on the conclusions.22
The investigations by these and other "sleuths" have shown that errors and misconduct are far more common than many would like to admit, and that journals are often unwilling to correct the record. Many of these sleuths have faced legal, personal, and professional threats from aggrieved authors or the authors' supporters,23 which we condemn. We believe the sleuths should have support, including financial resources.
In that vein – and also part of the explanation for the growth in retractions – some journals and publishers, facing a spike in comments about papers, have hired research integrity managers who sift through allegations and take action as necessary. Much of this focus has been on published papers, but some of the work has shifted to screening submitted manuscripts for issues.24
Is peer review fit for purpose?
The growth in retractions – and at least as important, the evidence of numerous other problems in the literature – continues to shine a spotlight on peer review practices. Editors and publishers are fond of saying that many of the issues that lead to retraction could not have been caught in pre-publication peer review as it is currently practiced, but they also often say that their journals are trustworthy because of peer review. This mantra has become particularly common as preprints – which are not peer-reviewed – rise in prominence and, some would argue, threaten traditional publishers’ business models.
Perhaps it is true that pre-publication peer review could not have caught serious problems, but that is more of an admission of the flaws of pre-publication peer review than an endorsement, particularly when one considers that many of these problems were caught by sleuths within months or even days of publication. It suggests instead that peer review is an overloaded system in which expertise is spread too thin to be sufficient for the number of manuscripts being submitted.
All of this is also a reminder that peer review is neither monolithic nor consistent. Some journals use formal peer review only for certain types of articles (e.g., "Original Research Article," "Clinical Study") and skip it for others. For example, some "Letters to the Editor" are merely shortened forms of research articles and may even have assigned digital object identifiers (DOIs) making them citable, yet they may not have been subject to peer review. Some journals will invite peer reviewers from various fields to properly evaluate the whole manuscript (e.g., a statistician, an epidemiologist and an infection control specialist to review an article about flu outbreaks), while others will merely use whatever two or three names their computer algorithm spits out.
One of us (IO) was, for example, asked by four different Elsevier journals to peer review five manuscripts on COVID-19, despite having no relevant expertise.25 The requests very likely came because he had been listed in a database of potential reviewers as a subject-matter expert after the three of us co-authored a brief letter on retractions of COVID-19 papers.26
Demand for reviewers has only grown. A quality review can take between four and eight hours; multiply those hours by the two or three reviewers per paper and by the approximately 3 million papers published each year, and the result is an enormous figure (a rough illustration follows below). Reviewers are generally established researchers in their fields, and so are likely to receive many such requests. With rare exceptions, peer review for journals remains an uncompensated activity. All of these factors make it unlikely that reviewers will attend to details such as cross-checking references for applicability or retraction status. And journals further dissuade peer reviewers from future participation by ignoring their recommendations.
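A back-of-the-envelope calculation makes the scale concrete – using only the approximate figures cited above, none of which is a precise count:

3,000,000 papers × 2 reviewers × 4 hours ≈ 24 million reviewer-hours per year at the low end;
3,000,000 papers × 3 reviewers × 8 hours ≈ 72 million reviewer-hours per year at the high end.

Assuming a 2,000-hour working year, even the low-end estimate is the equivalent of roughly 12,000 reviewers working full-time, year-round, unpaid.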
The rise of preprints, particularly during the COVID-19 pandemic, has led to more transparency, but also to concerns that non-peer-reviewed material has been cited by journalists and others alongside peer-reviewed literature, without any distinction.27 But some publishers and advocates for pre-publication peer review have pointed to problematic or withdrawn preprints as evidence that preprints should not be cited, while neglecting to mention the many peer-reviewed papers that are retracted – or, more importantly, should be retracted but still carry the "stamp" of peer review.
Recommendations
No single solution will tackle all of the problems in the scientific literature. We will focus here on sanctions, incentives, and transparency in peer review.
1. Sanctions. While sanctions including retraction, loss of employment, and even criminal prosecution are available, they are unevenly enforced at best. Many countries, including the UK, lack central investigative bodies like the U.S. Office of Research Integrity and the National Science Foundation's Office of Inspector General. Following this Committee's last inquiry into the matter, the UK has created an office that is a step toward such a body, which we commend.
We would also point to the experience of ClinicalTrials.gov in the United States as an example of delayed sanctions. For many years, investigators have been required, under penalty of fines, to post their data on clinical trial registries, but it was not until April 2021 that the FDA fined any responsible parties for failing to do so.28
2. Incentives. "Publish or perish" remains a negative force in academia and makes authors and institutions reluctant to correct the scholarly record. In countries where promotions, tenure and degrees are, or have been, intrinsically tied to the number of publications, paper mills, poor-quality analyses and plagiarism appear to have flourished.29 Of note, China recently banned such incentives.30
Some countries have already taken steps to reduce their reliance on publications,31,32 and we would recommend that the UK consider the same for its Research Excellence Framework.
Incentives might also be considered for peer reviewers. While some publishers offer small financial rewards33 or discounts on the cost of publishing future work,34 most do not. This step should not be taken without due consideration, but neither should it be dismissed out of hand.
3. Peer review transparency. Rather than perpetuate a false binary of peer-reviewed or not peer-reviewed, as some publishers continue to do in order to differentiate their product from preprints, we would recommend that consideration be given to transparency in peer review. For example, publishing peer reviews – even without reviewers’ names, to limit the real phenomenon of retaliation for negative reviews – would allow readers to understand the level of rigor at a particular journal. And reporting the number of reviewers, along with their credentials and experience, would provide far more useful information than simply saying something was peer reviewed.
Declaration of Interests: Our organization has in the past received funding from the John D. and Catherine T. MacArthur Foundation, the Laura and John Arnold Foundation and the Leona and Harry Helmsley Charitable Trust. The Center for Scientific Integrity is a subcontractor, through the University of Illinois, on a research integrity project funded by the Howard Hughes Medical Institute. The Center licenses its dataset to publishers, reference management software companies, and related organizations to support its efforts. Our executive director, Ivan Oransky, is a volunteer member of the PubPeer Foundation’s board of directors.
● Alison J. Abritis, PhD, wrote her dissertation on the relationship between retractions and research misconduct and works as a researcher at Retraction Watch and The Center for Scientific Integrity. She also has a faculty appointment at the College of Public Health, University of South Florida, Tampa, Florida, USA.
● Adam Marcus, MA, is co-founder of Retraction Watch and the Center for Scientific Integrity, and editorial director for primary care at Medscape, in New York, USA.
● Ivan Oransky, MD, is co-founder of Retraction Watch and The Center for Scientific Integrity, Distinguished Writer in Residence at New York University's Arthur Carter Journalism Institute, USA, and editor in chief of Spectrum, New York, USA.
January 2022
References