Supplementary Written Evidence Submitted by Retraction Watch and The Center for Scientific Integrity (RRE0097)

Founded in 2010, Retraction Watch1 is a blog devoted to covering retractions, corrections and other events in scholarly publishing. Its parent non-profit organization is The Center for Scientific Integrity. The work of Retraction Watch is cited frequently in the mainstream media and has been central to, or the basis of, scores of peer-reviewed studies of retractions and scientific misconduct.2-4 In 2018, Retraction Watch launched the world’s most comprehensive database of retractions, the Retraction Watch Database, which is used by both publishers and reference managers to maintain the integrity of the scientific literature.5-7

We are submitting this evidence as longtime observers of scientific misconduct.

Executive Summary

        The apparent rate of research misconduct derived by counting retractions is likely a significant undercount of the true figure.

        Use of the word “crisis” incorrectly implies that research misconduct is a new problem. It also tends to needlessly polarize.

        Whether a particular finding, paper or other material was peer-reviewed should not be viewed as a binary measure of its quality or reliability; peer review varies widely in competency, depth, and rigor.

        Authors with UK affiliations have retracted more than 1,100 papers as of December 1, 2021, according to the Retraction Watch Database.

        A growing group of “sleuths” has found thousands of problematic papers, most of which have yet to be corrected or retracted.

        Preprints are drawing more attention and broadening access to findings, which can allow for more transparent peer review. At the same time, mischaracterization of the status of preprinted research by the media and others can create issues of public confusion about the scientific process.

        Multiple solutions will be necessary to address these problems.

The current environment

Since our previous testimony to this Committee in 2017,8 there have been a number of developments.

First, in 2018 the Retraction Watch Database was released to the public. As noted above, the Database is the most comprehensive resource of retractions and, with more than 32,000 retractions (and counting), contains three to five times as many entries as are available in any other database. Each entry is made manually, and includes a reason for retraction arrived at through careful analysis based on more than a decade of experience. The launch of the Database has made it possible for researchers, policymakers and others to develop a more nuanced understanding of the rate of retractions – which continues to rise – and the reasons for them.

[Figure: Retractions of a given year’s publications as a percentage of papers published in science and engineering. Retraction data from the Retraction Watch Database; overall publication figures via the U.S. National Science Foundation.]

Consistent with this growth, retractions by authors at UK institutions have risen dramatically. In 2017, the number of retractions by authors affiliated with UK institutions was 147, as found using Clarivate’s Web of Science.8 As of December 1, 2021, the Retraction Watch Database includes 1,144 retractions for articles having any authors showing UK affiliation. Almost half of the retracted articles (574) include only authors from the UK.

Second, PubPeer has become a much more frequently used post-publication review platform.9 Launched in 2012, PubPeer allows comments – including anonymous comments – on the vast majority of the scientific literature, in public ways that journals have typically not made possible.10 Those comments have led to discussions with authors and to corrections and retractions, some of which credit PubPeer threads.11 Consistent with surveys of researchers,12,13 these comments have also made it clear that the falsification of images is far more common than retraction data would suggest, and have provided evidence of how quickly – or slowly – journals act to retract problematic papers.14

Some of those who post comments on PubPeer are members of an unaffiliated group of researchers and others who find problems in the literature and try to make them public.15 Some, like Nick Brown, John Carlisle, and James Heathers, focus on statistical issues,16,17 while others, including Michael Dougherty, look for plagiarism.18,19 Elisabeth Bik has become quite well known for her work on image manipulation,20 and others, including Guillaume Cabanac, Cyril Labbé, and Alexander Magazinov, have found hundreds of cases of “tortured phrases” in the literature that strongly suggest the use of random paper generators.21 Jennifer Byrne, working with Labbé and others, has discovered hundreds of papers with genetic “typos” that can have serious effects on the conclusions.22

The investigations by these and other “sleuths” have shown that errors and misconduct are far more common than many would like to admit, and that many journals are often unwilling to correct the record. Many of these sleuths have faced legal, personal, and professional threats from aggrieved authors or the authors’ supporters,23 which we condemn. We believe that the sleuths should have support, including financial resources.

In that vein – and also part of the explanation for the growth in retractions – some journals and publishers, facing a spike in comments about papers, have hired research integrity managers who sift through allegations and take action as necessary. Much of this focus has been on published papers, but some of the work has shifted to screening submitted manuscripts for issues.24

Is peer review fit for purpose?

The growth in retractions – and at least as important, the evidence of numerous other problems in the literature – continues to shine a spotlight on peer review practices. Editors and publishers are fond of saying that many of the issues that lead to retraction could not have been caught in pre-publication peer review as it is currently practiced, but they also often say that their journals are trustworthy because of peer review. This mantra has become particularly common as preprints – which are not peer-reviewed – rise in prominence and, some would argue, threaten traditional publishers’ business models.

Perhaps it is true that pre-publication peer review could not have caught serious problems, but that is more of an admission of the flaws of pre-publication peer review than an endorsement, particularly when one considers that many of these problems were caught by sleuths within months or even days of publication. It suggests instead that peer review is an overloaded system in which expertise is spread too thin to be sufficient for the number of manuscripts being submitted.

All of this is also a reminder that peer review is neither monolithic nor consistent. Some journals use formal peer review only for certain types of articles (e.g., “Original Research Article,” “Clinical Study”), and skip it for others. For example, some “Letters to the Editor” are merely shortened forms of research articles and may even have assigned digital object identifiers (DOIs) making them citable, yet they may not have been subject to peer review. Some journals will invite peer reviewers from various fields to properly evaluate the whole manuscript (e.g., a statistician, an epidemiologist and an infection control specialist to review an article about flu outbreaks), while others will merely use whatever two or three names their computer algorithm spits out.

One of us (IO) was, for example, asked by four different Elsevier journals to peer review five manuscripts on COVID-19, despite having no relevant expertise.25 Most likely, he was listed in a database of potential reviewers as an expert in the subject because the three of us had co-authored a brief letter on retractions of COVID-19 papers.26

Demand for reviewers has only grown. A quality review can take between four and eight hours; multiplied by the two or three reviewers per paper, and by the approximately 3 million papers published each year, that amounts to somewhere between 24 million and 72 million hours of reviewing annually. Reviewers are generally established in a field, which means they are likely to receive more requests. With rare exceptions, peer review for journals remains an uncompensated activity. All of these factors make it unlikely that reviewers will attend to details such as cross-checking references for applicability or retraction status. And journals further dissuade peer reviewers from future participation by ignoring their recommendations.

The rise of preprints, particularly during the COVID-19 pandemic, has led to more transparency but also concerns that non-peer reviewed material has been cited by journalists and others along with peer-reviewed literature, without any distinction.27 But some publishers and advocates for pre-publication peer review have pointed to problematic or withdrawn preprints as evidence that preprints should not be cited, while neglecting to mention the many peer-reviewed papers that are retracted – or more importantly, should be retracted but still have the “stamp” of peer review.


Potential solutions

No single solution will tackle all of the problems in the scientific literature. We will focus here on sanctions, incentives, and transparency in peer review.

1. Sanctions. While sanctions including retractions, loss of employment, and even criminal prosecution are available, they are unevenly enforced at best. Many countries, including the UK, lack central investigation bodies like the U.S. Office of Research Integrity and the National Science Foundation’s Office of Inspector General. Following the last inquiry into this matter by this Committee, the UK created an office that is a step toward such a body, which we commend.

We would also point to the experience of ClinicalTrials.gov in the United States as an example of delayed sanctions. Investigators have been required, under penalty of fines, to post their data on clinical trial registries for many years, but it was not until April 2021 that the FDA fined any responsible parties for failing to do so.28

2. Incentives. “Publish or perish” remains a negative force in academia and makes authors and institutions reluctant to correct the scholarly record. In countries where promotions, tenure and degrees are, or have been, intrinsically tied to the number of publications, paper mills, poor-quality analyses and plagiarism appear to have flourished.29 Of note, China recently banned such incentives.30

Some countries have already taken steps to reduce their reliance on publications,31,32 and we would recommend that the UK consider the same for its Research Excellence Framework.

Incentives might also be considered for peer reviewers. While some publishers offer small financial rewards33 or reductions on costs of publishing future works,34 most do not. This step should not be taken without due consideration, but neither should it be dismissed out of hand.

3. Peer review transparency. Rather than perpetuate a false binary of peer-reviewed or not peer-reviewed, as some publishers continue to do in order to differentiate their product from preprints, we would recommend that consideration be given to transparency in peer review. For example, publishing peer reviews – even without reviewers’ names, to limit the real phenomenon of retaliation for negative reviews – would allow readers to understand the level of rigor at a particular journal. And reporting the number of reviewers, along with their credentials and experience, would provide far more useful information than simply saying something was peer reviewed.

Declaration of Interests: Our organization has in the past received funding from the John D. and Catherine T. MacArthur Foundation, the Laura and John Arnold Foundation and the Leona and Harry Helmsley Charitable Trust. The Center for Scientific Integrity is a subcontractor, through the University of Illinois, on a research integrity project funded by the Howard Hughes Medical Institute. The Center licenses its dataset to publishers, reference management software companies, and related organizations to support its efforts. Our executive director, Ivan Oransky, is a volunteer member of the PubPeer Foundation’s board of directors.

        Alison J. Abritis, PhD, wrote her dissertation on the relationship between retractions and research misconduct and works as a researcher at Retraction Watch and The Center for Scientific Integrity. She also has a faculty appointment at the College of Public Health, University of South Florida, Tampa, Florida, USA.

        Adam Marcus, MA, is co-founder of Retraction Watch and the Center for Scientific Integrity, and editorial director for primary care at Medscape, in New York, USA.

        Ivan Oransky, MD, is co-founder of Retraction Watch and The Center for Scientific Integrity, Distinguished Writer in Residence at New York University’s Arthur Carter Journalism Institute, USA, and editor in chief of Spectrum, New York, USA.

January 2022


  1. Retraction Watch (website).
  2. Yan W. 2020. Coronavirus Tests Science’s Need for Speed Limits. The New York Times. Accessed on 1/4/2022.
  3. Dal-Ré R, Bouter LM, Moher D, Marušić A. 2020. Mandatory disclosure of financial interests of journals and editors. BMJ. 370:m2872. doi: 10.1136/bmj.m2872.
  4. Gaudino M, Robinson NB, Audisio K, et al. 2021. Trends and Characteristics of Retracted Articles in the Biomedical Literature, 1971 to 2020. JAMA Internal Medicine. 181(8):1118–1121. doi: 10.1001/jamainternmed.2021.1807.
  5. Stillman D. 2019. Retracted item notifications with Retraction Watch integration. Zotero (website). Accessed on 1/4/2022.
  6. Podbelski V. 2021. Papers Announces Expanded Retraction Support. Papers (website). Accessed on 1/4/2022.
  7. Price G. 2021. EndNote Adds Retraction Watch Notification Integration, Similar Service Available For Zotero and Papers. Infodocket. Accessed on 1/4/2022.
  8. Science and Technology Committee. 2018. Research integrity: Sixth Report of Session 2017–19. House of Commons. pp. Q277–310.
  9. Bik E. 2019. PubPeer – a website to comment on scientific papers. Science Integrity Digest. Accessed on 1/4/2022.
  10. PubPeer. Frequently asked questions. PubPeer. Accessed on 1/3/2022.
  11. Editorial Office. 2021. Retraction notice regarding several articles published in Tumor Biology. Tumor Biology. 43:351–354. doi: 10.3233/TUB-219010.
  12. Fanelli D. 2009. How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-Analysis of Survey Data. PLoS ONE. 4:e5738. doi: 10.1371/journal.pone.0005738.
  13. Gopalakrishna G, Riet G, Vink G, Stoop I, Wicherts J, Bouter L. 2021. Prevalence of questionable research practices, research misconduct and their potential explanatory factors: a survey among academic researchers in The Netherlands (Preprint). MetaArXiv Preprints. doi: 10.31222/
  14. Marcus A. 2021. “Yep, pretty slow”: Nutrition researchers lose six papers. Retraction Watch. Accessed on 1/4/2022.
  15. Oransky I. 2018. Meet the scientific sleuths: More than two dozen who’ve had an impact on the scientific literature. Retraction Watch. Accessed on 1/3/2022.
  16. Carlisle JB, Loadsman JA. 2016. Evidence for non-random sampling in randomised, controlled trials by Yuhji Saitoh. Anaesthesia. 72:17–27. doi: 10.1111/anae.13650.
  17. Marcus A, Oransky I. 2018. Meet the ‘data thugs’ out to expose shoddy and questionable research. Science. doi: 10.1126/science.aat3133.
  18. McCook A. 2018. Philosophers, meet the plagiarism police. His name is Michael Dougherty. Retraction Watch. Accessed on 1/3/2022.
  19. Dougherty MV. 2020. Plagiarism in the Sacred Sciences: Three Impediments to Institutional Reform. Philosophy and Theology. 32:27–61. doi: 10.5840/philtheol2021622134.
  20. Marcus A, Oransky I. 2019. Eye for Manipulation: A Profile of Elisabeth Bik. The Scientist. Accessed on 1/3/2022.
  21. Cabanac G, Labbé C, Magazinov A. 2021. Tortured phrases: A dubious writing style emerging in science. Evidence of critical issues affecting established journals. (Preprint) Accessed on 1/3/2022.
  22. Park Y, West RA, Pathmendra P, et al. 2021. Human gene function publications that describe wrongly identified nucleotide sequence reagents are unacceptably frequent within the genetics literature. (Preprint) bioRxiv. 07.29.453321. doi: 10.1101/2021.07.29.453321.
  23. O’Grady C. 2021. Image Sleuth faces legal threats. Science. 372:1021–1022. doi: 10.1126/science.372.6546.1021.
  24. Oransky I, Marcus A. 2018. To catch misconduct, journals are hiring research integrity czars. STAT. Accessed on 1/3/2022.
  25. Oransky I. 2021. Elsevier journals ask Retraction Watch to review COVID-19 papers. Retraction Watch. Accessed on 1/4/2022.
  26. Abritis A, Marcus A, Oransky I. 2020. An “alarming” and “exceptionally high” rate of COVID-19 retractions? Accountability in Research. 28:58–59. doi: 10.1080/08989621.2020.1793675.
  27. Fleerackers A, Riedlinger M, Moorhead L, Ahmed R, Alperin JP. 2021. Communicating Scientific Uncertainty in an Age of COVID-19: An Investigation into the Use of Preprints by Digital Media Outlets. Health Communication. doi: 10.1080/10410236.2020.1864892.
  28. Woodcock J. 2021. FDA Takes Action For Failure to Submit Required Clinical Trial Results Information to ClinicalTrials.gov. United States Food and Drug Administration. Accessed on 1/4/2022.
  29. Chawla D. 2020. A single ‘paper mill’ appears to have churned out 400 papers. Science. doi: 10.1126/science.abb4930.
  30. Mallapaty S. 2020. China bans cash rewards for publishing papers. Nature. 579:18. doi: 10.1038/d41586-020-00574-8.
  31. Sharma Y. 2020. China shifts from reliance on international publications. University World News. Accessed on 1/4/2022.
  32. Vaidyanathan G. 2019. No paper, no PhD? India rethinks graduate student policy. Nature. doi: 10.1038/d41586-019-01692-8.
  33. Anonymous. n.d. Frequently Asked Questions. Gartner Peer Insights. Accessed on 1/4/2022.
  34. Anonymous. n.d. For Reviewers. F1000Research. Accessed on 1/4/2022.