Written Evidence Submitted by Dr Tom Stafford and Dr Charlotte Brand, University of Sheffield

(RRE0032)

Dr Tom Stafford is Senior Lecturer in Psychology and Cognitive Science at the University of Sheffield. He is academic lead for Research Practice at the University, institutional representative for the UK Reproducibility Network (UKRN) and a member of the Research on Research Institute (RoRI).

Dr Charlotte Brand is a postdoctoral research associate on the EPSRC-funded project “Opening Up Minds” in the Department of Psychology at the University of Sheffield. She has previously worked at the University of Exeter and the University of St Andrews.

This submission reflects our personal views.

The issues in academia that have led to the reproducibility crisis:

A key issue is the lack of transparency of process - researchers often share the outputs of their research without making the underlying data, materials or protocols fully available. This divorces the written report of the work from full and open archiving of the project. It prevents readers from auditing the work, and hampers reproduction, reuse and extension of the research by third parties, both within academia and in industry.

There is a norm for researchers to publish with a declaration along the lines of “data is available upon request”. Studies have shown that, in the majority of cases, such data is not actually available in practice [1].

The fundamental unit of research knowledge is the published journal article, which reports a finding or claim. Researchers are not paid to publish, but compete to publish in more prestigious journals. In turn, journals compete to publish eye-catching findings. Both of these factors present a moral hazard to the rigour of published work, which is supposedly counterbalanced by the scrutiny journals give articles before publication.

Scrutiny of journal articles cannot be effective without full reporting of the work to be scrutinised. A key stage of journal publication is peer review, in which the journal recruits domain experts to review work submitted for publication. Peer review is demonstrably not a guarantee of research reliability - as the number of unreliable findings published attests - but it supports research reliability in two important ways. First, anticipating the possibility of expert audit, authors prepare their work to be robust to criticism, imagining the strongest critique an expert reviewer might make. Second, and less importantly, reviewers may actually discover flaws or potential improvements in submitted work. Both these aspects are undermined by limits to research transparency. If parts of the work are hidden, it cannot be fully scrutinised.

In addition, the institution of peer review is undermined by the increasing difficulty of finding expert reviewers (which also slows down the research publication process), and by idiosyncrasy and variability in the content of reviews. Despite the fact that peer review is the key difference between published academic articles and all other published work, and is often invoked as the reason to trust only “published research”, peer reviewers themselves are neither rewarded nor recognised for this crucial work. Indeed, the entire peer review process relies on a voluntary system of reciprocity, one that works in theory but is falling apart in practice (and is widely recognised to be in crisis [2]).

 

The role of publishers in addressing the reproducibility crisis:

The publishers of scholarly journals, particularly the large for-profit publishers, have failed to ensure journal publication is transparent in process and output. They have failed to sustain and invigorate peer review. By relying on the print tradition of publishing, which focuses on the short research report, they have failed to sufficiently support innovations which take advantage of digital platforms for archiving, distributing and critiquing scholarly work.

 

Where possible, data from research studies should be archived in a way that makes it Findable, Accessible, Interoperable and Reusable - the FAIR principles [3], which support common standards and permissive licences for data. Although an increasing number of journals ask for data sharing on publication, they neither mandate it nor audit it (auditing being the step required to ensure compliance, and to enable peer reviewers genuinely to check the reliability of the work). Data archives are typically established and maintained by independent organisations and institutions such as universities. This allows researchers to maintain control (and ownership) of their work, but it has the side-effect of allowing publishers to avoid responsibility for making these aspects of publishing scientific work mandatory.

 

Critiques, commentary and retractions of papers should be integrated with the original work and available to readers. Publishers take advantage of expert reviewers, who donate their time for free, and then bury this expertise behind an editorial decision process - denying readers the critical context which aids interpretation. Innovations in peer review, such as open peer review - where reviews are published alongside the article - or systems for recognising and rewarding the work of reviewing, have been insufficiently pursued by publishers. The state of tracking commentary on, and flagging retractions of, articles is woeful, as documented by RetractionWatch.com (another effort which is vital to academic publishing, but publisher-independent).

 

Successful post-publication peer review platforms, such as pubpeer.com or the COVID-research RAMP forums, show that it is possible to create forums where the value of work is meaningfully discussed, and so enhanced, by a research community. The RAMP forums deserve special mention. RAMP stands for Rapid Assistance in Modelling the Pandemic; set up by a Royal Society initiative and hosted by the University of Edinburgh, it was established in March 2020 to coordinate expert peer review of epidemiological models, and has since broadened in scope to cover other aspects of pandemic science. The RAMP forum shows just how much better expert review of scholarly outputs could be, providing a platform for rapid review of research findings, which are assessed for their rigour, relevance and relation to other emerging results.

 

As the number and variety of research outputs increases, it is also important to integrate across scholarly literatures. This is supported by structures such as reporting results in standard formats or creating article metadata records.

 

Incredible opportunities exist in how we publish and discuss research outputs, but - with a few honourable exceptions - they are not pursued with enough vigour by a scholarly publishing industry which makes private profits from public money. Publishers are able to rest on their laurels because researchers are locked into journal publication. Journals follow a traditional publishing model that was designed for physically printed documents, not digitally distributed knowledge. The number of journal articles published annually has grown explosively over recent decades - it is now in excess of 2.5 million every year, across the 20,000-40,000 journals currently in circulation. The current system was designed with far fewer researchers in mind, and is not only inefficient but damaging to the reliability of the research record.

 

Peer review alone donates an estimated £150 million or more of UK researchers’ time each year to the publishing industry [4]. Elsevier - a notable, but not isolated, example - had a profit margin greater than 35% in 2019, far in excess of most private companies. This is, frankly, obscene for an industry which hosts publicly funded research, reviewed and edited for free by publicly funded researchers, and sells access to the content back to publicly funded institutions such as universities. UK universities spend approximately £100 million a year on deals with big publishers like Elsevier, and these costs are increasing despite digital innovations in publishing which create efficiencies [5].

 

Publishers’ neglect of innovation is not an accident, but a by-product of their comfort (profits) within the current system. Research publishing extracts these profits from a system which is designed to facilitate trust, and which - for the researchers involved - is about reputational rewards: it neither costs them directly nor rewards them monetarily. Because researchers support academic publishing in multiple ways but are insulated from its financial model, they have little leverage to drive innovation. Universities, publishers and funders must lead change.

 

Publishers are a natural locus for innovations which would support greater reproducibility and research integrity. They are cross-institutional and cross-disciplinary. They already disseminate and archive the version of record for research outputs. Journals are already key gatekeepers of prestige, and an existing structure for organising expert peer review. Finally, journal publishing already receives a substantial amount of public research money, which could be directed towards investment in research reliability.

 

What policies or schemes could have a positive impact on academia’s approach to reproducible research?

Publishers have come under considerable pressure to support open access publishing (making research free to read, rather than locked behind paywalls which only large institutions like universities can afford). Schemes like Plan S - which mandates that researchers retain rights to their publications and that funded work be immediately open access on publication, and which is endorsed by the Wellcome Trust and now UKRI - show that research funders can force positive change in research publishing. Open access is necessary but not sufficient: what is published, and how it is reviewed, also needs reform.

Scholarly societies and the institutions which do the research should be supported to take on a larger role in journal publishing, including taking over the publication of existing journal titles. This will provide alternatives to the large for-profit publishers, and create competition to encourage the innovations which will support research reliability. Structures for rewarding and recognising peer review need to be explored. Whilst this is a system of considerable complexity, the current arrangement is not sustainable: researcher time is being monetised by publishers with insufficient return.

 

The benefits of change to peer review and academic publishing would be felt beyond improving the reliability of research outputs. The current publishing system is a scandal, and can surely only persist as long as the public are kept ignorant of it. In an age when the results of research are more important and more contested than ever, it is vital that research outputs are trusted. Such trust doesn’t just require reliability of the outputs; it also requires a process that can be understood and endorsed by the public. In the long term, public trust in research will only be sustained by a publication system for research outputs which deserves that trust.

 

How establishing a national committee on research integrity under UKRI could impact the reproducibility crisis:

The biggest threats to research integrity are not failures of researcher integrity. Accordingly, it is vital that the UK Committee on Research Integrity focus on structural causes of error, rather than on individual instances of malpractice. The reward systems of academia, including the incentives of research publishing, encourage haste, omission and lack of transparency. These, in turn, foster unreliable research outputs and legitimately undermine trust. The role of publishers and publishing practices as structural causes of research (un)reliability is insufficiently recognised and must be addressed.

 

As such, the Committee on Research Integrity should hold a review of academic publishing, its de facto support by public money, and whether it provides value for money in terms of the registration, validation, archiving and dissemination of research generated in the UK.

 

 

References

[1] Tedersoo, L., Küngas, R., Oras, E., Köster, K., Eenmaa, H., Leijen, Ä., ... & Sepp, T. (2021). Data sharing practices and data availability upon request differ across scientific disciplines. Scientific Data, 8(1), 1-11.

Vines, T. H., Albert, A. Y., Andrew, R. L., Débarre, F., Bock, D. G., Franklin, M. T., ... & Rennison, D. J. (2014). The availability of research data declines rapidly with article age. Current Biology, 24(1), 94-97.

[2] Severin, A., & Chataway, J. (2021). Overburdening of peer reviewers: A multi‐stakeholder perspective on causes and effects. Learned Publishing.

[3] GO FAIR. FAIR Principles. https://www.go-fair.org/fair-principles/

[4] Aczel, B., Szaszi, B., & Holcombe, A. O. (2020, October 9). A Billion Dollar Donation: The Cost, and Inefficiency of, Researchers' Time Spent on Peer Review. https://doi.org/10.31222/osf.io/5h9z4

[5] Grove, J. (2020, March 12). UK universities ‘paid big publishers £1 billion’ in past decade. Times Higher Education. https://www.timeshighereducation.com/news/uk-universities-paid-big-publishers-ps1billion-past-decade

 

(September 2021)