Written Evidence Submitted by Elsevier
Elsevier is a leader in global information analytics, helping researchers and healthcare professionals advance science and improve health outcomes for the benefit of society. We do this by facilitating insights and critical decision-making for customers across the global research and health ecosystems.
We welcome the opportunity to respond to the Science and Technology Select Committee’s Inquiry into reproducibility and research integrity. In everything we publish, we uphold the highest standards of quality and integrity. In particular, we play an active role in enhancing and disseminating reproducible research, and we share our insights below for consideration.
Publishers and stakeholders across the research community fully recognise that we move research forward by reproducing and building on past research. Reproducibility is therefore crucial to enabling knowledge to progress, and to ensuring productive and efficient practices in the research process. We at Elsevier recognise that reproducibility has specific challenges in that not all research studies can be repeated, and we acknowledge the potential consequences: should inaccurate research findings become part of the literature, more studies will attempt to build on tenuous foundations. If such problems in replicating research persist, this could, for instance, slow the development of medical treatments, misguide government policies, and ultimately impact wider society.
However, we do not suggest that the problems in reproducibility are of such a scale that they have irrevocably impacted scientific research, or that they should necessarily be characterised as a crisis. Notwithstanding the challenges, scientific research continues to yield measurable improvements to daily human life – the development of the COVID-19 vaccinations being a case in point. Moreover, the challenges in academia related to reproducing research are diverse and often down to the character of individual disciplines, or the production of research, which is naturally subject to variation and human error. Challenges are often inherent in research processes and therefore require holistic solutions across research culture (explored further below). The challenges include:
In the next section, we explore the range of solutions that we have embarked upon to address some of the above challenges.
Publishers have a clear role in maintaining the integrity of the scholarly record, and thereby ensuring the quality, reliability, and reproducibility of research. Primarily, we ensure scientific rigour in our published content through our editorial and peer review processes. We also support the long-term preservation of trusted and reliable research, making investments to ensure that the scientific record is up to date, and enabling publishers to implement editorial decisions to correct or retract inaccurate or unethical outputs. In turn, published research enables researchers and practitioners to deliver impacts for society.
Reproducibility forms an important pillar of our commitments to research integrity, and is underpinned by our efforts to encourage openness and transparency in the research process: reproducibility can only be effective if researchers are fully transparent in how studies and experiments were carried out, the methods and protocols involved and in fully describing the results.
Publishers therefore invest in a range of activities which are designed to encourage reproducible research. We use the peer review process to interrogate authors’ use of methods, results and statistical analysis; we support open science practices across the research workflow to encourage transparency and accessibility of all kinds of research outputs; our journals support the setting of standards around scientific rigour and empower researchers to share methods and processes; and we can be pivotal in providing incentives and creating platforms to enhance reproducibility practices. Examples of work that we at Elsevier have undertaken are below.
- We continue to experiment in peer review, for example via Results Masked Review (RMR). The RMR model aims for work to be judged on the merits of the research question and methodology, not the findings. RMR articles are sent for review without the results, discussion or conclusion (although data has already been collected) and reviewers are asked to evaluate the article on the hypothesis and the methodology only.
- Registered Reports also stem from innovations in the peer review process. As part of the model, authors submit their protocols before experiments are conducted. Importantly, this means that authors have committed to the protocol they have submitted and how they conduct their research. The journal then accepts the paper in principle, based on whether editors believe the protocol has merit.
- Finally, we collaborate across the industry and develop a range of pilots within peer review to promote transparency and accountability, which are clear factors in supporting integrity and, by extension, reproducible research.
- We provide products and platforms to incentivise researchers to share their research data in a way that is structured, but easy for researchers to deploy. These include Mendeley Data, a secure cloud-based repository for researchers to store, share, access and cite their research data. Digital Commons increases visibility of the full spectrum of research, across research data, articles and other outputs. And Elsevier’s Entellect enables seamless access to reusable, integrated Life Sciences data that adheres to FAIR principles.
- We have also aligned and clarified our journal policies regarding open research data to ensure that researchers understand what we expect of them in terms of sharing research data.
- To help make all scientific data more transparent, a number of our journals encourage authors to state the availability of their data – functionality that will be rolled out across the majority of our journals in the next year. With the data availability statement, authors can be transparent about the data they used in an article and make a statement about its availability together with their published article.
- We have integrated data sharing incentives into our submission workflows, thereby using our journals to influence researcher behaviours and incentivise them towards data sharing. For example, we encourage data sharing at the point that a researcher submits their paper to our journals. We have found that simply making it easy for authors to share data, and reminding them early in the publication process via our submission workflows, has doubled the amount of data sharing to support reproducible research.
- Research Elements journals are dedicated to publishing research elements articles, making the process of scientific research more transparent by publicising the outputs that come about as a result of the research lifecycle, including data (see next point below), methods, protocols, software, hardware and more. Journals such as SoftwareX are designed to serve Research Software Engineers, as well as other scientists who produce software and code, and participating journals have published hundreds of Original Software Publications.
- Data in Brief provides a way for researchers to easily share and reuse each other's datasets by publishing data articles that thoroughly describe data, make it easier to find, and enhance collaboration. These objectives contribute towards and underscore reproducibility.
- We have launched journals that publish negative results, encouraging authors to avoid publication bias and incentivising rigorous scientific practice. Unfortunately, even with strong editorial support and backing, we have had low take-up of these journals, suggesting that researchers have little appetite to publish these kinds of research outputs. Our feedback from researchers suggests that they do not feel it is worthwhile to write up negative results, or do not want to be publicly associated with them. In the same vein, we encourage replication studies to be submitted to existing journals. We also piloted calls for papers in a group of journals to invite replication studies. Again, we have experienced somewhat low uptake from researchers to date, but this is something we continue to market and build upon.
As the above list of our activities shows, over the years Elsevier has been experimenting with a range of solutions to meet reproducibility challenges. However, our pilots have had differing results and degrees of success. Some negative results journals, for instance, have seen low take-up from researchers, while our data journals have been highly successful. Equally, while publishing replication studies supports reproducibility, this is not suitable to deploy across every type of journal in our portfolio, and it is not always taken up by researchers. Further, while we provide a range of incentives to share data, researcher responses are not always in line with the related policy objectives. As per the challenges in reproducibility listed above, researchers are not always able to share data, or are cautious about doing so, often for valid reasons relating to privacy, reasonable concerns about misinterpretation of data, or the logistics and costs involved. These challenges indicate that we need a range of targeted solutions to meet the needs of the diverse communities that our journals support, including the range of concerns coming directly from researchers regarding reproducibility practices.
We will nevertheless continue our work to promote reproducibility. By way of example, we have rolled out several key elements of our STAR Methods to 1,500 journals. We are currently implementing changes to our systems and expect to roll out functionalities relating to data sharing, for instance applying data availability statements and enabling FAIR data sets, across the majority of our journals by March 2022. We continue to test and learn from the results of our journal/article pilots, while promoting and scaling our ongoing innovative projects like Registered Reports and our Research Elements journals. We have taken, and will continue to take, steps to educate researchers, particularly in relation to the above issues reported by researchers in data sharing, using resources such as the ANDS guidelines.
Publishers also have responsibilities elsewhere in the research process to ensure good practices, which in turn support reproducibility. For example, retractions can and do happen – and while these make up a very small proportion of published output, publishers will continue to work to prevent them from taking place by educating authors on ethical issues, using detection tools where available and having clear policies. We will also work to ensure that the peer review process is as robust as possible. For instance, where feasible, we can support journals with a statistical sub-editor or designated board member. To further scale this, we are exploring solutions in software and AI to assist peer reviewers’ vital work.
Publishers are nonetheless at times implicated in the noted ‘publish or perish’ mentality, which can give rise to problems in reproducible research. Given the steps that publishers have taken to support holistic research evaluation practices, publishers work to ensure that authors are not pressured to publish reams of work in any single journal. We actively try to educate researchers about these issues, and we work with forums such as COPE to implement standards and best practice to prevent, for instance, salami publications and duplicate submissions. We will continue to work with stakeholders in the research ecosystem to explore the drivers of this common pressure amongst researchers and to find further solutions.
Publishers are but one of the stakeholders in the research ecosystem needed to promote reproducibility practices. Furthermore, journals are by their nature ‘late’ in the research process, meaning that they are naturally limited in encouraging reproducibility at critical earlier junctures. In our experience, researchers may well have been working on a research project for a long period of time ahead of publication, with many research papers taking years to get to the publication stage. A lack of awareness regarding reproducibility practices at the start of the research process clearly causes huge challenges for repeating studies at a later date. From a practical perspective, for instance, it is problematic to ask researchers at the end of their project to share data if at no point beforehand have they practised structuring, organising and labelling their data throughout the research process.
Researchers therefore need education, rewards and incentives to embark on practices that encourage reproducibility from the outset of their research projects. A range of stakeholders, working at different stages of the research process, are consequently required to incentivise and promote reproducibility practices, such as data sharing. Stakeholders, including funders and institutions, operate at these earlier stages of research development and will have important opportunities to influence, appropriately fund and incentivise researchers on reproducibility. Moreover, they could help to raise awareness of the various solutions that publishers offer, in order to reinforce to researchers the importance of reproducibility practices ahead of their submitting papers to journals.
It is therefore vital that stakeholders across the research community align on objectives relating to reproducibility practices. Stakeholders should collaborate in working to build a positive research culture that rewards and integrates reproducibility practices. By way of illustration, the manifesto for reproducible science is a helpful example of how different stakeholders, encompassing regulators, journals, funders and institutions, can work holistically and pragmatically across the research process to improve the reliability and integrity of science. A clear goal in this collaboration would be to ensure that reproducibility practices are naturally integrated into the research process, from inception through to dissemination.
We suggest that we collaborate with stakeholders in the research community to workshop new ideas on reproducibility. This could include the development of incentives, or even mandates, for authors to share data; to agree standards; or to encourage researchers to publish negative/null results. Workshops or studies could be carried out to examine fundamental questions, including: What would be gained if research was fully reproducible? What changes regarding research and publishing incentives and infrastructure would be required to make this possible, and which of these would have the greatest impact? How do we resolve ongoing researcher concerns around practices such as data sharing? Such work should be preceded by agreeing a definition of reproducibility itself, which differs across domains and consequently causes confusion as to best practice.
Finally, Government support will be helpful to coordinate across stakeholders to drive workshops and action on reproducibility, as well as to fund technologies and projects that incentivise reproducibility. However, a key risk to avoid is that Government over-simplifies the issue or further deters researchers from reproducibility practices (for instance, through box-ticking exercises that introduce more administrative burden for researchers).
We are supportive of a national committee, which could work to implement many of the goals we have outlined above. To be as effective as possible, we would encourage the committee to be cross-stakeholder, to have clear objectives and align on definitions and parameters for reproducibility itself. The committee should motivate and inspire improvements in reproducibility, rather than focus on sanctions or introduce new hurdles that may be cumbersome for researchers. Further, the committee can build on previous partnerships, initiatives and individual stakeholder activities to ensure alignment and better awareness of these activities.
 Baker, M. 1,500 scientists lift the lid on reproducibility. Nature 533, 452–454 (2016)
 Fanelli, D. Is science facing a reproducibility crisis, and do we need it to? PNAS 115, 2628–2631 (2018)
 Chalmers, I. & Glasziou, P. Avoidable waste in the production and reporting of research evidence. Lancet 374, 86–89 (2009)
 Tan, A. C. et al. Data sharing – trialists’ plans at registration, barriers, and facilitators: a cohort study and cross-sectional survey (2021)
 Gundersen, O. E., Gil, Y. & Aha, D. W. On reproducible AI: towards reproducible research, open science, and digital scholarship in AI publications. AI Magazine 39(3), 56–68 (2018)
 Gibney, E. This AI researcher is trying to ward off a reproducibility crisis. Nature 577, 14 (2020). https://doi.org/10.1038/d41586-019-03895-5
 Munafò, M., Nosek, B., Bishop, D. et al. A manifesto for reproducible science. Nat Hum Behav 1, 0021 (2017). https://doi.org/10.1038/s41562-016-0021