Written Evidence Submitted by Catia M. Oliveira, Anna Guttesen, Emma Sullivan, and Juliana Olivier, University of York
(RRE0039)
Introduction
In this section, we respond to this call for evidence collectively as a group of Early Career Researchers (ECRs), specifically psychology postdoctoral and PhD students at the University of York, with an active interest in open science. This submission was compiled by Catia M. Oliveira (PhD student), Anna Guttesen (PhD student), Emma Sullivan (PhD student) and Juliana Olivier (PhD student). We coordinate the departmental ReproducibiliTea journal club and are involved in the Open Science Advocate meetings run by the university. However, the statements below summarise responses collected from ECRs who range widely in the extent to which they engage with open scholarship practices and advocacy. As ECRs are at the forefront of the open science movement (Abele-Brehm et al., 2019; Ali-Khan et al., 2017; Houtkoop et al., 2018), their perspective on reproducibility and research integrity is invaluable to shaping the future landscape of research and innovation. Whilst we aimed to provide a representative sample of the opinions of ECRs in the Psychology department at the University of York, it is important to state that not all ECRs have contributed to this document and thus we may have failed to represent their perspectives on these topics. We summarise the responses to each question in the following sections.
The reproducibility crisis appears to be particularly widespread across the social sciences, with Psychology taking a place of prominence, as indicated by the vast research and media interest in this topic. However, the prevalence of the reproducibility crisis within Psychology has been difficult to determine given the large inconsistencies in the definition and operationalisation of reproducibility and replicability, which may help explain some of the variability in estimates across studies attempting to assess the magnitude of the crisis in psychological research. Within Psychology, there is some evidence pointing to Social Psychology as the subfield most affected by poor reproducibility. In a recent replication effort of psychological effects by the Open Science Collaboration (2015), a replication rate of only 39% was observed for Psychology overall, with Social Psychology taking the lowest place at only 25% of results replicated.
As evidenced by Świątkowski and Dompnier (2017) (available at https://www.rips-irsp.com/article/10.5334/irsp.66/), many issues have contributed to the reproducibility crisis, and whilst the spotlight has been on the role of questionable research practices in the deterioration of research quality, one must recognise the crucial role that incentive structures, funding agencies and publishers have played in exacerbating these issues. Structural issues are at play when research practices that contribute to better science often do not benefit researchers’ careers, especially in an environment of competitiveness and job scarcity. One such example is the pressure to publish high-impact research, fostering the “publish or perish” culture. This has contributed to the adoption of questionable research practices to ensure that all research produced meets journals’ standards of novelty and surprising findings. These practices often involve underpowered and poorly designed studies, p-hacking (undisclosed multiple testing with the aim of finding interesting results) and HARKing (hypothesising after the results are known). Thus, the need and desire to achieve job security in the academic job market is often at odds with the requirements of producing robust and replicable science, incentivising a culture of fear of being wrong and making mistakes, and motivating researchers to support their own theories despite evidence to the contrary.
Finally, poor statistical knowledge may also have contributed to the adoption of questionable research practices, as researchers did not recognise their negative impact. Researchers have often been responsible for all steps of research design and analysis without the required knowledge and guidance. Thus, a lack of interdisciplinary cooperation potentially contributes to these poor practices.
a) Role of research funders, including public funding bodies
Research funders, including public funding bodies, have a vital role to play in addressing the reproducibility crisis. In the current climate, the demand for novel, ‘exciting’ and significant results fosters a culture in which the generation of new ideas is pursued over additional evidence for or against a previously suggested idea. In addition, significant results are more likely to be published than non-significant results (Koletsi et al., 2009). As a result, designs and analyses that increase the likelihood of obtaining significant results are favoured in order to achieve publishability (Nosek et al., 2012). There needs to be a fundamental shift in which funders incentivise project proposals that incorporate and promote open research practices: for example, crediting proposals that include pre-registered analysis plans, pledge to make their data freely available, propose new studies in the form of registered reports, or simply seek to replicate a previous finding. Moreover, research funders can incentivise project proposals with larger teams of researchers, as this makes it more likely that the research questions are tackled at an appropriate scale (i.e., are sufficiently powered); these researchers could collaborate and pool their data, resulting in larger and more diverse samples. Relatedly, research funders should also acknowledge that some projects need to run over a longer timescale: firstly, to account for the time taken to conduct high-quality, reproducible science, and secondly, to factor in the need for longitudinal study designs, where appropriate. This would help to address the reproducibility crisis, as projects would not need to be ‘rushed’ to completion by a set deadline at the expense of producing high-quality, reproducible research.
In addition, once a project is funded, research funders should allocate a sufficient proportion of the funding to open access publication to further promote open science practices. They should also allocate a sufficient proportion to training researchers in open research practices so that they feel confident in adhering to them. On the topic of adherence, research funders could also enforce certain requirements on funded projects (e.g., preregistration of analysis plans, sharing of data and code on completion) to ensure that the pledge to conduct research in accordance with open science is honoured. If these open research practices are not adhered to, or if any questionable research practices are uncovered during or after the project, research funders should take action to deter these outcomes (e.g., fines, bans on future submissions, or a requirement to undertake training in open research practices).
b) Role of research institutions and groups
Research institutions have an integral role to play in addressing the reproducibility crisis. On a broad scale, there should be increased awareness and discussion of open science and reproducibility at both the institutional and departmental level. Currently, ECRs seem to be at the forefront of the open science movement, and although they will ultimately shape future research practices, they do not have a great deal of power in the short term, with senior academics often determining research practices. Consequently, research institutions could provide better support and education for ECRs in using open research practices, whilst simultaneously promoting awareness and discussion of open science and reproducibility among more senior academics. Furthermore, universities often recruit on the basis of publications and grant income. As noted above, this fosters a culture whereby novelty and significance are preferred over rigorous, transparent research, and over incremental research in which the findings of one study inform the next. To overcome this, research institutions could change their hiring criteria to state that they value open research practices and therefore value candidates who subscribe to these practices, as evidenced in their applications (Gernsbacher, 2018; Kowalczky et al., preprint). This could also extend to academic promotions. A change in hiring and promotion criteria would reflect a commitment to high-quality research, with recognition of replications and null findings as important research outputs.
In addition, research institutions could allocate funding for training staff and students on open research practices. This could include specific training for academic staff and/or training for undergraduate and postgraduate students. This would allow students to become familiar with these practices and give them the opportunity to adopt them from the early stages of their university careers, whilst being supported by academic staff. This could also include incorporating open science into the undergraduate curriculum so that students are taught the fundamentals of open science and can take this knowledge with them in their future careers, whether they remain in academia or not. On a final note, research institutions could also provide funding for awards which incentivise the adoption of open research practices. For example, universities could offer awards which are open to submission annually from staff and students involved in projects or initiatives which engage with, reflect on or advocate open research practices. These kinds of awards could incentivise the adoption of open research practices at different levels of researcher experience and across a variety of disciplines.
Finally, given that ECRs often do not manage their own grants and have less autonomy in research decision-making, research institutions should have systems in place to support and protect individuals who adopt open scholarship practices without their supervisor’s or the wider community’s support. This issue is well explored by Zečević et al. (2020), available at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7836032/. Furthermore, institutions should also recognise that whilst the adoption of open scholarship practices is beneficial for science, allowing further opportunities for scrutiny of one’s work and thus making errors and misconduct more easily uncovered, it also places ECRs in a more vulnerable position than already established researchers, whose reputations may be less affected if such errors emerge.
c) Role of individual researchers
While individual researchers may sometimes feel overwhelmed by the reproducibility crisis, their transparent and rigorous approach to science can go a long way towards addressing these issues. Individual researchers, whether ECRs or senior staff, can endeavour to provide a supportive and encouraging atmosphere in their lab environments or large-scale collaborations, where they are honest about decisions and mistakes and provide the space for others to be similarly open. Furthermore, if an individual researcher is in a position of influence, they can ensure that top-down structures (institutions, funders, lab manuals, teaching materials, etc.) promote transparency. Across the department, the experiences of ECRs are quite varied. ECRs whose supervisors have knowledge of and expertise in open science have felt supported in adopting open science practices, including pre-registration, data and code sharing and, in some cases, conducting registered reports. However, ECRs whose supervisors and collaborators are less familiar with open science practices may face reluctance to implement such practices during their PhD or employment (e.g., a post-doctoral research position), as processes such as pre-registration and registered reports may be seen as time-consuming and potentially risky.
Individual researchers can also implement open science practices that could help address some of these issues, for example storing data in open repositories, pre-registering studies, and sharing clearly annotated analysis scripts. However, learning the skills and adopting these practices takes time and energy, and without the necessary support it may seem overwhelming, particularly for ECRs who lack such support from their mentors. Therefore, individual researchers should foster an environment that prioritises transparency and respects that some of these rigorous practices may be time-consuming.
d) Role of publishers
Publishers can help to incentivise open science practices, including by requesting pre-registrations, registered reports, and data and analysis availability. Importantly, if publishers were to prioritise the scientific rigour of submitted manuscripts above novelty, this could help reduce questionable research practices such as p-hacking and HARKing, and instead encourage replications.
The review process should ensure that scientific output is held to a high standard. Under the current peer-review system, the reviewer has the responsibility of evaluating the scientific content of the manuscript. However, as pre-registrations, open data and open analyses become more widespread, the workload for the thorough, but unpaid, reviewer increases. A more standardised and formal way of distributing this work would be beneficial: for example, separate individuals could be employed to review the analyses, whilst independent reviewers focus on the rigour of the content. A more efficient use of reviewers’ expertise and time would help ensure that the review process is sufficiently thorough.
Finally, instead of charging excessive fees for open access publishing, removing the pay-wall altogether would improve accessibility. This would promote the distribution of information irrespective of the reader’s institution or financial situation.
e) Governments and the need for a unilateral response/action
Considerations about the role of the government in addressing the reproducibility crisis elicited a range of responses, which highlights that any government involvement must be considered carefully and developed in dialogue with researchers. Whilst ECRs recognise that the government could helpfully contribute to a coordinated response across entities, the imposition of unilateral action, especially in punitive form, could be detrimental to research. Instead, the role of the government should focus on coordination and facilitation.
The government could helpfully contribute to such a response through coordinated reforms to how publishers, funders, and academic institutions incentivise and encourage reproducibility and open science practices. The academic research system is synergistic, so reforms in one sphere (e.g., publishing) will falter without a properly coordinated response in other areas.
A core role for the government in facilitating such reforms lies in assuring that academic institutions, as well as research projects, are appropriately funded to be able to provide open research practice training, as well as to then implement these practices.
To ensure that any government reforms or policies take researchers’ needs and the existing pressures of the academic system into account, there should be an ongoing dialogue between researchers and the government, and it may further be beneficial for politicians involved in policies affecting researchers to have a basic understanding of science and the academic research system.
Much demand for open science reforms has come from within academia, which highlights that motivation for change exists among researchers and within academic institutions. Punitive strategies and additional burden on researchers, including ECRs, through regulation should be avoided. As has already been pointed out in this report (see questions 2 and 3c), punishment would create incentives to hide mistakes and avoid taking steps towards open science practices (e.g., pre-registering hypotheses and analysis plans, making data and scripts openly available), and additional burdens on researchers could come at the cost of productivity and research quality.
4. What policies or schemes could have a positive impact on academia’s approach to reproducible research?
We believe that open access policies across journals, together with requiring researchers to make their data openly accessible and increased use of pre-registration and registered reports, would have a positive impact on academia’s approach to reproducible research. However, this would require systemic changes among publishers and in the incentive structures for researchers.
Policies could regulate publishers in order to facilitate a shift from the marketisation of research towards a system that rewards thorough, openly accessible, reproducible and replicable research, as well as replication studies. Schemes to establish replication-focused journals would further support this positive change. Furthermore, schemes that enable ECRs to practise open science would have a positive impact on academia’s approach to reproducible research. These could include initiatives such as extending PhD contracts from three to four years to allow more time to gain the skills necessary to practise open science (e.g., submitting registered reports), as well as recognising replications as valid dissertation projects. Lastly, schemes that support research institutions in offering open research awareness, training and support sessions would be helpful.
However, it is worth noting that such schemes will only be successful if they are accompanied by changes in research institutions’ hiring and promotion practices, which will need to shift away from the current strong focus on publications in high-impact journals that primarily publish novel research findings.
5. How establishing a national committee on research integrity under UKRI could impact the reproducibility crisis
We recognise the benefits of such a committee for promoting adequate standards of practice. However, this committee would have to focus on identifying best practices for science by working alongside academics and education bodies, and on creating heightened awareness and more opportunities for training and education in open research practices. Any changes would need to be carefully and sensitively implemented, ensuring that they do not increase disparity amongst researchers and institutions and that they represent the interests of both the academic community and science more broadly. Thus, these decisions should be carefully tailored to the needs of each field rather than imposing changes that further reduce academic accessibility.
(September 2021)