Written Evidence Submitted by:
Thomas Rhys Evans, University of Greenwich
Madeleine Pownall, University of Leeds
Elizabeth Collins, University of Stirling
Emma L Henderson, University of Surrey
Jade Pickering, University of Southampton
Aoife O’Mahony, Cardiff University
Mirela Zaneva, University of Oxford
Matthew Jaquiery, University of Oxford
Tsvetomira Dumbalska, University of Oxford
(RRE0007)
A Network of Change:
Three Priorities Requiring United Action on Research Integrity
Author introduction and Summary
The last decade has seen renewed concern within the scientific community over the reproducibility and transparency of research findings. This paper outlines the various responsibilities of stakeholders in addressing the systemic issues that contribute to this concern. In particular, this paper asserts that a united, joined-up approach is needed, in which all stakeholders, including researchers, universities, funders, publishers, and UKRI, work together to set standards of research integrity and engender scientific progress and innovation.
In the spirit of coordinated action, our team represents individual researchers, predominantly from the field of Psychology, who each share a self-declared interest in meta-science and reproducibility. We come from a range of UK universities, and we bring expertise from a breadth of specialisms, meta-science interests, roles, and career stages. We, therefore, represent a core group within the community of individuals, institutions, and structures that we discuss.
Introduction
There is growing evidence to suggest that most research findings are questionable (Ioannidis, 2005). This, coupled with mostly unsuccessful attempts to replicate core research findings in psychology (Open Science Collaboration, 2015) and elsewhere (Nosek & Errington, 2017), exemplifies the far-reaching issues of research integrity that the scientific community is currently facing. Recently, science has put reproducibility (the ability for researchers to reproduce the same findings with the same data) and replicability (the ability to obtain the same results using new data; Plesser, 2018) at the forefront of its research agenda. Initiatives prioritising transparency, research quality, and research culture, as discussed below, have been a substantive driver of changes in research norms across the world, with UK researchers and research professionals playing a central role in developing and championing new approaches and standards.
Whilst the scale of change achieved in the last decade has been notable, the central barrier to sustainable and permanent change in integrity norms is the lack of a coherent, united, and coordinated action plan in which all research stakeholders come together to embed and progress such developments. As a collection of active researchers in the movement to address reproducibility and replicability concerns, we summarise three research integrity priorities in which shortcomings have contributed to this wider crisis of science: 1) transparency, 2) research quality, and 3) research culture. We then suggest ways in which individuals and organisations should address these issues as a collective, recognising that action from one stakeholder can change the demands upon other stakeholders and thus requires coordinated action. To maximise consistent and widespread adoption of such recommendations, we encourage any future research integrity committee to prioritise facilitating cross-stakeholder work to identify areas of development and best practice, and to provide central resources to maintain, and continue to raise, standards for research.
Priority One: Transparency (e.g., Open Data)
One important factor that has contributed to the reproducibility crisis is a lack of transparency: the absence of open sharing of data, code, and research materials. As observed during the COVID-19 pandemic, open data has been transformative for scientific and public understanding, bringing into sharp focus the clear benefits of increasing transparency and accountability within psychological research (Besançon et al., 2021). Unfortunately, open data sharing has not historically been the norm, and when research materials and data are not shared, researchers, funders, and journals cannot adequately assess the robustness and reliability of published work, slowing scientific progress. For instance, when materials are not open, there is limited evidence of how particular data were collected and how they contributed to the conclusions drawn, inhibiting meaningful evaluation of the research. More pressingly, a lack of openness is a notable barrier to reproducibility, in that researchers cannot reproduce analyses or conclusions without access to the associated datasets (e.g., Simmons et al., 2011; Wicherts et al., 2016).
Transparency is a direct contributor to research integrity. Low levels of transparency mean that attempts to progressively build upon previous research are inefficient and require more funding and researcher hours. It becomes harder to replicate and establish the boundaries of effects, to evaluate the quality of work, and to determine its value beyond the immediate sample studied (generalisability). A lack of transparency also hinders error detection and correction, and the identification of fraud (e.g., Simonsohn, 2013). As such, research transparency has multifaceted direct and indirect consequences for the quality and speed of research developments, and thus should be a priority focus for research stakeholders. To represent transparency here, we focus on the example of open data and how different individuals and organisations can address the reproducibility crisis through it. A central priority is to ensure that UK-funded data are safely preserved, conform to the FAIR principles (Findable, Accessible, Interoperable, Reusable; Wilkinson et al., 2016), and are openly available to facilitate re-use and re-analysis where possible.
As we discuss under Priority 3 of this paper, advocating for transparency in research requires a cultural shift and a fundamental realignment of expectations. For example, historically, researchers were not expected to present comprehensive accounts of their research; there were no expectations to declare conflicts of interest, preregister hypotheses ahead of data collection, or openly share research data, analysis code, and research materials. Current norms of science encourage researchers to state that data are available “upon reasonable request”, despite a consensus that individual gatekeeping is suboptimal and unacceptably low rates of subsequent data sharing by request (Magee et al., 2014). Researchers who are willing to share their data also face challenges in knowing how to do so ethically whilst conforming to FAIR principles (Wilkinson et al., 2016), in an environment where there are already significant demands on their time. To facilitate data sharing, coordinated change is needed across stakeholders. For example, changes to journal data availability statement policies can dramatically influence the uptake of sharing practices (e.g., Hardwicke et al., 2018), but this creates subsequent demands for training, support, and infrastructure that fall on researchers, research support (e.g., ethics boards, libraries, technicians), universities, and funders (Houtkoop et al., 2018).
Table 1. Transparency and Open Data: Interconnected Roles and Recommendations
Roles

- Individual researchers: Collect and/or curate data to use for analysis. Manage and deposit data using an appropriate storage location.
- Research support: Resource infrastructure that enables data storage and sharing. Make financial choices about journal subscriptions and partnerships.
- Institutions (universities): Prioritise and fund training about transparency for individual researchers. Fund the infrastructure offered for sharing data and materials. Acknowledge open research behaviours as part of research quality evaluations during hiring, assessment, and promotion.
- Funders: Establish policies regarding the level of transparency and openness required for funded projects. Evaluate adherence to transparency policies and communicate consequences for non-compliance.
- Publishers: Maintain author guidelines that specify how research data and materials should or must be stored and shared, as a condition of publication.
- Research Integrity Unit: Provide or signpost recommendations, support, and structures for all stakeholders (e.g., templates, training). Audit institutions, funders, and publishers. Facilitate collaborations across stakeholder groups. Facilitate, communicate, and champion the development of transparency norms and practices.

Specific Recommendations

- Individual researchers: Sign and follow the principles of DORA. From project conception, incorporate open science practices (as appropriate) into the research workflow. Use positions of power (e.g., as reviewers, line managers, project leads, or lab principal investigators) to communicate expectations, share good practice, and provide practical support for improving transparency (e.g., sharing data, preregistration, and declaring conflicts of interest).
- Research support: Invest in infrastructure which provides opportunities for sustainable approaches to data management, e.g., automated data archiving (see Rouder, 2016). Offer training regarding best practices in transparency. Use funding responsibly to prioritise partnerships with organisations centred on transparency, such as data repositories and open access journals.
- Institutions (universities): Sign and follow the principles of DORA. Hire meta-scientists to improve and encourage open data norms within departments. Promote transparent scientific practices, such as recognising track records of transparency in hiring and promotion decisions and awards (e.g., recognising preregistrations, registered reports, and preprints). Instigate curriculum changes such that all undergraduate and postgraduate students have an understanding and experience of open practices (e.g., embed transparency practices into research assignments).
- Funders: Sign and follow the principles of DORA. Mandate a data sharing statement and conduct regular audits to ensure adherence and quality. Recognise a transparency track record as a positive characteristic when assessing grants and other project applications.
- Publishers: Sign and follow the principles of DORA. Mandate open data (with appropriate caveats where full sharing is not possible, e.g., partial data, embargoes, or other gatekeeping) and adherence to FAIR principles (e.g., meta-data and codebooks). Prioritise policy and structural developments in accordance with the TOP guidelines (Nosek et al., 2015).
- Research Integrity Unit: Undertake rigorous and systematic evaluations of research environments to ensure sufficient structure and support within and across stakeholder groups. Priority should be given to ensuring cohesiveness between actions from the different stakeholder types, identifying and sharing best practices, and identifying specific groups or institutions in need of more localised interventions. Encourage and signpost infrastructures available to connect researchers and institutions and aid transparency.
Priority Two: Research Quality (e.g., Registered Reports)
Research quality is a second core component of research integrity and an urgent priority. We cannot promote better research integrity if we do not first consider how quality (i.e., robustness, reliability, validity, and accuracy) can be improved. One systemic (rather than researcher-centred) barrier to research quality is ‘publication bias’, whereby null or non-significant results are much less likely to be published than statistically significant findings. This incentivises questionable practices such as p-hacking data to ‘find’ a significant result or selectively reporting significant results (Bruton et al., 2020). Publication bias directly contributes to the reproducibility crisis because it makes publication contingent upon the results of the work, rather than on the theoretical significance and methodological rigour of the research.
In response to concerns about publication bias, UK researchers have developed many initiatives to improve research practices and adopt new standards in methodology and publishing. Registered Reports (RRs) are one such innovation in publication, deviating from the traditional route in which papers are peer-reviewed only after study completion. At Stage 1, the introduction, hypotheses, methods, and analyses undergo peer review before data collection. This early feedback allows substantive changes to be made before resources (e.g., funding, participant time) are used, and helps to identify flaws in the planned work. The work can then receive in-principle acceptance from the journal, such that the subsequent completed (Stage 2) report will be published regardless of the findings, so long as the authors have collected and reported data in accordance with the Stage 1 plan (Chambers, 2013). RRs serve to reduce publication bias because acceptance is based on the importance and theoretical significance of the research question and on methodological rigour, rather than on the direction or strength of the results. This reduces the pressure to produce significant results and aims to remove the incentives that drive selective reporting and other questionable research practices (Chambers & Tzavella, 2020).
Figure 1: The Registered Report Publication Pathway (image from Centre for Open Science)
RRs are valuable amid ongoing concerns about widespread ‘false-positive findings’ in the published literature. Indeed, the rate of supported hypotheses is much lower among RRs than among conventional research articles (Scheel et al., 2021), providing encouraging initial evidence for the value of this innovative publication approach. More recently, RR-style approaches have been developed for exploratory, rather than hypothesis-driven, research, such as ‘Exploratory Reports’. Other relevant initiatives include Verification Reports, a type of scientific article in which authors evaluate the claims of published research by reanalysing the original data and conducting new analyses to test robustness.
Further structural support is needed to implement RRs more widely, including training, funding, and wider journal adoption (currently over 300 journals offer RR formats). Registered Reports Funding Partnerships have been proposed as a method of extending the RR model by integrating it with the grant funding process, so that researchers receive both funding and in-principle acceptance for publication based on the integrity of the theory and methods. Combining grant funding and publication decisions in this way may help to streamline these processes and reduce the burden on reviewers, while also providing the aforementioned benefits of RRs in reducing questionable research practices and publication bias (Munafò, 2017). A recent funder-publisher partnership between the Children’s Tumor Foundation (CTF) and the scientific journal PLOS ONE was so successful that both partners have stated they will continue to offer this grant partnership. Such RR-funding partnerships, and similar innovations for drug marketing authorisation (Naudet et al., 2021), offer important and innovative examples of how stakeholders and processes can be unified to revise standards for research quality.
Table 2. Research Quality and Registered Reports: Interconnected Roles and Recommendations
Roles

- Individual researchers: Plan, develop, conduct, and disseminate research findings. Choose the publication and feedback workflow (e.g., RR, traditional).
- Research support: Offer training that enables researchers to make educated and strategic choices about publishing.
- Institutions (universities): Prioritise and fund training which supports researchers to prioritise higher quality evidence and more transparent and rigorous research processes. Hire, incentivise, and appraise staff on subsequent transparency and rigour in research practices.
- Funders: Prioritise the role of rigour and transparency explicitly when assessing the quality of work being considered for funding.
- Publishers: Assess research quality for publication based on journal criteria. Capture and evaluate meta-data to identify meaningful trends and development areas.
- Research Integrity Unit: Provide or signpost recommendations, support, and structures for all stakeholders (e.g., templates, training). Audit institutions, funders, and publishers. Facilitate collaborations across stakeholder groups.

Specific Recommendations

- Individual researchers: Sign and follow the principles of DORA. Where appropriate, submit research using the RR format or create a time-stamped preregistration of research intent (e.g., via the Open Science Framework). Engage in methods, statistics, and open science practices training. Those in teaching or supervision roles should role model the use of RRs (and similar initiatives) as responsible and sustainable publication practices, encouraging others to do the same.
- Research support: Ensure adequate training is available to researchers in research design, analysis, and research integrity. Research support services can go beyond providing training; e.g., subject librarians can assist in projects or trained statisticians can verify code.
- Institutions (universities): Sign and follow the principles of DORA. Value the use of RRs when hiring and appraising academic staff, with emphasis on quality (as determined by research integrity) rather than traditional metrics of quantity. Realign incentive structures to value quality over the quantity, citation counts, or journal impact factors of publications. Publicly declare the disconnect between journal impact factor and research quality (e.g., Fang & Casadevall, 2011) and make associated changes to structures and processes.
- Funders: Sign and follow the principles of DORA. Funding assessment criteria should prioritise the importance of the research question, the quality of the method, and transparency. Explore Registered Reports Funding Partnerships, or similar initiatives, to encourage simultaneous funding and publication of research.
- Publishers: Sign and follow the principles of DORA. Journals and publishers should consider adopting RRs (amongst other innovations) and provide clear author guidelines (see osf.io/pukzy/ for a template). Publication should be offered on the quality of the research question and methodology, and on transparency, not on novelty or positive results; policies to this effect should be implemented and audited. Consider setting a formal RR timetable so that projects are handled within reasonable timeframes, or engage with wider community initiatives to facilitate timely management of RRs (e.g., PCI RR). For confirmatory work, require preregistration with a concrete theoretical background and specific falsifiable hypotheses.
- Research Integrity Unit: Undertake rigorous and systematic evaluations of research environments to ensure sufficient structure and support within and across stakeholder groups. Priority should be given to ensuring cohesiveness between actions from the different stakeholder types, identifying and sharing best practices, and identifying specific groups or institutions in need of more localised interventions. Encourage and signpost infrastructures available to connect researchers and institutions and improve research quality. Support and champion the development and evaluation of new initiatives like Registered Reports, Exploratory Reports, and Verification Reports. Audit adoption of RRs and compile an evidence base which evaluates the implications of wider adoption of these (and similar) initiatives.
Priority Three: Research Culture (e.g., Slow Collaborative Approaches)
The third core barrier to reproducible, replicable, robust, and transparent research is the culture in which this research is created and disseminated. Two of the most prominent and debilitating barriers to reproducibility in contemporary research culture are (a) a competitive culture that prioritises individual scientists rather than collective efforts, and (b) a ‘publish or perish’ culture that rewards the quantity, over the quality, of outputs. The ‘publish or perish’ culture of academia dictates that, in order for scientists to progress in their careers and ‘survive’, they must produce a high quantity of publications, particularly ‘high-impact’ outputs in ‘top-tier’ journals. This preoccupation with perceived journal calibre and volume of outputs, coupled with entrenched publication bias, ultimately risks undermining the integrity of research (Grimes et al., 2018) and thus directly contributes to the reproducibility crisis.
In response to this pressing concern, scholars have called for collaborative open science and for “slowing down” science (Frith, 2020). ‘Slow science’ is a vision for an ideal science that values fewer, more impactful and thoughtful projects (compared to multiple quicker contributions) and, in doing so, encourages collaboration and inclusivity, assessment of research quality, and more rigorous and informative forms of data collection (e.g., longitudinal data). This could be achieved in practice by limiting the number of grants awarded to any given individual, advocating for longer timescales on empirical work, and shifting the emphasis from quantity of output to quality and impact. Many open science advocates have argued for the benefits of slow science, citing it as a mechanism to improve the robustness and rigour of psychological research (e.g., Siegel & LaMarre, 2019). During the COVID-19 pandemic, this concern has grown: the requirements placed on early career researchers seeking permanent faculty positions have escalated due to hiring freezes and intensified competition among applicants. Relatedly, the reliance upon, or encouragement of, novelty in scientific findings actively discourages careful replication of important results. In combination, these issues create a hyper-competitive research culture focused on ‘discoveries’ rather than incremental development of understanding. The end result is inconsistent bodies of evidence that are, unsurprisingly, of low quality and difficult to reproduce and replicate consistently.
For research culture to actively and meaningfully value collaborative approaches to science, these values should be embedded in the incentive structures of universities. Some universities now include evidence of open research practices in their hiring and promotion criteria (e.g., University of Bristol, University of Glasgow, Cardiff University; Kowalczyk et al., 2020). To embed the prioritisation of collaborative, inclusive, and thoughtful approaches to science into research culture, incentive structures should be fundamentally realigned to champion these values (e.g., see Munafò et al., 2020). One accessible way to value team science approaches, whilst also appropriately acknowledging the contributions of individual researchers, is to mandate a Contributor Roles Taxonomy (CRediT) statement for each academic output, making explicit the various skills and contributions that are required for impactful collaborative research.
There are useful examples of Team Science approaches that demonstrate the scientific value of collaborative efforts. Most notable is the Psychological Science Accelerator (PSA; see Moshontz et al., 2018), a global collective of psychological laboratories that conducts crowdsourced empirical projects. Studies conducted by the PSA benefit from larger, more representative sample sizes, distributed workload, more creative and critical insights, and more efficient research timelines. Notably, the PSA recently harnessed this capacity to produce a series of Rapid Response COVID-19 studies (e.g., https://psyarxiv.com/n3dyf/, https://psyarxiv.com/m4gpq/). Structures like the PSA provide a valuable community to facilitate open and collaborative projects, particularly where groups might previously have been formed ad hoc (e.g., the Many Labs projects; Klein et al., 2018); however, they require investment and coordinated action to sustain.
Table 3. Research Culture and Slow Collaborative Science: Interconnected Roles and Recommendations
Roles

- Individual researchers: Engage in research as an individual or as part of a team. Contribute to the research culture of the immediate working context, institution, and field.
- Research support: Support researchers in adhering to university policies. Maintain infrastructures that enable collaboration and sharing within and between institutions.
- Institutions (universities): Establish an agenda for research culture, designing and implementing incentive policies. Hire and appraise staff for their contributions to outputs. Instigate developments in collaborative work expectations which advance levels of inclusion, diversity, and access to science (e.g., Grahe et al., 2020).
- Funders: Set timelines for funded research, regulating the research quality of funded projects. Review constraints and limits to applications, e.g., the number of co-applicants and co-PIs on grants.
- Publishers: Review policies and structures which recognise contributions to research, e.g., co-first authorship. Consider alternative publication models which prioritise transparency and efficiency (e.g., publishing peer reviews alongside manuscripts).
- Research Integrity Unit: Provide or signpost recommendations, support, and structures for all stakeholders (e.g., templates, training). Audit institutions, funders, and publishers. Facilitate collaborations across stakeholder groups. Encourage and advocate for inclusive, thoughtful approaches to science.

Specific Recommendations

- Individual researchers: Actively collaborate with researchers, statisticians, technicians, and others, being inclusive of geographical location, institutional context, and experience. Recognise each individual’s contributions, e.g., via CRediT statements. Supervisors, principal investigators, and department leads should encourage PhD students to focus on quality and should not assess degree completion solely on published work.
- Research support: Facilitate collaborations between researchers and other contributors to research, e.g., statisticians and technicians. Embed structures and support to enable longer-term, collaborative research.
- Institutions (universities): Acknowledge that participation in Team Science ventures is a legitimate and important contribution to scientific knowledge, and reflect this value in hiring, assessment, and promotion criteria. Value high-quality, collaborative efforts over smaller contributions or flawed rules of thumb, e.g., the number of first- or last-author publications. Create a research culture that places research integrity at its core, transparently disseminating and implementing policies and practices accordingly.
- Funders: Facilitate slow and collaborative approaches in systems (e.g., making it easy to recognise many contributors). Facilitate slow and collaborative approaches in policies (e.g., allowing reasonable periods between funding announcements and submission deadlines).
- Publishers: Be sufficiently flexible to allow for Team Science contributions to journals, e.g., use CRediT statements rather than a simple order of authorship. Re-evaluate practices as norms change (e.g., introducing code review to assess reproducibility).
- Research Integrity Unit: Undertake rigorous and systematic evaluations of research environments to ensure sufficient structure and support within and across stakeholder groups. Priority should be given to ensuring cohesiveness between actions from the different stakeholder types, identifying and sharing best practices, and identifying specific groups or institutions in need of more localised interventions. Encourage and signpost infrastructures available to connect researchers and institutions and aid collaboration. Recognise and promote the notion that a ‘publish or perish’ culture can facilitate rushed, low-quality research that may contribute to the replication and/or reproducibility crisis. Audit funders to ensure that they are funding collaborative, “slow” science, e.g., through accessible timeframes and rigorous transparency auditing.
Conclusion
To summarise, in this report we have outlined three urgent priorities for improving research integrity and addressing the reproducibility crisis: transparency, research quality, and research culture. We have suggested ways to tackle each via different research stakeholders, emphasising the need for collective action. Overcoming the issues underlying the replication crisis, and thereby driving meaningful long-term changes to research norms, requires united and interconnected changes across individuals and organisations. For example, individuals may personally adopt Registered Reports, but journals must offer this publication route, and changes to infrastructure are needed to manage the more substantive documentation and the more complex two-stage process. Similarly, journals can mandate open data sharing, but researchers require training and appropriate support and infrastructure from institutions to facilitate this.
Considering these three priorities, we also stress that there are other related issues that co-occur with these concerns, which should also be addressed. For example, while we focused here on open data when discussing transparency, this priority should also consider promoting open sharing of materials, analyses, and code, which all rely on the same mechanisms. Similarly, we have focused on Registered Reports as one method to alleviate publication bias, but there are other initiatives, such as preregistration of analysis plans and crowd-sourced open review, which also represent promising avenues to improve research integrity. Thus, the priorities and ideas here should be viewed as a focused but incomplete starting point for a wider, more comprehensive consideration of how the transparency, quality, and culture of research, and thus integrity, can be improved.
(September 2021)
CRediT Statement
Thomas Rhys Evans: Conceptualization, Project Administration, Writing (original draft), Writing (review & editing)
Madeleine Pownall: Conceptualization, Writing (original draft), Writing (review & editing)
Elizabeth Collins: Conceptualization, Writing (original draft), Writing (review & editing)
Emma L Henderson: Conceptualization, Writing (original draft), Writing (review & editing)
Jade Pickering: Conceptualization, Writing (original draft), Writing (review & editing)
Aoife O’Mahony: Conceptualization, Writing (original draft), Writing (review & editing)
Mirela Zaneva: Conceptualization, Writing (original draft), Writing (review & editing)
Matthew Jaquiery: Conceptualization, Writing (review & editing)
Tsvetomira Dumbalska: Conceptualization, Writing (review & editing)
References
Besançon, L., Peiffer-Smadja, N., Segalas, C., Jiang, H., Masuzzo, P., Smout, C., ... & Leyrat, C. (2021). Open science saves lives: lessons from the COVID-19 pandemic. BMC Medical Research Methodology, 21(1), 1-18.
Bruton, S. V., Brown, M., & Sacco, D. F. (2020). Ethical consistency and experience: An attempt to influence researcher attitudes toward questionable research practices through reading prompts. Journal of Empirical Research on Human Research Ethics, 15(3), 216-226.
Chambers, C. D. (2013). Registered reports: a new publishing initiative at Cortex. Cortex, 49(3), 609-610.
Chambers, C. D., & Tzavella, L. (2020). Registered reports: Past, present and future. https://doi.org/10.31222/osf.io/43298
Fang, F., & Casadevall, A. (2011). Retracted science and the retraction index. Infection and Immunity 79(10), 3855-3859.
Frith, U. (2020). Fast lane to slow science. Trends in Cognitive Sciences, 24(1), 1-2.
Grahe, J. E., Cuccolo, K., Leighton, D. C., & Cramblet Alvarez, L. D. (2020). Open science promotes diverse, just, and sustainable research and educational outcomes. Psychology Learning & Teaching, 19(1), 5-20.
Grimes, D. R., Bauch, C. T., & Ioannidis, J. P. (2018). Modelling science trustworthiness under publish or perish pressure. Royal Society Open Science, 5(1), 171511.
Hardwicke, T. E., Mathur, M. B., MacDonald, K., Nilsonne, G., Banks, G. C., Kidwell, M. C., ... & Frank, M. C. (2018). Data availability, reusability, and analytic reproducibility: Evaluating the impact of a mandatory open data policy at the journal Cognition. Royal Society Open Science, 5(8), 180448.
Houtkoop, B. L., Chambers, C., Macleod, M., Bishop, D. V., Nichols, T. E., & Wagenmakers, E. J. (2018). Data sharing in psychology: A survey on barriers and preconditions. Advances in Methods and Practices in Psychological Science, 1(1), 70-85.
Ioannidis, J. P. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124.
Klein, R. A., Ratliff, K. A., Vianello, M., Adams, R. B., Jr., Bahník, Š., Bernstein, M. J., . . . Nosek, B. A. (2014). Investigating variation in replicability: A “many labs” replication project. Social Psychology, 45(3), 142-152. http://dx.doi.org/10.1027/1864-9335/a000178
Kowalczyk, O., Lautarescu, A., Blok, E., Dall'Aglio, L., & Westwood, S. (2020). What senior academics can do to support reproducible and open research. PsyArXiv: https://psyarxiv.com/jyfr7/
Magee, A. F., May, M. R., & Moore, B. R. (2014). The dawn of open access to phylogenetic data. PLoS One, 9(10), e110268.
Moshontz, H., Campbell, L., Ebersole, C. R., IJzerman, H., Urry, H. L., Forscher, P. S., ... & Chartier, C. R. (2018). The Psychological Science Accelerator: Advancing psychology through a distributed collaborative network. Advances in Methods and Practices in Psychological Science, 1(4), 501-515.
Munafò, M. R. (2017). Improving the efficiency of grant and journal peer review: registered reports funding. Nicotine & Tobacco Research, 19(7), 773-773.
Munafò, M. R., Chambers, C. D., Collins, A. M., Fortunato, L., & Macleod, M. R. (2020). Research culture and reproducibility. Trends in Cognitive Sciences, 24(2), 91-93.
Naudet, F., Siebert, M., Boussageon, R., Cristea, I. A., & Turner, E. H. (2021). An open science pathway for drug marketing authorization—Registered drug approval. PLoS Medicine, 18(8), e1003726.
Nosek, B. A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S. D., Breckler, S. J., ... & Yarkoni, T. (2015). Promoting an open research culture. Science, 348(6242), 1422-1425.
Nosek, B. A., & Errington, T. M. (2017). Reproducibility in cancer biology: Making sense of replications. Elife, 6, e23383.
Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251).
Plesser, H. E. (2018). Reproducibility vs. replicability: a brief history of a confused terminology. Frontiers in Neuroinformatics, 11, 76.
Scheel, A. M., Schijen, M. R., & Lakens, D. (2021). An excess of positive results: Comparing the standard Psychology literature with Registered Reports. Advances in Methods and Practices in Psychological Science, 4(2), 25152459211007467.
Siegel, J. A., & LaMarre, A. (2019). Navigating “publish or perish” as qualitative researchers. https://socialsciences.nature.com/posts/54648-navigating-publish-or-perish-as-qualitative-researchers
Simonsohn, U. (2013). Just post it: The lesson from two cases of fabricated data detected by statistics alone. Psychological Science, 24(10), 1875-1888.
Wilkinson, M. D., Dumontier, M., Aalbersberg, I. J., Appleton, G., Axton, M., Baak, A., ... & Mons, B. (2016). The FAIR Guiding Principles for scientific data management and stewardship. Scientific Data, 3(1), 1-9.