Written Evidence Submitted by the University of Edinburgh



About the University of Edinburgh

The University of Edinburgh is a world-leading research-intensive University, with an ambition to address tomorrow's greatest challenges. We pursue this through a values-led approach to teaching, research, and innovation, and through the strength of our local and global relationships.

The range and scale of the University’s research is extensive, with more than 7,600 academics and 5,750 postgraduate research students across the sciences, social sciences, and humanities. We are a leading UK university in terms of research quality and output, and our researchers secured £359m in external research funding in 2020/21.

In recent years the University has sought to consolidate the quality of our research through a number of linked initiatives including (i) the report of a working group advising on responsible use of research metrics; (ii) a research culture survey, with articulation of a research culture strategy and the establishment of a research culture committee; (iii) institutional membership of the UK Reproducibility Network; and (iv) appointment of an academic lead for research improvement and research integrity.


The issues in academia that have led to the reproducibility crisis

Our primary concern as a research institution is that the findings of our research should be useful to research users. Where we or others have sought and failed to replicate the findings of our own research or that of others, we recognise at least four potential reasons for this:

Category 1

A valid research claim was made based on the observed data, but the statistical test had returned a Type I or “false positive” error.
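The scale of this first category can be illustrated with a short simulation (our own sketch, not part of the original argument; the function name and parameters are invented for illustration): when no true effect exists, a conventional two-sided test at the 5% significance level will still declare "significant" differences in roughly 5% of experiments.

```python
import math
import random

def false_positive_rate(n_experiments=20000, n=30, z_crit=1.96, seed=1):
    """Simulate experiments in which the null hypothesis is true
    (both groups drawn from the same N(0, 1) population) and count
    how often a two-sample z-test still declares "significance"."""
    rng = random.Random(seed)
    false_positives = 0
    for _ in range(n_experiments):
        a = [rng.gauss(0.0, 1.0) for _ in range(n)]
        b = [rng.gauss(0.0, 1.0) for _ in range(n)]
        # Difference in means, standardised by its known standard error.
        z = (sum(a) / n - sum(b) / n) / math.sqrt(2.0 / n)
        if abs(z) > z_crit:
            false_positives += 1
    return false_positives / n_experiments

rate = false_positive_rate()  # close to the nominal 0.05
```

Each such false positive is a valid claim about the observed data that nonetheless fails to replicate, with no fault attaching to the researcher.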

Category 2

The claim that was made was valid under the particular circumstances under which it was tested but is not observed under the circumstance in which replication was attempted. These different circumstances may be obvious or subtle, and their impact on the observed phenomena may or may not be important in understanding the question at hand.

Category 3

The observations may have been due to sub-optimal study designs which, for instance, allow experimenter bias, selective data presentation, or hypothesising after the results are known. These are generally considered questionable research practices, with varying degrees of researcher culpability.

Category 4

The research claim may have been made following deliberate researcher malfeasance such as falsification or fabrication.

In our view the first and second explanations cannot be characterised as part of a “reproducibility crisis” because they are integral to the scientific process. They only cause problems where research findings become part of a canon of belief without appropriate scrutiny or replication, and where our research ecosystems discourage the funding, conduct and publication of replication studies and of neutral results.

The third explanation could be characterised as contributing to a crisis in reproducibility in that it relates to avoidable harms. We understand that the expectations placed on researchers may encourage the learning (perhaps unconsciously) of approaches which are more likely to provide exciting, significant, publishable results, and that this impacts the integrity of research claims made; but we do not consider this a deliberate manipulation or subversion of the research process. Rather, these might provide a focus for improvement based on training, audit, and the provision of resources and incentive structures which enable and encourage researchers to do their best work. This third category relates to the integrity of a research claim.

There has been much written about the main drivers for this third category, with a consensus that the research ecosystem (and, in particular, systems of funding, publication and promotion) conditions researchers to behave in ways which maximise the prospects for "success" in these terms but may have adverse consequences for the reliability of research outputs.

In contrast, deliberate researcher malfeasance (the fourth explanation) is completely unacceptable. This fourth category relates to the integrity with which the researcher conducted the research. It is, we believe, important to distinguish between the contributions of research integrity (category 3) and of researcher integrity (category 4) to the reproducibility crisis. While the impact of an individual instance of compromised researcher integrity is substantial, the aggregate impact of more prevalent problems with research integrity is likely much greater.

The research community will be most efficient when failed replication efforts are never due to issues of research integrity or of researcher integrity, as this would allow focus on the scientific reasons for why two apparently similar experiments should reach different conclusions.

Importantly, concerns about reproducibility and research integrity feed into wider issues of public trust in science and in the policy which this informs. It is therefore crucial that we ensure a well-founded credibility and authority for science in public debates and amongst policy makers, as they seek to apply and exploit research findings for the public good.


The role of the following in addressing the reproducibility crisis:

Research funders, including public funding bodies

There is now a substantial corpus of research which explores the causes of the behaviours and research approaches that lead to poor reproducibility, and which describes the prevalence of such problems and their consequences for the progress of science. However, much less research effort has been devoted to evaluating the effectiveness of interventions which might improve research reproducibility. Many experts and expert groups have articulated possible solutions, but very few of these have been subjected to rigorous tests of whether they work as expected. Where such tests have been conducted, they often find that credible interventions are in fact without effect (see for instance https://doi.org/10.1186/s41073-019-0069-3).

In our view, the generation of evidence for what works in research improvement is an important and legitimate topic for research funding. While there are some funding schemes relevant to aspects of such work (for instance the UKRI MRC "Better Methods – Better Research" funding panel), they do not have a specific remit for developing and testing interventions in research improvement. Further, a three-year funding cycle discourages the development of interventions which may require several years to show a sustained benefit. These are missed opportunities, and provision of ring-fenced funding for research on research is desirable.

Replication studies provide exceptional value for money, in that they have a high probability (40-80%, depending on the field) of overturning existing knowledge. An efficient research funding model would contain a substantial stream dedicated to prospective replication studies. Within a given research field, the establishment of multicentre consortia to conduct such research would also – by emphasising the importance of preregistration and rigorous experimental design – have beneficial effects on other (non-replication) research conducted by consortium members.


Research institutions and groups

As an institution, we seek to address issues of replication by firstly having robust systems in place to identify issues of researcher integrity, to respond to internal or external allegations of such activity, and to take appropriate action. These systems are under constant review to ensure their effective operation, and we are moving towards greater transparency in our reporting of research misconduct issues.

Secondly, and we believe more importantly, we recognise that we have a critical role in engendering research integrity (category 3 above). We try to do this through encouraging appropriate researcher behaviours by ensuring they have [1] the capabilities to do excellent research, through education, training and mentoring; [2] opportunities to do so, for instance through the provision of tools to support open publication, data sharing and pre-registration; and [3] motivation, for instance through exploring ways in which our appointment and promotion criteria can be more concerned with the research that was done than where it was published or how much grant income it generated, and also how these criteria might be more sensitive to broader contributions to good research citizenship.

Of course, the behaviours of researchers and research institutions are deeply ingrained and shaped by external as well as internal forces. For instance, while a PhD student may be entirely confident that their own institution recognises and rewards excellence in research conduct, if they do not have this confidence in other institutions where they might seek future employment, they may feel obliged to pursue more conventional (grants in and papers out) markers of esteem. Enabling a "levelling up" across UK institutions, for instance through organisations such as the UK Reproducibility Network, will help address this concern.

Further, we believe that there may be substantial opportunities in applying improvement methods, widespread in healthcare, industry and even sporting endeavour, to the process of producing research. As an institution we are adopting this approach. First, we establish, within our various research communities, what they consider to represent best research practice. This may be informed, for instance, by external guidelines for the conduct and reporting of research, by research on research, and by the needs of research users. We then evaluate our current performance and develop strategies to improve it. These are then piloted in small tests of change. If performance improves, we adopt the change; if not, we revise the strategy and try again. We are in the first stages of implementing this approach, and early projects include efforts directed at increasing the completeness of clinical trial reporting; increasing the proportion of a group's research outputs which are "Open"; and reducing the interval between completion and publication of research projects. Through focussing on delivering improvements in the way we do research, we seek to embed a system of "quality by design", complementing the external "quality by outputs evaluation" involved, for instance, in the REF.


Individual researchers

Our research staff are our greatest research asset. We believe it is part of our responsibility to them to create an environment where they can do their best research. Specifically, we seek to provide opportunities for their continuing professional development as independent researchers, including having time for reflection on the strengths and weaknesses of their current research approaches, and the development of skills in research improvement approaches (see above).



Research publishers

We acknowledge the efforts of major scientific publishing groups to improve the quality of reporting of the work which they publish, and their adoption of the Registered Report format. We recognise their role as a research partner, for instance in the evaluation of the peer review process. However, publishers should give much greater prominence to replication studies; at times they have been reluctant to accept manuscripts describing replication studies even when they have given an editorial commitment to do so. Further, some of their behaviours in discussions around open access publication have had more to do with their proprietary interests than with the promotion of effective scientific communication. Specifically, their attempts actively to diminish the contribution of alternative channels such as preprint servers, and their depth of engagement in discussions about how best to ensure full and immediate open access to UK research outputs, have both been disappointing.


Governments and the need for a unilateral response or action

We expect that governments will seek to maximise the value of taxpayer investment in research. This value comes not just from research outputs, but also from closer interactions between academia and industry; the economic contribution of start-up and spin-out companies; and the availability of a skilled research workforce for recruitment to the private sector. To maximise this value, we therefore need to focus not only on the reliability, credibility, and reproducibility of research outputs, but also on ensuring that our research workforce is well versed in the approaches to increasing reproducibility described herein. It would be surprising if government did not seek to ensure that intermediate agencies such as UKRI sought to maximise the reproducibility of the work which they funded.

On the need for unilateral action, it is unlikely that substantial progress will be made without the complementary (if not co-ordinated) action of all research stakeholders. However, if we require involvement of all stakeholders we may be waiting for a long time. It would be helpful, therefore, to encourage initial unilateral efforts (for instance within institutions) while at the same time supporting approaches which bring together different stakeholders (such as the UK Reproducibility Network) to align strategies and to share best practice.

What policies or schemes could have a positive impact on academia’s approach to reproducible research?

We believe that the early steps which we are taking as an Institution, described above, will have a positive impact on the reproducibility of our research. Essentially this rests on creating an environment where the quality of the research process is valued more than the results of that research.

A key insight is that issues of reproducibility (what one might consider the provenance of a research claim) are likely to be problematic in most disciplines, and we should proceed on the basis that they are present unless we can demonstrate that they are not.

Finally, the precarious nature of research careers is such that early career researchers, and their supervisors, spend much of their time focussing on where they will get their next three years of funding – or what they will do if they do not. This continual pressure is one reason for the drive to publication, even if the work is not quite ready and the findings are not quite secure. Re-shaping research careers to reduce the metronomic requirement for stellar results would go some way to improving research practice. Of several options, the provision of “run through training” from first post-doc to independence, with funding secured for 7-10 years, might be helpful. Another approach would be to shift the balance of salary source, so that a higher proportion came in core funding and less came through research grants. These are complex issues, and a national review of research career pathways may be helpful.


Would establishing a national committee on research integrity under UKRI impact the reproducibility crisis?

Research integrity is an issue in every aspect of the work of UKRI, and there is a risk that the establishment of a national committee could lead other parts of the organisation to think that responsibility for research integrity did not lie with them. It is also critically important that UKRI makes a distinction between research integrity and researcher integrity (see above). In this light, we agree that it would not be appropriate for the UKRI RIC to have any role in the evaluation or re-evaluation of institutional enquiries into issues of researcher conduct or misconduct. The proposed role in "championing research integrity" is welcome but should go further. This enhanced role should include (i) an annual audit of UKRI's efforts to support research integrity; (ii) being empowered (with a budget) to fund research in and pilot implementation of research integrity improvement projects; and (iii) developing proposals for how an institution's systems to support research improvement can be recognised in UKRI funding decisions and in research evaluation exercises (rewarding systems which provide "quality by design", as described above).



(September 2021)