Written Evidence Submitted by Stephen Bradley, Nicholas DeVito, Kelly Lloyd, Jess Butler, David Mellor and Patricia Logullo
This submission was written by Stephen Bradley, a GP and Research Fellow at the University of Leeds, Nicholas DeVito, a Doctoral Research Fellow at the University of Oxford, Kelly Lloyd, a PhD Researcher at the University of Leeds, Jess Butler, Research Scientist at the University of Aberdeen, David Mellor, Director of Policy, Center for Open Science, USA and Patricia Logullo, Postdoctoral Meta-Researcher at the University of Oxford.
Specific recommendations are presented throughout the submission in bold red text. Several of these actions are reframed from the perspective of the different stakeholders in medical research in Section 8: What can stakeholders do to improve reproducibility of medical research?
The UK holds important competitive advantages in the sciences, particularly in medical research. It is imperative that the substantial sums of public funding the UK spends to hold this advantage deliver findings which are reliable. If not, the UK’s R&D cannot support the innovation that benefits patients, taxpayers and the economy. Unfortunately, systemic problems undermine the rigour of medical research and mean that much R&D investment is wasted.
Mediocre standards and poor reliability of medical research are a worldwide problem, and so improvements in research rigour present the UK with an excellent strategic opportunity to become the leader in robust medical research. Here we propose simple measures to reduce funding waste in medicine which the UK can leverage for much greater impact from existing investments, ensuring it remains a dynamic centre of research for researchers who are committed to genuine scientific discovery.
How do we know if research is true? The findings of research are only useful if similar results would be found if the research is repeated. ‘Reproducibility’ is a broad term that covers many issues affecting whether the results can be trusted. Why might research be unreliable?
Fraud is an important problem and also leads to non-reproducible research, but the most common issues that undermine reproducibility do not require deliberate misconduct. Instead, common biases and systemic problems incentivise scientists to carry out work in ways that mean their research is not reproducible. As research integrity has already been covered by the select committee’s previous inquiry, this submission will address these systemic issues rather than fraud and misconduct.
Only a small number of attempts have been made to reproduce findings in medical research so it is difficult to quantify the scale of the problem with confidence. We do know that when attempts have been made to reproduce findings from research into new cancer drugs, only a minority have succeeded.(1, 2) An approximate, but plausible, estimate suggests that around 85% of all expenditure on medical research is wasted because of the kind of problems that make research non-reproducible.(3, 4) Even if this is a gross over-estimate, because the UK government spends over £2bn on medical research per year, it is certain that colossal sums of public as well as charitable funding are lost to avoidable waste.(5)
Such waste causes great harm because of the opportunity cost of the discoveries we forgo or postpone, and the wasted efforts that occur when multiple groups pursue similar dead ends that were never shared. When misleading or incorrect findings are generated from poorly conducted research this results in more direct harm. Misleading or incorrect findings in the literature can influence policy decisions that impact the lives of thousands or millions of people. Medical guidelines are often formulated by putting together many different studies (evidence synthesis), but if the individual studies are not trustworthy then the conclusions that informed this guidance will be misleading and could well be harmful.
The coronavirus pandemic has highlighted just how vulnerable our medical research system is to these problems. For example, during the early days of the coronavirus pandemic, the possibility that the anti-malarial drug hydroxychloroquine might be beneficial was much debated because of findings of a small number of low quality studies. By the time further high quality trials had shown that hydroxychloroquine has no benefit for preventing or treating coronavirus, many patients around the world were exposed to unnecessary harm by taking the medication and much scientific capacity was diverted to debating whether it might work.(6)
Poor research practices, however, also impacted research that cast doubt on hydroxychloroquine’s safety and efficacy. Readers of a health database study on hydroxychloroquine published in the prestigious journal The Lancet raised serious questions about the validity of the underlying data. Following an investigation, in which the underlying data could not be accessed for verification, this publication and another by the same group in the New England Journal of Medicine were retracted. Because the data was not shared or made accessible to third parties, it could not be checked. If not for the high profile of hydroxychloroquine these issues may never have been identified.(7) Similar issues have arisen regarding the reproducibility of research into the anti-parasitic drug ivermectin, which has been used in many countries despite the lack of any persuasive confirmatory evidence of its effectiveness.(8)
Although most UK medical research is funded by the public through taxes and charity donations, there is remarkably little onus on researchers to share data, describe how they conduct their research, or provide their full results. The dissemination of research continues to be bound by the centuries-old conventions of printed, private journals, with details summarised in a few thousand words. Understanding how studies were actually carried out requires full transparency of study protocols and analytic code to verify results and conclusions, as well as access to the data used in research. Without the context provided by such information, the validity of published results has to be taken on trust.
Most major journals now have policies that nominally require that authors make data, analytic code and protocols available. Unfortunately, in practice, these requirements are routinely ignored.(9-11) There are legitimate obstacles to sharing some types of data, particularly when there are concerns around confidentiality which mean that in many cases unrestricted sharing of all data is not possible. But frequently there are ways de-identified data can be curated with secure safeguards or the consent of study subjects that allow data to be shared. In many cases blanket invocations of ‘data protection’ are in effect a convenient way to evade scrutiny.
Recommendation: Journals should ensure authors fulfil their obligations to share study documentation, analytical code and research data
Reasons why researchers do not routinely share their data are numerous but include the lack of strong incentives to do so and insufficient expectations of transparency within academia.(12, 13) The OpenSAFELY project has provided one model throughout the COVID pandemic for safe and secure access to patient records for emergency research in the UK.(14)
Data sharing is not just necessary to spot misrepresentations or over-hyped findings. In the information age the extra scrutiny that can be obtained from ‘crowd sourcing’ expertise to identify errors or different explanations for results is invaluable. If data is not shared, other analyses that might benefit patients cannot be performed, leading to further waste.(15) Too often research data that is donated by patients to help others is hoarded as private property, instead of being shared so that it can achieve the maximum benefits possible.(15, 16)
Recommendation: Funders should ensure researchers share study documentation, analytical code and research data independent of publication in traditional journals.
Several reporting guidelines have been developed for different types of studies to improve the transparency and accuracy of how research is reported. Journals and other institutions routinely require researchers to state they have abided by these guidelines, but this is usually not verified and much more could be done to improve compliance.(10, 17, 18) The EQUATOR network, with its UK centre based in the University of Oxford, promotes improved reporting of research, including through training and support for researchers and resources such as their database of reporting guidelines.(19)
Recommendation: Study reporting guidelines should be promoted by funders and research institutions along with mechanisms to verify claims that research is compliant with these guidelines
It is well established that results which are ‘positive’ or novel are much more likely to be published.(20) This happens not only because journals are more likely to be interested in publishing eye-catching findings but also because researchers are less likely to submit results which they perceive to be less interesting.(21) There is little incentive to put in the work to publish results if they are not viewed as impactful. The result is that the published record is distorted to privilege only certain kinds of results. One study found that 96% of published research reported findings that were ‘statistically significant’, which would be a mathematical impossibility if the published record were representative of all research carried out.(22) While regulators may have access to more comprehensive results, the public is left with a literature biased towards ‘positive’ findings. Where unpublished results are taken into account, the evidence of benefit for these interventions may diminish or disappear altogether.(20, 23)
In the field of clinical trials there have been significant developments to address this situation, with requirements from European and US regulatory agencies for sponsors of medical interventional trials to share results regardless of the study outcomes. In the UK considerable progress has resulted thanks to the Science Select Committee’s 2018 inquiry, with the Health Research Authority taking up the agenda that all research findings should be published through their #MakeItPublic campaign and their recent announcements to mandate and track this.(24) Existing evidence shows that while many trials covered under these regulatory schemes share their results, gaps remain.(25, 26) As the HRA begins to implement its new strategy to increase transparency in clinical research, ensuring there are proper tools for both education and enforcement of these requirements is essential.
Recommendation: The UK government should reinforce compliance with the #MakeItPublic campaign’s transparency policies by introducing sanctions, such as financial penalties, for companies, universities and NHS Trusts that do not make the results of their clinical trials public.
It is possible to monitor whether clinical trials have been published because there are requirements in place at many leading journals that trials are ‘pre-registered’ in a clinical trial registry before they commence.(27) Clinical trials, however, are only one type of research and constitute a minority of all medical studies. For other types of studies, including observational studies that have informed guidance throughout the coronavirus pandemic, there is little expectation that studies are pre-registered. This makes it impossible to monitor what research is planned, whether those plans have been followed, and if the results are eventually published. Pre-registering all types of research is now straightforward and cost-free using platforms such as the Open Science Framework.(28) These pre-registrations can and should be used to promote transparency and accountability throughout the research system.
Recommendation: Institutions including research funders, ethics committees, and universities should establish the expectation that all research is registered prior to commencement whenever possible and the select committee should challenge these stakeholders as to why this currently does not happen
Although study pre-registration enhances transparency and should be much more widely utilised, it is quite common to find deviations between the methods and research questions set out beforehand on registers and what was later published. When prespecified outcomes are altered without acknowledgement and justification this is known as ‘outcome switching’, a form of reporting bias. It is only because of registration that outcome switching can be identified at all, but detecting these differences is painstaking and, when pointed out, journals often fail to take action.(10) Related questionable research practices, like manipulating analyses to generate a statistically significant result (‘p-hacking’) and amending study hypotheses retrospectively to suit the result found (Hypothesising After the Result is Known, or ‘HARKing’), can also happen when research is not pre-registered or the registrations are not routinely checked against final reports.(29)
A more robust publishing format, called Registered Reports, could help address reporting and publication bias for almost all types of research.(30) Using Registered Reports, peer reviewers assess proposed research plans submitted by researchers before they undertake the research. If these are deemed satisfactory the journal commits to publishing the research, whatever the findings, as long as the research has been carried out as approved. By ensuring journals make a decision on whether to publish before the results are known, Registered Reports help discourage other biases, since researchers will have less incentive to come up with eye-catching results. Early evidence suggests that Registered Reports are working as intended by decreasing outcome switching and improving the quality and rigour of proposed study designs by providing peer review before the research has been conducted.(31, 32)
Medicine lags behind other disciplines, like psychology, in adopting Registered Reports, with only around 1% of medical journals offering the format.(33) Consequently, even medical researchers who would be motivated enough to model good practice by publishing through Registered Reports have virtually no outlet to publish using this format. A small number of UK based medical journals, including the British Journal of General Practice, have adopted Registered Reports; however, none of the so-called ‘big five’ medical journals have so far done so.(34) The major journals have an important role in improving standards in the industry as a whole as they have a great deal of influence, and where they lead, many thousands of smaller journals are likely to follow. Two of the ‘big five’ are based in the UK: The Lancet and the BMJ.
In April 2021 we collaborated with the Center for Open Science on a campaign to lobby medical journal editors, asking them to begin to offer Registered Reports. We contacted 84 editors of the world’s major medical journals. We received only 13 replies, few of which offered any clear explanation as to why they would not adopt Registered Reports. To our knowledge none of the journals contacted has since changed their policy with respect to Registered Reports. We feel that journal publishers, most of whom rely upon substantial support from the taxpayer through the subscriptions of public institutions and provision of editorial services from academics, should be able to explain why they will not countenance the additional transparency and rigour that Registered Reports would bring.
Researchers tend to prefer to publish their research in the most prestigious journals, such as the ‘big five’. Funders should consider incentivising researchers to publish through the few journals that do offer the format, which could help promote the uptake of Registered Reports by other journals. There is precedent for this, as many funders already either mandate or strongly encourage that publications are not placed behind paywalls (‘open access’), a positive development that has transformed publishing. Incentives could include establishing funding pathways which integrate Registered Report peer review with the evaluation of funding applications(35) and/or offering a cash bonus to institutions and researchers who publish using a Registered Report.
Recommendation: Funders should incentivise researchers to publish using the Registered Reports format and establish funding pathways which integrate evaluation of funding proposals with initial peer review of Registered Reports
It is also very difficult to find out if researchers have conflicts of interest. Currently, researchers declare conflicts of interest in a statement at the end of publications. These statements are very brief and often omit important potential conflicts, including sources of funding.(36) The RetractionWatch project has logged numerous instances of problematic findings, and eventual retractions of articles, due to undisclosed conflicts of interest and the risk of bias they present.(37) There are voluntary registers for people who have received payments from the pharmaceutical industry and for doctors.(38, 39) Because there is little incentive for individuals to make such declarations, these registers are greatly underutilised, with only 0.002% of those registered with the General Medical Council (GMC) listed on the doctors’ voluntary register.(40) Patients, the public and other scientists are entitled to be able to easily check whether any doctor or researcher has conflicting interests. A central register could be readily established with researchers indexed using the unique identity numbers which are already required by institutions and funders.(41)
Recommendation: A central, mandatory register of interests for all those who are involved in medical research should be established, with an expectation that individuals maintain the accuracy of their declarations and that this is checked by their employers during appraisals.
Decisions about who to hire and promote in academia are often informed using reductive metrics such as the impact factor of journals in which researchers have published or the amount of grant income they have won. Such metrics do not reflect the quality of research itself and instead incentivise researchers to generate results which are perceived to be exciting or newsworthy, rather than prioritising genuine scientific discovery that benefits patients. There has been recent controversy over this issue in the UK with the University of Liverpool’s use of certain metrics to inform staffing cuts.(42) Practices that support high quality research by improving transparency and reducing bias, such as registering studies and publishing all results, are not typically used to appraise performance in academia. Initiatives which seek to mitigate dysfunctional incentives and promote practices that are conducive to reproducible research are becoming more widely established.(43) Such efforts should be supported and expanded.
Recommendation: Institutions should stop using metrics such as journal impact factor when assessing researchers and evaluate research on the basis of quality, reproducibility and societal value as set out in the San Francisco Declaration on Research Assessment (DORA)
Outside of academia, doctors and other professionals who may have no sincere interest in research are often pushed into performing studies, which may consequently be of low quality, because publications are used as criteria for promotion and selection into training programmes. There is no convincing reason to suggest, or any plausible grounds to believe, that because a doctor has published a research paper they will be a better clinician. Pushing clinicians to spend time undertaking research for their career advancement is frustrating for many doctors who would prefer to concentrate on becoming better clinicians or on preventing burnout by enjoying their free time.
Recommendation: Organisations such as the UK Foundation Programme and Medical Royal Colleges should stop awarding points to applicants for publications in selection procedures for non-academic medical jobs
It was observed decades ago that much time and money could be saved and more useful discoveries made if we had ‘less research, better research and research done for the right reasons’.(44) Since then the pressures to produce research have led to an extraordinary proliferation of published research, much of which is of very low quality.(45)
Aside from problematic incentives, there are wider problems with the culture within institutions that also contribute to non-reproducible research. These include insufficient understanding of the necessity of open science practices, such as registration of studies before they commence, and of the need for expertise in appropriate statistical methods. There is also intense pressure on academics to publish frequently in order to be promoted, leading to a high volume of poor-quality research publications.
Many valuable initiatives have been established to address these problems, some of which are listed in Table 1. Provision of training and mentorship is improving and the establishment of bodies like the UK Reproducibility Network to take a pro-active role in promoting reproducibility across disciplines and institutions is particularly welcome.(40, 46, 47) Practical initiatives like the EQUATOR Network have contributed greatly to raising standards by producing comprehensive guidelines and resources that support journals and researchers to improve how they design and report research.(48) Increasingly, major funders like the Wellcome Trust are also doing more to promote improved cultures in research.(49, 50) However, engagement of medical researchers with cross-disciplinary initiatives to improve research culture is inconsistent and more needs to be done to ensure that medical research is at the forefront in adopting measures that lead to the production of reproducible research.
Recommendation: Encourage medical research institutions to support initiatives to improve research culture and engage with important cross-disciplinary efforts such as the UK Reproducibility Network.
Table 1: Initiatives and organisations working to reduce waste and improve the openness and quality of research.
Voluntary register of doctors’ declared interests
Open Science Badges: Badges appended to publications to acknowledge and incentivise open science practices
Originated in 2014 Lancet series on waste in research. Promotes efforts to increase the value of research and reduce waste in research.
San Francisco Declaration on Research Assessment (DORA): Initiative which calls for improvement in how research quality is evaluated
UK Reproducibility Network (UKRN): Initiative which promotes the practices of open science
Campaign to ensure all clinical trials are registered and published. Highlights problem of publication bias, e.g. through ‘unreported clinical trial of the week’ and trial trackers which monitor reporting performance
Enhancing the QUAlity and Transparency Of health Research (EQUATOR): International network which promotes transparent and accurate reporting and wider use of robust reporting guidelines
Campaigning organisation which advocates for registration and full reporting of clinical trials
Improving Methodological & Statistical Practices
Open Science Framework (OSF): On-line platform which facilitates open sharing and preregistration of research
Oxford – Berlin summer school on open research: Training for researchers organised by the QUEST Center for Transforming Biomedical Research and Reproducible Research Oxford
Evidence-Based RESearch (EVBRES): European network established to promote evidence-based clinical research, particularly the need to use systematic reviews when planning new studies and when placing new results in context
The authors of this submission have supported a call to prioritise three of the specific recommendations which have been outlined in this submission (the Declaration to Improve Health Research). These measures were selected because they can be readily implemented and would yield significant impact in improving the reproducibility of medical research.(51) We believe implementation of these represents a minimum level of transparency that patients, taxpayers and the public are entitled to expect from the research they fund and rely on.
More information is available at www.ImproveHealthResearch.com and the ways in which these measures would improve health research are summarised in Table 2.
The proposed measures are:
1) Mandatory registration of interests for all people and institutions who conduct and publish health research on a single on-line platform accessible to all;
2) That journals and funders support uptake of Registered Reports (which includes pre-study methodological review and results-blind publication of research); and
3) Pre-registration of all publicly-funded research on a central research registry that is accessible to all, along with protocols, analytic code and, where possible, research data
The rationale for these measures is outlined in detail elsewhere.(33, 52) So far, over 100 organisations, academics and patients have supported the call for these simple measures to be implemented.(53) Their names are listed at www.improvehealthresearch.com/signatories
Table 2: Problems in medical research and how they can be mitigated by authors’ proposed strategy

Problem: Tendency for results deemed ‘negative’ or ‘uninteresting’ to remain unpublished
How the proposed solution(s) address the problem: Study accepted for publication based on methods, not results; study results and documents made available, regardless of publication status

Problem: Practice of presenting the results of a study as more striking, ‘positive’ or newsworthy than warranted (‘spin’)
Relevant proposed solution(s): Mandatory Declaration of Interests
How the proposed solution(s) address the problem: Reduced incentive to ‘spin’ to obtain publication; study documentation available to allow greater scrutiny of researchers’ claims; information on possible conflicts of interest allows peers to judge if researchers have a vested interest in applying spin to the study

Problem: Deliberate falsification of evidence, for example fabrication of results
How the proposed solution(s) address the problem: Availability of full study documentation allows peers to scrutinise results; researchers are compelled to demonstrate ‘not just the answer but their working out’

Problem: Non-adherence to reporting checklists, and inaccurate self-disclosure by researchers of fulfilment of checklist statements
How the proposed solution(s) address the problem: Peers can scrutinise methods from available study documentation

Problem: Researchers generate hypotheses to fit results and present these as if formulated prior to obtaining results (‘HARKing’)
How the proposed solution(s) address the problem: Hypotheses and aims are agreed prior to undertaking research; any further post hoc analyses are declared as such

Problem: Researchers manipulate results until findings are generated which satisfy statistical significance (‘p-hacking’)
How the proposed solution(s) address the problem: Analyses agreed prior to generation of results; analysis plans and code available to peers for scrutiny

Problem: Researchers do not report certain outcomes, or switch primary and secondary outcomes, to highlight favoured results (‘outcome switching’)
Relevant proposed solution(s): Mandatory Declaration of Interests
How the proposed solution(s) address the problem: Outcomes of interest agreed prior to undertaking research; protocols and analysis plans made available to peers for scrutiny; conflicting interests which could engender bias made known to public and peers

Problem: Other questionable research practices, including deciding to collect more data after inspecting results, selective rounding of p-values, and selective reporting of dependent variables
How the proposed solution(s) address the problem: Methods are agreed prior to publication, removing the incentive to generate results which favour publication; protocol and analysis plan made available to peers for scrutiny

Problem: Undisclosed conflicts of interest: researchers may have, or could be perceived to have, a vested interest in obtaining certain outcomes in their results
Relevant proposed solution(s): Mandatory Declaration of Interests
How the proposed solution(s) address the problem: Researchers compelled to make a comprehensive statement of their pecuniary interests, gifts and hospitality received, and non-pecuniary interests

Problem: Results unable to be replicated, either because of insufficient information to reproduce methods or because biases in the original study (including the problems in this table) mean the work cannot be reproduced when attempted
How the proposed solution(s) address the problem: Adequate study documentation made available such that the study or its analyses can be repeated
As a condition of funding researchers should be required to:
Funders should also:
Stephen Bradley: I am employed as a General Practitioner for one day a week. I receive funding for PhD study from the CanTest collaborative (Cancer Research UK). I am a member of the executive committee of the Fabian Society, which is a political think tank affiliated to the Labour Party (unpaid). The publication costs of a collection of essays on health inequalities which I co-edited for the Fabian Society were funded by the Association of the British Pharmaceutical Industry and Lloyds Pharmacies; I received no direct funding or payment for this. I sit on the NIHR’s Health Services & Delivery research prioritisation committee (unpaid aside from reimbursement of travel expenses). I am a co-investigator on a study which is funded by Yorkshire Cancer Research (Patient-centred models for surveillance and support of cancer survivors with bowel and breast cancer). I am a member of the steering group of a campaign to improve health research (the Declaration to Improve Health Research). I have previously received funding from the Mason Medical Foundation to undertake a study on chest x-ray and lung cancer diagnosis.
Nicholas DeVito: I am a doctoral student at the DataLab (soon to be the Bennett Institute for Applied Data Science supported by the Peter Bennett Foundation) and the Centre for Evidence-Based Medicine at the University of Oxford and I am supported in my studies by a studentship from the Naji Foundation. I have been employed on grants in the last three years from the Laura and John Arnold Foundation, the Good Thinking Society, and the German Federal Ministry of Education and Research (BMBF). I have also received grant support from the Fetzer Franklin Memorial Fund.
Kelly Lloyd: I am supported by an Economic and Social Research Council studentship [grant number ES/P000745/1]. I am a member of a steering group of a campaign to improve health research (the declaration to improve health research).
David Mellor: I am an employee of the Center for Open Science (COS) in the United States. COS is a non-profit organization, whose mission is to increase transparency and reproducibility in scientific research. COS builds and maintains the open source Open Science Framework (https://osf.io).
Jessica Butler: I am employed by the University of Aberdeen where I currently receive funding for medical research from the Health Foundation and from Wellcome Trust. I am an honorary analyst for NHS Grampian (unpaid). I am on the editorial board of Scientific Reports and Scientific Data (unpaid). I am a member of the UK Reproducibility Network and the Association of Professional Healthcare Analysts.
Patricia Logullo: I am a postdoctoral meta-researcher at the University of Oxford and a member of the UK EQUATOR Centre, an organisation that promotes the use of reporting guidelines, and I am personally involved in the development of some new reporting guidelines or their extensions. I receive funding from Cancer Research UK and the NIHR Biomedical Research Centre for my research work. I am also a member of the Oxford-Brazil EBM Alliance, a not-for-profit organisation interested in disseminating evidence-based medicine principles.
1. Prinz F, Schlange T, Asadullah K. Believe it or not: how much can we rely on published data on potential drug targets? Nature Reviews Drug Discovery. 2011;10(9):712.
2. Kaiser J. Rigorous replication effort succeeds for just two of five cancer papers. Science. 2017.
3. Glasziou P, Chalmers I. Research waste is still a scandal—an essay by Paul Glasziou and Iain Chalmers. BMJ. 2018;363:k4645.
4. Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. The Lancet. 2009;374(9683):86-9.
5. Public sector expenditure on medical research in the United Kingdom (UK) from 2013/14 to 2018/19 [Available from: https://www.statista.com/statistics/298897/united-kingdom-uk-public-sector-expenditure-medical-research/].
6. The RECOVERY Collaborative Group. Effect of Hydroxychloroquine in Hospitalized Patients with Covid-19. New England Journal of Medicine. 2020;383(21):2030-40.
7. Ledford H, Van Noorden R. High-profile coronavirus retractions raise concerns about data oversight. Nature. 2020;582(7811):160.
8. Lawrence JM, Meyerowitz-Katz G, Heathers JAJ, Brown NJL, Sheldrick KA. The lesson of ivermectin: meta-analyses based on summary data alone are inherently unreliable. Nature Medicine. 2021.
9. Wallach JD, Boyack KW, Ioannidis JPA. Reproducible research practices, transparency, and open access data in the biomedical literature, 2015–2017. PLOS Biology. 2018;16(11):e2006930.
10. Goldacre B, Drysdale H, Dale A, Milosevic I, Slade E, Hartley P, et al. COMPare: a prospective cohort study correcting and monitoring 58 misreported trials in real time. Trials. 2019;20(1):118.
11. Naudet F, Sakarovitch C, Janiaud P, Cristea I, Fanelli D, Moher D, et al. Data sharing and reanalysis of randomized controlled trials in leading biomedical journals with a full data sharing policy: survey of studies published in The BMJ and PLOS Medicine. BMJ. 2018;360:k400.
12. Puniewska M. Scientists have a sharing problem. The Atlantic; 2015 [Available from: https://www.theatlantic.com/health/archive/2014/12/scientists-have-a-sharing-problem/383061/].
13. Smith R, Roberts I. Time for sharing data to become routine: the seven excuses for not doing so are all invalid [version 1; peer review: 2 approved, 1 approved with reservations]. F1000Research. 2016;5:781.
14. OpenSAFELY [Available from: https://www.opensafely.org/].
15. Gill J, Prasad V. N of 1 Data Sharing: The Impact of Data Sharing within the Hematology-Oncology Drug Products Division of the US FDA. Trends in Cancer. 2021;7(5):395-9.
16. A matter of trust: OpenPharma; [Available from: https://www.openpharma.blog/blog/guest-posts/a-matter-of-trust/].
17. Blanco D, Biggane AM, Cobo E, Altman D, Bertizzolo L, Boutron I, et al. Are CONSORT checklists submitted by authors adequately reflecting what information is actually reported in published papers? Trials. 2018;19(1):80.
18. Blanco D, Altman D, Moher D, Boutron I, Kirkham JJ, Cobo E. Scoping review on interventions to improve adherence to reporting guidelines in health research. BMJ Open. 2019;9(5):e026589.
19. EQUATOR Network: what we do and how we are organised [Available from: https://www.equator-network.org/about-us/equator-network-what-we-do-and-how-we-are-organised/.
20. Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R. Selective Publication of Antidepressant Trials and Its Influence on Apparent Efficacy. New England Journal of Medicine. 2008;358(3):252-60.
21. Rosenthal R. The file drawer problem and tolerance for null results. Psychological Bulletin. 1979;86:638-41.
22. Chavalarias D, Wallach JD, Li AHT, Ioannidis JPA. Evolution of Reporting P Values in the Biomedical Literature, 1990-2015. JAMA. 2016;315(11):1141-8.
23. Tamiflu Campaign [Available from: https://www.bmj.com/tamiflu].
24. The Health Research Authority moves to make research transparency the norm: Health Research Authority; [Available from: https://www.hra.nhs.uk/about-us/news-updates/health-research-authority-moves-make-research-transparency-norm].
25. Goldacre B, DeVito NJ, Heneghan C, Irving F, Bacon S, Fleminger J, et al. Compliance with requirement to report results on the EU Clinical Trials Register: cohort study and web resource. BMJ. 2018;362:k3218.
26. DeVito NJ, Bacon S, Goldacre B. Compliance with legal requirement to report clinical trial results on ClinicalTrials.gov: a cohort study. The Lancet. 2020;395(10221):361-9.
27. Clinical Trials Registration: International Committee of Medical Journal Editors; [Available from: http://www.icmje.org/about-icmje/faqs/clinical-trials-registration/].
28. Open Science Framework [Available from: https://osf.io].
29. Munafò MR, Nosek BA, Bishop DVM, Button KS, Chambers CD, Percie du Sert N, et al. A manifesto for reproducible science. Nature Human Behaviour. 2017;1(1):0021.
30. Registered Reports: Peer review before results are known to align scientific values and practices: Center for Open Science; [Available from: https://cos.io/rr/].
31. Scheel AM, Schijen M, Lakens D. An excess of positive results: Comparing the standard Psychology literature with Registered Reports [Preprint]. 2020.
32. Soderberg CK, Errington TM, Schiavone SR, Bottesini J, Thorn FS, Vazire S, et al. Initial evidence of research quality of Registered Reports compared with the standard publishing model. Nature Human Behaviour. 2021;5(8):990-7.
33. Bradley SH, DeVito NJ, Lloyd KE, Richards GC, Rombey T, Wayant C, et al. Reducing bias and improving transparency in medical research: a critical overview of the problems, progress and suggested next steps. J R Soc Med. 2020;113(11):433-43.
34. Help Us to Improve Health Research: Center for Open Science; [Available from: https://www.cos.io/blog/help-us-improve-health-research].
35. First Funding Cycle of the Drug Discovery Initiative Registered Report (DDIRR) Awards Announced [Available from: https://www.ctf.org/news/first-funding-cycle-of-the-drug-discovery-initiative-registered-report-ddir].
36. Wayant C, Turner E, Meyer C, Sinnett P, Vassar M. Financial Conflicts of Interest Among Oncologist Authors of Reports of Clinical Drug Trials. JAMA Oncol. 2018;4(10):1426-8.
37. Failure to Disclose COI [Available from: https://retractionwatch.com/category/by-reason-for-retraction/failure-to-disclose-coi/].
38. Disclosure UK: Association of the British Pharmaceutical Industry; [Available from: https://www.abpi.org.uk/our-ethics/disclosure-uk/].
39. Who pays this doctor? [Available from: http://www.whopaysthisdoctor.org/].
40. Bradley SH, DeVito NJ, Lloyd K, Richards GC, Rombey T, Wayant C, et al. Reducing bias and improving transparency in biomedical and health research: A critical overview of the problems, progress so far and suggested next steps [Preprint]. 2020.
41. ORCID: Open Researcher and Contributor ID [Available from: https://orcid.org/].
42. Else H. Row erupts over university's use of research metrics in job-cut decisions. Nature. 2021;592(7852):19.
43. San Francisco Declaration on Research Assessment [Available from: https://sfdora.org/].
44. Altman DG. The scandal of poor medical research. BMJ. 1994;308(6924):283-4.
45. Bornmann L, Mutz R. Growth rates of modern science: A bibliometric analysis based on the number of publications and cited references. Journal of the Association for Information Science and Technology. 2015;66(11):2215-22.
46. The UK Reproducibility Network (UKRN) [Available from: https://bristol.ac.uk/psychology/research/ukrn/].
47. Richards GC, Bradley SH, Dagens AB, Haase CB, Kahan BC, Rombey T, et al. Challenges facing early-career and mid-career researchers: potential solutions to safeguard the future of evidence-based medicine. BMJ Evidence-Based Medicine. 2019:bmjebm-2019-111273.
48. EQUATOR Network [Available from: https://www.ndorms.ox.ac.uk/research/research-groups/equator-network].
49. Research culture: let's reimagine how we work together: Wellcome Trust; [Available from: https://wellcome.org/what-we-do/our-work/research-culture].
50. What researchers think about the culture they work in. Wellcome Trust; 2020.
51. The Declaration to Improve Health Research [Available from: www.improvehealthresearch.com].
52. HealthWatch UK [Available from: https://senseaboutscience.org/].
53. Signatories to the Declaration to Improve Health Research [Available from: https://www.improvehealthresearch.com/signatories].