Written Evidence Submitted by Timothy Bates
Is it universal?
It is somewhat concentrated in the areas where it is most incentivised – influencing policy, or marketing products – but it should be treated as ubiquitous.
Why did it happen?
Partly because little of what is researched matters; partly because few people were checking; partly because people pointing this out were discouraged; and partly because we too seldom examine external outcomes: changes in death rates, stones of weight lost, or new companies listed on the LSE.
Addressing the reproducibility crisis:
1. Research funders should stop asking for “Impact Statements”. These waste time, and incentivise researchers to make extreme claims.
2. Research institutions should incentivise replication and replicable research. For instance, supporting researchers who challenge popular claims, publish failures to replicate, or whose work is independently replicated.
3. Individual researchers will follow the money: Build independent or adversarial replication into the funding mechanisms.
4. Publishers should be left free: We need a free press.
Government policy to positively impact academia’s approach to reproducible research:
#1. Fund replications: Set aside 15% of the budget for straight replications.
#2. Identify candidates for non-replicability: Conduct auctions among researchers (say, permanent staff who have published 5 peer-reviewed papers in the last 5 years, or some similar low bar) to suggest, and then anonymously rank, the papers or claims they think are unlikely to replicate, or which would really matter if wrong and seem to run this risk. Studies show we are very good at this: most bad research is an open secret, or simply appears unlikely to be true.
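The anonymous ranking stage of such an auction is simple to aggregate. The sketch below is purely illustrative – the paper names, the `aggregate_rankings` function, and the choice of a Borda count as the scoring rule are my own assumptions, not part of any existing UKRI mechanism:

```python
from collections import defaultdict

def aggregate_rankings(ballots):
    """Combine anonymous ranked ballots of suspect claims into one
    priority order via a simple Borda count: on a ballot of length n,
    the 1st-ranked claim scores n points, the 2nd n-1, and so on."""
    scores = defaultdict(int)
    for ballot in ballots:
        n = len(ballot)
        for rank, claim in enumerate(ballot):
            scores[claim] += n - rank
    # Highest total score = strongest candidate for a funded replication.
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical ballots from three researchers:
ballots = [
    ["paper_A", "paper_B", "paper_C"],
    ["paper_B", "paper_A"],
    ["paper_A", "paper_C"],
]
print(aggregate_rankings(ballots))  # paper_A tops the list
```

A Borda count is just one reasonable choice; any rank-aggregation rule that preserves anonymity would serve.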
#3. Build pre-registration, open science, and replication into funding. Have all grants pre-register their hypotheses in the funding application. Replace impact statements with power calculations and external indicators of validity. Mandate public and early release of all materials, methods and data. Encourage and support independent and adversarial researchers and groups testing the validity of protocols, code and analyses. When findings are published, set aside money in future grants to replicate the claims.
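The power calculations this point asks for are cheap to mandate. A minimal sketch, using the standard normal approximation for a two-sample t-test (the function name and example effect sizes are my own illustrations):

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sample t-test,
    via the normal approximation, given a standardised effect size d:
    n = 2 * (z_{alpha/2} + z_{power})^2 / d^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# A "medium" effect (d = 0.5) needs ~63 participants per group;
# a "small" effect (d = 0.2) needs ~393 per group.
print(n_per_group(0.5), n_per_group(0.2))
```

Requiring a grant to state this number up front makes underpowered designs visible before money is spent, rather than after a failure to replicate.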
#4. Make large existing and future datasets funded by UKRI open access. Consider separating data acquisition from analysis, and open the data as soon as it is collected. Follow UK Biobank’s fabulous example of funding data, not expensive teams incentivised to stymie access.
#5. Set aside 10% of funds in government programmes to test their outcomes – but also to test all the science used in the political statements used to justify Bills. Test and document the science these claims are based on: what are the data supporting a claim that x% of people suffer y harms? Fund research simply to document which papers are being used to back policy and argument, and which of these have replicated. Government is a large consumer of failed-to-replicate research across its largest budgets: justice, education and health, of course, as well as defence. But similar factors hold everywhere, from drug research to fisheries.
#6. Fund meta-research: research on why research is failing. Target both general areas and specific topics, e.g. “Why, after 50 years of funding, has the UK developed zero new drugs for depression?” or “Why are prescription rates increasing if behavioural interventions are claimed to work?” Fund critical reviews of big programmatic research: what was the bang for the buck of each council’s research? How much of it replicated? A useful task would be to commission submissions of non-replicable findings from UKRI-funded projects, to document in detail what mistakes are being made and why.
#7. Trial new funding mechanisms with new money:
1. Trial giving each permanent research/REF-active academic a nominal research budget – say £10,000. Researchers simply send in a pre-registration of the hypotheses to be tested, and funding is automatic: no expensive reviews, and no university overhead cut permitted. See what that yields in 3 years. Focus it on replication in year 1, and reward detection of non-replicable research by giving grants to people who publish failures to replicate.
2. Try voting schemes: Publish research applications, and allow qualified (tenured, REF-entered) academics to read them and then allocate tokens to the grants they would like funded. Devote 10% of UKRI funding (preferably new money, now we’re outside the EU) to the grants with the widest support. That way you get hundreds of smart, disinterested, quasi-outside eyes on each grant, making capture by non-replicable programmes less likely.
3. Demand objective outcomes (i.e., not whether teachers said they liked it, but whether children got significantly higher grades).
4. Prioritise research with concrete outcomes – a new treatment, a stronger steel alloy. Concrete outcomes are the most assessable: a piece of steel has to stand up in production; products have to be viable in the market.
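The token-voting scheme in point 2 above amounts to a tally-then-fill procedure. The sketch below is an illustration only – the grant names, costs, token allocations, and the greedy "most-supported grants that fit the budget" rule are all my own assumptions:

```python
from collections import Counter

def fund_grants(votes, budget, costs):
    """Tally token votes over published grant applications, then fund
    the most-supported grants that still fit within the budget."""
    tally = Counter()
    for allocation in votes:          # one {grant: tokens} dict per voter
        tally.update(allocation)
    funded, spent = [], 0
    for grant, _tokens in tally.most_common():
        if spent + costs[grant] <= budget:
            funded.append(grant)
            spent += costs[grant]
    return funded

# Hypothetical votes from three academics over three grants:
votes = [{"G1": 5, "G2": 3}, {"G2": 6}, {"G1": 2, "G3": 4}]
costs = {"G1": 40, "G2": 70, "G3": 30}
print(fund_grants(votes, budget=100, costs=costs))  # ['G2', 'G3']
```

The mechanism is deliberately transparent: every tally and every funding decision can be recomputed by anyone from the published votes.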
Impact of the Committee on Research Integrity under UKRI
Bear in mind that half of UK university staff are already administrators, many working on compliance with government policy instead of adapting to demand and innovation. This has weakened us, not strengthened us.