Written Evidence Submitted by the National Institute for Health Research (NIHR) Health Protection Research Unit in Emerging and Zoonotic Infections, the Institute of Infection and Global Health, University of Liverpool, and the Nuffield Department of Primary Care Health Sciences, University of Oxford

(C190084)

Background:

We are university researchers in emerging infections and health policy responses to them.[1] Since the beginning of the UK’s COVID-19 epidemic we have been conducting research into the dynamics of COVID-19-related policy development and healthcare worker behaviour. The research is funded by UK Research and Innovation and the National Institute for Health Research. We interview key policy players and frontline healthcare workers, who speak candidly to us on condition of anonymity.[2]

In an infectious disease outbreak, health policy makers are under tremendous pressure (Brändström and Kuipers 2003). They must respond rapidly to get ahead of the epidemic, and make high-impact decisions despite uncertainty (Boin 2009), including scientific uncertainty about the natural history of the coronavirus (Weible et al 2020). Our remarks here acknowledge the severity of these challenges for policy makers: they are not armchair criticisms made with the benefit of hindsight.

 

Our evidence on the Committee’s questions:

Our research has a bearing on your questions about ‘the flexibility and agility of institutions, Government departments and public bodies, and processes to respond appropriately during the crisis including: … the availability and influence of scientific advice … and the extent to which decisions taken drew on that advice’. We have regularly interviewed seven senior scientists who have all played significant roles in advising the Department of Health and Social Care (DHSC), NHS England, Public Health England (PHE) and the centre of government (in some cases as members of the Scientific Advisory Group for Emergencies (SAGE)).

We look first at the agility and responsiveness of government bodies. The UK’s early warning systems for emerging infectious disease, situated in PHE, served their purpose well. UK responses to news from China were triggered at the beginning of January. As the concern of these specialists mounted, the issue was escalated on 13 January to the government’s New and Emerging Respiratory Virus Threats Advisory Group (NERVTAG), which advised the central contingencies team in the UK Cabinet Office that the threat level was ‘very low’. On 21 January NERVTAG raised its risk assessment to ‘low’, and, the following day, news was relayed up the Whitehall scientific advice chain to SAGE, chaired by the government’s Chief Scientific Adviser Patrick Vallance, and referred onward by him to a meeting the same day of the Cabinet Office Briefing Room (COBR) chaired by the Secretary of State for Health and Social Care, Matt Hancock (Reuters 2020).

We asked our interviewees how well this advisory machinery worked in the early part of the crisis. One said the processes around SAGE had been:

‘extremely confused to begin with … people were being on-boarded into [the secretariat] at a rate of knots, so there was a certain amount of confusion: messages were being misunderstood, passed to the wrong people and so on. … now … the civil service is actually performing, but it took a few weeks.’ (anonymised interview, 26 March)

The alarm of the scientific advisors mounted. We heard from one that:

‘By mid February we had quite a good idea what an unmitigated epidemic would look like in the UK. … the same … orders of magnitude [as] the worst … influenza pandemic.’

Unfortunately,

‘… at that point though …, people do this in every epidemic, … people … do this kind of distancing …, why it won’t happen here.’ (anonymised interview, 8 April)

We were told that even the government’s most senior medical advisors took this view, at least in one meeting. The general impression, however, is that scientific advice to government was timely and represented the best interpretation of evidence available at the time. This view differs from that of Jeremy Hunt MP, whose comments in a Commons debate attracted media attention in May. Specifically, he argued that ‘a major blind spot in the approach taken in Europe and America was our focus on pandemic flu rather than pandemic coronaviruses’, adding that SAGE should, in January, have modelled the impact in England of adopting the Asian countries’ population-wide ‘test, track and trace’ approach, and calling its failure to do so ‘one of the biggest failures of scientific advice to Ministers in our lifetimes’ (Hansard, 11 May 2020). This criticism is misplaced. In January it was reasonable to view the novel coronavirus disease, as the scientific advisors did, as a rare infection of returning travellers from China and to plan to manage it accordingly.

Hunt was correct to point out the danger of focusing too exclusively on pandemic flu. However, the harm was done in the years before 2020, rather than in January 2020. An over-focus on flu at the expense of coronaviruses during that earlier period left the UK with contingency plans, and a personal protective equipment (PPE) stockpile, designed for pandemic flu, not pandemic coronavirus. Michael Gove, the Cabinet Office Minister, conceded to Parliament that the stockpile was ‘explicitly for a flu pandemic’ (Hansard, 28 April 2020). The relevance of this is that whereas (sleeveless) aprons are recommended for managing influenza patients, clinical infectious disease experts recommend gowns for aerosol-generating procedures with COVID-19 patients (Public Health England 2020). Having only a pandemic flu plan therefore left healthcare workers dangerously exposed to coronavirus infection. The Scottish ‘Exercise Iris’ simulation of a coronavirus epidemic in March 2018 produced warnings about the suitability of existing PPE, which were shared with NERVTAG in June 2019 but do not appear to have led to changes in the English stockpile (BBC, 5 June 2020).

Your committee asked about the influence which scientific advice had: whether it was acted on. We have just described one case where it was not. More broadly, government’s use of scientific advice appears to have fallen into two phases: one of being ‘led by the science’ and then one in which Ministers made clear that, while the science remained important, other factors also mattered, such as the economic impacts of decisions, and that final decisions would be taken by them, not scientists. The second of these phases made more sense, and felt more correct, to our interviewees. As one remarked, ‘advisors advise, and Ministers decide [or] the chief medical officer would become the de facto prime minister’ (anonymised interview, 8 April).

During the earlier phase, the government’s emphasis on being ‘led by the science’ was striking, and was underlined daily by the participation of Chris Whitty (the Chief Medical Officer), Patrick Vallance (the government’s Chief Scientific Adviser) and, later, other scientific or technical experts at the government’s press briefings. Politicians use scientific and technical experts as part of the rationale for policy decisions (Markoff and Montecinos 1993), but the attractions of this tactic have never been greater than during this epidemic. Why? This expertise is not only an intellectually valuable input to hard decisions: it can also comfort and reassure the public. But some of our interviewees began to fear that Ministers were shifting the accountability for hard decisions onto them. A top government advisor asked some of them: ‘“what is it the PM has to say?” … and wrote it down – even to details, which the Prime Minister later used, like “you can’t go to the pub”.’ This interviewee felt that ‘at that point, [scientific advisors] had crossed the boundary into making a decision, not just giving advice.’ (Anonymised interview, 29 April)

While scientific advice was timely, decisions based on it were often delayed, particularly during the earlier part of the crisis (January – April). We acknowledge the sheer difficulty of crisis decision-making: previous research identifies problems in collecting and comprehending the necessary information, ambiguity, complexity, pace, and organisational barriers to agile decision-making, including shared responsibilities between multiple organisations (Boin 2009). Nevertheless, we argue that decisions – and, critically, implementation – were slow to follow the initial alert. We were told that ‘six weeks of opportunity was wasted,’ and that:

‘from 20th January, it was clear there was human transmission … that this was going to spread around the world. And that was a six week window [for] ramping up PPE, making sure there was supplies, beds, making sure we were prepared for what was likely.’ (Anonymised interview, 17 April)

We heard during interviews that:

‘there is no point saying we are doing things quickly ... I have heard many times that ... [something] is going on at unprecedented speed. But ... until that speed is faster than the pace of the epidemic, you won’t be able to mitigate ... or indeed bring the epidemic to an end.’ (Anonymised interview, 3 April)

‘Inevitably as things get passed down from Committees, and this is in the Ministers, SAGE, the lag phase between … advice [from] SAGE … or a decision made by whoever, Minister or anybody else, there is a lag … until it gets through the system. And when you are in an epidemic which is very fast moving … it is no good to say we are going quicker than we usually go’. (Anonymised interview, 17 April)

Persuading government Ministers to make the challenging decisions that were needed has often been difficult, though some decisions were said to have been rapid. The majority of our policy witnesses frequently expressed frustration about delayed decisions. Our healthcare professional witnesses noticed such lags in many places, notably in the redeployment and retraining of staff. We heard from the policy community of ‘a couple of heated moments [in mid-March] where [scientific advisors] were saying “you are not moving fast enough”’. The government’s most senior expert advisors, we were told, responded that policy decisions were a process and that the politicians needed to be led through it. (Anonymised interview, 8 April)

Research has been conducted on how policy making can best use scientific and technical expertise (see for example Black and Donald 2001, Berridge 2005 and Greenhalgh and Russell 2009). Weible et al (2020) have observed that the COVID-19 pandemic challenges scientific and technical advisors to simplify and communicate, and challenges policy makers to balance political judgement with the responsible use of expert advice. Kogan et al (2006) studied the interaction between researchers and policy makers in DHSC’s predecessor, the DHSS, concluding that it was productive when participants could span the boundaries between the worlds of research and policy, translating policy problems into research questions and research findings into actionable briefings.

A scientific advisor, unprompted, recognised playing this ‘boundary-spanning’ role in their current work, but also described what happened when the two ways of thinking did not meet: policy makers would ‘say, “what should we do?” And [scientists] say “well what do you want to achieve?” And we just go round and round in circles’ (Anonymised interview, 29 April). It was as if DHSC had not learnt from Kogan and his team, although it was the Department that commissioned the work. ‘Just going round in circles’ was the situation when policy makers only wanted to ‘follow the science’: things improved markedly once they had clear policy goals and began to seek scientific advice on the effectiveness of different ways to reach them (Anonymised interview, 29 April).

 

Conclusion

We hope this testimony from some key scientific advisors to government speaks for itself, but the research team would be happy to contribute further to the Inquiry if the Committee wishes.

 

References:

Berridge V (ed.) Making Health Policy: Networks in Research and Policy after 1945 (Rodopi, Amsterdam and New York: 2005). 

Black N, Donald A. Evidence based policy: proceed with care. Commentary: research must be taken seriously. BMJ 2001; 323(7307): 275.

Boin A. The New World of Crises and Crisis Management: Implications for Policymaking and Research. Review of Policy Research 2009; 26(4): 367–77.

Brändström A, Kuipers S. From ‘normal incidents’ to political crises: Understanding the selective politicization of policy failures (pt. 1). Government and Opposition 2003; 38(3): 279–305.

BBC News. ‘Coronavirus: Outbreak exercise showed “clear gap” in readiness’, 5 June 2020.

Greenhalgh T, Russell J. Evidence-based policymaking: a critique. Perspect Biol Med. 2009;52(2):304-318.

Markoff J, Montecinos V. The Ubiquitous Rise of Economists. Journal of Public Policy 1993; 13(1): 37–68.

Public Health England. Guidance: COVID-19 personal protective equipment (PPE) (2020)

Reuters. Special Report: Johnson listened to his scientists about coronavirus - but they were slow to sound the alarm (7 April 2020)

Weible C M, Nohrstedt D, Cairney P, et al. COVID-19 and the policy sciences: initial reactions and perspectives. Policy Sciences 2020; https://doi.org/10.1007/s11077-020-09381-4.

 

 

(July 2020)


[1] This work comes from the NIHR Health Protection Research Unit in Emerging and Zoonotic Infections at the University of Liverpool in partnership with Public Health England (PHE), in collaboration with the Liverpool School of Tropical Medicine and the University of Oxford (Grant No. NIHR200907). The views expressed here are our own and not necessarily those of the NHS, the NIHR, the Department of Health and Social Care (DHSC) or PHE. We are grateful for the support of Liverpool Health Partners and the Centre of Excellence in Infectious Disease Research (CEIDR), Liverpool.

[2] More details of our methods are available on request.