Science and Technology Committee
Oral evidence: UK science, research and technology capability and influence in global disease outbreaks, HC 93
Wednesday 2 March 2022
Ordered by the House of Commons to be published on 2 March 2022.
Members present: Greg Clark (Chair); Aaron Bell; Chris Clarkson; Dehenna Davison; Rebecca Long Bailey; Graham Stringer.
Questions 2797 - 2910
Witnesses
I: Ed Humpherson, Director General for Regulation and Head of the Office for Statistics Regulation; and Dr Sarah Scobie, Deputy Director of Research, Nuffield Trust.
II: Professor Graham Medley, Chair, Scientific Pandemic Influenza Group on Modelling.
III: Dr Raghib Ali, Senior Clinical Research Associate, MRC Epidemiology Unit, University of Cambridge; and Dr Camilla Holten-Møller, Chair, Expert Group for Mathematical Modelling, Statens Serum Institut, Denmark.
Witnesses: Ed Humpherson and Dr Scobie.
Q2797 Chair: I am pleased to welcome the witnesses to this special session as part of our inquiry into the Covid pandemic. Today, we are looking at questions about statistics and modelling, including around the Omicron variant.
For our first panel, looking at the importance of statistical information throughout the pandemic, I am very pleased to welcome Ed Humpherson, who is the director general for regulation and the head of the Office for Statistics Regulation, our statistics regulator, and Dr Sarah Scobie, who is the deputy director of research at the Nuffield Trust. Thank you very much for coming to give evidence today.
Perhaps I could start with a question to Dr Scobie. The Office for National Statistics Covid infection survey was conducted through a large part of the pandemic. How important has that been to statisticians?
Dr Scobie: It has been hugely important in giving unbiased estimates of how much Covid there is in the population, because it is not dependent on what testing regimes are in place and on whether schools or care homes are testing. It gives a real sense of what is happening in the population by taking a random sample, effectively. It has been hugely important as a source of data. It is really important that it continues, for the same reasons.
Q2798 Chair: It is important that it continues. The plans announced by the Government in the last couple of weeks say that it might be scaled back. Have you made any assessment, or have you any knowledge, of what that entails? What would your advice be as to the limits of that for it to remain useful?
Dr Scobie: I do not have specific knowledge of what is planned there or, indeed, of how detailed the plans are. At the moment, the infection survey is published weekly. It might be possible to have a smaller sample and to publish the data less frequently.
Q2799 Chair: How much smaller and how much less frequent could it be while still maintaining the level of usefulness and integrity that it has now?
Dr Scobie: In terms of its usefulness at the moment, the data has a lag in it because of the time that it takes to collect the data and because it captures people who have Covid. That will include new infections, as well as people who have it over a period of time.
If you had a smaller sample and needed to accumulate data over several weeks, you would increase the time it would take to spot changes in the data: whether there was an increase, whether there were new variants and other things of that sort. I am sure that ONS and its partners will do the modelling to work out what those trade-offs are. I do not have the detail to comment on that specifically, but those are the issues that it will consider, I imagine.
Q2800 Chair: You say that it has a role in surveillance—presumably, in looking out for unexpected developments—and that, because of lags, any particular scaling down could trip over into dulling our speed of response. Is that a key concern?
Dr Scobie: Yes. It will take longer to spot whether there is an increase and there will be less precision in the estimates. It will be more difficult to say exactly what is happening and what the trend is, whether the trend is going up or down over time.
Q2801 Chair: Thank you. Ed Humpherson, you are the head of the body that looks at the use and integrity of statistics. I am sure that you will agree that this is an important series. Part of the discussions that ONS has to have will no doubt be about its financing. It can do only what it is paid to do, what it is resourced to do. What role can you and your organisation have in making sure that, as Sarah Scobie says, the statistics that ONS is able to collect are at a level of usefulness when it comes to responding quickly?
Ed Humpherson: I will answer your question by saying first what we will not and cannot do. Then I will say what we can do.
What we will not and cannot do is make a resourcing case. We are not here for that. We do not consider the allocation of Government finances, and we do not particularly hold a candle for how big ONS’s budget should be. That is not what we do.
What we can and will do is say, first, that there is huge value in having community surveillance. The second point, which we made in a couple of reports that we have published about Covid, is that having a community survey can be useful not just for tracking Covid; you can repurpose it for other things. The reason we say that is that we have had concerns about and critiques of health statistics more generally for some time.
Our concern is that there is an awful lot of measurement of the outputs and throughput from the machine of the NHS—waiting times, elective issues and performance measures—but less on the health of the population, particularly the health of the population in the community. This survey is a really important asset that we very much advocate for in that regard. We need to consider whether it is a useful tool not simply for understanding Covid but for broader population health, whether it can pick up other things.
Q2802 Chair: Can you be clear on that? You say that you are not empowered to comment on the resourcing of the ONS, and I understand that. If, when we know more about the details of the scaling down, you had concerns that it was impairing our ability to understand the virus in the way we did in the past, would you comment on that?
Ed Humpherson: Yes, we would be happy to. In particular, we would make this point. The thing that always troubles us is when weight is put on statistics to justify policy positions that those statistics cannot bear. A survey that was substantially reduced in the ways that Sarah described would not necessarily be a problem in itself, but it might be a problem if weight was put on those results that they could not reasonably bear. That would be the point at which we might say that there was a limitation to the statistics that was not being respected or followed through in how they were being used.
Chair: I see. The Committee may want to write to you when we know more of the details to ask you to advise us on your assessment of the changes, so that we in turn can comment to Parliament and Government on that. Thank you very much.
I turn to my colleagues, starting with Graham Stringer.
Q2803 Graham Stringer: I want to follow up on that question. I am not sure whether this is for Ed Humpherson or Dr Scobie. Who would do the cost-benefit analysis of whether it is useful to collect all this information? I am interested in statistics, as are statisticians, but you need somebody to do a cost-benefit analysis, don’t you? Whose responsibility would that be?
Ed Humpherson: The primary responsibility would lie with the Department of Health and Social Care. It is the Department’s core responsibility to assess the value for money that it is securing for the taxpayers’ money it is using.
In addition, the role of evaluation in government is an important thing to support and push forward. In fact, as we speak, there is a big What Works conference about evaluation in government. This is the sort of thing that is subject to evaluation. After that, of course, there is the National Audit Office. The NAO will want to consider value for money, too. Indeed, it has done various reports on aspects of the spending around the pandemic. Those would be the places to go for evaluating value for money and cost-effectiveness.
Q2804 Graham Stringer: When Professor Riley from the Health Security Agency was here, I asked him a question about my deep irritation with the statistics on death within 28 days, because they give you a consistent base but do not tell you what I suspect most people want to know, which is how many people have died from Covid, as opposed to with it, in that period of time. His answer was that that was the recommended statistic. Who recommended it?
Ed Humpherson: As I understand it, the 28-day measure was recommended and agreed by the four chief medical officers of the UK. Essentially, I agree with your frustration, if I can be completely direct. It is really important to use the 28-day measure for what it is intended for. It is intended as a leading indicator, as a realtime, quick readout. At points in the pandemic when infections and hospitalisations are surging, it tells us how those are impacting on mortality. It is not good at answering the question that you raised: of those deaths within 28 days, how many are due to Covid? If you want to see that, you should look at the ONS figures and the equivalents that are done by its counterparts for Northern Ireland and Scotland.
The ONS figures lag slightly; they come not 24 hours after the reference date but about 11 days after it. However, they distinguish between cases where the death certificate says that the underlying cause is Covid and those where Covid is just mentioned on the death certificate. They do not include cases where Covid is not a suspected or identified factor. The ONS weekly death registration figures provide the answer you are looking for.
Q2805 Graham Stringer: But even that is not completely satisfactory, is it? In round terms, the number of deaths purely from Covid, if you look at death certificates, is 16,000 or 17,000. It is that sort of figure, I understand. You look sceptical.
Ed Humpherson: I am afraid that I am slightly sceptical.
Q2806 Graham Stringer: Correct me then.
Ed Humpherson: I so dislike that figure that I am not going to repeat it. There is a figure that has been circulating that is based on the number of cases where the only cause listed on the death certificate was Covid. That is the number. If you look at all of the death certificates and ask, “Where is Covid listed as the underlying cause?”, the figure is 140,000. Both ONS and we have said that the smaller number is highly misleading because it takes only the very small subset of death certificates where Covid is the sole cause listed.
Q2807 Graham Stringer: That is very helpful. What if a person with terminal cancer goes into hospital and picks up a Covid infection? They were going to die anyway. The death certificate has cancer, and possibly Covid, on it. You are relying entirely on the clinician to put that on the death certificate, so the clinician’s judgment is the pathway to getting to those figures.
Ed Humpherson: Yes.
Q2808 Graham Stringer: At the start of this epidemic—I have seen this in writing—temporary registrars circulated hospitals saying that, if people had any symptom associated with Covid, such as coughs, it should be put on the death certificate. Do you believe that that distorted the statistics, at least over the first six months?
Ed Humpherson: I would probably need to be more clinically expert than I am to answer that question. Broadly speaking, I have confidence in the death registration process. Relying on clinical judgment is much more likely to be complete than relying on a crude 28-day test, to go back to our earlier point. Beyond that, I am not clinically qualified to give a view on the specifics of the guidance given to clinicians.
Dr Scobie: Can I make an additional point? If you look at the early stages of the pandemic, you see a much higher number of excess deaths—deaths above the average—than the number of recorded Covid deaths. That suggests that, in fact, Covid deaths were under-identified. In the early stages of the pandemic, a lot of the symptoms and the way Covid presented were new. For example, how it presented in older people was different from younger people. In those stages, particularly when widespread testing was not available, it is likely, based on looking at excess deaths, that in fact cases where people died from Covid were under-recorded, rather than the other way round.
Q2809 Graham Stringer: Maybe I should have asked this question first, but I have been irritated by the 28-day statistic. How satisfied are you with the accuracy of the statistics at this stage of the epidemic?
Ed Humpherson: On deaths or in general?
Q2810 Graham Stringer: On deaths.
Ed Humpherson: As I said, the key way to think about these statistics is to make sure that we are using the statistics for the purpose for which they were intended. While the 28-day measure has its limitations, it has a purpose; it is a leading indicator that gives us a quick, 24-hour response. At times when the pandemic is increasing significantly, that is a very useful measure for society, for policy makers and the public more broadly, to be aware of.
If we want to have the more complete picture of Covid—people dying from it, as opposed to with it—we use the ONS figures. If we want to have a broader sense of the overall impact of the pandemic on mortality, as Sarah said, we look at the excess death figures, the number of deaths that are above the five-year average. The best way through this is to think about which question we are trying to answer. Is it the leading indicator-type question, is it the completeness-type question or is it about the overall impact of the pandemic in its entirety? To answer your question about accuracy, if we were to use one of those measures for a different purpose, we would run the risk of being misled. That is the most important way to look at it.
Q2811 Graham Stringer: Can you say the last part again? I am not sure that I follow.
Ed Humpherson: If we were to use the 24-hour measure—the one that has the 28-day cut-off—for a more complete picture of deaths due to Covid, we might be misled by that. If we were to use the ONS measure for that, we would be more likely to be on safe ground.
Q2812 Graham Stringer: At this stage in the epidemic, how many people have died of Covid?
Ed Humpherson: The latest figure I have seen is 140,000 where Covid is the underlying cause, based on the ONS death registrations process.
Q2813 Graham Stringer: You are satisfied that that is accurate.
Ed Humpherson: To the extent of my knowledge, yes.
Graham Stringer: Thank you.
Q2814 Aaron Bell: I want to ask the same question, but about case numbers. How comfortable are you with case numbers, particularly in view of the changing guidance about when you test and fewer tests being taken, and the number of reinfections? I know that there was a change to the data series at the end of January. How comfortable are you that the case numbers have been reflective of Covid since we got widespread testing, particularly for Omicron, where reinfections are becoming more common?
Ed Humpherson: The positive testing numbers reflect testing capacity and the operational delivery of a testing regime as much as they reflect the pattern of infection. To go back to the answer to the first question, that is why the community infection information is so useful. It is randomised and is not dependent on testing capacity in various places and at various times. It is also not dependent on whether people are feeling symptomatic and therefore go for a test.
I would say that the best pattern is to look at the community data from the Covid infection survey, albeit that it is slightly delayed in time, as Sarah said. That is a much better thing to look at than the pure case numbers, which are determined very much by the particular policies and operational procedures in place at the time and by behaviour, people coming forward for tests.
Q2815 Chair: In answer to Graham Stringer, you directed attention to the ONS measure using death certificates as the authoritative number. You said that 140,000 have died. Obviously, that places a great deal of weight on the accuracy of those certifications. Given the importance of this as one of the crucial assessments of how many people have died of Covid, has your office made any study of the accuracy of the recording of cause of death on the part of doctors in hospitals?
Ed Humpherson: We have not done an in-depth, end-to-end process review where we tracked cases through to see what doctors do. The reason we have not is that that is what we expect the producer to do. Under the code of practice for statistics, we expect the producer—in this case, ONS—to have very strong arrangements for quality assurance of the administrative data that it is using. That is what death certificates are. The way we verify that the producer is doing their job, to get that assurance, is to look at their published quality information and their user guides. In the case of death certificates, those are very comprehensive. We take our assurance from that.
Q2816 Chair: To take the case that Graham mentioned of someone who died of terminal cancer but happened to have Covid at the same time, you are confident that Covid would not have been included on the death certificate when the principal cause of death was the terminal cancer.
Ed Humpherson: There are two things. First, I am confident that there are very rigorous procedures in place. That is quite clear from the documentation. The other point is that one can never be complacent about statistics that are drawn from administrative data systems. That is a recurring theme of our work. If there are cases like that that imply the need to look again and to ask hard questions, those questions should be asked. We would be happy to take it forward with ONS.
Q2817 Chair: It is not so much a case as a general point. You are there to look at the integrity or, rather, the reliability of statistics; I do not want to say that they are corrupt. This is an obvious point where a lot of weight is carried by, as you put it, administrative descriptions that may not, in the minds of the people making them, have that much weight. People might think that there is no downside to including Covid if the person has Covid, but actually it is crucial to our understanding of the pandemic and the deaths that it has caused. Is that something you might look back on and advise for the future on whether the arrangements are satisfactory?
Ed Humpherson: Absolutely, but within the context of the framework of saying that we always expect producers themselves to have assurance about the administrative data. We would and do start by saying, “What is ONS doing to assure itself that these are rigorously compiled?”
Chair: I understand that. I will go to other colleagues, starting with Dehenna Davison.
Q2818 Dehenna Davison: Thank you for being with us this morning. So far we have focused on the UK’s statistics and recording of Covid deaths. I am curious to know how that fits into an international context. How does it compare with some of our international partners? Dr Scobie first.
Dr Scobie: There are a few differences in different countries in the coding of death and Covid. In some countries, a positive Covid test is a requirement for their statistics. In other countries, it is not. If I wanted to make a comparison internationally, I would be inclined to look at excess deaths as the better measure, because that would deal with the fact that in different countries there may be differences in how Covid specifically has been recorded. Particularly in the early stages of the pandemic, countries may have made different decisions about what measures to put in place and so on.
Q2819 Dehenna Davison: Can you give any more detail about some of the different systems that other nations are using? For example, you mentioned proof of a positive Covid test. Are there any others you can think of?
Dr Scobie: Yes. In some countries, different electronic notification systems have been set up for Covid deaths. The linkage to testing may be different in different countries. There will be a few different factors. The other thing is the extent to which in the reporting of the data Covid is counted as a contributory cause of death, as well as the underlying cause of death. If somebody had Covid and it was not linked directly to their death, but it affected their death, that could be counted as a contributory cause. In some countries, that is included. In others, it is not.
There will be some differences in the data. It would be a case of looking at each individual country to make the comparison. That is why, if I was making a comparison, I would be inclined to look at excess deaths.
Q2820 Dehenna Davison: For the layman, it is not as simple as looking at the overall Covid death statistics.
Dr Scobie: No, unfortunately. You can see that if you look over the course of the pandemic. The relationship between excess deaths and Covid in individual countries changes based on how complete the capture of Covid deaths is.
Q2821 Dehenna Davison: Mr Humpherson, is there anything you would like to add?
Ed Humpherson: No. That was pretty much what I would have said in answer to that question.
Q2822 Dehenna Davison: It is good to get some agreement.
Would you say that there is a gold standard mechanism for reporting Covid deaths that you have seen internationally?
Dr Scobie: To go back to what Ed said earlier, you want mortality data that you can use to inform public health decisions and that enables you to understand the underlying cause of death. The death certification process that we have here follows the WHO guidance on how to record cause of death and so on. I would say that it is pretty gold standard. I do not know whether Ed wants to add anything to that.
Ed Humpherson: Again, I agree.
Q2823 Dehenna Davison: You would say that the UK is probably the gold standard when it comes to Covid deaths reporting.
Dr Scobie: Yes. I am taking into account all the things that Ed has just said about the different measures, the different purposes and so on. I think that our method using death certification, which ONS uses, is what you would propose to use in the long term.
Dehenna Davison: Thank you.
Q2824 Rebecca Long Bailey: Thank you both for coming this morning. How satisfied are you with the accuracy of statistics on infection rates at the moment?
Ed Humpherson: Broadly speaking, I am satisfied with the statistics on infection rates from the Covid infection survey produced by ONS, because it has a random selection of households and is not reliant on people coming forward for tests. I am satisfied with that.
Q2825 Rebecca Long Bailey: Dr Scobie?
Dr Scobie: For the reasons that were discussed a few minutes ago, I would use the infection survey data, rather than the case data, to get an accurate sense of infections data.
Q2826 Rebecca Long Bailey: Are you both content that we have an accurate picture of hospitalisations at the moment?
Dr Scobie: In England and all the UK countries, although there are some slight differences, there is daily reporting based on people in hospital with Covid, whether they had it on admission or whether it was diagnosed while they were in hospital. That has been very useful, particularly as we have been tracking surges in demand. It has also covered people in critical care, where at times there have been huge concerns about whether the NHS can meet demand.
In recent weeks, the NHS has split the data based on whether the patient had a primary diagnosis of Covid or whether it was an additional diagnosis but not the primary reason for admission, to split out people who may have been admitted for an unrelated condition or for planned care, rather than primarily for Covid. That has been very valuable as an additional aspect of the data.
Even if people were not admitted for Covid, the fact that they have Covid makes their treatment in hospital considerably more complicated because of the need to keep Covid and non-Covid patients separate, where possible. There is also more data that can be analysed retrospectively. The detailed hospital episode statistics that are available for researchers will allow more detailed analysis of other diagnoses that people have had and so on. There is a lot of data to analyse. It goes back to the purpose for which it is used.
Q2827 Rebecca Long Bailey: Thank you. Mr Humpherson, do you want to add anything? No. Are you concerned at all about potential anomalies since the restrictions have been relaxed, particularly given that there is very much less emphasis on community testing at the moment? How will that affect the accuracy of data?
Ed Humpherson: I reiterate the answers I gave earlier. It is precisely because there is a relaxation of community testing for people with symptoms that the community survey done by ONS is the thing to put weight on. It is impervious to changes in testing policy.
Q2828 Rebecca Long Bailey: Dr Scobie.
Dr Scobie: The only thing to add is that people treated in hospital will continue to be tested, as far as I understand, so the hospital data will continue to capture whether or not people have a Covid infection. That will also be useful, but it is obviously giving you a different cohort of people who particularly need hospital treatment, so they are not typical, generally.
Q2829 Rebecca Long Bailey: Thank you. Finally, you have both lived and breathed the statistics right through this pandemic. How effectively have infection rates and death rates been communicated to the media and the public over the course of the pandemic, and what lessons can we learn from any failures of communication?
Ed Humpherson: The way to reflect on this whole two-year experience is not to start at the clinical or data end of the telescope, but to think about the public. There has been a quite extraordinary public appetite for accessing the data and understanding it, and relating it to their lives and their communities and so on. You see that in the number of people who are visiting the various dashboards; the UK Government coronavirus dashboard and its equivalents in Scotland, Wales and Northern Ireland. There are huge numbers of daily hits. That tells us that there is a huge public appetite.
Our verdict is that, by and large, the statistical system in government has responded to that public appetite and met it, but there have been some bumps along the way. Right at the start of the pandemic—March, April, May—it took some time for the clarity of what was being measured to come through. There was a tendency to throw numbers out without the underpinning explanations that helped all types of members of the public to get a grip on things. Most acutely, you may recall that we were concerned about the testing data, the reported target of 100,000 tests a day. We felt that the way the daily results were reported against that was not very clear and not very helpful to people.
We then went through a phase where a lot of the basic infrastructure was being put in place—things like the Test and Trace system. There you had very good operational data. You could find out how long it was taking somebody from taking their test to getting their results and being contact traced, but there was less emphasis than we would encourage on effectiveness, on what that actually means for understanding and controlling infection.
From the start of the vaccine programme, we stepped in several times to encourage the producers of statistics in England, Wales, Scotland and Northern Ireland to produce really comprehensive statistics on the vaccine roll-out. If you look at what is published about the vaccine programme, there are quite extraordinary levels of detail, which are very good and cut very many different ways, by region, locality, gender, ethnicity and so on.
Most recently, we have seen a number of cases where things that producers are publishing—we talked earlier about the number of death certificates with Covid as the sole cause listed—then get picked up, and with varying degrees of success the producers have to challenge the ways those numbers are being interpreted. We have encouraged a little bit more adaptability on the part of producers. Those are the phases.
Underneath that though is the thing I most want to leave with you as a Committee. The thing that really gets us banging the table with frustration is a recurring tendency on the part of Government to describe things they are doing by reference to numbers but not to make those numbers publicly available. We have stepped in repeatedly to say, “You have mentioned a particular factoid, a particular individual metric, but you have not published the underlying data.” The good news is that every time we do that, the data get published, but we think it should be the default. If something is being quoted, it should be supported by published data. The frustration we have is that we are still having to do that today after two years of the pandemic, given that this is all about the public appetite. It is the public’s right to have access to those data.
Q2830 Rebecca Long Bailey: Thank you. Dr Scobie.
Dr Scobie: I have a couple of additional points. We have seen with Covid a huge increase in the accessibility of some of the data, both in terms of making it publicly available and being able to download the underlying data and have it accessible for analysis. It would be great to see that translating through for surveillance of other diseases, because a lot of that data is not very easy to get hold of. It is published as PDF charts, and you cannot even see the individual numbers. It would be great for what has happened with Covid to be carried forward and built into how other public health and surveillance data gets published.
We have seen some real gaps and challenges in data generally, things like the recording of ethnicity, which has been exposed as a real gap. In a lot of cases, it is not routinely collected, or there are issues with data quality, particularly with hospital data, which is then used for other datasets. There are inconsistencies and there is systematic bias in it between different ethnic groups, and that will hamper our ability to get to the bottom of, and understand and address, some of the ethnic differences that we have seen with Covid and with Covid mortality and illness and related impacts. There are some good lessons that we can take to think about how we can improve data going forward.
Ed Humpherson: Can I build on that? I completely agree, particularly on the first point about taking the lessons about making the data publicly available and taking it to other aspects of health and care data.
I worry that when people learn lessons from this pandemic they might learn not quite the right lesson. The Government might learn the lesson that it is really good for them to have data and it is really good for them to be able to understand what is going on in real time. That is sort of true, but it is only half of the picture. The other half of the picture is making that available so that the public can access it and form their own views. Through the process of many people engaging with the data, more insight and more understanding emerges, and the sorts of questions that you have asked today come forward. That is exactly the right lesson to learn from the pandemic; not just that data matters, but that data matters to everyone.
Q2831 Graham Stringer: On that point and another couple of points that I would like to bring in, my impression going through the epidemic was that particularly once the Health Security Agency replaced the HSE the Government became more secretive with their statistics. It was more difficult to get statistics out both on a regional hospital basis and nationally. Was that your experience?
Ed Humpherson: First of all, it is PHE, not HSE.
Graham Stringer: Sorry. I was searching for it. Too many letters.
Ed Humpherson: Once a fact checker, always a fact checker, I am afraid. There was probably a period of time when UKHSA was being set up when it was fledgling and it was working out its protocols. It now has a head of profession for statistics whom we know well and respect and admire, and who is very active in ensuring that when UKHSA is using data, it makes it publicly available. There have been instances involving UKHSA where we have had some concerns, and we have expressed them publicly.
Q2832 Graham Stringer: You are a national organisation. Do you come across that problem on a local or regional basis? When I was looking for percentage figures in ICU at a local basis, I knew those figures were available, but an instruction went out from the centre not to release them without their permission. Was that your experience?
Ed Humpherson: I do not have that specific experience, but that is really frustrating for you. It is very disappointing. If the data are available and they have insights to share at local level, they should be made available.
Q2833 Graham Stringer: This is a really interesting session. You may not be able to answer this, but it is an interesting question. The public have an appetite for statistics. They want to know how bad this epidemic has been and to compare it with the 1918-19 flu epidemic or the 1968 Hong Kong flu or other epidemics. Can you put it in a pecking order, in terms of deaths, with previous epidemics? It is a slightly unfair question, but it is an interesting one.
Ed Humpherson: It is a slightly unfair question, so I am not going to speculate. I do not think I can create a—
Q2834 Graham Stringer: It might be an interesting answer.
Ed Humpherson: —league table of pandemics, I am afraid. I do not think I would like to do that.
Q2835 Graham Stringer: Having asked an unfair question, I will ask a stupid question, which is on excess deaths. At a point in time, excess death is obviously a very good metric, but you are comparing with previous years. Do you normalise it for the structure of the population five years before, or is it a straight measure? As you go into the future, you get very odd statistics, don’t you, because you get negative excess deaths at the moment? The death rate is low. For the stupid boy at the back of the class, can you explain how you deal with that?
Ed Humpherson: You described your previous question as an unfair question. I agreed it was an unfair question, but here I have to disagree with you. I do not think that is a stupid question; it is a very good question.
You are correct. There are two ways of doing this, effectively. There is the crude five-year average, and that is essentially what is done in the weekly release produced by ONS, and then there is the age-standardised mortality rate, which corrects for the age structure of the population and comes out—I am not going to remember the exact frequency—possibly quarterly. It does it both ways to give you that picture. For the five-year rolling average excess deaths, ONS has decided for 2022 to drop 2020 from the five-year calculation, if you see what I mean. It is 2016, 2017, 2018, 2019 and 2021, and it has dropped 2020 because 2020 had such an unusual peak of deaths. The average is slightly modified.
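[Editorial aside, not part of the witness's evidence: a minimal sketch of the crude excess-deaths calculation described above, with made-up weekly registration counts. The baseline is the five-year average with 2020 excluded, as Mr Humpherson describes; age-standardised rates, which additionally adjust for population age structure, are not shown.]

```python
# Illustrative only: crude excess deaths against a five-year average baseline,
# with 2020 excluded from the 2022 baseline as described in the evidence.
# All counts below are hypothetical.

weekly_deaths_by_year = {
    2016: 10200, 2017: 10450, 2018: 10900, 2019: 10300,
    2020: 14500,  # pandemic year, dropped from the 2022 baseline
    2021: 11000,
}

baseline_years = [2016, 2017, 2018, 2019, 2021]  # 2020 excluded
five_year_average = sum(weekly_deaths_by_year[y] for y in baseline_years) / len(baseline_years)

observed_2022 = 10750  # hypothetical registrations for the same week in 2022
excess_deaths = observed_2022 - five_year_average  # can be negative, as Mr Stringer notes

print(f"Baseline (5-year average excluding 2020): {five_year_average:.0f}")
print(f"Excess deaths this week: {excess_deaths:+.0f}")
```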
Q2836 Chair: Thank you very much indeed, Graham. I have some other unfair, or perhaps even stupid, questions to round up with. When reaching for the best number, Mr Humpherson, you talked about 140,000, which was the ONS death certificate derived number. Dr Scobie said, and we have heard this elsewhere, that excess deaths are the best number. Why did you choose the ONS number rather than excess deaths as the one to direct most attention at?
Ed Humpherson: Because that was in the context where people were picking out the subset of death certificates that had only Covid on. I was responding to that and saying that if you looked at the death certificates that had Covid as the underlying cause you would get to 140,000. It was in the specific context of that question. In general, I definitely agree that excess death is the best measure.
Q2837 Chair: In answer to a question from Mr Stringer, who asked you what the best estimate of deaths from Covid is—we do not know for sure—you said 140,000 based on that. Why did you not quote the excess deaths figure?
Ed Humpherson: Because I was still thinking about the particular point about the death certificates, and that is what I was—
Q2838 Chair: If we are to do what Graham Stringer said, perhaps we cannot compare Spanish flu to this pandemic, but if we were to have in our minds the most reliable number of deaths from Covid, what would it be?
Ed Humpherson: I do not have that figure.
Q2839 Chair: We have three choices. We have the 28-day figure, which you have been very clear is not reliable.
Ed Humpherson: You mean the excess deaths figure.
Q2840 Chair: It is whatever the current excess deaths figure is.
Ed Humpherson: That would be my advice.
Q2841 Chair: That would be the definitive one.
Ed Humpherson: Yes.
Q2842 Chair: Thank you. We are going to talk about modelling, and the boundary between the gathering of statistics and their publication and the publication of, in a way, prospective statistics, which is, in a sense, what modelling is. How does your office engage with the use of modelling, given that it is statistical?
Ed Humpherson: We come at it in two ways. First, we say that in formal terms we do not have a role in auditing models or doing assessments of whether a model is good or bad, or whether it predicted well or badly. On the other hand, to go back to some of my earlier responses, we think that for the public debate, the public discourse, generally speaking, these things are thrown in as numbers; they are not thrown in as, “This is a statistic versus this is a model.” For that reason, we expect the same standards of transparency and explainability to apply to something if it is a model just as much as we would formally under our code of practice if it is statistics.
A couple of times during the pandemic, when we have seen examples where we think the models have been put out in a way that does not give enough information to enable people to make sense of them, we have said that there ought to be more in the public domain, most recently on an estimate that Omicron would produce 200,000 infections a day. Professor Spiegelhalter memorably described it as number theatre; it is just putting a number out. After we spoke to UKHSA, they released the underlying logic of their model.
In a way, it goes back to the points I was making earlier about making as transparent as possible the numbers that are being put into public discourse. We do not scrutinise models from the ground up as an auditor, but we expect the same levels of rigour and intelligent transparency—transparency that helps people understand the models—as we would for statistics.
Q2843 Chair: In terms of transparency, as you say, generating public confidence in statistics is a core part of your remit. How statistics and figures are derived is clearly important for that, and you were very critical of one of the numbers that has been put about as to the smaller number of deaths from Covid, such that you would not even repeat the figure; you so did not want to give it credibility. That was because of the construction of that number and the derivation of it.
The same applies, does it not, to models? You can have a number, but the credibility of the model that results in that number is the key aspect of whether the number is reliable or not. Does this not lead you to have a concern? One of the controversies in the world of modelling is on the publication of underlying code so that people can, first of all, understand how a model that produces a prospective statistic is constructed. That surely must engage your interest as a regulator.
Ed Humpherson: Absolutely, it does, and we are completely on the side of making underlying code available. Of course, making underlying code available will not be directly useful for lots of members of the public, but for some—mainly a more knowledgeable, expert audience—it enables them to understand, scrutinise and challenge. That process of scrutiny and challenge is really helpful because it helps the modeller and the people who use the model to understand its limitations and parameters and how to improve it. Absolutely, we are very strongly supportive of that as an underlying principle in how to put numbers out that can command credibility and be useful.
Q2844 Chair: Do you think your office has played an active enough role during the pandemic in looking at the transparency of models that have been used?
Ed Humpherson: Where we had concerns about model-based numbers coming out in a way that did not allow that understanding among the public audience to take place, we stepped in. As you know, we have done it two or three times during the pandemic, and we think it is important work for us to do.
Q2845 Chair: We are about to come on to discuss some of the modelling around Omicron, and one of the aspects of contention that we will be exploring is whether there was enough modelling of prospective behavioural responses. Some people say that the absence of that, or an under-weighting of that, led to predictions that were inaccurate as a guide both to policy and to the understanding of the public as to what was likely to happen. Did your office have concerns about some of the foundational assumptions of some of the models during the Omicron modelling?
Ed Humpherson: Our primary concern was that, when we had the results of modelling exercises that took current positions and extrapolated them, and they said we would have a certain number of cases in a few days’ time, those assumptions and processes were made clear and transparent, and we stepped in then. As to whether another model might have been a better model, we did not directly address that. We were more concerned about the transparency of the assumptions that were being put into the public domain.
Q2846 Chair: Given that the number comes out, and that you can have a model that is transparent and perfectly open but that you know is based on limited assumptions that might be flawed, and that would produce a number, you would not want great attention to be paid to that number. It seems to be directly analogous to the point you made to Graham. If you just look at the sole cause of death being Covid on a death certificate as the most accurate indicator of death from Covid, you would be misleading people. If the model says that someone who dies of Covid is someone who has Covid exclusively on their death certificate, there is something about the construction of that model, if I can call it that, that is objectionable, and you have just objected to it. Why wouldn’t you do it for prospective assumptions that you may think are not based in—
Ed Humpherson: I am not quite sure what you mean by prospective assumptions, but let me try to make some progress. The starting point we will always have is whether the things that are being used to inform the public are worthy of the public’s confidence. In the space of modelling, which is distinct from statistics, you could have one model that has simple parameters, is fairly steady state and gives you one set of answers, and you could have a completely different kind of model that is much more sophisticated, fits lots of different parameters and produces a different set of results.
It is much less the case that there would be an absolutely correct or absolutely incorrect way of doing it. There are different shades of modelling depending on the time you have, the data you have and the question you are trying to answer and so on. The reason that we push transparency so much is that it is important that people understand what kind of model it is. That helps people understand, challenge, scrutinise and improve the model, but it also helps people say, “In fact, if we want to understand the problem, this might not be the right kind of model to use.” There may be a different kind of model that is, in this scenario, more behavioural. We think that that debate should happen in the open, not behind closed doors. That is why we push so hard for transparency.
Q2847 Chair: I understand that. All I would say is that when it comes to published statistics you have regard not just to transparency, but to whether there is a reasonable basis for how they are collected and for the assumptions made. As the topic of modelling is very important, it may be that in the future some form of regulatory oversight, not to suppress but to comment, would be appropriate.
Ed Humpherson: I am always happy to have my remit extended by parliamentary advice.
Chair: On that note, thank you very much indeed, Ed Humpherson and Dr Sarah Scobie.
Examination of witness
Witness: Professor Medley.
Q2848 Chair: I now invite our next witness to join us. We are very pleased to welcome Professor Graham Medley, who is the chair of SPI-M, the scientific pandemic influenza group on modelling, which feeds into SAGE, in which Professor Medley has also participated. Thank you very much indeed for coming. You have appeared before the Committee several times, always providing great illumination to us.
We were hoping to be joined by one of the co-chairs of SPI-B, the scientific pandemic insights group on behaviour. Unfortunately, neither was available today, which is a shame because these questions are important. We are very grateful for your attendance, Professor Medley.
Perhaps I can start with this initial question. Thinking about the measures that were taken, and we know that it is policy makers who make decisions based on advice, would you be able to shine a light on what the Prime Minister said on 21 February when he talked about the alleged division between, as he put it, “gung-ho politicians and cautious, anxious scientists”? His implication was that that was overdone; that was not the right way to think about it. Was there a rupture between scientific advice and policy decisions that took place around the ending of the restrictions?
Professor Medley: I am afraid I am not going to be able to answer that. I am too far down the food chain. My role is to ensure that the evidence is in place. How that gets turned into advice, which is then given to politicians in their discussions with their many advisers and the decision makers, is not a discussion I am party to. My job is to organise the modelling evidence that then gets taken to SAGE.
Chair: Let us go into that, starting with my colleague, Graham Stringer.
Q2849 Graham Stringer: I was going to ask some questions of your colleague who has not been able to make it this morning, so it would be a bit unfair to ask you to answer. As you are on SAGE, I will ask a more generalised question about it. A lot of what SPI-B did was trying to influence human behaviour on a national level during the epidemic. If that had been an experiment in a university or a medical faculty, the people doing the experiment would have had to go to an ethics committee. Was there an ethical framework put around SPI-B when they were trying to change human behaviour?
Professor Medley: It is not my field of expertise, but I can speak from the point of view of someone who attends SAGE. We have an ethicist who attends SAGE who is, I think, outside SPI-B. The whole point though is that it is going to happen and the epidemic will have an impact on people. It is, to some extent, Government’s role to think about how that impact can be used to reduce the harm of the epidemic. The epidemic will cause harm one way or another, and the idea, I guess, is to come through it having caused the least harm either through the virus itself or through the measures imposed.
The other issue about experimentation, or experiment or understanding, is, again, a general one. I and others have been a little frustrated by not being able to do more controlled trials to understand what works and what does not work as we have gone through the epidemic. There are deep ethical issues around that, as you say. The epidemic is going to happen, so an ethical understanding has to be put against that backdrop.
Q2850 Graham Stringer: I understand that. It is a fair point, but then that has to be studied and tested against the point you make about whether it is likely to do more harm than the epidemic.
I am slightly reluctant to carry on because you are not the chair of SPI-B, but I will ask this. The real point was that SPI-B was using very hard-hitting emotional messaging—the words they used—in order to change the perceived level of personal threat among those who were complacent. That is quite a tough gig, isn’t it? I would have thought that you needed to explicitly discuss that with an ethics committee. Were you aware that that discussion took place?
Professor Medley: I am not aware, but I am not sure I would be. That is really outside my area.
Graham Stringer: They were questions I meant for the other person who could not make it, but thank you for trying to answer them.
Chair: We might find a way to engage with the SPI-B chairs to see if we can pursue these important matters. Let me turn to Chris Clarkson and then Dehenna Davison.
Q2851 Chris Clarkson: Apologies, Professor Medley, because my questions were mostly aimed at SPI-B as well. I will ask you in a more general sense as a member of SAGE if you can give us any context. SPI-B cautioned against the unintended consequences of any change to the rate of testing or the availability of testing. Do you know whether that had been factored into any thinking before the Government made their announcements?
Professor Medley: I do not know. Behaviour is important in terms of transmission, from my point of view anyway, and it is influenced by policy but not determined by it. The unintended consequences are something we are all particularly aware of and thinking about. In this context, it means that if you remove all restrictions the consequence might be that people feel more fearful and change their behaviour in the opposite direction. Human behaviour is largely unpredictable. As politicians, you are far more au fait with that than I am.
Chris Clarkson: It is probably a modeller’s worst nightmare, to be honest. Thank you very much for trying.
Q2852 Dehenna Davison: You will be pleased, Professor Medley, that my question is intended for you. We have heard that the Government plan to scale down the ONS infection survey despite SAGE advocating for it to continue. If it was scaled down, and three, four or five months down the line a new variant emerged, what impact would that have on the epidemiological modelling?
Professor Medley: We have been, in some ways, fortunate for the past 18 months that the community testing—pillar 1 and pillar 2 testing—that has been put in place has been a really good signal for community-level transmission. There are generally two ways in which you can use that kind of testing. One is as a surveillance tool, but it is not a very efficient surveillance tool. You have to test an awful lot of people who do not have it.
It is used mostly as a control measure. Testing people, and identifying those who are positive, reduces the risk of onward transmission. That is its primary aim, given the level of investment that went into it. The ONS community infection study is only to do surveillance. It is only to measure the amount of transmission and the prevalence in the community. It has been a primary source of information. It is delayed compared with community testing, but on the other hand it provides much more detailed information. Its household basis means you can measure what is called the secondary attack rate, the proportion of people in the household who get infected if one person becomes infected. It has been used to look at risk of transmission, or risk of infection, having had previous strains—reinfection—and so on, as well as providing surveillance for different variants.
It is certainly a critical source of information. Its delay limits its immediacy. We have seen with the Omicron wave that the period from beginning to end is measured almost in days as it develops, so it is unlikely, I suspect, that the ONS study will give us the same level of awareness of both this variant and future pandemics. If we cannot stand up a response to the next Covid variant, we will not be able to stand up a response to a future pandemic. There will be a delay, but we shall have to see whether or not that is critically important. Personally speaking, I think the level of investment in community testing purely for surveillance purposes is disproportionate.
Q2853 Dehenna Davison: When you said there would be a delay, would that be a delay in picking up the spread patterns of new variants?
Professor Medley: Yes. For the Omicron wave, we knew it was coming because we had an alert from South Africa. For the Delta wave, we knew it was coming because we had an alert from India. We did not know the Alpha wave was coming. That was first detected in Kent, and it was seen because of changes in patterns in community testing. I am not sure at what point the ONS survey picked it up, but there will have been a one or two-week delay before it detected that variant.
Dehenna Davison: Is it fair to say that you hope the Government might reconsider and keep the level of—
Professor Medley: Sorry, can you say that again?
Q2854 Dehenna Davison: Is it fair to say that you would hope the Government would reconsider and keep the scale of the survey as it is rather than reducing it?
Professor Medley: I and others made it very clear in SAGE and in SPI-M that the ONS community infection survey is worth its weight in gold for understanding the epidemiology and putting us into a position to be able to respond quickly to a new variant. If there are concerns about its cost-effectiveness, my temptation would be to make it more effective by potentially adding other viral infections and finding other uses for it. As a source of longitudinal data, being able to go back to the same households to retest and re-ask those questions, it is pretty much invaluable.
Dehenna Davison: Thanks, Professor Medley.
Q2855 Aaron Bell: Could we turn to the specifics of the modelling of the Omicron variant before Christmas? When you set out to model the wave, what key uncertainties were there in the available data, and could some of that information have been known at the time?
Professor Medley: There is going to be a general question about what constitutes evidence. We first knew about the Omicron wave, as everybody else did, in media reports and so on at the end of November/beginning of December. It was very clear quickly that it was in the United Kingdom and was growing rapidly. In the first week of December, we knew that it was going to replace Delta, and all the characteristics suggested that we were going to have a massive wave of infections. The question then was to what extent it would result in disease.
The key issues, the ones that we included and thought about, were severity and the impact of vaccines. The intrinsic severity of the virus—how dangerous it is in an unvaccinated person—and how the vaccines and history of infection modify that were the two key things. We also had a suspicion that the virus had a different generation time, which is the time between one infection and the next. We had no clear information on that at all. Omicron has a shorter generation time, which means that the rapid growth rate did not indicate high direct transmissibility.
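[Editorial aside, not part of the witness's evidence: the point about generation time can be illustrated with the simple exponential-growth relationship R ≈ exp(r·Tg), which follows from the Euler-Lotka equation when the generation interval is close to a fixed value Tg. The growth rate and generation times below are hypothetical; the sketch only shows that the same observed growth rate implies a lower reproduction number when the generation time is shorter.]

```python
import math

# Hypothetical observed growth: cases doubling every 2.5 days
doubling_time_days = 2.5
r = math.log(2) / doubling_time_days  # exponential growth rate per day

# R = exp(r * Tg) for a roughly fixed generation time Tg
for generation_time_days in (5.0, 3.0):  # e.g. a longer interval vs a shorter, Omicron-like one
    R = math.exp(r * generation_time_days)
    print(f"Tg = {generation_time_days} days -> implied R = {R:.2f}")

# Same observed growth, shorter generation time -> smaller implied R,
# i.e. rapid growth need not indicate high intrinsic transmissibility.
```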
Q2856 Aaron Bell: You assumed initially that it was intrinsically as severe as Delta. Is that correct?
Professor Medley: No, the modelling on 13 December, two weeks after we knew about it, included severity down to 10%. What the modelling showed though was that the severity is not necessarily the most important driver of the wave of disease. In the United States, for example, they had the same Omicron wave but it resulted in a level of hospitalisation and death worse than their second wave. Although severity was important, and we included that, it still suggested that even with much lower severity there would be a challenging wave.
Q2857 Aaron Bell: You basically use an SIR model, but you had to make some big assumptions about what susceptibility meant because of vaccination. Was that the complication?
Professor Medley: Yes. They are essentially SIR-type frameworks that the modellers are using, but they are covered in heterogeneity. Age, place, history of infection and vaccination history are critical heterogeneities in that. It does not look much like an SIR model any more.
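[Editorial aside, not part of the witness's evidence: a minimal sketch of the basic SIR framework Professor Medley refers to. Parameter values are arbitrary; the models used in practice layer heterogeneity by age, place, vaccination and infection history on top of this structure, which the sketch omits.]

```python
# Minimal deterministic SIR model (arbitrary parameters, illustration only).
# S, I, R are proportions of the population; forward-Euler integration.

def run_sir(beta: float, gamma: float, s0: float, i0: float, days: int, dt: float = 0.1):
    s, i, r = s0, i0, 1.0 - s0 - i0
    for _ in range(int(days / dt)):
        new_infections = beta * s * i * dt   # transmission term
        new_recoveries = gamma * i * dt      # recovery term
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
    return s, i, r

# Example: R0 = beta/gamma = 2.5, 5-day infectious period, 0.1% initially infected
s, i, r = run_sir(beta=0.5, gamma=0.2, s0=0.999, i0=0.001, days=120)
print(f"After 120 days: S={s:.3f}, I={i:.3f}, R={r:.3f}")
```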
Q2858 Aaron Bell: The statement SPI-M made on 15 December presented scenarios. We should be clear that they were not forecasts or predictions, and you have been clear about that as well; they assumed the plan B measures, which are what we then proceeded with, more or less. The scenarios were that infections would peak between 600,000 and 2 million a day between late December and early February this year, and that hospitalisations would peak between 3,000 and 10,000. The reality is that cases peaked at 212,000. You had a peak of 600 to 6,000 deaths between mid-January and mid-March; the peak, depending on the measure, was around 285.
Obviously, we did a lot better than that. With the benefit of hindsight, how useful would you say that statement was for policy makers, given that the reality was substantially lower? You can argue that it is within an order of magnitude. I am obviously asking with the benefit of hindsight.
Professor Medley: I can talk about the modelling. I cannot really talk about the use of modelling. It is quite clear that in the past year we have done a much better job of communicating the uncertainty and of communicating the value and use of modelling to people in the Cabinet Office, for example. I believe that they understand what the models mean and how to use them, given that the decisions that were taken were precautionary in nature: the accelerated booster programme, putting hospitals on standby for emergency wards, and so on. There was a precautionary note in the decisions that were taken and the policies that were announced. Clearly, the people who were making the decisions did not believe that those were predictions, I suggest.
Aaron Bell: Right.
Professor Medley: I am happy to explain why the models are at variance with reality, but I cannot really talk about how they were used.
Q2859 Aaron Bell: Why were they at variance with reality? Is it because of the shorter time of the—
Professor Medley: The generation time is one of the critical issues.
Q2860 Aaron Bell: We peaked a lot earlier as well as a lot lower. We peaked earlier than you suggested.
Professor Medley: Yes. There was substantial behaviour change, of the kind of scale we saw before during a lockdown, as plan B was introduced. With the announcement on 12 December for the accelerated boosters we did not know how that was going to roll out or what impact those boosters would have.
At the time, at the beginning of December, everything we knew about Omicron was not good. What happened subsequently was that everything turned out well, but that does not mean to say it was going to happen. Our job is to lay out the landscape of possibilities to SAGE. Inevitably, our worst case is always going to look worse than reality. Hopefully, the lower case looks better than reality, but in this case it did not.
Q2861 Aaron Bell: I will come back to the specifics of the scenario in a minute, but the implication of what you presented on the 15th was that people were expecting the Government to bring in further restrictions and to go to either step 2 or step 1 of the road map. They obviously did not. There was a big debate in Parliament. Do you believe the Government had further information within the next week? Was there new information from SPI-M, or from new data going in, that justified that, or was it essentially a gut call by the Government?
Professor Medley: Again, you have to ask the Government.
Q2862 Aaron Bell: Did you provide fresh modelling that suggested it was looking better before Christmas?
Professor Medley: I will have to check. We had a meeting of SPI-M on the 15th and then of SAGE on the 16th. There was some discussion about how you measure severity or what severity means on Sunday the 19th, but there was no further modelling that went to Government. There was more modelling that came out during the week of the 20th that will have played into some kind of decision making. In terms of what evidence the Government had in front of them or how much weight they gave different forms of evidence, including the economic and the social, you will have to ask them.
Q2863 Aaron Bell: Can I turn to your quite famous Twitter discussion with Fraser Nelson on the 18th?
Professor Medley: I wish you would not, but please do.
Q2864 Aaron Bell: I was going to say that I commend you for going out there and explaining SPI-M’s position on a public forum like that in the face of challenge. You started off by saying that the point being missed is that these scenarios are not predictions, which we have already discussed. Fraser then goes on to say, “Are you exclusively modelling bad outcomes?” and you say, “We model what we are asked to model.” Obviously, when you are taking a precautionary principle, the worst-case scenario is important. At what percentile are you setting this worst-case scenario? What is reasonable? Fraser thinks you have an inbuilt negativity bias. What would you say to that?
Professor Medley: Our position is that the worst thing for me as the chair of the committee would be for the Government to say, “Why didn’t you tell us it could be that bad?” Inevitably, we were always going to have a worst case which is above reality.
There is a communication problem in the sense that the media pick up on the worst cases and people treat them as predictions even though they are not. The modelling is there to understand the process and what is going on. We know we cannot accurately predict the numbers, but we can give insight into the processes that determine the outcomes. The question of negativity bias somehow suggests that we have a policy outcome in mind, but we do not. My job is to provide the evidence to SAGE. The models are an input to SAGE, not an output from SAGE. The measures that are taken affect me and everyone else on SPI-M as much as anyone else. Negativity bias suggests that predicting high numbers of cases is somehow negative, when in fact you could say it is positive because it means that Government decisions will be appropriate.
Q2865 Aaron Bell: Further to what Fraser said, Professor Robert Dingwall, who was until recently on JCVI, said that what you had said on Twitter revealed “a fundamental problem of scientific ethics in SAGE,” arguing that your team produced an “unquestioning response to the brief.” Do you think that is unfair?
Professor Medley: Absolutely. We are an independent group of academics. We do not get paid for what we do. We volunteer the expertise that we have to support Government decisions.
Models cannot be policy neutral. There is always a policy decision somewhere in the model even if it is, “Do nothing”, which is a policy decision. By far, the most useful modelling is then done in discussion with the people who are going to use it. That is what was meant by that; there is no point in us modelling something in which decision makers have no interest whatsoever. We are, to some extent, guided in thinking about the scenarios and the issues and the questions that the Government have. For example, the Government have said, “It is hospitalisations that we are most interested in.” Therefore, we make sure the models produce hospitalisations as an outcome.
We are not told what to model. We can model whatever we like, but given that we are doing it to inform decision makers, we ought to discuss with the decision makers what they are interested in.
Q2866 Aaron Bell: If we are talking about worst-case scenarios though, we need to know how likely that is. You say they are not forecasts, but do you assign a probability or a percentile probability to where the model sits? We cannot take precautionary measures against something that has a one in 10,000 chance of happening, but we can against something that has a one in 20 chance of happening. Do you give that level of granularity? I appreciate that it will be an estimate, as all of it is. Do you look at that?
Professor Medley: No, we do not, and we do not on purpose because of the unpredictability. Let me approach it another way.
We have three general areas of uncertainty when we are making a model of the sort that we were making in December about the Omicron wave. One of them is relatively straightforward biological uncertainty: how dangerous the variant is, how well the vaccines will work, and so on. We could potentially do some expert elicitation, if we had the time, to generate some kind of probability distribution to say that it is very unlikely that Omicron is twice as dangerous as Delta, and then those scenarios would drop down in terms of probability. But we do not have time to do that, so we end up just looking at a broad range of possible outcomes (for severity, the Warwick modelling went from 100% down to 20% in that paper) without assigning any prior probability, because we have no information about that.
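[Illustration: presenting a range of severity scenarios rather than a single central estimate can be sketched as a simple grid, converting the same hypothetical wave of infections into peak admissions under several assumed severity multipliers. The wave size and baseline admission risk are invented for illustration; they are not SPI-M or Warwick figures, and the calculation ignores lags between infection and admission.]

peak_daily_infections = 1_000_000       # hypothetical peak of the infection wave
delta_admission_risk = 0.02             # hypothetical admission risk per infection for Delta

# Severity scenarios expressed relative to Delta, with no prior weighting attached:
for severity_vs_delta in (1.0, 0.5, 0.2):
    peak_admissions = peak_daily_infections * delta_admission_risk * severity_vs_delta
    print(f"severity {severity_vs_delta:.0%} of Delta -> "
          f"~{peak_admissions:,.0f} admissions per day at the peak")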
Q2867 Aaron Bell: But one of them will get labelled as the central scenario, and that has happened in the past.
Professor Medley: Not by us. The other uncertainty we have is policy because we do not know what the Government are going to do. We do not know what decision makers are going to do. We do not know what you are going to vote for. Then we have behavioural uncertainty. We do not know what the public reaction will be. Given that level of uncertainty, it is very difficult, if not impossible, for us to say, “Right, this scenario is our favourite.”
Added to that, in terms of providing evidence of this sort to decision makers, it is the same as when you buy any kind of savings plan; the savings plan will give you a range of scenarios and say, “If interest rates are this, you will get this much, and if it is this you will get this much.” They are legally not allowed to say, “This is the one that you should look at, because this is what we think is going to happen.” We take the same stance of saying to the decision makers, “This is the range of possibilities, these are the drivers for those possibilities, and you will have to take the uncertainty into consideration.”
Q2868 Aaron Bell: Can I pick up on the bit about human behaviour change? We are going to hear from Dr Holten-Møller from Denmark in the next session. She said that the reason the Danish predictions for Omicron more closely matched reality than the British ones was the attention the Danish groups paid to behavioural changes that were not mandated because they looked at observations of what happened before. Why was that not in the UK’s modelling?
Professor Medley: We have looked at it. We would like some data to be able to drive it. As far as we can tell at the moment, there are no data streams available that tell us what transmission is today. We will know what it was today in two weeks’ time from the outcome of transmission, but we cannot say what it is today.
My understanding of the Danish model, which you will hear about, is that it is relatively phenomenological. It just says that when admissions get to some level, behaviour changes. We have not included that. Maybe that is an error. On the other hand, we have seen dramatic behaviour changes that were completely unforeseen. The pingdemic at the end of July was by far the most effective three days in reducing transmission that we have seen throughout the whole epidemic, much more effective than any of the lockdowns that were introduced. There was no indication that that would happen. In fact, the data streams that we have—for example, Google mobility—do not really show any change at all. It was potentially a testing-driven change of behaviour in the way that people went about their contacts.
The other important point is that the epidemic is dynamic; it changes. People’s responses to the situation in March 2020 were very different from those in November 2020, and very different again in January 2021. As the epidemic progresses and people’s understanding about the epidemic changes, so their response either to policy interventions or to messaging also changes. Rather than try to second-guess what people are going to do, we tend to presume that people will carry on doing what they are going to do and decision makers will have to include that uncertainty.
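[Illustration: the admissions-threshold behaviour rule that Professor Medley attributes to the Danish model can be sketched as a simple switch: when daily admissions cross a threshold, the effective transmission rate is scaled down, and it recovers when admissions fall back. The threshold and the size of the reduction are hypothetical.]

def contact_scaling(daily_admissions, threshold=1500, reduction=0.3):
    # Multiplier on transmission: 1.0 normally, lower when admissions
    # exceed the threshold (people voluntarily cut their contacts).
    return 1.0 - reduction if daily_admissions > threshold else 1.0

for admissions in (400, 1200, 1800, 2500, 900):
    print(admissions, "admissions/day ->", contact_scaling(admissions))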
Q2869 Aaron Bell: The Government can therefore quite reasonably conclude that people have already stopped going to the pub so they do not need to mandate them not to go to the pub, which is perhaps what happened.
Professor Medley: That happened on 15 December. There were signs of behaviour changes. We saw much more clearly in the subsequent weeks that there had been a quite dramatic shift in behaviour around 15 December, and that might have fed into Government decisions, but you will have to ask them.
Q2870 Aaron Bell: You have been very generous with your time. I speak as somebody who used to model in a different field, and I believe that modelling is the best tool we have for pandemics. What lessons do you think we need to learn from across the whole pandemic?
We had a debate in Westminster Hall in January. The points I made, if I may be so modest, were that there was a failure to understand human behaviour that could usefully be included; a failure to make code open source, so that other people could play with the models in the same way that we heard in the first session from Ed Humpherson about making statistics open source; and sometimes a failure to put in the most up-to-date parameters, as we saw in the summer with the vaccination data not being in the models when we were being asked to vote on step 4. What would you say are the biggest lessons the modelling community can learn from the pandemic?
Professor Medley: There is a whole load of lessons about the structure of the models technically, and that includes the behaviour aspect. Human behaviour is very difficult to predict. If you could do it, you would make a fortune on the stock exchange. That is the challenge.
In terms of open source, at the beginning of the pandemic there was a challenge. Most of the groups now make their code open source. A model is not just code, but I completely understand the need for transparency, and I think that is very important. There has been a communications problem. My role is to provide the information for Government and not the public. But then, of course, the models get made public, and I do not think there is anyone who has the role of talking about the modelling to the public.
Q2871 Aaron Bell: Who should have that role? Should it be someone from your community, or should it be a politician?
Professor Medley: The Science Media Centre has been very important in trying to get scientists to be in touch. I have done briefings with journalists. I have talked to journalists throughout the pandemic, and I have answered most of the questions I have had from the public. At the same time, I am advising Government or I am in the process of advising Government. Although I am an independent member of SAGE, as far as the public are concerned, I am not an independent adjudicator or an independent commentator on the models.
It is a very good question. Who should have that role? I do not know. On various social media platforms, there are a lot of people who look at the data at a very detailed level as it comes off the dashboard each day, and they have performed a huge public service, but there really is not anybody who has done that for the modelling.
Aaron Bell: Thank you. As you pointed out earlier, you are all volunteering your time on this, and I and others are extremely grateful. Even though there may be criticisms of specific models, we have been very well served by the scientific community throughout the pandemic.
Professor Medley: Thank you.
Chair: Thank you. That is echoed by all of us.
Q2872 Rebecca Long Bailey: Thank you very much for speaking to us today, Professor. I have a lot of questions about behaviour, but I will park them for now until we can speak to your colleague. Very briefly, can you let us know if there are any current trends emerging in the data as you see them at the moment, particularly in light of the removal of a number of restrictions recently?
Professor Medley: Community testing acts as a control measure and also as a surveillance measure. One of the things that happens as you change surveillance measures like that, particularly removing the testing, is that not only are you losing some control, because people are no longer having to self-isolate, but you are losing the surveillance as well. This is a particularly challenging time for the epidemic in that, as the data streams change, because we do not quite know how people are going to respond to the changes, it will take several weeks before we settle down to a situation where we have a much better idea about what is going on, and the ONS community infection survey will be critical in understanding that.
Q2873 Rebecca Long Bailey: Are you concerned that there is certainly less emphasis on community testing and home testing now?
Professor Medley: Personally, I have throughout tried to avoid second-guessing and judging policy decisions, because it is the decision maker’s job—your job—to balance the harms caused by the virus and the harms caused by the interventions. As we move through the epidemic, inevitably we come to a point at which many of the measures that were put in place to prevent the health service falling apart will become redundant, and the timing of the removal of those is a judgment call that has to be made by the decision makers. Whether this is the right time, to some extent we will see in the coming weeks. Inevitably, the removal of the requirement to self-isolate will result in an increase in transmission, or at least in transmission not falling as fast as it has, but whether that increase is a significant one remains to be seen.
Q2874 Rebecca Long Bailey: Finally, with regard to variants of concern—new mutations—and the relaxation on home testing and community testing, do you have significant concerns? One of the things that certainly struck me in our previous sessions on this Committee is the fact that, even when you get to the PCR stage when people go for a more intensive test, only a small proportion of those go for genomic sequencing testing. If that is only happening with a small portion of PCR tests, but you have a dramatic reduction in people taking lateral flow tests, to take them to the PCR stage, are you concerned that we could see a whole range of mutations but not really be aware of them at all in the UK?
Professor Medley: UKHSA is thinking about the surveillance question. Hospitals are going to remain the main source of genetic material. That includes people who are in hospital because they have a Covid infection as well as people who happen to have Covid while they are admitted for another purpose. PCR of admissions to hospital would continue to provide genetic surveillance. It will not be as good as testing a million people a day, or whatever is being done at the moment, but it is certainly much better than nothing.
Rebecca Long Bailey: Thank you.
Q2875 Chair: I have a couple of final questions, Professor Medley. You talked in the answers you gave to Aaron Bell about producing scenarios but not the probabilities, and that is advice to policy makers. The policy makers have to make a decision, and that has to be based on an assessment of the probabilities. How are they to do that without expert advice?
Professor Medley: The decisions are very challenging. That really is a question you are going to have to ask them. The SAGE papers and the SAGE documents that are produced lay out the uncertainty and, to some extent, summarise the evidence. How that gets turned into advice and how Governments and decision makers can be guided is not something I have been involved in.
Q2876 Chair: I understand that you have simply set out scenarios and have not ascribed probabilities to them, but shouldn’t you have done? Haven’t you left the policy makers in the lurch? You have given a range of scenarios, but said, “We’re not going to make any assessment of the likelihood.” You must know surely, better than lay people, which are likely and which are unlikely, which is essential to have to crystallise it into a policy decision.
Professor Medley: We could guess. Our guesses might be 10% more accurate than other people’s, but they are still guesses. In a period of uncertainty, it would be wrong for policy decisions to be made on the basis of the guesswork of a few people.
Q2877 Chair: But you have to make a decision. You have to decide whether you are going to lock down or not. I might question whether “guess” is the right word, but at least it is literally an educated guess if we take it from people who have studied these things. If it is not informed by people who have studied them, it is an uneducated guess, which is surely worse.
Professor Medley: There are two ways to answer that. We have in the past been asked by people from the Cabinet Office, “Yes, but which is going to happen?” Our immediate response has been, “We don’t know.” If you ask, “What is the generation time of Omicron?”, we cannot say; we do not know what the generation time of Omicron is. That is the right response. On the other hand, more recently, we have not been asked that question by the Cabinet Office, so people are not saying to us, “Yes, but which is it?”, partly because as soon as you point to one of the scenarios and say, “But, actually, this is the one we think,” the decision makers will automatically focus on that one even though it might not be true. The decision has to include the uncertainty. That is what I am trying to say.
Q2878 Chair: Aaron Bell referred to your Twitter exchange with Fraser Nelson. It was illuminating because it brought to light something that none of us knew before, which was that your colleagues in the modelling group provide scenarios but do not give any steer as to which are the most likely ones, and it is left completely, unsupported, to politicians and other policy makers to decide. That is the state of things. That is how it is.
Professor Medley: Occasionally, we have said in the documents, “This is what we think.” Famously, we did that in September and we got it wrong. If we knew, or if we had some idea of how well we knew, so that we could give some probabilities, we would. But we do not. Pointing to one of the scenarios and saying, “This is the one we think is going to happen,” means that, in a sense, you would lose the uncertainty. I gave the example of buying a savings plan. A range of interest rates is given, but the company is not allowed to say, “Yes, but this is the one you should look at,” because that is the one that everyone would look at.
Q2879 Chair: I understand that, but you have some advantage. There is a very long-standing concept in economics of comparative advantage. I would say that members of SAGE and members of the modelling community have a comparative advantage in knowing which is the more likely compared to lay people.
Professor Medley: Quite possibly, but we have not tested that. My suspicion and my hope is that within Government they use the results of the modelling, and they use the outputs of those scenarios in further models and further risk analyses, including the economic information, which we do not see, to come up with support for the decision that is actually made.
Q2880 Chair: I have a final question that has arisen from what you said, again in answer to Aaron Bell. You said that one of the questions that you might anticipate from the people who commission your work is, “Why didn’t you tell us it was going to be as bad as it turned out?” The implication was that that was a more pressing question than the flip side, which is, “Why didn’t you tell us that it could have been as benign as it was?” That is interesting for the Committee in looking at whether there is an unconscious bias in scientific advice. It would tend to be more towards caution and to pessimism because you are reporting that you perceive the pressure is, “At least tell us about the worst that can happen,” and that is more pressing than the best that can happen.
Professor Medley: Part of that is the way it is covered in the media. If you read the actual papers, the SPI-M consensus is far less alarming than the coverage. On the other hand, that is where the decision making has to be done. The interest is naturally more in the outcomes in which decisions will have to be made. Many times, we have new variants. We have BA.2, for example, which is now taking over from BA.1, but there is no indication that that is going to cause any change in health outcomes from current transmission, so that is an outcome. We have done the modelling for that, and there you are. There are no decisions required because of it.
Q2881 Graham Stringer: I am sceptical about modelling in all sorts of contexts. Can you make the case for even starting to do the modelling as against simple extrapolations against one or two variables, which would be simpler and a lot less costly? Why is modelling, with all the computer power, better than just drawing a few graphs with a few variables?
Professor Medley: It is always important to look at complicated models and go back to the back of an envelope—
Graham Stringer: That is the question that I am asking.
Professor Medley: —and make sure that they make sense. We try to have a range of different models in SPI-M to ensure that the more complicated models are backed up by much simpler models. The whole point of SPI-M is consensus. It is a result from the modelling rather than from the models. That will include people who are doing, exactly as you say, very simple extrapolations. It is important though because we do not know how important the heterogeneities and variations are—for example, regional variations. If we had a very simple model structure, and we did not include regions, we would not know how different regions could be from each other. It is only by including regions that you can see that there will be significant regional heterogeneity in both timing and size of peaks.
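[Illustration: the “back of the envelope” check described above can be as simple as fitting an exponential growth rate to a short run of recent daily case counts and extrapolating a week ahead. The case series is invented for illustration; a real check would use published case data.]

import numpy as np

cases = np.array([12000, 14500, 17600, 21200, 25800, 31000, 37500])  # hypothetical daily cases
days = np.arange(len(cases))
slope, intercept = np.polyfit(days, np.log(cases), 1)   # log-linear (exponential) fit

doubling_time = np.log(2) / slope
week_ahead = np.exp(intercept + slope * (days[-1] + 7))  # naive 7-day extrapolation
print(f"doubling time ~{doubling_time:.1f} days; "
      f"7-day extrapolation ~{week_ahead:,.0f} cases per day")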
Q2882 Graham Stringer: I suppose to cast the question in another way, to use an old saw, if you have a hammer, everything begins to look like a nail. We have all these modellers and computers, so we had better use them for something. Is using the models helping the Government to make what are very difficult decisions, which are, and have been, matters of life and death, or would it not be simpler to be simpler?
Professor Medley: Data always trumps models. Models become more important, and I would say essential, when we do not have data and we have to make assumptions. For example, how dangerous is Omicron? We did not know that on 27 November, so you have to include some variability in looking forward to what is going to happen. Whether that information is then useful to the Government in making very difficult decisions is something that the inquiry is going to delve into. In all honesty, you need to ask the decision makers what was critical in forming their decisions.
Graham Stringer: Thank you.
Chair: Thank you, Professor Medley. Echoing what Aaron Bell said, we are very grateful for your service during the pandemic, in addition to your day job as an active researcher. The point of these sessions is to take evidence from you, and you have illuminated what is the practice, and then it is up to us to make recommendations to the Government if we think that there should be changes in the practice. You have been very clear in your answers, and we are very grateful for your attendance today.
Examination of witnesses
Witnesses: Dr Ali and Dr Holten-Møller.
Q2883 Chair: We now turn to our final panel of witnesses. I am pleased to welcome from Denmark, virtually, Dr Camilla Holten-Møller, who is chair of the expert group for mathematical modelling at the Statens Serum Institut in Denmark. Thank you very much indeed, Dr Holten-Møller, for joining us. I hope you can hear us okay.
Dr Holten-Møller: Yes, I can hear you fine.
Q2884 Chair: Good. There has been anticipation of your evidence in some of our earlier discussions, so we look forward to going into that in more detail.
I am pleased to welcome Dr Raghib Ali in person. He is the senior clinical research associate for the MRC Epidemiology Unit at the University of Cambridge and an active clinician. Perhaps I could start with a question to Dr Ali, referring back to our first panel, on the recording of deaths and how we are to know how many people have died of or from Covid. We talked about the weight that is placed on the certification of deaths by doctors. You heard what was said, and the most reliable figure comes from that. With your clinical hat on Dr Ali, how reliable and accurate, in your experience, is that certification?
Dr Ali: Thank you, Chair. I want to make three points in relation to that. When we fill in a death certificate for any death, we have what is called part I and part II. Part I records the underlying cause of death; part II records something that may have contributed to it but was not the direct cause of death. In the example that Mr Stringer gave, if a patient comes in with a condition that caused either their admission to hospital or their death, and incidentally had a positive Covid test, that would not be recorded on the death certificate.
If they came with a heart attack where their positive Covid infection could have contributed to their heart attack, it would be regarded as a part II. For ONS purposes, that is recorded as a mention on the death certificate part II. If they came in with Covid pneumonia and died of Covid pneumonia, that would be recorded in part I. It would be shown as “due to Covid” on the death certificate. If they also had diabetes, for example, that would be put as a part II.
The most reliable figures that we have are from ONS, and both those figures are produced every week. They show the “due to”—the underlying cause—and the mention, “contributed to.” From that, we know, as was mentioned, that about 140,000 people have died due to Covid as the underlying cause, with an additional number where Covid was mentioned on the death certificate and a further number where there was only a positive test.
I am confident that the system has good integrity partly because of my own experience on the frontline in all four waves, seeing how death certificates have been completed and seeing how we code patients when they come in. During Omicron, the previous pattern was reversed. Previously, the 28-day positive test measure would underestimate the figures, particularly in the first wave when we had much less testing.
In the Omicron wave, when we had a lot more community testing and a much higher proportion of people had Covid as an incidental finding, that pattern was reversed. The ONS figures were lower than the 28-day figures, and that is a good check on the integrity of death certification. It also matches quite closely with excess death figures. We have had about 120,000 excess deaths. I have full confidence that the figures we have seen have been accurate.
Chair: Thank you. That is very helpful. I will turn to my colleagues, starting with Chris Clarkson.
Q2885 Chris Clarkson: Dr Holten-Møller, could you briefly explain for us the science advice structures in Denmark, and specifically who generates epidemiological models and how those are communicated to the Government?
Dr Holten-Møller: We have initiated an external expert modelling group, which basically consists of experts from different universities in Denmark, and I head that group. We are given tasks by the Health Ministry in Denmark to do specific modelling, which could be reopening scenarios or the effect of different vaccination programmes. SSI, the Statens Serum Institut, which is the public health institute in Denmark, also holds a chair on what we call the epidemic committee, which basically consists of different Ministries and all the health authorities. They give advice to the Government regarding how to proceed in the handling of the Covid epidemic in Denmark. When SSI sits in the chair on the epidemic committee, they also include modelling results in the overall risk assessment and give that advice to the Government.
Q2886 Chris Clarkson: Thank you. Can you outline the key conclusions of the Danish epidemiological modelling of the Omicron variant and compare how they measured up with the real levels of transmission, hospitalisation and deaths in the country?
Dr Holten-Møller: The first models we did on the Omicron variant were on 17 December, and our approach was very similar to what you saw in the UK, basically modelling with different scenarios the uncertainties regarding Omicron, vaccine effectiveness, transmissibility and severity. We had two different severity scenarios; one was Omicron being 50% less severe than Delta, and the other was the same severity as Delta. Those we sent out on 17 December, which was the same day that further restrictions were implemented in Denmark, with the closure of cultural events and nightlife.
We did an update on 6 January, when we again took a look at the same predictions and plotted the observed development of the epidemic within those predictions or scenarios, and we could see that the scenario with Omicron being half as severe as Delta was more likely. What happened underneath that in Denmark is also important to mention. We saw that BA.2, the other sub-variant of Omicron, took over in late December/beginning of January, and that gave us some extra transmission, or an extra peak, that was not in the model results.
Q2887 Chris Clarkson: Thank you. You mentioned that you updated the model on the 6th and worked on the assumption that it was half as severe as Delta. How did you update your advice to the Government based on that?
Dr Holten-Møller: We did a public report, and it was a key part of the risk assessment that went out from the Statens Serum Institut to the epidemic commission. What was also in that risk assessment was the more benign picture of the Omicron variant being less severe. Even though we modelled new hospital admissions, we stated both in the model report and in the risk assessment that we expected those going to hospital to be less ill, with more being in hospital with a positive test rather than being ill with Covid. That was really important knowledge about the Omicron variant and the picture we saw in Denmark, in addition to the modelling. We did not specifically put the exact understanding of the clinical picture of the hospitalisations into the model, because that is really difficult to do, but it was in the risk assessment from the SSI.
Q2888 Chris Clarkson: Thank you. Why, in your opinion, were the Omicron scenarios modelled in Denmark more accurate than those modelled in the UK?
Dr Holten-Møller: That is difficult for me to answer. In the UK, you had multiple modelling groups giving different modelling scenarios. In Denmark, we only have one small modelling group giving the scenarios. In many terms, we had the same approach as in the UK. When I look at the models, we had the same approach regarding severity, transmissibility and vaccine effectiveness. It is very much the same approach.
Perhaps we have more detailed information in our models, especially, as was mentioned, the effect on transmission from behavioural changes, and we strove to put in the exact level of activity or restrictions that were imposed in Denmark—for instance, having schoolchildren going on early school holiday for Christmas and the closure of nightlife and cultural activities. That was specifically put into the models. We had an extensive booster vaccination campaign in Denmark where we managed to roll out 3.5 million booster vaccines before the end of 2021, which was a huge effort and was really important to put in the models.
Q2889 Chris Clarkson: Thank you. Finally, are there any lessons that the UK could learn for the production of epidemiological models and future variants based on experience from other countries like Denmark?
Dr Holten-Møller: I listened to Graham Medley, and I think we have had the very same difficulties in explaining the level of uncertainty in the models and the range of predicted scenarios. To a large extent, we struggled with the same issues in Denmark in terms of modelling, but it is equally important to give those scenarios to our decision makers and to point towards the risk assessment of new variants, which is key to understanding and acting accordingly in an epidemic situation and having the correct response. It is important that we give this picture to the decision makers whenever new variants arise.
Q2890 Chris Clarkson: To round off, are you sharing best practice with colleagues around the world—things that worked and things that did not work—and how you have modified your modelling over the course of the pandemic?
Dr Holten-Møller: We participate in a number of sessions. We have participated in meetings with the UK model groups and Scandinavian model groups, especially where we have shared knowledge about how to implement seasonal effects or behavioural changes. We often have discussions on the level of difficulty in doing that, especially given that the data to inform the models are different in different countries. We have those discussions in the model forum internationally.
Chris Clarkson: Thank you, Doctor.
Q2891 Chair: Before I go to Aaron Bell, I want to pick up on the discussion you probably saw us having with Graham Medley. The modelling group here has produced scenarios that it gives to policy makers. We talked about giving a weighting or a probability to the different scenarios. In Denmark, when you made your modelling advice, how did you tackle the question of which scenarios were more likely than others?
Dr Holten-Møller: We did one model report where we tried to give our advice on what was the most likely central scenario. It is the same situation as Graham Medley explained. We would rather not do it. It is very difficult for us as well to give exactly the uncertainty of different scenarios. It is equally difficult for us to give advice on that. We did it once, and it too turned out not to be correct, so we would rather not. In terms of modelling, what you need to do is to get more accurate models and try to have a smaller range in your modelling scenarios. You can only do that by adding detailed data information to your models.
Q2892 Chair: That is an interesting case study. Do you remember the occasion when you gave a central estimate and it was wrong?
Dr Holten-Møller: I think it was one of the Delta models we did in the fall. It was also in September, just as Graham Medley explained, when we pointed out a central scenario and it turned out to have a higher peak than we estimated.
Q2893 Chair: So you stopped doing that; you stopped having a central scenario from then on.
Dr Holten-Møller: Yes. We discussed whether it was the thing to do. We understand the politicians asking us to give a central scenario, but it is equally difficult for us to give that central scenario, simply because of all the uncertainties and different parameters, as was explained by Graham Medley.
Q2894 Chair: Indeed. You said that in responding to it you narrowed the range of scenarios. Could you say a bit more about that?
Dr Holten-Møller: That is what we did in the 6 January model report. We followed the observed development and looked at how well the different scenarios adhered to the realities, indicating to us which was the more likely scenario. We tracked the observed development in the model scenarios to see which was more likely in hindsight, and that was what we worked on further.
Q2895 Chair: You narrowed it in practice based on the dynamic experience of the infection.
Dr Holten-Møller: Yes. Another approach is that you sample your parameters. If you have uncertainties regarding the level of transmission or vaccine effectiveness, you can say, “Okay, the observed data shows us that it is within this range,” and then you can sample it many times in the model to try to give an estimate of what fits the observed data the best. This was also an approach that was used in Denmark.
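[Illustration: the sampling approach Dr Holten-Møller describes can be sketched as simple rejection sampling: draw the uncertain parameters from wide ranges, run them through a model, and keep only the draws whose output is close to what was observed. The toy mapping from parameters to a growth rate, the parameter ranges and the “observed” value below are all hypothetical.]

import random

random.seed(1)
observed_growth = 0.18                      # hypothetical observed daily growth rate

accepted = []
for _ in range(20000):
    transmissibility = random.uniform(0.5, 2.0)   # relative to a baseline (hypothetical range)
    vaccine_escape = random.uniform(0.0, 0.6)     # fraction of protection lost (hypothetical range)
    # toy mapping from parameters to a predicted growth rate
    predicted_growth = 0.10 * transmissibility + 0.15 * vaccine_escape
    if abs(predicted_growth - observed_growth) < 0.01:   # keep draws close to the observation
        accepted.append((transmissibility, vaccine_escape))

if accepted:
    mean_t = sum(t for t, _ in accepted) / len(accepted)
    mean_e = sum(e for _, e in accepted) / len(accepted)
    print(f"{len(accepted)} accepted draws; "
          f"mean transmissibility {mean_t:.2f}, mean escape {mean_e:.2f}")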
Chair: I understand. Thank you very much.
Q2896 Aaron Bell: Dr Ali, according to Steve Baker MP in the recent Westminster Hall debate I referred to earlier, he said that you, among others, had been asked by No. 10 to come in and help challenge the central received position from SAGE around modelling. How many times has that happened during the pandemic? How often have you been asked to do that?
Dr Ali: The first occasion was on the day that the second lockdown was being debated in No. 10. We were presented with data on the likely number of hospitalisations and deaths, and, on that occasion, from the data that I was presented with, I was convinced that it was necessary to bring in the second lockdown.
Q2897 Chair: That was September 2020, was it?
Dr Ali: It was the end of October.
Aaron Bell: We brought it in.
Dr Ali: Yes. I supported it on that occasion, based on the data that was available at that time. Subsequently, I was appointed as a Government adviser in a slightly different area, which was on Covid and ethnicity. From that time until now, I have had ongoing input to Cabinet Office discussions. I also worked closely with the Vaccine Minister, Nadhim Zahawi, back in 2021, and he asked for my advice, ahead of the Cabinet meeting on 20 December, as to my best estimate of what was likely to happen. We have had a lot of discussion this morning about giving weight to scenarios, and this is the point I made.
As the decision maker, not being given any weight is not very helpful, particularly when the range of scenarios is so wide. When you are talking about 200 or 300 deaths to 6,000 deaths and 2,000 admissions to 20,000 admissions, these are huge ranges. My feeling was that, of course, you do not have perfect information. On 18 December, I gave my advice to the Secretary of State for Education. I published it online and in the newspapers, and I was confident enough, on the basis of information we had then, to say that the most likely scenario was going to be the best-case scenario, or better, based on the experience of previous waves and based on what we had seen with behaviour change, particularly after vaccination was completed.
There is good data now that shows that household visiting, which is a key way that the virus spreads, was more determined by the level of cases and the level of risk than Government rules in place at the time. Between the second and third lockdown, household visiting fell significantly, even though it was legal, whereas in the third lockdown when it was illegal it increased significantly, even to above pre-pandemic levels. There was other evidence showing the same.
Q2898 Aaron Bell: To be clear, you did not give formal advice to No. 10 about Omicron; you gave it to the Secretary of State for Education. Are you aware of anyone giving that formal challenge process to No. 10 over the Omicron decision?
Dr Ali: I was told there was a Cabinet meeting, and I was asked to give my opinion. I gave them my opinion. Of course I don’t know what happened in the Cabinet meeting, because those minutes are not published. The decision was made not to bring in restrictions.
I made a number of points. One was that based on behaviour change it was very unlikely that the scenarios we were being presented with were going to materialise, from what we had learnt throughout the course of the pandemic so far. The second thing, which may be where I have an advantage as a clinician, is that I had already spoken to colleagues in South Africa and seen their information, and also to colleagues in the UK. From the early data we had, we already had fairly good evidence that Omicron was less severe at the clinical level. It was not definitive. That is why the point I made at that time was that there was still a degree of uncertainty, so we should wait a few more days before making a definitive decision.
Q2899 Aaron Bell: I have a huge amount of respect for Nadhim Zahawi and I am sure his views carry a great deal of weight in Cabinet, but doesn’t it strike you as a bit strange that the way to get your challenge into Cabinet is through a particular Cabinet Minister, rather than through a more formal process or by going directly to No. 10?
Dr Ali: I also give my advice to the Cabinet Office through the person I have contact with there. Rob Harrison is on the Covid-19 taskforce. I am given regular feedback based on the evidence that we have had over the course of the pandemic.
It is not for me to decide how the decisions are made. I have felt this is important when I have looked at the SAGE minutes. I know many of the people on SAGE, and Professor Medley, of course. They are all doing their best with the evidence that is available to them. It is a problem that there may not be enough clinicians on SAGE, particularly in relation to Omicron. Maybe that is why we were not as confident as we could have been as to the decreased severity when that decision was made.
An alternative history is that on 20 December a lockdown would have been brought in; it was a very possible outcome. We would have seen the outcome that we actually saw by mid-January, which in fact matched fairly closely with what was predicted for bringing in step 1 of the road map. If you look at where we were a month later with the number of cases, admissions and deaths, it was not far off what was predicted if we had brought in step 1 of the road map, which was basically a full lockdown except for schools. That shows that behavioural change—we have data now to show this—and contacts were similar to what they were during lockdown. We have seen it now, and I saw it in September or October when those predictions were made: I was confident there would not be a large wave in October, and I made the case publicly.
Q2900 Aaron Bell: Is there not perhaps a chicken and egg thing, in that by putting out those scenarios and creating the impression that there might need to be a lockdown, it actually caused the behavioural change?
Dr Ali: Of course, that is true. If every day in the media you are seeing people saying how terrible things are going to be—some of my colleagues from both the medical and scientific professions were in the media making that case—it will have an impact on people’s behaviour. That is where we need to get the behavioural scientists in, but what also has an impact is knowing that your friend, relative or colleague had Covid, and during Omicron that was very common. Every one of us knows someone who got Covid.
Aaron Bell: I got Covid at Christmas.
Dr Ali: I would say that has a significant impact as well. Between the second and third lockdown, maybe the warnings were not quite as severe as we saw pre-Omicron, but household visiting fell significantly. Again, it is because people knew people who were getting Covid at that time.
Q2901 Aaron Bell: You said just now that your assessment was the best-case scenario, or even better was where you pitched it. What do you think were the challenges facing the modellers looking to produce those scenarios in December 2021, and could those issues have been addressed at the time?
Dr Ali: What was known by then is that people’s behaviour changes. I do not think it makes sense to produce scenarios without taking that into account in the future. One of my main concerns about what has happened is that there will be a new variant, and scientists and doctors will come on TV and say, “This is bad and this is going to happen,” and people will say, “Well, you were wrong last time, and therefore I’m not going to listen to you this time.” That is very dangerous.
It is important that we as doctors and scientists acknowledge that mistakes were made, and that we learn from those mistakes and explain why they were made. That is really for SAGE to do and the doctors and scientists who made those comments. In future, given that we have seen it not just once—we saw it in July, October and December—we have enough evidence to show that behaviours change in response to risk levels.
Aaron Bell: That is the biggest challenge for modelling. Thank you.
Q2902 Rebecca Long Bailey: Dr Ali, what do you think UK modellers got wrong in the scenarios they produced over the Omicron wave of Covid-19?
Dr Ali: The range of the scenarios they produced was very wide. It was clear fairly quickly that the worst-case scenarios were unlikely to happen, even by 20 December. Maybe that information was fed back to the Cabinet by that time, but we could exclude those worst-case scenarios because the models were produced a week earlier, and even in that one week we had a lot of new information.
It is better to narrow the range of scenarios earlier. The figure of 6,000 got picked up by the media, and that is what we saw on all the front pages, so that was unhelpful. It would have been better to say by 20 December, “We have new information. The range is 200 to 1,000.” By then, we probably had that level of information. It is probably about more regular updates. One of my colleagues, Professor Karl Friston, calls it dynamic causal modelling. All the new data that comes in on a daily basis updates the model. It was not that accurate for previous waves, but for Omicron it was quite good. It takes into account behaviour change. We can learn from those different modelling approaches.
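[Illustration: the general idea of updating scenario weights every day as new data arrive can be sketched with a simple Bayesian update over two severity scenarios. This is only a sketch of daily updating, not dynamic causal modelling itself, and the scenarios, the Poisson observation model and all the numbers are invented.]

from math import exp, factorial

def poisson_like(observed, expected):
    # Poisson likelihood of the observed daily admissions under a scenario
    return exp(-expected) * expected ** observed / factorial(observed)

scenarios = {"high severity": 50, "low severity": 25}   # expected admissions per day (hypothetical)
weights = {name: 0.5 for name in scenarios}             # equal prior weight on each scenario

observed_admissions = [28, 31, 24, 27, 30]              # hypothetical daily data
for obs in observed_admissions:
    for name, expected in scenarios.items():
        weights[name] *= poisson_like(obs, expected)    # multiply in the day's likelihood
    total = sum(weights.values())
    weights = {name: w / total for name, w in weights.items()}   # renormalise daily

print({name: round(w, 3) for name, w in weights.items()})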
Q2903 Rebecca Long Bailey: Thank you. Dr Holten-Møller, is there anything that you think Danish modellers got wrong in the scenarios that they produced throughout the Omicron wave?
Dr Holten-Møller: What was also difficult for us to put in the models was a true understanding of the clinical picture of Omicron. We modelled new hospital admissions, but the actual number of patients in hospital at any one time was really difficult. The hospital capacity models were difficult simply because we did not know the length of stay of Omicron patients. We did not understand the extent of intensive care unit stays for Omicron patients. Those were some of the difficulties in the models, so we only managed to give a picture of new hospitalisations, but not really the severity of Omicron hospitalisation. That was informed by other sources of data in Denmark.
Q2904 Rebecca Long Bailey: Thank you. In Denmark, in your opinion, how much weight did decision makers give the scenarios produced by modellers?
Dr Holten-Møller: It was part of the overall risk assessment that was sent out from SSI and went to the epidemic commission, so it was part of that. It is also important to stress that other epidemic indicators were equally important, and especially to understand the less severe clinical picture of Omicron, which was key to understanding why Denmark was one of the first countries to reopen afterwards. Understanding the effect of booster vaccination was equally important.
Q2905 Rebecca Long Bailey: Thank you. Dr Ali, the same question to you: how much weight did decision makers in the UK give the scenarios produced by modellers?
Dr Ali: Based on the decision that was made on 20 December, they took into account the fact that it was unlikely that the worst-case scenarios were going to materialise. I do not know, as I said before, exactly how that decision was made. I gave my opinion. I am sure other scientists gave their opinion as well. It turned out to be, I would say, the right decision, based on what we saw happen.
We need those models. I am not a mathematical modeller. I am a clinical epidemiologist. I look at real-world data and try to interpret it and understand it. If I did not have the models at all, I would not have the range of scenarios to look at, so I find them quite helpful. Based on previous experience and based on what has happened, I would say, because of what happened in July and October, “I predict this is what will happen on behaviour this time.” That is why I was confident enough to say, “This is the most likely scenario.” I am not saying that anyone was right throughout the pandemic. That is extremely unlikely, but since July, I and others have had a fairly good record in predicting what was likely to happen based on that.
Rebecca Long Bailey: Thank you.
Q2906 Chair: Dr Holten-Møller, in terms of the contributing factors to the more moderate experience of Omicron in Denmark, there is the virulence of the virus itself and its lower impact on hospitalisation. You have drawn attention to groups of people in Denmark changing their behaviour in ways that were not mandated. That accords with something that Dr Ali said. Would you expand on that, if you would not mind?
Dr Holten-Møller: Throughout my experience with the model group in Denmark, we knew that this was a factor that was equally important to put into the models. We understood that quite early. We have a project in Denmark called the HOPE project that observed behavioural changes during the pandemic in Denmark, and we had very close collaboration with that group. They informed us about behavioural changes, and we understood quite early that we would see behavioural changes as case counts went up, or that understanding of the threat of the epidemic situation in your country would affect the behaviour of your population.
We put that into our models in early fall 2021. That was when we first implemented it in our models. It was also as a response to the Government’s decision here in Denmark that whenever you reach a level of incidence of cases in your local community, which could be at parish or local level, they would mandate local authorities to lock down sports events, schools and libraries, and that was implemented in our models.
We kept it in our models as an effective measure so that whenever case numbers went up we could see that transmission was diminished simply because people behaved differently. It is really important to connect that behaviour to the sense of threat or fear in the population that the epidemic is actually dangerous. That is changing now with Omicron. I am not certain that, if you measured the perception of threat in the Danish population right now, even though we have high numbers of cases, it would be equal to what it was in previous waves. It was already implemented in our models a year ago.
Q2907 Chair: Briefly, could you describe the connection your modelling groups have with behavioural scientists? Are they part of the same team? Does one group advise the other?
Dr Holten-Møller: There is a university team situated in Aarhus called the HOPE project and they have a grant to do these investigations. They do surveys weekly on the behavioural perception of the pandemic in Denmark. We have reports weekly and have really close collaboration with the leader of the HOPE project. He was also invited to be part of the external expert group in Denmark. He is not a modeller; he is in political science, but we take his reports into our approach in modelling to understand behavioural changes in Denmark. It is really important for us to understand and implement that in the models.
Q2908 Chair: Thank you. Dr Ali described his clinical experience—his experience as a clinician—as informing his judgment and assessment that the experience of Omicron was going to be more benign than was perhaps feared. How plugged in are clinicians to your decision making, and how represented are they in your advisory group?
Dr Holten-Møller: We have representation from clinicians. We also have an external advisory group that we summon once in a while to ask whether the assumptions we put in our models are sane and whether they represent reality. We have had meetings inviting an advisory group, also consisting of clinicians, to give their advice. Equally, we have had close collaboration with other epidemiologists at SSI to tell us whether the parameters we use in our models are correct or the most likely. There are also clinicians advising the Government and the Health Ministry in different forums outside SSI. That would be comparable to the role my colleague here has described.
Q2909 Chair: Thank you very much indeed. Finally, Dr Ali, you mentioned that it is not very helpful for policy makers not to have a sense of the relative weights of scenarios. Here we are, hopefully, as Omicron is petering out, at least in terms of hospitalisations. We have taken away restrictions. We look to the future, and the range of possibilities must be very great, from having a new variant that evades the vaccine and starts killing a lot of people to, hopefully, seeing the back of Covid as a very severe disease killing lots of people. How are we to navigate between those two scenarios and all the ones in between? What is your solution to your observation that it is not helpful to have no weighting?
Dr Ali: On the first point, a new variant, of course, is inevitable. We have seen that over the last two years, and we know enough about the virus to know that there will be new variants. It is not predictable whether they will be less or more severe. We have much higher levels of population immunity than we have had before: more than 95% of the whole population, and over 98% of adults, in the UK now have some degree of immune protection. All future variants will be coming up against that wall of immunity, which was not there at the beginning of the pandemic. When the new variant comes, whichever country it arrives in, I hope one of the other lessons we learnt from Omicron is that we should not penalise countries for alerting the world to the fact that they have found a new variant by closing borders, because that makes them less likely to report new variants.
Q2910 Chair: By implication, you think we did with South Africa, do you?
Dr Ali: That is what they felt. The South Africans said that they felt they were penalised because of that. We have to be careful, because we need to know as early as possible when we have a new variant, in whichever country it is, so that we can look particularly at the degree of immune escape, vaccine efficacy and severity. Those are the three key components. No one, I believe, can predict that with any certainty going forward.
I come back to the point I made earlier. The early data we had on Omicron at the beginning of December was extremely uncertain, and those early models inevitably had a wide range of possibilities. Every day we were getting new data, and one thing I would say is that the models should be updated. Of course, all these colleagues are volunteers and have other jobs to do as well, but, given what we have seen, if we updated the models as soon as a new piece of information came in, it would help to narrow the range of possibilities.
The final point is learning from what has happened. My fear is that, going forward, the behavioural response may not be quite the same because of what has happened with Omicron. It is like the boy who cried wolf; people will say, "It wasn't that bad last time, so we're going to carry on." That may or may not happen. There is one piece of data we have seen this week that I hope may reassure Rebecca Long Bailey in relation to self-isolation. The ONS asks every couple of weeks what proportion of people isolated. It was 80% the week before last, when isolation was still legally mandatory. This week, they asked how many would isolate if they tested positive, and it was 72%. There was a fear among some people that it was going to go from 100% to 0%. It was never 100%, and it is never going to be 0%.
The majority of people will continue self-isolating when they have symptoms and, of course, if they have a positive test. The majority of people even now are wearing masks. I was on the tube this morning. It is still very common. From the survey data, it is still above 50%. Not having legal mandates does not seem to be the key to people’s behaviour. It really is based on the level of risk. It is very important going forward that we communicate the level of risk accurately, and that scientists and doctors have a more united voice than we have had over the last year.
One of the other lessons is that a united voice is particularly important for those speaking in the media, above all doctors and scientists. We do not expect politicians to be experts in these fields, so we may not give as much weight to their views, but we do expect doctors and scientists to be trusted voices. One of the outcomes I hope will come from this, ahead of the next variant, is that we have more consensus on what the right message is for the public.
Chair: On that note, I thank you, Dr Ali, for your evidence today and, you, Dr Holten-Møller, for joining us from Denmark, and I thank all our witnesses this morning. That concludes this meeting of the Committee.