Science and Technology Committee 

Oral evidence: The right to privacy: digital data, HC 1000

Wednesday 20 April 2022

Ordered by the House of Commons to be published on 20 April 2022.

Members present: Greg Clark (Chair); Aaron Bell; Chris Clarkson; Rebecca Long Bailey; Carol Monaghan.

Questions 79 - 179

Witnesses

I: Dr Nicola Byrne, National Data Guardian for Health and Social Care; and Matt Westmore, Chief Executive, Health Research Authority.

II: Dr Tim Ferris, Director of Transformation, NHS England; and Simon Madden, Director of Data Policy and Covid Pass Policy, Department of Health and Social Care.


Examination of witnesses

Witnesses: Dr Byrne and Matt Westmore.

Q79              Chair: Welcome to this meeting of the Science and Technology Committee, where we continue our inquiry into the use of digital data and, in particular, aspects of privacy.

I am very pleased to welcome our first panel of witnesses this morning. Dr Nicola Byrne is the national data guardian for health and social care. This is a role that is sponsored by the Department of Health and Social Care but operates independently, representing the interests of patients and the public on questions of the use of data. I am also pleased to welcome on this panel Matt Westmore, who is the chief executive of the Health Research Authority, a non-departmental public body of the Department of Health and Social Care that makes sure that the research that is sponsored by the Department meets standards of ethics, transparency and rigour.

I will start with a question to Dr Byrne. We have heard in evidence already about the juxtaposition of real possibilities for important breakthroughs in medical research, making a big difference to people’s lives, with well-justified concerns about data around very intimate aspects of people’s lives and wellbeing. What is your reflection, as the national data guardian, on how that is being addressed in Government? What more needs to be done?

Dr Byrne: In broad terms, I want to stress that I concur on all the potential benefits you heard about in the previous session, but I am keen to point out that innovation and the transformation of the health and care service for people’s benefit, and doing that in a trustworthy way for the public, are not mutually exclusive. I think that things would be much stronger if those two things were held hand in hand. That would be my starting point.

Q80            Chair: Matt Westmore, what is your perspective as the head of the Health Research Authority?

Matt Westmore: Very similar. The power that can be released by unlocking data for the benefit of patients and the health service is immense, but we cannot unlock it if we do not maintain and ensure public and patient trust in the use of that data. As Dr Byrne said, those two have to go hand in hand.

What does that look like? Public trust requires transparency in what we are doing and how we are using the data. It requires patient and public-centred motives and values. It requires secure systems and processes. Above all, it requires the involvement of patients and the public at every stage of the discussion and conversation about how we build secure and ethical systems.

Q81            Chair: Everyone is agreed that the opportunities for improvements to people’s lives are immense, but there are real questions that need to be addressed to satisfy people’s privacy concerns. The Government have published a consultation, “Data saves lives”. Mr Westmore, to what extent do you think that this addresses adequately the concerns that are there and would allow confidence to be reposed in the system?

Matt Westmore: We are really supportive of the recommendations to bring people closer to their data: their access to it, their understanding of its uses and their agency around its use. That is a welcome theme in the plan.

The plan also talks about the importance of data linkages. We are not just talking about data that results from patient interactions within the health service. The real power for improving health outcomes is when data is linked to other sources. That is where we think that a lot more work needs to be done, because it opens up another set of challenges that are above and beyond health data.

Q82            Chair: More work than is proposed in “Data saves lives”? Are you saying that “Data saves lives” is not sufficient to address the concerns?

Matt Westmore: “Data saves lives” is sufficient, but it is high level. The detail of precisely how these systems are built and developed is to come. I think that it is sufficient, but the devil will be in the detail.

Q83            Chair: I see. Dr Byrne, what is your assessment of “Data saves lives”?

Dr Byrne: I fully support what Matt calls its high-level ambitions. The feedback that I have given is that it needed to include much stronger plans for public engagement. I stress that the public are not concerned only about their privacy. Of course, that is a major consideration, but over recent times we have learnt that people are very concerned about how data is used and whether those uses are for public benefit. Privacy is not the only concern.

I fed back that I thought that there was a serious omission in that there was no mention of opt-outs. Choice is important. It is crucial for people to have agency in an ethical system.

The main strand of feedback that I gave was about the importance of language and being clear and straight with people that, with data use, as people know, benefits come with risks. There is an issue if the public cannot see the system openly acknowledging that. People know in general terms that there are risks with data use. Failing to mention that can sometimes be the biggest barrier of all; a lack of transparency in that regard can create a suspicion of deceit or a sense that things are being held back from us. It is very important, both in that strategy and more generally for the Government, that there is always communication that holds those two things in mind: “These are the benefits of data use. We recognise the potential risks, and these are the safeguards and mitigations.” Then people will know that you are being straight, and they will see the system as credible and trustworthy.

Q84            Chair: Thank you. Rebecca has some questions about the implications of opt-outs for the use of the data.

Dr Byrne, you mentioned the importance of people feeling that this is for public benefit, as well as privacy. Obviously, that raises questions of what happens when data leaves the public sector; literally, when it leaves the NHS. We know, and have seen in Covid, that much medical research is conducted by private sector companies. The Oxford-AstraZeneca vaccine clearly refers in its nomenclature to the fact that there is a private company involved. This is the standard model. Have you given some thought to how you address what I am sure you correctly point out is a desire for public benefit when the model that we operate involves engaging with and, literally, sharing data for research purposes with private, for-profit companies?

Dr Byrne: Yes. There is actually quite a lot of empirical evidence to help us on this matter, in terms of what matters to people. There has been work by the HRA, by Understanding Patient Data, with the Ada Lovelace Institute and NHS England, and, more recently, by Understanding Patient Data and ourselves, looking at questions of public benefit.

To focus on the commercial involvement, we know that once the rationale for commercial involvement is explained to people they are not necessarily against it. Indeed, they may be supportive of it, for example in the kind of drug development that you mentioned, but there are conditions for that support. Public benefit must come above commercial profit and be the primary consideration. People want to see that safeguards are in place and that there are meaningful sanctions for any improper use.

Q85            Chair: The sanctions aspect is clear and, I dare say, could be actioned. In the context of a pandemic, when everyone is desperate to get a vaccine, it is clearly established that public benefit, rather than commercial gain, is the primary motivation. But is it not the case that, with most drug development, it is not clear how revolutionary its impact on public health will be? How, ex ante, can you determine how important the public benefit versus the commercial benefit is going to be?

Dr Byrne: It might be worth my saying a little about how I would suggest that we think about public benefit. Both of the words in that phrase are at play.

First, if you think about the benefit side, we know from the work that we have done that people have a very broad and inclusive understanding of what comes under potential benefits, whether that is treatment, prevention, resource use or simply knowledge generation. An unsuccessful medication might still prove to be valuable in that sense. People are very comfortable with the idea of small numbers of people having significant benefit, as with a rare disease. That would still constitute public benefit.

However, benefit is a net value. You cannot establish public benefit if the perceived or potential harms are greater than the potential gains. The things that people consider under the overall term “benefit” have to include consideration of risks such as privacy, equity and clinical safety, safeguards against data manipulation, and questions about profit and whether it is proportionate. The benefit side is a complex amalgam of all those different factors, rather than a simple metric.

The public bit of public benefit stresses the importance of transparency. It is not a nice-to-have; it is indivisible from the concept. Part of that is public involvement, going beyond simple comms or invoking the idea of bringing the public with you. It is about the public actually having a say and some agency in our decision making about how data is used and what is in the public benefit, once you have done the balancing of potential risks and opportunities.

Q86            Chair: Obviously, the public benefit from improved health treatments and breakthroughs that are made. In so far as there is commercial benefit from research based on use of NHS data, is it your view that there should be an additional contribution from the commercial benefit back to the NHS financially, over and above the benefits for patients?

Dr Byrne: I know that is something that was mooted at the last session. Without knowing more about it, I think it would certainly be interesting to explore. There are three caveats from me at a high level at this stage.

First, in reality, it would take a very nuanced approach to its application. Some small organisations and charities have public funding to innovate and develop things for the NHS. An SME developing something for the NHS is a very different prospect from a large multinational developing something that would be sold at great profit in the US. There are complexities around that application.

More fundamentally, I have two concerns. Donation or paying into some kind of health fund might diminish the importance of the ethical obligations around public benefit assessment and what constitutes public benefit. I would not want to see any financial contribution mean that those were somehow paid off and were no longer to be considered.

The other thing is that there would be a risk of underestimating the actual value of the data. It has come from all of us. It is an extraordinary asset, possibly one of our greatest national assets. It has taken hundreds of thousands of hours for NHS staff to collect and curate. The value of this data is not to be expressed or understood purely in fiscal terms in the short to medium term. People understand that the value of this data is, potentially, knowledge generation in years way beyond where we are now. I share some of the concerns that were well raised by Professor Goldacre in his recent review. The NHS has to be extremely careful about being locked into any commercial contracts that might give exclusivity to that knowledge in future and lock it away from us. The commercial side is not my area of expertise, but things like intellectual property and co-ownership for the NHS would have to be thought through very carefully in terms of what they would mean in any such model that might be proposed.

Q87            Chair: Before I turn to Rebecca Long Bailey, I will put the same theme to Mr Westmore. In your agency, have you thought about what is required to maintain public confidence as regards private companies’ use of NHS data, given the ubiquity of that model for medical research?

Matt Westmore: At the risk of repeating what has been said, very briefly, we see the same picture. Patients and the public are not against commercial interests per se. They just need greater transparency about the public benefit that goes along with the use of the data, which may come alongside commercial interests. If data is used purely for public benefit, there is clear-cut support from patients and the public. If it is used purely on a commercial basis, many patients and members of the public are against that.

The grey area in the middle is that if something includes commercial interests it does not necessarily mean that patients and the public are against it. What we see is that when the role of commercial organisations within our health and care system is explained and when the wider checks and balances, such as Dr Byrne’s organisation and my own, are explained, patients and the public are much more comfortable about data and wider types of research being used for commercial benefit, as long as there is still public benefit as well. That is the context. That is why it is complicated.

Your specific question was what we do about that. In order to be in that space, companies themselves need to be transparent about their use of data and their own data-processing practices. That is in conflict with the commercial realities of protecting intellectual property, but that is what needs to be done.

As a wider system, we need to explain the checks and balances that exist, to continue to maintain the interests of patients and the public, such as the work of the Health Research Authority, the national data guardian and many others in the system. Dr Goldacre’s report sets out a plan for establishing clearly trusted environments where many of the privacy and security concerns can be addressed. Again, that will be really important.

Finally, we believe that there needs to be an ongoing process of governance and monitoring of the use of that data. We should not find ourselves in a situation where we approve and then forget, where we hand over the data or hand over access, even through a trusted research environment, and then no longer take an interest. We need continued interest to make sure that patients’ and the public’s interests are always maintained.

Chair: That is very helpful.

Q88            Rebecca Long Bailey: In our first evidence session, we heard that opt-outs could lead to distorted health data and biases in health research. Do you agree with that view? How can we maintain opt-outs without damaging health research?

Dr Byrne: I certainly agree with that view, for the reasons that were outlined previously. I would add a second concern. This is not simply about research. It is very important to surface the importance of data use for system planning. My concern about people opting out is that they are no longer represented in the data that is used, so their health experiences, outcomes and access to care all get lost and are not represented. That is a concern for me, especially if particular communities start to opt out more than others.

It is essential to maintain that choice, but the way to approach it is to think, “How do we make the system more trustworthy?” There will always be some people who choose to opt out, for reasons of privacy that make absolute sense. People have their own reasons. For example, if there is particularly highly sensitive information in their records, it makes absolute sense that somebody may choose to opt out. However, for a lot of people who have opted out, my sense is that if the system could demonstrate that it was more trustworthy they might be prepared to opt back in. I think there is a chance of that. That will be the focus. The choice is incredibly important to maintain, but I am optimistic that if the right work is done to make the system trustworthy some people will choose to opt back in.

Matt Westmore: My view is very similar. Patients’ self-determination in the use of their data is fundamental to both the ethical use of that data and public trust in any use, so maintaining some form of patient agency to opt out of the use of their data, for whatever purposes they see fit, is really important. Without it, public trust will be undermined. Then the foundations of the research and all the benefits that we are keen to see released will evaporate.

The current opt-out rate was 5.4% the last time we looked, around 18 April. It ratchets up. About half of it followed care.data and around half followed the GPDPR programme. Big events where public trust is not maintained cause a ratcheting effect. We are certainly concerned about that continuing.

The solutions are exactly as Dr Byrne said. They are around transparency, communication and engagement with patients and the public. When things are communicated clearly, patient and public trust is maintained. Behind that, we need the right systems to ensure that that trust is warranted; in other words, that the system is trustworthy. That is where work like the recent data strategies and the Goldacre review will make a big difference.

Q89            Rebecca Long Bailey: Dr Byrne, you raised concerns about opt-outs in your response to “Data saves lives”. You briefly mentioned some of those concerns. Could you elaborate a little more on those wider concerns and how they can be addressed?

Dr Byrne: Currently, both the public and professionals are quite confused about the opt-out choices. I think that some work needs to be done. Perhaps it would be premature to look at that now, before the completion of the GPDPR programme, following its reset, and the use of its new infrastructure with the TRE, but the public certainly need to be engaged in some work to look at how we make sure that the opt-out choices are clear, coherent and simple to action. I think that needs to be looked at.

Q90            Rebecca Long Bailey: One concern that is frequently raised with me, as a constituency MP, is that patients are not aware that they can opt out. How far do you think that GP practices and healthcare providers need to go in making patients aware of their right to opt out?

Dr Byrne: I am optimistic, as more and more people interact with services online and through the app. I think that is a really good platform. Obviously, it is not right for everybody, and we always need to be thinking about those who are digitally excluded. For those who can and do use it, it is a terrific opportunity for people to get the information that they need generally, including information about their rights, the opt-out and so on, as well as information about how their data is used. That is the platform through which people can get better understanding of what is being done and why.

Q91            Rebecca Long Bailey: Mr Westmore, is there anything that you would like to add?

Matt Westmore: My view is very similar. The one thing I would add is that the national level opt-outs are not the only mechanism. We would certainly promote the use of study-specific, trial-specific, research project-specific opt-out mechanisms. They are much more targeted and much more in tune with being able to explain the specific nature of the use of data on a project-by-project basis. Patients and the public can take a much more informed decision as to whether they would like their data to be used for that purpose, but not necessarily for another purpose. Reliance on just a national opt-out is a relatively blunt tool when much of the work that we do in life sciences research and health-related research in the UK can be explained at a project-by-project level.

Q92            Chair: Mr Westmore, can I follow up on something you said? You described a ratchet effect of opt-out. That seems to be quite striking. One might think that a current opt-out rate of about 5% is something that one can live with. It is not too distortionary. But it is forever, isn’t it? If every time there is something in the news that gives concern another few per cent. of people opt out, and do so permanently, you might quite quickly transform quite a rich research environment into one that is very limited and subject to gaps that make it less useful. Is that a concern of yours? Do you have any views on how the ratcheting effect can be dealt with? Should opt-outs be time-limited rather than permanent, for example?

Matt Westmore: I have not considered the last point. I agree with Dr Byrne that it is not necessarily forever. Obviously, people could opt back in. I have no evidence to support this, but my assumption is that it would be harder to encourage someone to opt in than it would be to encourage them to opt out. It would be a hard task to do that, but it is not impossible.

At 5%, if the number of opt-outs were evenly distributed across society, it would probably not be concerning from a bias in the data perspective. As Dr Byrne identified, the problem comes as soon as it starts to cluster in certain parts of society and certain communities. Then, not only do we have bias in the data, but we risk under-representing the very people in society who are ordinarily under-represented by many of our institutions. Again, I have no evidence to support this, but my assumption would be that, as the number gets higher, it would follow lines of public trust in public institutions, and we know that there is less trust in public institutions in more disadvantaged communities. As soon as the number got to a point where we started to have a lack of representative data from those communities, we would be really concerned. That might not be an overall numerical concern, but it would certainly be bias in certain communities.
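To put a number on that clustering concern, here is a minimal simulation sketch in Python. All figures (group sizes, prevalences, opt-out rates) are invented for illustration and are not drawn from NHS data: the same overall 5% opt-out barely moves a disease prevalence estimate when it is spread evenly, but biases the estimate when it is concentrated in one community with a different underlying rate.

```python
import random

random.seed(1)

# Hypothetical population: 90% group A, 10% group B, with different
# true disease prevalences. All figures are invented for illustration.
POP = 100_000
prev = {"A": 0.10, "B": 0.30}
people = ["A" if random.random() < 0.9 else "B" for _ in range(POP)]
sick = [random.random() < prev[g] for g in people]

true_prev = sum(sick) / POP

def observed_prevalence(opt_out_prob):
    """Prevalence as measured in the records left after opt-outs."""
    kept = [s for g, s in zip(people, sick)
            if random.random() >= opt_out_prob[g]]
    return sum(kept) / len(kept)

# The same overall 5% opt-out rate, distributed two different ways.
even = {"A": 0.05, "B": 0.05}        # spread evenly across groups
clustered = {"A": 0.00, "B": 0.50}   # all opt-outs come from group B

print(f"true prevalence:      {true_prev:.4f}")                       # about 0.12
print(f"even 5% opt-out:      {observed_prevalence(even):.4f}")       # about 0.12
print(f"clustered 5% opt-out: {observed_prevalence(clustered):.4f}")  # about 0.11
```

Under the clustered scenario the remaining dataset both underestimates overall prevalence and under-represents group B, which is the dual harm described above.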

Q93            Chair: There is a paradox, isn’t there? The more opt-outs, the smaller the sample and the less public benefit there will be, because the data will be less reliable. The smaller the number of data points, the higher the risk, at least, of people being identified. For the privacy side of things, accepting that that is not the only concern, the more people participating—in other words, the fewer the opt-outs—the more robust the privacy protections can be.

Matt Westmore: Exactly. Of course, coming back to the point, the real thing we are worried about is why people are opting out, not necessarily the fact that they have opted out. By opting out, they have exercised their rights. That is a positive choice. The question is, why do they feel the need to opt out? What is coming across in both our contributions is that the way around that is to ensure that patients and the public are involved at all stages, so that we can maintain the trust of patients and the public and they do not feel the need to opt out.

Q94            Chair: What is your current view as to why people are opting out? For the 5% or so at the moment, what are the principal drivers of that?

Matt Westmore: It is a lack of understanding of what data is being used for what purposes and by whom. That lack of understanding is the fault of the institutions doing the communicating and engaging. It is in no way due to a lack of capacity to understand on the part of patients and the public. They are more than capable of understanding these issues and the opt-out system. It is a lack of transparency around how we are going to handle data.

Q95            Chair: How do you know that?

Matt Westmore: From some of the evidence that we have seen, and certainly from the responses around care.data and the GPDPR programme, concern around who would have access to data and what they would be using it for was one of the primary drivers for people exercising their opt-outs.

Q96            Chair: I see. Dr Byrne, do you have an understanding or insight into the principal reasons why people are opting out at the moment?

Dr Byrne: I echo what Matt said. I will not speak to it, but I understand that NHS Digital has done some specific work in relation to the GPDPR programme.

For me, there is real learning to be taken from that period. During the pandemic, the public became far more data literate and could see the importance of using data for research, treatment and operational planning, such as the vaccine roll-out. What was very clear about the rise in opt-outs, from the media discourse and anecdotally from people talking about it, was that privacy was not the only concern. I come back to that point. People were worried about use. Although people’s data literacy increased during the pandemic, that did not mean that there was social licence or, if you like, a public mandate for all uses ongoing from that point.

For me, it underlines the point that public benefit assessments need to be made on a case-by-case basis. The public may be perfectly happy with and see the benefits and risks, well managed, of project A. That does not translate into simply accepting project B, if that makes sense. It is important to hold in mind that public trust cannot be assumed. This is ongoing work. It is not a static approval and then it is all fixed. The data landscape, both opportunities and risks, will continue to evolve. We need to continue public engagement work alongside that, to have public trust in what is happening.

Q97            Carol Monaghan: Dr Byrne, can I go a bit further with that? A lot of the evidence that we had talked about public trust and engagement, and echoed some of the things that you are saying this morning. A number of submissions said that they were disappointed that “Data saves lives” did not say more on public engagement. Do you agree with that?

Dr Byrne: Yes. That was very much part of my feedback. I thought that needed to be front and centre when it comes to data use. Absolutely.

Q98            Carol Monaghan: How could the Government change this in future?

Dr Byrne: I know that you are going to hear from the Department of Health later. My understanding is that when the strategy comes out it will have much greater emphasis on public engagement and there will be some real resourcing of that work centrally. That is very important.

In the last session, you heard a lot about the importance of streamlining governance and looking at what good governance is across the system. From my perspective, I would want to bring into that the point that good governance has to involve public involvement and engagement across the system, done at a scale that can be resourced so that it is meaningful.

Q99            Carol Monaghan: What does good public engagement look like?

Dr Byrne: It depends on the context. It is something I would want to flag in the context of Caldicott principle 8. The Caldicott principles are a set of good practice guidelines for the use of confidential health and care information. Under my predecessor, Dame Fiona Caldicott, the last principle to be added, in 2020, was principle 8, which is about the importance of informing people about how their confidential information is going to be used.

The principle stipulates that a range of steps need to be taken to ensure that there are no surprises for people and that work is done to make sure that people’s expectations are clear about how their information is to be used and what their choices are. As part of that, it says that reasonable steps have to be taken, depending on context. In some contexts, that might be simple, accessible communication. In other contexts, it may mean much more extensive public involvement, depending on what use is being considered. Context matters.

Q100       Carol Monaghan: Can I stop you just for a second? If I want to download an app on to my phone, I have to agree to all sorts of things that talk about how my data will be used. I do not see that as good public engagement because, actually, I am just going to say “Agree” because what is involved is an awful lot of reading of legal terms I do not necessarily understand. Is there an opt-out then for those who are supposedly doing public engagement, that they could provide something like that that does not help anybody?

Dr Byrne: I am not sure if I fully got the question, sorry.

Q101       Carol Monaghan: If I am being asked questions on my phone about the use of my data, I have to read through a whole pile of complex pages. They could argue that they have asked and that there has been public engagement about the use of my data.

Dr Byrne: I absolutely agree. That is not meaningful consent, absolutely. There is work that could be done, and I would think it a priority now for the system to do some of it. We can take principle 8 at a high level and ask what it means, how it gets translated, and how we understand from the public how we build and maintain people’s expectations about how their information will be used. That could be important work to do now, especially as we are moving towards integrated care, which will be underpinned by data flowing across the system rather than just within an organisation. It is time to resource some proper work with the public about how to understand that and how to do it better.

Q102       Carol Monaghan: Thank you. Mr Westmore, if I could come to you and the 5% opt-out that the Chair has already spoken about, is there any way of capturing the demographic of those who are opting out? You talked about some of those you think are opting out and how it might skew, but is there an actual way of getting hard evidence on that?

Matt Westmore: I don’t know, I am afraid. That data I think will be held by the NHS and those who manage the opt-out system. It might be a question for NHS England.

Q103       Carol Monaghan: Dr Byrne, do you have any thoughts on that?

Dr Byrne: I won’t speak for NHS England. I think there is some information from NHS Digital available to researchers to get some sense of the potential biases; it is some technique using the Hospital Episode Statistics database. In general terms, it would be very problematic at the point of opt-out to be asking people information about themselves in the context of them choosing to opt out so that they are not involved in research.

Q104       Carol Monaghan: I am trying to understand whether we are just making assumptions about the demographics that are not represented.

Dr Byrne: I would not have an expert view on it. There is some information to give some sense of the opt-out, but it is probably limited. For good reasons, we could not be collecting more information from people at that point.

Q105       Carol Monaghan: I suppose it is problematic in that we are making assumptions about those who have opted out.

Matt Westmore: I would probably pose it as a concern rather than evidence or an assumption. That is where we would be coming from. It seems like a possible issue in the system, and that would be concerning.

Dr Byrne: It would be worth asking people, and quite possible to ask, why they are opting out. Obviously, it is an entirely voluntary question. People often opt out in anger and would be quite happy to have their voice heard about what made them opt out. At the moment, it is a bit of a wasted opportunity not to ask people for that voluntary feedback, because, quite often, people would be happy to tell you.

Q106       Chair: Thank you. Mr Westmore, would you not expect the Health Research Authority to have a view as to the nature of the opt-outs, to know how robust research on the other 95% was?

Matt Westmore: We would certainly have a view when we see it on a project-by-project basis as to whether or not the data would support the research questions being asked. We are not responsible for the overall scheme. We do not have access to the data. Certainly, our review committees would look into those sorts of issues when a particular research proposal was put to us.

Q107       Chair: You would not have an upstream view as to whether the whole dataset was sufficiently representative. Isn’t that something you ought to have insight on?

Matt Westmore: We would. When we see a proposal for research into a database, we would certainly ask those sorts of questions and have that sort of data. At these levels of opt-out, I do not think we would have significant concerns. It would be about concerns for the future. It is a valid question. If we could get that data, subject to Dr Byrne’s concerns around whether it is possible to get the data and whether it is appropriate to get that level of data, it is something we would be interested in.

Q108       Chris Clarkson: Going back to the good governance piece, Dr Byrne, you mentioned the Caldicott principles. The majority of evidence that we have had in previous sessions has pointed towards the need for better training and guidance in the handling of data, as opposed to new legislation. Do you both agree with that sentiment?

Dr Byrne: I will keep it brief. Yes, for the reasons you have already heard.

Matt Westmore: Similar. We see that the current legislation, particularly in its application to health-related data, strikes the right balance between flexibility to support innovation and protecting patient and public interest and ensuring trust. We go back to the same point: if you lose patient and public trust, that harms innovation. They go hand in hand.

Q109       Chris Clarkson: We have the right tools for the job. We just need to understand how to use them. On that basis, Dr Byrne, we have heard concerns that the removal of certain safeguards that are being proposed by the Government—DPOs and data protection impact assessments—could potentially harm how we handle data. Are you concerned by those proposals?

Dr Byrne: Yes, I would be. I did not formally feed back on that aspect of the Government’s proposals, but I would align myself to what the ICO advised about them. There should be an individual who has the requisite expertise and authority to hold this in an organisation when it comes to balancing risks and making decisions about data. Equally, there should be routine assessments of risks when it comes to accessing data. I am less invested in whether or not that is called a DPIA. I would certainly align to the concerns about the need for those things. I was not clear in those proposals what problem was being solved. I did not see the connection between that as a solution and the problems that were outlined as the current issues. That would be my position.

Q110       Chris Clarkson: It is a solution looking for a problem.

Dr Byrne: Yes, I could not see the relationship between the two. You need that expertise and you need robust risk assessment.

Q111       Rebecca Long Bailey: Previous witnesses have been unanimous in telling us that trusted research environments and the use of the ONS Five Safes approach were central to safe and secure data sharing. The Government have also highlighted the role of these in “Data saves lives”, as Ben Goldacre has in his recent review. Do you agree?

Dr Byrne: Absolutely. They really represent a step forward and are much safer privacy and security safeguards. I feel very positive and optimistic about that potential and concur with all the other evidence you have heard on that point. The only thing I would stress is that this new infrastructure, this new technological approach, does not make good governance obsolete. The same principles apply in terms of our ethical obligations to the public and making sure that the uses and purposes are right, and so on. That all stands, but I am very positive for all the reasons you have already heard.

Q112       Rebecca Long Bailey: Mr Westmore.

Matt Westmore: Exactly the same. I am very positive about the direction of travel, but I would highlight that although trusted research environments and, indeed, other privacy-enhancing technologies fix many problems around data security and privacy, they do not address all the issues around ethical conduct and ethical use of research and wider governance. There is nothing significantly concerning in the plans at the moment, but we would want in the future to make sure that they were not used to circumvent current ethical standards.

Q113       Rebecca Long Bailey: Thank you. Mr Westmore, Cancer Research UK told us that trusted research environments were not yet fully interoperable or accredited, and if they were it could speed up research. How do you think that can be addressed, and how quickly?

Matt Westmore: On the second question, it is very difficult to say. These are early days for exactly how we build a system of trusted research environments. You will hear later that it is hard enough to build one at national level, let alone a system. There are many good examples of trusted research environments that we can learn from, and we should build from those, but there is no single plan as to how you do this, and therefore there is no single plan as to how you would safely interoperate them. The timing issue is difficult to put a figure on. We agree with Cancer Research UK’s assessment that they would make a big difference to the speed and safety around research, because you presumably would apply the same principles of trust and privacy to the interconnections between trusted research environments as you would within a trusted research environment.

Q114       Rebecca Long Bailey: Thank you. I have one final question to both of you. Witnesses have told us that there is a fine balance to strike in making data anonymous without stripping it of all of its useful information. How can that challenge be addressed? Are there any examples of privacy-enhancing technologies that you are aware of that can further help reduce or eliminate such risks?

Dr Byrne: Beyond TREs? I will not be able to give you a technically informed answer. My office is currently involved with the ICO, which is doing work on this. It has recently produced a report on some workshops that my team has been involved in, and I know it is intending to produce some guidance, which I look forward to, outlining the use of PETs in health and care, but I would not be able to give you a more informed answer at this point.

Q115       Rebecca Long Bailey: Mr Westmore.

Matt Westmore: Similar, in the sense that we are looking forward to the ICO’s work on this, because it is a technically complicated area and ideas and understanding about these technologies are still emerging. There are some interesting things out there beyond TREs, such as homomorphic encryption, where you can analyse data without decrypting it, or a sliding scale of anonymisation and pseudonymisation that is fit for the purpose of the research question or the intended care use, protecting privacy without harming the usefulness of the data.

There are aspects of things like synthetic data and dummy data where you do not need to use real data at all; you just need to be able to analyse a dataset that looks very similar to what you are going to end up using, and then, in the long term, you can employ one of the other privacy-enhancing technologies. A range of options is currently being considered and worked up. The important point is that it may well be one of those situations where we do not have a single solution, because we do not have a single problem. There will be some aspects of research and care that require one type of privacy-enhancing technology and others that could well be delivered with another.
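As a toy sketch of the synthetic data idea, the following Python fragment fits simple per-column statistics to a handful of invented records and then samples entirely artificial rows from them. This is an illustration under stated assumptions, not a production method: real generators for health records must also preserve correlations between columns and be assessed for re-identification risk.

```python
import random
import statistics

random.seed(0)

# Hypothetical "real" records: (age, systolic blood pressure).
# Figures invented for illustration only.
real = [(34, 118), (51, 132), (67, 145), (45, 127), (72, 151), (29, 112)]

# Fit per-column (marginal) statistics to the real data.
ages = [age for age, _ in real]
bps = [bp for _, bp in real]
age_mu, age_sd = statistics.mean(ages), statistics.stdev(ages)
bp_mu, bp_sd = statistics.mean(bps), statistics.stdev(bps)

# Sample synthetic records: statistically similar in shape, but no row
# corresponds to a real person. Sampling each column independently
# loses the age/blood-pressure correlation; a real generator would
# model the joint distribution.
synthetic = [(round(random.gauss(age_mu, age_sd)),
              round(random.gauss(bp_mu, bp_sd)))
             for _ in range(6)]
print(synthetic)
```

An analyst could build and test a pipeline against such a stand-in dataset and run the finished analysis against real data only inside a trusted research environment.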

Q116       Rebecca Long Bailey: Thank you. Briefly, what is homomorphic encryption?

Matt Westmore: You are testing my briefing here.

Dr Byrne: I will leave that with you.

Matt Westmore: Yes, exactly. You can secure data by encrypting it. You can scramble it with an encryption key. Usually, to analyse that data and get the information that is inside, you have to decrypt it. In other words, you have to turn it back into human-readable forms that could then pose a privacy risk. Homomorphic encryption allows you to analyse the data in its encrypted state. At no point does the researcher who is analysing the data have decrypted data that could then pose a privacy risk. Please don’t ask me another question.

Chair: You passed the exam question with flying colours.
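Mr Westmore’s description can be made concrete with the Paillier cryptosystem, one classic additively homomorphic scheme: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so totals can be computed without decrypting the individual values. The sketch below uses toy key sizes chosen for illustration (real deployments use primes of 1024 bits or more, and this is not a description of any specific NHS system); it requires Python 3.9+.

```python
import math
import random

# Toy Paillier keypair. Tiny primes for illustration only.
p, q = 1009, 1013
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)   # Carmichael's lambda(n)
mu = pow(lam, -1, n)           # modular inverse of lambda, mod n

def encrypt(m):
    """Encrypt 0 <= m < n with generator g = n + 1 and random blinding r."""
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """Recover m as L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) // n."""
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Homomorphic property: the product of two ciphertexts decrypts to the
# sum of the plaintexts (valid while the sum stays below n).
a, b = 123, 456
total = (encrypt(a) * encrypt(b)) % n2
assert decrypt(total) == a + b
print(decrypt(total))  # 579
```

The party doing the arithmetic never sees the underlying values; only the holder of the private key (lam and mu here) can decrypt the aggregate.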

Q117       Carol Monaghan: We are hearing that transparency, honesty about risks, opt-outs, and frameworks such as the ONS Five Safes are all central pillars to ethical and secure data use. Are these embedded in the UK’s data-sharing ecosystem, and what can we do to further improve this? I will start with Mr Westmore. That is an easier question.

Matt Westmore: Yes, it is. The quick answer is that they are not embedded. We are at a really exciting phase in this ecosystem, to use your term, in that these new approaches and new technologies are in the system, but they are not distributed across the system and in wide use. That is why it is such an interesting, exciting and opportune time for both patients and the public and the life sciences sector in the UK. The short answer is no, but we are getting there.

The key thing from our perspective is to continue the cross-agency working that is going on in this area, because data does not respect the boundaries of society, let alone institutions. Every piece of work, whether it is to improve care or, in our world, to do research, probably touches multiple regulators and multiple institutions. It obviously includes multiple patient or public datasets. Continuing the work that all the agencies are doing to join that system up, to have a consistent and streamlined path through the regulatory landscape, is really important. We are doing some good work in that area. We need to do more, and we need to try to keep up with these emerging technologies.

Q118       Carol Monaghan: Thank you. Dr Byrne, do you have anything to add?

Dr Byrne: I very much agree with everything Matt said. The thing I would add is the importance of mindset for the system, because that really matters. For me, there are two foundational points.

This is data that comes from all of us. It is our shared asset. We all have an investment in how it is used. Crucially, it is collected in the context of a relationship of trust. That trust matters and we should not break it; otherwise, people will be reluctant to share information or, indeed, will opt out. If the system retains those two fundamental points, along with the ethical obligations that flow from them (respect for privacy; no surprises for people, which entails transparency and ensuring that uses are aligned to expectations; and agency of choice), it will mean, as Matt said, that things can be further improved in terms of how well they are embedded.

Q119       Carol Monaghan: Dr Byrne, are there any challenges facing those who seek to apply the Caldicott principles to health data sharing?

Dr Byrne: In 2020, my predecessor did a survey of Caldicott guardians, and one thing that came across was the need to give the role more prominence to support wise, ethical use of data. We produced guidance last year that widens the range of organisations expected to appoint Caldicott guardians and helps organisations understand better the roles and responsibilities of those people in post. All public sector organisations, or those contracted by them, handling confidential health and care information should have a Caldicott guardian by next year. That will help.

To return to my earlier point, it would be very helpful to do work around Caldicott principle 8, about what needs to be done to build and maintain reasonable expectations around use. That is currently a challenge for people working on the ground across systems: understanding it and thinking through what they need to do. If we could do more work there, it would be helpful too.

Chair: Thank you very much indeed. There are a couple more fiendish questions from Rebecca Long Bailey before we then finish up with Chris Clarkson.

Q120       Rebecca Long Bailey: Witnesses told us that the UK data ecosystem was fragmented and siloed. One suggestion was an overarching body to enable smoother data sharing. Do you think that approach would work, and, if not, why not?

Dr Byrne: Without knowing more about what that would look like, I would be unclear at this stage what benefit it would confer beyond the work that you have already heard that HDR UK is doing. I was not clear what the benefits would be. The right approach is probably to think more about governance across the system, rather than thinking that data could be held and controlled centrally in that way. I am not sure what Matt would think about that.

Matt Westmore: I agree. One of the challenges is that there is no clear edge to the space that we are talking about, either for the health, public and social care benefits of the techniques or for the data itself. The real power is when we start to link datasets that can give us a whole picture of a citizen. It would be very difficult to have a single body that would oversee all of that.

The other challenge is that, while there is certainly some unnecessary duplication within the system, many of the agencies exist for different reasons. If you put them all in one place, there may be other unintended consequences around different dynamics and conflicts of interest. It is not something we have thought deeply about. It is not something we would say we were against, but there are some concerns about how it would be developed that would need to be thought through.

Q121       Rebecca Long Bailey: Thank you. Are there any international examples that the UK could learn from, in your view, Mr Westmore?

Matt Westmore: Not that I am aware of. We are quite critical of ourselves in the UK, but we have one of the most capable systems for the use of data. The reason we were able to do the things we did during Covid is that we have the most integrated health research system in the world, embedded in our national health services. There are not many places in the world that have similar systems. My guess is that people should be looking to the UK for leadership on this.

Q122       Rebecca Long Bailey: Thank you. Dr Byrne?

Dr Byrne: I am aware of, though not an expert on, ELIXIR, which is a cross-governmental European life sciences organisation. As I understand it, there may be things we could learn from both the technical side, such as what they are doing with data linkage, and the people side. Their networking and training across countries—I think something like 23 countries are involved—is apparently very impressive. Perhaps there is something we can learn from there.

HDR UK came to my panel recently to talk about the work that it referenced here. It is looking across the four nations at what is happening and what we in the UK can learn from good practice. I am sure there is more that we could learn from examples both near to home and a bit further away.

Q123       Rebecca Long Bailey: Thank you. Mr Westmore, one of the issues that is consistently raised in Committee is that shared data is often of poor quality, ranging from inconsistent metadata to legacy hard-copy information. How close are we to solving this issue, and are there any quick wins that would have a noticeable impact straightaway?

Matt Westmore: The first challenge is that often what we use the data for is not the purpose for which it was originally collected. Whether data is of “poor quality” is sometimes an artefact of its having been collected for another purpose, such as to track activity across the health service or to ensure that money flows to the right places, when we are trying to use it to track patient flows through a system. It is difficult to see how that could be improved in a proportionate fashion.

Some of the ideas around trusted research environments will help, particularly in the model that the Goldacre review proposes, which is to have fewer, more expert, more complete, more professional trusted research environments that have well-developed and well-established analytical pathways, protocols, data standards and the like to ensure that data is cleaned once, curated accurately and comprehensively once, problems are fixed once, and then there could be multiple uses of that. One of the problems we have at the moment is that the process of cleaning and fixing data is repeated multiple times depending on the research study or the use in care. That will certainly help. Whether it is a quick fix is a question I do not know the answer to.

Q124       Rebecca Long Bailey: Thank you. Dr Byrne, the Health Minister, Lord Kamall, told peers that he was consulting you regarding the merger of NHS Digital, NHSX and NHS England. What concerns, if any, do you have about that?

Dr Byrne: My concern, which I have raised with Lord Kamall, is that it is incredibly important that there is still a safe haven for data and there are robust safeguards around that, so that we continue to have strong protections around data use. One aspect of that is to continue having independent oversight that involves lay involvement as well in data access decisions. That goes for both internal and external requests. I have stressed the importance of that. For that to be credible and trustworthy to the public, those safeguards need to be enshrined in law, and I look forward to hearing more details about the proposals.

Chair: Thank you.

Q125       Chris Clarkson: Previously, I asked whether or not more legislation would be a good thing or whether it was an issue around guidance. I want to go back to that theme. To what extent do you think the current regulation of artificial intelligence in the use of data sharing is appropriate? Is it robust enough?

Dr Byrne: The legislation is sufficient, but the landscape for developers is indeed very confusing, as you heard before, and too complex. I am looking forward to seeing what comes out of the NHS AI Lab, the multi-agency advisory service bringing together NICE, MHRA, HRA and CQC. There is a need to streamline guidance and advice in one place and to ensure that there is harmonised and agile regulation, because this is a fast-moving landscape and we need regulation that can flex without breaking and evolve as the technology evolves.

Q126       Chris Clarkson: You definitely think it still relies on the regulatory piece rather than the legislative solution.

Dr Byrne: Yes, absolutely.

Q127       Chris Clarkson: Matt, do you have anything to add?

Matt Westmore: Part of the challenge is that artificial intelligence is not a discrete type of technology, as you have heard in other responses. The technology-neutral approach of the legislation, we think, is the right way to think about it. It introduces new issues, or at least amplifies issues, that we may not have in other areas of technology. You will have heard about the risk of bias. The explainability of these tools is much harder to achieve. It is much harder to figure out why they have made a recommendation.

That stands for many other advanced analytical techniques as well, where you are talking about very complex datasets and very complex models. We do not think AI is atypical—it is not a new class—but it needs thinking about differently, which is why we are doing the work, as Dr Byrne says, with the other agencies to try to create a clearer regulatory pathway. That is one of the reasons why we would be slightly cautious about new legislation. This is one of those areas that is moving so fast that the best thing is for regulators to try to work together to stay ahead of that opportunity, for the good of UK patients but also for the good of the UK economy. New legislation may not help that task.

Q128       Chris Clarkson: When you talk about explainability and bias, presumably that is the black box; you put data in, you get a solution but you are not always sure how it is arrived at. From an ethical perspective, how concerned are you about that bias?

Matt Westmore: We are. When we see applications either for use of data where patient consent is not practical, through our confidentiality advisory group, or for research into AI, we ask questions to ensure that things like the training datasets are representative of society. We ask the researchers to describe how they are going to ensure explainability with their technologies, which is an emerging field of AI research. It is not that it is impossible to understand how one of these technologies comes up with a recommendation or an answer. There is an emerging field looking into how you explain the results of those kinds of technologies. It is a relatively new field in a relatively new field, which is why the real focus for us is to try to keep up with the pace.

Q129       Chris Clarkson: Would you say that the piece around explaining how it is explainable would be sui generis to each form of AI? It will require a certain flexibility in how you represent those outcomes.

Matt Westmore: Yes. This is well beyond my technical expertise, but we would ask questions on a project-by-project basis about the issues. There is growing expertise in the UK, among the best in the world, through things like HDR UK and others about how to ensure that these problems are headed off.

Q130       Chris Clarkson: Dr Byrne, is there anything you would like to add?

Dr Byrne: I echo what Matt said. As you heard from Professor Holmes, this is a continuum of technologies. There is no clear cut-off when something is AI. All the same ethical obligations apply.

From my perspective, there are perhaps three considerations that might be helpful and require us to think about how we integrate humanity and human values into the technology itself (the black box considerations, if you like). First, there is the nature of the task: where it sits on the spectrum from the science of medicine to the art of medicine. I very much see the art of medicine as a judicious application of the science. Some tasks lend themselves to AI in terms of improving accuracy and efficiency. Things like diagnostics seem to sit very well there. The parameters of the data you need are well known. There is a right or wrong answer; for example, “Is there a tumour here?”

There are other things in medicine that go much more towards the art where, speaking as a psychiatrist—I am still a practising clinician—you may have a situation where the things that are most important cannot be spoken about and perhaps cannot be put into words. You cannot necessarily know all the data that you require in that situation. It might require your clinical judgment, intuition and gut feeling, depending on what is happening in an interaction.

There are also situations where human values matter. If we all were diagnosed with the same tumour, what treatment to start and when to start it might be different for all of us, depending on particular values and priorities we had and our particular social context at the moment. These things do not naturally lend themselves to logical algorithm weighting, if that makes sense.

A second consideration is about the expectation of human agency. There has been some citizens’ jury work by the NIHR in Greater Manchester on this. People were supportive and did not feel that explainability was important for things like diagnostic scans. I think stroke was the example they were given. They thought accuracy was far more important than explainability. However, there are situations where that will not be the case. We can imagine that issues such as who gets preferential access to a new treatment or which areas get to have more resources diverted to them will be contentious, and people will expect to have a say and to challenge and be able to question.

The third and final consideration I would flag is about the importance of integrating doubt into the software itself and the people using it. Doubt is the human attribute that keeps us safe. It protects us from hubris, but it also drives continued innovation and continued questioning in science. “Have we got the inputs right?” “Are the outputs what we expect?” Doubt is very important as well. Those three things have particular ethical implications to think through in the application of AI.

Chris Clarkson: Thank you.

Chair: Thank you very much indeed, Chris, and thank you to our two witnesses in our first panel, Dr Byrne and Mr Westmore.

Examination of witnesses

Witnesses: Dr Ferris and Simon Madden.

Q131       Chair: I ask our next panel of witnesses to join us at the table. While they do that, I will introduce them. I am pleased to welcome Dr Tim Ferris. Dr Ferris is director of transformation at NHS England and NHS Improvement. He was appointed to the role in March last year. The transformation directorate that he leads brings together the operational improvement team with NHSX, the digital arm of the NHS. Dr Ferris is a former professor of medicine at the Harvard Medical School. He founded the Centre for Population Health and has a long-standing interest in, and expertise on, population health management. Welcome. Thank you, Dr Ferris, for joining us.

Simon Madden is the director of data policy and, as it happens, Covid pass policy at the Department of Health and Social Care. The Committee has taken a great interest in Covid, but that is not the focus of our interest today.

Perhaps I could start with a question to Dr Ferris. It was kind of you to sit through the previous panel of evidence. You may have seen our previous sessions on this. You know about the context of concerns about privacy combined with great opportunity. How safe and secure is the public’s health data in the NHS?

Dr Ferris: My assessment is that the public’s health data is safe and secure, but it is always possible to make it safer and more secure. This is a journey, not a destination.

In my work as a clinician—a primary care physician—I am constantly reminded of the importance of my patients’ trust in me to safeguard the truths that they tell me, truths that I need to deliver the right diagnosis and the right therapies. At the same time, as a health services researcher, I am interested in the transformation of care. The importance of the use of health data in making both small innovations and large-scale transformations cannot be overstated. That data is the core asset that enables transformation. I do not think it is an exaggeration to say that addressing the NHS’s current capacity issues will require an extraordinary increase in the use of data to inform the delivery of healthcare and the transformation of that delivery.

Q132       Chair: Thank you. Mr Madden, in your role in Whitehall, you are responsible for overseeing the safeguards that are there, and there are some plans to enhance them. Is there an urgent problem? We have seen the ratcheting up of opt-outs. Is this something that is creating urgency to generate higher levels of confidence in data privacy and security than currently exist?

Simon Madden: The Department would recognise that there is a degree of urgency, in the sense that we need to learn the lessons from previous mistakes. The pandemic has allowed us to demonstrate the huge value of data, and the public have been able to see how it can be used in practice. We need to learn the lessons, for instance, from the GP data for planning and research programme last summer, where we did not engage sufficiently with the public and, to some degree, took the trust for granted.

It was interesting to hear the previous testimony from Dr Byrne about the “Data saves lives” draft strategy. The notion of trust, greater public engagement and greater public involvement in decision making is central to the final version of the data strategy. Certainly, I encourage all of my teams to think about data in this way. It is not an abstract thing. It belongs to each individual patient, and the NHS holds that in trust. I say it is not an abstract thing because it is the individual stories and the collective stories of each patient who interacts with the health and care system. That deserves respect, and it has to be the central guiding principle in how we approach the overall legal framework for handling data.

Q133       Chair: On the events of last summer and the withdrawal of the measure, have you established why it was done in the way that it was originally? Was it perhaps an attempt, to put the most positive interpretation on it, to avail yourselves of the newfound interest and enthusiasm for health research, in the belief that it was the moment to make that big change? The less charitable version might be that, when people had much to consider in other areas, it might not get the degree of public debate and scrutiny it would otherwise attract. Was it seen as an opportunity to make the change?

Simon Madden: The reality is that it was a long-planned programme that had been delayed because of the pandemic. At the earliest opportunity to reset the programme and put it back on track, it was deemed the appropriate time for NHS Digital to do that. As I say, we should have done more engagement. We perhaps took for granted the level of public trust based on the experience and the attitudes that we had seen during the pandemic. That is why Ministers have been very clear that this is now an opportunity to reset the relationship with the public, and that is very much what the final version of the data strategy seeks to do.

Chair: Thank you very much indeed.

Q134       Aaron Bell: In terms of data sharing being safe and secure, our witnesses have been fairly unanimous about the importance of trusted research environments and the ONS Five Safes approach. I assume you both agree with that in principle. Is the corollary that health data should only be shared in those environments?

Simon Madden: One of the benefits of a trusted research environment, or a secure data environment whose use extends beyond research, is that we would like to get to a position where data is accessed in situ. You can have various linkages. We see this as an end to the copying and shipping of data, which still exists in the system to a degree, where data can be sent out to other places. That in itself will be an enhancement of the current protections. It is much more about data access than data sharing. That is generally the principle that we have been working on for some time. We trailed it in the previous version of the data strategy, but also, as you know, it was a key feature of Dr Ben Goldacre’s review. We have accepted his recommendation in respect of how trusted research environments should operate.

Q135       Aaron Bell: You are looking to further improve it. What particular steps should be taken to ensure that the TREs have the right infrastructure, so that you have interoperability between the data systems and can ensure that accredited researchers can access the data quickly? What specific steps are we taking right now?

Simon Madden: The success of such an environment depends not only on the technical infrastructure but, as Dr Byrne said, on the right level of governance. Having privacy-enhancing technologies as a central part of the environment itself does not replace the need for strong and robust governance and, crucially, transparency. In the environments that we are talking about, we would look to ensure that every run of every piece of code, every analytical event, would be transparent, so that the public could see who had accessed the data, for what purpose, and what question, essentially, they had posed to that environment in their analysis.
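
As a rough illustration of the transparency Mr Madden describes, the sketch below appends one audit entry per analytical run in a hypothetical secure environment; the file name, field names and identifiers are all invented, and this is not a description of any actual NHS system.

```python
# Minimal sketch of an append-only audit log for analytical runs in a
# secure data environment. All names and fields are hypothetical.
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG = "audit_log.jsonl"  # append-only, published for transparency

def record_run(researcher_id: str, project_id: str, code: str, purpose: str):
    """Append one audit entry per analytical run: who, what, why, when."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "researcher": researcher_id,
        "project": project_id,
        "purpose": purpose,
        # Hash of the submitted code, so the exact analysis is verifiable
        # without republishing the code itself.
        "code_sha256": hashlib.sha256(code.encode()).hexdigest(),
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")

record_run("researcher-042", "stroke-outcomes-2022",
           code="SELECT count(*) FROM admissions",
           purpose="Service planning: stroke admissions by region")
```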

Q136       Aaron Bell: Thank you. Dr Ferris has been nodding along to most of what you have been saying. Can I put the same question to you, Dr Ferris? Do you think that the corollary is that health data should only be shared in these environments? You said in your opening remarks that it can always be safer and more secure. Is that basically in line with what Mr Madden just said?

Dr Ferris: It is. I might add a heuristic that I find useful in thinking about healthcare data. We often talk about it in a general sense, but there are many sources of data used in healthcare, and there are also many uses. I use a fairly simple algorithm in my head of four types of sources and four types of uses. The question you are asking, I believe, goes to the extent to which the same solution, the trusted research environment, applies to all four uses, and I do not think it does. Let me articulate them.

Q137       Aaron Bell: Yes, go through them.

Dr Ferris: EPRs are the place of record for most personal health information.

Q138       Chair: What are EPRs?

Dr Ferris: Electronic patient records. That is the term used in this country. In this country, there are dozens of vendors of EPR systems, and they do not all format the data the same way. We have heard some of the challenges of merging data across the country. That has to do with the fact that there are lots of EPR systems. There are also LIM systems—laboratory information management systems—and PACS, the picture archiving and communication systems that handle imaging. All of those are different systems, and the use cases require us to connect them. They all have different data requirements for their storage. I know you have a background in software engineering. Those are the systems.

There is another set of sources, which I call national data registries or repositories. That is where we store the information for your NHS number, for example, or the demographic files, or audit files on cardiac surgery outcomes. This has personally identifiable information in it, but it is stored at a national level. The third is our administrative data. The fourth is something I heard referred to in prior testimony as omics data—genomics, proteomics, all the scientific biological information. Those are the four classes. All of them have complexity.

Then there are four uses. The first three are direct care; population health, like delivering the vaccine programme and many other things; and, very importantly, planning. Something that is important to understand is that very often planning data does not need to involve identified information at all. In fact, almost all the summary statistics that the NHS looks at are very high level and would not necessarily involve a TRE-type environment. That would be overkill in that context.

Q139       Chair: When you say planning, it is how many stroke beds are needed in a particular region.

Dr Ferris: Exactly. That’s right. The last one is research. Among those four use cases, TREs are very specific to the fourth. I want to underscore something that my colleague, Simon Madden, said. The principles of the right data in the right, secure environment, the Five Safes, apply to all of those use cases, but the specific technology that we use to best solve the problems could vary across them.

Q140       Aaron Bell: Thank you. That is very helpful. We have also heard, as you have been hearing as well, of the complex trade-off in ensuring that data can be anonymous without rendering it less useful for research. How can this conundrum be addressed, and what does NHS England do to approach the issue?

Dr Ferris: One of the Five Safes is safe data. Safe data, in my understanding of the Five Safes, is looking at the specific question that you are trying to answer and saying, “Are you providing all the data that is necessary, but just the data that is necessary?” I love Dr Byrne’s admonition about doubt. You should always have doubt. Do you actually need to answer that question? Are you minimising the dataset that you need, and could you do it in an anonymised way?

My academic background is in health services research. I almost always used anonymised data and was able to make terrific progress with it. I heard in one of your prior sessions a little bit about the statistical principles, so I will not go into detail. You can anonymise non-biologic data—administrative data—as long as you make sure that your individual cell sizes do not get too small. I am sorry if I am going on. My answer to your question is: be specific to the question that is being asked. Use all the information at your disposal to answer the question, but just the information necessary.
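
One standard disclosure-control safeguard behind the point about small cell sizes is small-number suppression in aggregate tables. The following is a minimal sketch with invented data and an illustrative threshold of five; actual NHS disclosure-control rules vary and are not being described here.

```python
# Minimal sketch of small-number suppression: aggregate counts below a
# threshold are masked before release. Data and threshold are illustrative.
import pandas as pd

THRESHOLD = 5  # illustrative only; real disclosure-control rules vary

counts = pd.DataFrame({
    "region": ["North", "North", "South", "South", "East"],
    "condition": ["A", "B", "A", "B", "A"],
    "count": [120, 3, 87, 45, 2],
})

# Mask any cell small enough that individuals might be re-identified.
released = counts.copy()
released.loc[released["count"] < THRESHOLD, "count"] = None  # suppressed

print(released)
```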

Q141       Aaron Bell: That is a policy that you promulgate throughout NHS England and NHSX.

Dr Ferris: That is correct.

Q142       Aaron Bell: Finally, to both witnesses, what lessons can the NHS and the overall UK health data ecosystem learn from overseas about this area, especially sharing safely and quickly at the same time? Dr Ferris, you are from overseas, obviously.

Dr Ferris: Yes, and I have some experience with the US data system. I want to align my comments with the prior testimony. This country’s data assets and its management of them are world leading. That does not mean we cannot learn from others, but they are world leading. For example, the US Government hold a compendium of all their health data and make it publicly accessible, and do so in a way that is statistically valid and does not allow any possibility of identification. We should do more of that here. I would like to see more opportunities to open data to the public, given all the safeguards and ensuring those safeguards. Those kinds of initiatives answer a certain type of question; they do not answer all questions. We should do everything we can to make the assets available to the public.

Q143       Aaron Bell: Is that a learning from Covid as well?

Dr Ferris: It is a learning from Covid. I have seen dozens of important papers published about the Covid experience that used private analyses of public data.

Q144       Aaron Bell: Mr Madden, do you have any thoughts about what we can learn from overseas?

Simon Madden: These are problems that every developed country is trying to grapple with. I agree fundamentally that it is not a binary choice between privacy on the one hand and innovation and advancement on the other. As Tim said, we are world leaders. We have a unique data asset that is almost part of our national infrastructure, to a certain extent. Inasmuch as we can give advice to others, we should also be looking at what other countries are doing, particularly other common-law jurisdictions that have a similar legal framework.

Aaron Bell: Thank you very much.

Q145       Carol Monaghan: Mr Madden, could you tell us a bit about the lessons that have been learnt from the general practice data for planning and research initiative?

Simon Madden: As I hinted earlier, the fundamental lesson has been the need to bring the public with us. I do not mean that in a trite way; I mean it in a genuine way. It is not just about better communications. Communication is an important aspect of how we move forward with that particular programme, but I do not want to detach the programme from the broader work that we are doing on data anyway. As I said, we want to reset the relationship with the public about data, which means that that engagement needs to include real involvement and to explain the benefits of how data is used. There are some excellent examples of deliberative events that you can hold. The OneLondon programme was a great exemplar of how you can do that and really involve patients and the public in not only capturing their views but shaping future decision making.

I do not want to announce too much here; Ministers will not thank me for trailing too much of the strategy. It is fair to say that the data strategy focuses heavily, right at the beginning, on ways in which we can increase public engagement, increase transparency and build public trust, including through having a data covenant with the public that would build on Caldicott principle 8 around no surprises—a contract with the public around health data—and looking at how, through the NHS, we can set a national standard for public engagement: not only a platinum or gold standard for public engagement from the centre nationally, but one applied at every local level, so that every healthcare organisation is doing the most it can to engage with the public.

Q146       Carol Monaghan: What are the challenges of engaging with the public?

Simon Madden: There is an obvious challenge of resource; it is expensive. We are resourcing a set of initiatives that will begin shortly, around the publication of the strategy. As Dr Byrne hinted earlier, this is an ongoing thing. Part of the reset of the relationship is building sustainable structures, whether that is regional assemblies or local fora where you can actively engage, and those take effort to set up. They were particularly challenging during the pandemic, when we were trying to think about how we would run them. As Covid recedes, that is less of an issue.

Q147       Carol Monaghan: Has social media been used at all to engage?

Simon Madden: It is being used to engage. Social media can have great traction; we have seen that. The power of social media can also drive the opt-out message quite clearly, and sometimes it is very difficult to counter some of those negative, or at least challenging, voices. One of the lessons that we have learnt is that we need to be much more proactive, not reactive. All avenues are being looked at in how we engage with the public. It cannot just be a letter landing on the doormat. It has to be face to face, through the health profession, and working with the health profession and all the various agencies, including the social care sector.

Q148       Carol Monaghan: Dr Ferris, do you have any comments on good public engagement and the challenges it might have?

Dr Ferris: I concur with my colleague’s comments. I might add one additional dimension, which is the importance of the integrated care systems and local place. They are such a critical part of all the use cases that I described. It is the extent to which the decisions made with respect to your data are part of your community, so that you see direct benefit in your local community. You see that the research is not abstract, other and distant, but is going on locally. You see the integration of health and social care around direct care and providing the optimal services to take care of patients, and people see the advantages to them and their local community of the use of data for public benefit.

Q149       Carol Monaghan: Quite a few of the pieces of evidence we have received talked about asking private companies to provide some sort of benefit to the local community, so that people could understand and there was a tangible link to them—maybe a new cancer ward funded by a private company that had used data. Is that being considered? What are the advantages? Are there challenges with it as well?

Dr Ferris: I would be cautious about quid pro quo arrangements around data sharing, just to make sure that they did not undermine the Five Safes, as Dr Byrne mentioned earlier. Having said that, though, I have seen partnership arrangements that came to terms whereby a local community need was fulfilled through a relationship with a private or commercial organisation. I would not rule out the possibility; I would just want to approach it with caution.

Q150       Carol Monaghan: How else could private companies demonstrate that they could contribute to the general good of a community?

Dr Ferris: Most directly, it comes from what they are trying to accomplish. For example, if a software company was helping social care to be more efficient at going from house to house, what it is trying to produce is a service for the people in the community. That is an example of how the data share directly serves the interests of that company. Of course, you could go to the other extreme and monetise it; I think that came up in an earlier conversation about intellectual property. You can certainly take that approach as well. It is a very common approach in the United States. I am sure you are not surprised to hear me say that. It comes with its own set of challenges.

Q151       Carol Monaghan: Mr Madden, do you have any final comments?

Simon Madden: Some of those issues feel slightly above my pay grade. I will take this opportunity to mention the brilliant work of the Centre for Improving Data Collaboration, which we established a couple of years ago. It has been specifically set up to help healthcare organisations develop more robust data partnerships, so that fair returns come to the NHS as a result of those partnerships and there is the local and national capability to negotiate fairly with private companies, so that those vital elements are not lost in any commercial contract.

Carol Monaghan: Thank you.

Q152       Rebecca Long Bailey: How does NHS England ensure that individuals are aware of opt-outs to their health data being shared, Dr Ferris?

Dr Ferris: I will need to turn to Simon Madden on the specific point. Let me just say that the commitment to making sure that they are informed is there. I am counselled to be careful about predictions about the future, because they are not always correct. Dr Byrne noted the potential of the NHS app to be a direct channel for that kind of information and choice transaction. It is absolutely my aspiration that it be a vehicle for those kinds of transactions. I will turn to Simon. Is that okay?

Rebecca Long Bailey: Yes, that is fine.

Simon Madden: It is of vital importance that patients and the public understand that they can opt out. First of all, I should say that we acknowledge that the current opt-out system is complex and not easy to navigate, and there are too many types of opt-outs. There is work under way to look at how we can simplify the opt-out landscape.

There is perhaps a degree of sophistication that we should introduce so that it is not a blanket opt-out but is, as hinted at earlier by Mr Westmore, project by project or category by category. Certainly, if we are doing our job properly of engaging with the public and involving them, their understanding will, hopefully, be improved. It is incumbent on us to enable them to opt out of specific categories of data, so that they can continue to contribute to planning while staying out of particular types of research in which they may not want to participate. There is an opportunity, as Tim said, through the NHS app, where you can already use the national data opt-out. As part of the approach that we need to take around simplification, we may need to make the opt-out a little more sophisticated.
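
As a toy sketch of the category-by-category opt-out Mr Madden envisages, consider a per-patient preference record consulted before any release. The class, categories and identifiers here are invented for illustration and do not describe the national data opt-out as it exists.

```python
# Toy sketch of a category-level opt-out check; all categories and
# identifiers are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class OptOutPreferences:
    """Per-patient opt-outs, recorded category by category."""
    opted_out: set = field(default_factory=set)

    def allows(self, category: str) -> bool:
        return category not in self.opted_out

# A patient contributes to planning but opts out of commercial research.
prefs = OptOutPreferences(opted_out={"commercial_research"})

for category in ["planning", "academic_research", "commercial_research"]:
    print(category, "->", "share" if prefs.allows(category) else "withhold")
```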

Q153       Rebecca Long Bailey: Thank you. How can individuals at the moment find out how and by whom their health data is being used and what the benefits are?

Simon Madden: At an individual level, I am not convinced that that is possible at the moment. Generally, a patient or a member of the public would understand, if they have not opted out, that their data can be used for planning and research. In the longer term, we have an ambition, as we trailed in the draft data strategy, for greater transparency around this: not only through a national annual transparency statement but, increasingly, as the technology allows us to get more granular, to the extent that perhaps through an individual NHS account people could see directly what their data has contributed to—the specific projects, or broad project or research areas. That remains an ambition and a goal for us.

Q154       Rebecca Long Bailey: Thank you. Dr Ferris, is there anything you would like to add?

Dr Ferris: This does not connect at the individual level, but there are multiple mechanisms by which patients can see, through notifications of the research that is being conducted, what is actually happening. The data-sharing agreements, of which there are about 1,000 in the NHS, are all publicly available. There are privacy notices at the local level; if a trust is sharing data for non-patient care reasons, it posts a public notice of that sharing event. The notices go into considerable detail about what the specific research question is and what data is being used. That does not connect to your individual data, but if you are part of that trust you could make some assumptions about your data being included in that process.

Simon Madden: Chair, could I build on that?

Chair: Of course.

Simon Madden: Tim is absolutely right, but I would summarise it as: currently, the individual has to do a lot of searching to find that out, and that does not feel right. It should be something that people can access automatically, and that is why this remains our long-term goal.

Q155       Rebecca Long Bailey: Thank you. Finally, I have a similar question to one posed earlier. We have heard concerns that opt-outs can lead to biases in health data research and medical outcomes. How can that best be addressed, Dr Ferris?

Dr Ferris: To save time, I thought the prior testimony we heard was excellent on this point. It is a concern. There is one element of the prior answers that I would build on. It is actually possible to work out whether there are biases in the data missing from the people who have opted out, by simply checking the distribution of the demographics in the opted-in group against national statistics. It is the discord between those two that tells you whether or not you have a biased sample. Table 1 in most research papers compares the sample to national demographics. That is a standard research technique. It is entirely possible.
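
The check Dr Ferris describes can be framed as a goodness-of-fit test comparing the opted-in sample's demographic distribution against published national proportions. This sketch assumes SciPy is available; every number in it is invented purely for illustration.

```python
# Illustrative sketch: compare the age distribution of the opted-in
# sample against national proportions (all numbers invented).
from scipy.stats import chisquare

national_props = [0.22, 0.26, 0.27, 0.25]     # e.g. ONS age bands
sample_counts = [20500, 24800, 26100, 23600]  # opted-in cohort, same bands

total = sum(sample_counts)
expected = [p * total for p in national_props]

stat, p_value = chisquare(f_obs=sample_counts, f_exp=expected)
# A very small p-value flags a distribution that departs from the
# national picture, i.e. possible opt-out bias on this attribute.
print(f"chi2 = {stat:.1f}, p = {p_value:.3g}")
```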

Q156       Chair: Has that been done? Do you know whether the 5% is representative of the overall population or skews the sample?

Simon Madden: It is a different thing that Dr Ferris is talking about. Sorry, I am speaking for you, Tim. Each individual research project would look at the demographics and do the comparison. Profiling the actual demographic characteristics of those who have opted out would not be appropriate: we cannot look at the demographics of those individuals, because they have opted out and therefore we cannot collect that information on them. We have some idea of geographic spread.

Q157       Chair: Let me go back to Dr Ferris on that.

Dr Ferris: At my peril, Simon, I am going to disagree. You can just compare those who have not opted out to the whole country.

Simon Madden: I think we are agreeing.

Dr Ferris: Okay. By inference, you can determine whether or not you have a—

Q158       Chair: Has that been done? As you say, you can compare the 95% with the 100% from ONS and see whether it looks broadly the same. Do you know whether that has been done?

Dr Ferris: I don’t, but I would be pleased to come back to the Committee with a written response to that question.

Chair: That would be helpful, thank you.

Q159       Chris Clarkson: On that outbreak of consensus, can I turn to the legal certainties around data sharing? What advantage, if any, do you perceive in the new legislation proposed in “Data saves lives”? To follow up on that, I am going to ask you the same question I have asked everybody so far. Do you think the answer is more legislation, or do you think it is better guidance and training?

Dr Ferris: On legislative matters, I generally turn to my colleague to the right.

Simon Madden: Thank you. I think the legislative framework that we have is sufficient. It is quite a permissive but also a protective framework. It is probably worth highlighting that there are three elements to the legal framework protecting health data. First, there is the common-law duty of confidentiality. Second, there is the Data Protection Act, which applies to all UK data. Third, there is specific health legislation, which either empowers or sets limits on the sharing and processing of data, usually bestowing powers on NHS England or NHS Digital. The measures proposed in “Data saves lives”, which are now going through Parliament and have had a large degree of scrutiny throughout that journey, are there to underscore the importance of sharing data for the benefit of the health and care system.

We certainly have no plans to legislate in greater detail. There are some elements of secondary legislation that we will use primarily to address some of the benefits that we have seen in using the control of patient information regulations during the pandemic. The primary focus of the secondary legislation that we will begin work on shortly, which we will need to consult on, is on the transfer of functions from NHS Digital to NHS England, where Lord Kamall has committed at the Dispatch Box in the Lords to create a data safe haven in NHS England.

If you accept that NHS Digital is the current baseline for the level of protection that should exist, the Government plan to go further by putting the safeguards and the actual design of the data safe haven into regulations, so that it is transparent and avoids the risk, present in some of the current oversight provisions, of being a mere management construct that could be swept away at the stroke of a pen.

Q160       Chris Clarkson: On that, we have heard concerns that the proposals from “Data saves lives” and “Data: a new direction” could potentially weaken personal privacy and affect the adequacy of data-sharing agreements with the EU. How should we look to avoid that?

Simon Madden: The health sector is an important part of the data-sharing landscape. We work closely with DCMS, which has the lead on this work. From what I have seen, I do not believe that there is any evidence that the privacy aspects of sharing health data will be weakened as a result of any adequacy decisions or our leaving the European Union.

Chris Clarkson: Thank you.

Q161       Aaron Bell: Returning to the area of data sharing that we were talking about earlier, I want to ask specifically about ethics, because that is very important to the public at large. How does NHS England ensure that data is shared ethically? More substantially, what do you understand by ethics in terms of data sharing, and in terms of who you share it with, whether intermediaries or end users?

Dr Ferris: Obviously, it is a really broad topic. There are multiple ethical frameworks that one could apply to the sharing of data. Dr Byrne presented two different frameworks, both of which I would say were reasonable and fit for purpose. I will highlight a couple of things that she touched on. It is particularly important to understand that the creation of the data is a by-product of the delivery of care. That, to me as a practising physician, borders on being a sacred trust, because of the critical nature of the trusting relationship, and therefore of the trust placed in the data. Clinical care cannot occur without that as primary.

For me, everything that we do with data must acknowledge the context in which the data was created and make sure that it does not undermine that trusting relationship. That is a fundamental principle that I apply in all these situations. The other thing that often gets lost, as Mr Westmore said, is that people’s feelings about how their data is being used turn out to be really specific to the situation. It becomes very challenging to have conversations in generalities about the ethical use of data, because you can make broad statements and almost always find a person who would agree to a specific exception. Then how important was the broad statement?

Those two things are critical foundational principles in the ethical considerations of the use of data, where you start thinking about public benefit and about the ethics of your commitment to the community. My opting out makes it more difficult for the community to cure cancer, for example. I am being extreme for dramatic purposes, but the point is that there are ethics both around the use and sharing of the data and around participation in your community: making sure that we are doing our duty as citizens and contributing our data to a public good that may or may not benefit us specifically.

Q162       Aaron Bell: Sure. There is obviously a public benefit, a public good, and there should be in all of these circumstances, but in some cases there is obviously potential for private benefit. The most obvious is drug research or treatment research. Do you treat those cases differently? Do you think a different ethical framework needs to be applied to cases where you are sharing with trusted intermediaries or trusted end users, but who also have a private interest in it as well as the public interest?

Dr Ferris: Two thoughts come to mind in response to that question. One is the critical nature of consent for participation in a therapeutic trial. When I think of pharmaceutical companies and their access to the NHS, the principal access that they are searching for is not to data per se, but to trials that allow them to test and prove, and that must always involve fully informed consent. That is a very specific, well-demarcated situation. When you consent to a trial—I have consented myself and my family to multiple trials—you are consenting in full knowledge of the sharing of the data that is involved.

Q163       Aaron Bell: Mr Madden, do you want to add anything?

Simon Madden: I will not repeat what Tim said; I agree. The fundamental consideration that we operate by on any question of access to data is strict adherence to the law. The law determines whether or not data can be shared, what purpose it will be used for, how it should be shared, and so on. The law is our overriding compass.

Q164       Aaron Bell: We are parliamentarians. We have the power to change that law.

Simon Madden: Of course.

Q165       Aaron Bell: If there are any issues where you think it is stopping data being shared that should be, or conversely, we would be very interested to hear that as well.

Simon Madden: I was going to come on to something that is not always well known, and it speaks to the point that Mr Clarkson made around the use of guidance. You have the law as the framework, but how that law is applied is seen through what we call information governance. That is a term quite peculiar to health; other people might call it data protection or data security. We recognise that there is some way to go to simplify the information governance frameworks and the guidance that is used to enable data sharing and provide data access. There is overlapping guidance that can often conflict and contradict, so we have set up a portal and a panel, to which the national data guardian also belongs, that oversees and supervises the production of simplified information governance guidance.

We found that often within the system there was an over-legalistic interpretation of how the law should be applied, which in some cases led to system paralysis, and in the most extreme cases meant that data was not shared even for direct care, because some clinicians were worried about whether they were doing the right thing. It is a combination of the law being as clear as it can be and the guidance, through the information governance mechanisms that we oversee, making sure that the system takes a proportionate, safe and secure view, not an over-legalistic one that would prohibit the sharing of data when it absolutely needs to be shared.

Aaron Bell: Thank you. That is very helpful.

Q166       Rebecca Long Bailey: Witnesses told the Committee that the UK’s data ecosystem was slowing down access to health data, sometimes by several years. What will the Government’s proposals in “Data saves lives” do to address this, and what specific progress is being made across NHS and Government Departments to speed up health data sharing and link it with other datasets? Social care is one example.

Simon Madden: The two-year case is an outlier, and an extreme one. Where there are problems with getting access to data, it is for a variety of reasons. Sometimes it is a perfect storm of events. Often, I have to say, it is because the process has not been followed correctly. We can always do more to improve processes and make them clearer for those applying. Sometimes the real problem is as basic as the forms being filled in incorrectly; they go a long way down the line and then have to come back.

In terms of what we are doing to improve it, the way in which we set up trusted research environments will greatly enhance access to data from a researcher’s point of view. It will require a different way of working, and there is real potential there. Provided that an organisation is part of an accreditation framework—the secure environments are accredited, but there is an application process—and that the organisation or individual researchers have a legitimate purpose to access the data, the whole process should be speeded up. There will be a degree of passporting. That process is still to be worked out. We are working closely with the research community to ensure that their interests and needs are reflected in the process design.

Q167       Rebecca Long Bailey: Dr Ferris.

Dr Ferris: To build on that a little, with a few specific numbers: the Data Access Request Service has received between 31 and 216 requests per month over the past six months, averaging about 50 a month. The average time to approval is 41 days. The approval process, as Mr Madden said, is typically not the big barrier; it is actually getting the data together. The requests that take the longest are the most complicated ones, where you are merging multiple different systems and curating data across those systems.

I mentioned earlier to Mr Bell the four types of systems. Within those types are multiple vendors, and every single vendor formats the data within its system differently. When a national data request wants to bring data together systematically from all over this country, it is a really big data curation task, because none of the datasets sits in a way that is easily merged.
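
As a toy illustration of that curation task, the sketch below maps each vendor's column names onto a common schema before merging; the vendor names, fields and data are all invented and do not reflect any real EPR system.

```python
# Toy sketch of harmonising differently formatted vendor extracts onto
# one schema before merging. All names and values are invented.
import pandas as pd

# Each vendor system labels the same fields differently.
COLUMN_MAPS = {
    "vendor_a": {"nhs_no": "nhs_number", "dob": "date_of_birth"},
    "vendor_b": {"NHSNumber": "nhs_number", "BirthDate": "date_of_birth"},
}

extract_a = pd.DataFrame({"nhs_no": ["111"], "dob": ["1980-01-01"]})
extract_b = pd.DataFrame({"NHSNumber": ["222"], "BirthDate": ["1975-06-30"]})

# Rename to the common schema, then merge into one national view.
frames = [
    extract_a.rename(columns=COLUMN_MAPS["vendor_a"]),
    extract_b.rename(columns=COLUMN_MAPS["vendor_b"]),
]
merged = pd.concat(frames, ignore_index=True)
merged["date_of_birth"] = pd.to_datetime(merged["date_of_birth"])
print(merged)
```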

Mr Westmore and Mr Madden pointed out that the TRE is potentially a big advantage, and I want to underscore that. In the future, we would like the output of a curation process to be held for the next time it is needed. We have not had the capability to do that in the past; we would like to build it. The more complex data requests, which are, dare I say, the most exciting and cutting edge, potentially have the largest impact. We want to be in a position to facilitate those complex requests. We have a way to go, but we are putting in place all the components to make that better.

We recently set up an account management team and methodology, so that specific people shepherd each individual application through the process. We are hopeful that that account management process will deliver greater timeliness on data requests.

Q168       Rebecca Long Bailey: Thank you. On the question of the merger between NHS Digital, NHSX and NHS England, what benefits will it bring? The merger plans were criticised recently in the House of Lords as they were seen as undermining NHS Digital’s role as a statutory safe haven for patient data. What safeguards will be in place to ensure that that role is not undermined?

Simon Madden: The overarching aim of the reforms is to get the right organisations and the right structure in place to help drive the transformation of health and care. We now have a situation where we have a backlog, and there is continuing pressure on the NHS and on the health and care system more generally. The urgency around digital transformation is ever more acute; therefore, the Secretary of State took the view, having read the independent report from Laura Wade-Gery, that some structural changes needed to be made in how digital transformation, technology and data are governed and operated in the NHS. Health Education England and NHS Digital are merging into NHS England. NHSX was a joint unit between the Department of Health and Social Care and NHS England; the NHSX brand has been retired, and in its place is a new joint policy unit that sits at the heart of the transformation directorate, so that there is still a link between the two organisations.

To the point about the Lords’ objections, Lord Kamall worked hard to listen to peers’ concerns. First, it is important that people understand that, in transferring the functions of NHS Digital to NHS England, the same obligations and constraints will apply to NHS England in any event. Over and above that, Lord Kamall gave an undertaking that the transfer-of-functions regulations that will move the functions from NHS Digital to NHS England will also contain provisions for a data safe haven, and will set out how there will be independent oversight and scrutiny and how it will be distinct from the rest of NHS England’s delivery operations. We are working with Lord Kamall on developing those proposals as we speak.

Q169       Rebecca Long Bailey: Thank you. Dr Ferris, do you have anything to add?

Dr Ferris: I want to make an offer at some point before we finish to be available to any of you for further conversations on any of these topics. It is really important for us to get this right. We are very pleased that you are having this conversation. It is actually part of the national conversation that we need to have. We are committed to answering any and all of your questions now and in the future.

Chair: Thank you very much. That is very kind of you. We will be making a report with some recommendations, and we hope that you will use your influence in the Department, both of you, to ensure that they are enacted. We have not made them yet, but they will draw on the expertise that we have had, including from yourselves.

Q170       Chris Clarkson: Very briefly, I want to touch on the use of AI with health data. What challenges, if any, does the use of AI pose for the use of health data, particularly in the NHS? Do you want to start, Dr Ferris?

Dr Ferris: It is interesting to think about the way in to such a broad question. As multiple testimonies have said, AI is a tool, and it can be applied in many situations. Its most effective use, as you probably know, is in high-volume settings where decisions are made in a patterned way many times, which allows the computer to learn and then apply its insight.

In the context of medicine, it is really important. I think of its most powerful use as—these words are important—decision support. It is not the AI that makes the decision; it is the AI that presents to a clinician or a patient an option. Just as when using a search engine you are presented with a bunch of options, they are not making the decision; you are making the decision based on the options presented to you.

If the options presented to you are biased, that is a problem. That is the central problem. The context for whether or not a decision is biased matters. If as a clinician I am presented with decision support tools that are run off an AI algorithm, and someone says, “In this situation, you should consider this other pharmaceutical because it would be better for them, based on this algorithm or AI learning,” that is something I can then check and make my own independent decision.

I find the use of AI in those settings incredibly helpful. It makes me a better doctor, honestly, to have the computer point things out. I am human. I cannot know everything that is in everyone’s chart. Do I know, if I see someone for a cough, that they are asplenic and have not had a Pneumovax? What I did for their cough may or may not help them over the next few days, but if the computer reminds me that they have not had a Pneumovax, that could save their life. I would not normally go looking for that information if I was seeing them for a cough. That kind of decision support, whether it is applied to imaging or clinical decisions, is incredibly valuable, but it is in the context of decision support. An individual is making a decision based on that information.
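
A minimal sketch of the kind of rule-based prompt Dr Ferris describes: a reminder fires when a record shows asplenia and no pneumococcal vaccination, and the clinician decides what to do with it. The record structure, condition labels and rule are invented for illustration and are not clinical guidance.

```python
# Toy sketch of rule-based clinical decision support: surface a
# vaccination reminder from the record. All details are invented.
from dataclasses import dataclass

@dataclass
class PatientRecord:
    conditions: set
    vaccinations: set

def decision_support(record: PatientRecord) -> list[str]:
    """Return prompts for the clinician; the clinician makes the decision."""
    prompts = []
    if ("asplenia" in record.conditions
            and "pneumococcal" not in record.vaccinations):
        prompts.append("Consider pneumococcal vaccination: asplenic, "
                       "no record of Pneumovax.")
    return prompts

# A patient seen for a cough whose record also shows asplenia.
patient = PatientRecord(conditions={"asplenia", "cough"}, vaccinations=set())
for prompt in decision_support(patient):
    print(prompt)
```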

Operational issues are a very different context for the use of AI. Being presented with options for the optimal way to fill beds is a really different context; there, it is more difficult for me to see how, for example, demographic bias would play into that kind of algorithm. With bias, it is really important to understand the context, and to understand where it is and is not a concern. As my colleague, Mr Westmore, said, the tools used to understand whether an algorithm is biased are becoming better and better. I am hopeful that in the future the current concerns about bias and AI will diminish as those tools become more and more commonplace.

Q171       Chris Clarkson: Is there anything you would like to add, Mr Madden?

Simon Madden: I agree with Tim and the previous testimony. As part of the NHS AI Lab, we have two particular programmes around regulation and ethics. The regulation piece, as Dr Byrne mentioned earlier, is about the formation of the multi-agency advice service that will help to bring regulators together to provide real guidance in this area. I have nothing to add.

Q172       Chris Clarkson: Dr Byrne mentioned the building in of doubt, which presumably is the ongoing piece. At the moment, is it fair to say that clinicians are, effectively, the element of doubt? When you get that advice and support, you can question it at a human level.

Dr Ferris: Yes. This pertains to all algorithms and not just AI, which is a specific subtype: when I first started practising medicine, computers were reading EKGs—heart rhythms—and they were terrible at it. I could do better. Thirty years later, there is no question but that computers are better than humans at reading EKGs, far better.

The importance of that example is the timeframe over which we did it. We started 30 years ago asking computers to read them and recognising that their output was mediocre at best, so we did not generally follow them. Now we have learned to trust them because they are better than people. We have that journey to go through, but we have to allow ourselves the right context and the permission to go on that journey. If we do not start on that journey, we will not take out the biases and inaccuracies that will recede with time.

Q173       Chris Clarkson: I appreciate it is a refined thing and it is a continuum. AI is not a static thing. How, and to what extent, is AI currently being used in the NHS? How essential is it to making clinical decisions? How integral is it now to the day-to-day operation of the NHS?

Dr Ferris: I can give you one great example from a couple of weeks ago. I was at Barts, where they run the largest mechanical thrombectomy unit in Europe, covering much of the east of England, both north and south of the Thames. It is critically important because a CT scan is done on a patient with an acute stroke, and whether or not you can remove that clot mechanically is a highly specialised decision. Computers are quite good at it. The AI Lab funded a commercial company whose tool Barts now uses. Each time a patient is considered for a helicopter flight to Barts for mechanical thrombectomy, that AI algorithm is used to enable communication between the specialist experts and the emergency room doctors who are treating the patient. That is just one example of a truly life-saving use of AI.

Q174       Chris Clarkson: You would say it is already quite an important part of—

Dr Ferris: It is already quite an important part of the delivery of healthcare.

Q175       Chris Clarkson: Thank you very much. Is there anything you would like to add, Mr Madden?

Simon Madden: My understanding is that we have around 183 projects live at the moment across 163 primary and secondary care trusts around the country. That reinforces Tim’s point.

Chris Clarkson: Thank you very much. That is very helpful.

Q176       Aaron Bell: I want to challenge Dr Ferris a little bit. One use of AI is obviously recognising previously unsuspected patterns. Is there not a tension with what you said earlier about making available only the data that you need for something, given that AI is sometimes about finding patterns no one suspected?

Dr Ferris: You are absolutely right to challenge me on that. It is one of the complexities. In the situation of the use of AI, in fact, the more data the better because you do not know which signals the computer will pick up as predictive. You are absolutely right. AI is a case where it is not all and just; it is just all.

Q177       Aaron Bell: You would justify sharing it all on that basis.

Dr Ferris: Yes. Again, I would want to go through the Five Safes and make sure that we had really thought it through and were optimised on all five.

Simon Madden: It is also why, speaking to my point earlier around information governance and guidance, we are about to issue some new guidance for information governance practitioners across the health and care system so that they understand the distinct needs around data sharing and data access in relation to AI.

Dr Ferris: I want to add one more thing. Again, this is where the TRE will be incredibly helpful. If the data is sitting in a TRE and you are not sharing the data, but you are bringing the analysis to the data and analysing it in a secure environment, it reduces concerns around—

Aaron Bell: Over-sharing.

Dr Ferris: Over-sharing.

Aaron Bell: Thank you.

Q178       Chair: I was interested in this point as well, given that you introduced your remarks by saying that in your practice you used just the information that was necessary. That introduced the thought that a default of information being more richly available than is needed is something that could be avoided and resisted. The exchange that you just had with Aaron has big implications, doesn’t it? Certainly, for research purposes at least, we need to prepare ourselves, or there is advantage in preparing ourselves, for a world in which we share data more widely than is indicated by the particular research question. Have you thought about what the policy implications of that insight are?

Dr Ferris: Is that to me?

Chair: Yes.

Dr Ferris: I would go back to the importance of the secure data environment and the trusted research environment, and getting as much into those environments as possible, because that will enable not only hypothesis-testing science but hypothesis-generating science. When you are doing hypothesis-generating science, you want much more data. You are right to contrast that with the narrow all-and-just principle in hypothesis testing, where you know what data you need. Hypothesis generation requires access to large amounts of data. To do that, we need secure data environments that keep everything very clear and consistent with the Five Safes, but that allow the broader view of data that you are rightly highlighting.

Q179       Chair: Finally, I am sure you heard Mr Westmore in the earlier session talk about the ratchet of withdrawal of consent. In a world where more is better in terms of data, how concerned are you, whether it is over a short period in response to particular outbursts of public attention causing a surge of people to withdraw their consent, or, frankly, just the accretion over time, that you are developing a research environment that is predictably going to be progressively worse? Is that concern sufficient that we, as policymakers, should be introducing some firebreaks so that it is not the ratchet that Mr Westmore described?

Dr Ferris: We need to be very vigilant on this topic. As we saw last summer, it has the potential to move in a direction that would be really unhelpful for the entire public. I align myself with the comments of Mr Westmore and Dr Byrne around the need to bring the public with us and the level of engagement that that will require. We should be humble in the face of the practical challenges of doing it. The concepts are all easy to describe, but practically doing it, the specific 1, 2 and 3 of how we will accomplish it, is something that Government, the NHS, the ALBs, the charities and the press need to work on together. There are real practical barriers to accomplishing it, so we need to be vigilant.

Chair: Thank you, Dr Ferris and Mr Madden, for your evidence today. It is very helpful as we continue a fascinating inquiry, and we will make some recommendations shortly. Thank you very much. This concludes the meeting of the Committee.