Science, Innovation and Technology Committee
Oral evidence: Governance of artificial intelligence (AI), HC 38
Wednesday 13 December 2023
Ordered by the House of Commons to be published on 13 December 2023.
Members present: Greg Clark (Chair); Chris Clarkson; Tracey Crouch; Dr James Davies; Katherine Fletcher; Rebecca Long Bailey; Stephen Metcalfe; Graham Stringer.
Questions 756 - 822
Witnesses
I: Rt Hon Michelle Donelan MP, Secretary of State, Department for Science, Innovation and Technology; Sarah Munby, Permanent Secretary, Department for Science, Innovation and Technology.
Witnesses: Michelle Donelan and Sarah Munby.
Q756 Chair: The Science, Innovation and Technology Committee is in session. We are very pleased to welcome today the Secretary of State for the Department Michelle Donelan and Sarah Munby, her Permanent Secretary. Both witnesses were appointed in February this year when the Department was created.
We are very grateful for their time this morning to answer some questions in two sections. We are going to talk first about artificial intelligence, and then secondly more generally about the work of the Department.
Before we begin, I just want to welcome Dr James Davies MP to our Committee, who was appointed by a resolution of the House this week. He is very welcome.
Perhaps I could start, Secretary of State, with a question on AI. We have had the King’s Speech. There was no AI Bill in the King’s Speech. Are we expecting a Bill during this Session of Parliament?
Michelle Donelan: No. As you would expect, the King’s Speech lays out the legislative plan for this parliamentary Session.
What we are doing is investing more than any other nation in AI safety. We are really on the front foot. As you know, we were the country that convened the world for the AI Safety Summit. We managed to secure that landmark agreement around pre-deployment testing. That is not the limit of our work. We have also produced the White Paper, which we will be responding to shortly, and have been ensuring that we double down on getting a better handle on what exactly the risks are.
It is important to remember that this is an emerging technology, which is emerging more quickly than any technology we have ever seen before. No country has a full handle on exactly what the risks are. That is why we have prioritised that. We have brought a huge range of experts into Government from industry. We have packed the benches, if you like, to be able to do that.
We have also focused on that in the international track. One of the things that we secured at Bletchley was the “state of the science” report, so that we have a full handle on the research. To legislate properly, we need to be able to understand better the full capabilities of this technology.
With that said, we are not afraid of legislation. It is obvious that every nation will eventually have to legislate, but we identified that we needed to be able to act quickly. The next set of models will be coming out within the next six months. On average, it takes about a year to legislate in this country. We did not really have a year, but we have managed to take tangible action now that will ensure we are on the front foot when it comes to safety.
Q757 Chair: We will go into some of those further aspects in a bit more detail. In that White Paper, which was published in March, the UK was proposing to take quite a distinctive approach, which was laid out as working through existing regulators. In advancing that broad approach, the White Paper said that the Government expected to introduce legislation to place a statutory duty on regulators to have due regard to the principles outlined in the AI White Paper.
Given that that was expected in the White Paper, why is it not being followed through now with the necessary legislation?
Michelle Donelan: The key here is timing. We are not saying that we would never legislate in this space. Of course we would. As I said a moment ago, every Government will have to legislate eventually. The point is that we do not want to rush to legislate and get this wrong. We do not want to stifle innovation. We want to be creating the jobs of today and tomorrow. We want to ensure that our tools can enable us to deal with the problems in hand, which is fundamentally what we will be able to do when we evaluate the models.
We can compare and contrast our approach with what other nations are doing. If we look at the EU AI Act, for instance, there are deep concerns among some of its member states that it will stifle innovation. I was over in Brussels just the other week. We were signing, sealing and delivering Horizon, which I am sure we will talk about today. They were telling me that their version of our AI Safety Institute, the EU AI Office, will not be fully functional for two to three years.
We had already set up a task force in our Department months ago. That has become an institute already. It is the world’s first AI Safety Institute and it is in a position to evaluate models now.
Q758 Chair: How would it stifle innovation if there was a one-clause Bill to require the existing regulators to have due regard to the principles set out in the White Paper?
Michelle Donelan: First of all, all the regulators have to adhere to legislation anyway. With the White Paper, we tried to create more consistency and cohesion across the board and establish the common-sense principles that we want our regulators to be working to in this space. That helps industry to know the rules of the road as well as creating that cohesion, as I said.
Some of those principles were around things like safety, transparency and fairness. That is just a starting point. We have to come back to that White Paper, flesh out some of our responses and our further thinking, and push the debate forward. We will be doing that in the coming weeks.
Q759 Chair: What would be the problem with regulators having due regard to the Government’s own principles?
Michelle Donelan: That is only one aspect of what people might suggest that we need legislation on.
Q760 Chair: That is the aspect in the White Paper that the Government themselves said would be needed.
Michelle Donelan: Yes, eventually we will have to get to that point, but it would be very peculiar for a Government to do a one-clause Bill on just that aspect of AI. If you were going to legislate in the AI space, surely you would be tackling some of the other issues.
We are saying that the best way to do that is to get a full handle on exactly what the risks are. We have brought on board the experts. We have done that in the task force, which is now morphing into the institute. The institute does not just evaluate; it also leads on research. We are doing that on an international track as well, with the “state of the science” report and our work with international counterparts.
This is part of a bigger picture. We will have to legislate, yes, but it is the timing that is important. Rushing to legislate will not help anybody.
Q761 Chair: I see. That is instructive. You anticipate a bigger Bill, a broader Bill than what was envisaged in the White Paper, which was a narrowly focused duty on existing regulators to have regard to the principles that the Government have set out. You envisage something broader than that.
Michelle Donelan: I have been quite clear that every country in the world will at some stage produce a piece of legislation on AI. Exactly what we put in that Bill has not been determined as of yet. The timing has not been set because we in this country are taking a really proportionate and agile approach, which is going to be based on evidence. We are going to gather the information and properly understand the risks before we lurch to legislate. We have seen the impact that that can have. Look at the EU AI Act, the ramifications of it and the response by industry to that.
We do not want to be stifling innovation. We want to be a science and technology superpower. That is why my Department was created. We recognise the huge benefits that AI can bring. While we have just spent a while talking about safety, the reason why we want to grip safety is because of the incredible benefits that AI can bring both for our public services and to transform people’s lives. We can only draw on all of that if we have not stifled the innovation in the first place.
Q762 Chair: In thinking about that legislation and in getting it right, do you expect to have concluded your thinking sufficiently to introduce a Bill during this parliamentary Session or will it be for the next Parliament?
Michelle Donelan: I answered that at the beginning. It was not in the King’s Speech. No legislation is planned for this parliamentary Session. We have been quite open about that. Otherwise, you would have seen it in the King’s Speech.
There is sometimes a tendency to presume that legislation is the only answer, whereas what we have done as a nation is to invest more money in AI safety than any other country in the world and convene the world to tackle the risks and the opportunities of frontier AI for the first time.
We managed to get a landmark agreement around pre-deployment testing. I asked the companies to publish their own safety policies. They did that. That is the first time anybody has seen all of the safety policies of the top frontier companies.
We published an emerging processes document so that people could compare and contrast, to push for a race to the top. This should not all be on Government. We need the companies to do more as well. This is not as simple as, “Legislation is the only tool in the toolbox”. It certainly is not. There are downsides to legislation. It takes so long. The EU and America will be looking to us to fill that gap in time because we already have our task force, which in effect has become the institute and is in a position to evaluate the models.
Q763 Chair: Let us go on and talk about those things, but the presumption that there should be legislation, so far as there is one, is the presumption in the Government’s own White Paper, which states explicitly that the Government expect to introduce legislation to impose a statutory duty on regulators. That is where the presumption is drawn from.
Michelle Donelan: Yes. It does not state “within this Parliament” and it does not state “in isolation” either. It is a comment about the work that we are—
Q764 Chair: It says “when parliamentary time allows”, specifically.
Michelle Donelan: Yes.
Sarah Munby: I wonder whether I might just make a brief comment about the work we are doing with regulators on this topic.
Chair: Can you save that? We have some questions on that. You will absolutely have the chance to describe that.
Q765 Chris Clarkson: Secretary of State, you mentioned having to come back to the White Paper. When can we expect the Government’s response to the White Paper consultation?
Michelle Donelan: We will be doing that in the coming weeks. It will be in the new year. The magnitude of the summit, the information that we gathered there and the conversations that we had really did help inform our response to that White Paper, along with all the people who formally responded to it.
We wanted to take our time to get it right. This is a monumental topic that we are talking about here. We also want to push forward the conversation. That is what you will see when we come back to that.
Q766 Chris Clarkson: When you say “new year”, are you talking about January, February or is that more of an aspiration?
Michelle Donelan: No, it is not an aspiration. You will see it in the new year. I am not going to give you a specific date. You know only too well, Chris, that times can alter in politics, but I can assure you that we are talking about the beginning of the new year.
Q767 Chris Clarkson: In that case, can you update the Committee on your efforts to establish a central regulatory function?
Michelle Donelan: Yes, we have already started that. The central regulatory function is in DSIT. That has many functions, which we outlined within the White Paper. One is around horizon scanning, which is about helping the regulators fill in some of the gaps that I know the Committee has raised before; another is acting as a support function to the regulators. That team is already growing within DSIT.
Q768 Chris Clarkson: Do you envisage that being a hub-and-spoke model with the existing regulators or do you see this as a separate regulatory function?
Michelle Donelan: It is not a separate regulatory function. It is not a regulator per se. It is a support function, in essence, to the existing regulators. Our thinking on this may evolve over time as to exactly what support the regulators need.
What we are doing with our existing regulators is speaking to them—that is not just our Department but the lead Departments for those regulators as well—to make sure that they have the correct support in place and that we are using that central risk function in the right way. In addition, we have the DRCF, to which we gave £2 million to support the regulators.
Q769 Chris Clarkson: Is this more of a bespoke approach for Britain rather than an attempt to replicate what the Europeans have done or what the Americans are trying to do?
Michelle Donelan: There are two schools of thought. You could set up a standalone AI regulator, but the problem is that AI is in every different sector. We have taken the approach that you need to be driven by the context in which you use AI. If we were to set up a standalone AI regulator, there would be a lot of duplication across the board. We are not convinced that it would be the best and most efficient way of doing it.
It also takes a long time to set up a regulator. We have an existing fleet of regulators out there. We need to make sure that they have enough support. We are talking to them about making sure that they are getting the skills they need. A number of them have already been really on the front foot on that. I am thinking about Ofcom or the CMA, for instance. We are making sure that they can understand and process this topic.
Q770 Graham Stringer: How many people in the Department are working on AI?
Michelle Donelan: It is expanding by the minute, so I am going to turn to Sarah on that one.
Sarah Munby: There are around 120 people in dedicated AI teams, but then we have other teams across the Department, such as the online safety team, that also spend a significant amount of time on AI. As the Secretary of State said, it is an issue that affects everything. I cannot give you a number for that, but we have around 120 people in teams completely dedicated to work on AI.
Q771 Graham Stringer: Could you give me a number for the percentage of departmental resources that are dedicated to AI, even if it is not in terms of personnel?
Sarah Munby: It is quite hard to define that precisely. If you think about headcount, we are a Department of a little over 2,000 people. I have told you it is 120 people, so we are talking about somewhere between 5% and 10%, depending on exactly how you define it. That is not surprising, given the quite fundamental nature of the technological change that we are currently undergoing.
Michelle Donelan: You also have to remember that it is a foundational technology that interlinks with all the other technologies. For instance, we just put out our National Vision for Engineering Biology. That interlinks with AI quite a lot. There is a clear link with semiconductors, which is one of our five critical technologies. We do not really see there being such defined silos.
Q772 Graham Stringer: Have you begun a gap analysis of the existing regulators’ powers, as we recommended in our last report on AI?
Michelle Donelan: That speaks to what I was just talking about a moment ago. That is very much at the heart of the work of the central risk function in supporting those regulators. We are in regular contact with the regulators, as are the lead Departments. This is something that we will come back to in more detail in our response to the White Paper. We also have the DRCF.
Q773 Graham Stringer: Will you be offering additional support to the regulators, given that AI is bound to increase their workload?
Michelle Donelan: Yes, that is what we are doing by regularly talking to those regulators and utilising the central risk function. In essence, that acts as a support function. I know that the lead Departments are working particularly on this, as we are with them.
Q774 Graham Stringer: You previously made the completely sensible point that, if you set up one body for AI, it would end up replicating what was happening elsewhere. There are already forums for co-operation. There is the Digital Regulation Cooperation Forum. Should it be expanded and given extra resources?
Michelle Donelan: We have just recently given it £2 million to help with some of the things you are alluding to that could become issues. I agree that it is performing a very useful function.
Q775 Chair: The Digital Regulation Cooperation Forum is a self-selecting group of regulators that work together. To the point you have made, AI touches many more of them. What is the process for considering who should be part of that in future? Is that for you or is it for them?
Michelle Donelan: As you say, it is self-selecting. We will continue to look at these issues in the round. If we found that there were gaps with our regulators, we would be open to thinking of other fora or support functions to be able to assist them. We want to make sure that we have the regulation and the right mechanisms in place to be on the front foot on this agenda.
Sarah Munby: Those conversations are already happening. Not everything has to sit within a defined forum and setup. Regulators that are involved in employment or education issues will be coming together to discuss how AI affects their area of work. Those are not all codified systems, and they do not necessarily have to be.
Q776 Chair: In its interim report, the Committee was broadly positive about the intention to work through existing regulators but agreed with the Government that there would be gaps there. Do you expect the gap analysis that is being undertaken to be published as part of the response to the White Paper?
Michelle Donelan: We have not done it as formally as you are identifying, but we are looking at this in general as we go along all the time. We will be talking about this topic within our response to the White Paper, yes.
Q777 Chair: Should it not be done formally?
Michelle Donelan: This is an ongoing process, surely. AI is an emerging technology. If we were to do one gap analysis now, in a few months’ time it might be totally different. We need to be constantly doing that gap analysis—that is what I am saying—by working hand in hand with the regulators and by utilising the central risk function and forums like the DRCF. That is why we gave the DRCF £2 million of public funding.
We will be constantly re-evaluating this. There will be further information in the White Paper. I do not want to pre-empt everything today.
Sarah Munby: It is a systematic approach, in the sense that we are going through all the regulators systematically and asking the right questions about capabilities, powers, principles and so on.
Q778 Chair: In Government, we all learn lessons on how to do things. We had the Covid pandemic. Preparedness against an unexpected threat is one of the key reflections that we make on this. When it comes to AI, we do not know precisely where the threat is going to come from. It seems to me that systematic preparedness would be a good administrative principle.
Ms Munby, you have said it is systematic. Can and should this gap analysis be shared and no doubt kept updated?
Sarah Munby: This comes exactly to the point the Secretary of State has been making, which is that it is a continuous process. There may well come a point where it is appropriate to share more details about it.
If I may just give you a little bit of a sense of what those conversations look and feel like, sometimes when you are talking to regulators the discussion moves from an initial discussion about capability and whether they have the right expertise on their board, for example, into them saying, “As we have built our capability, we have reflected more on the questions about our powers. The thing that we originally thought we might have needed we now no longer think we need”.
It is a genuinely iterative approach. If I thought at this point we were in a position to say, in a sufficiently clear and robust way, “These are exactly the three gaps”, of course we would act on them. In fact, that is not the way the conversations are going in the main.
Because our fundamental framework is about outcome-based regulation, typically, the regulators already have the fundamental legislative underpinning that they need about outcomes. This is more about guidance. For example, you already have obligations under data protection around fairness and non-discrimination. The question is about how you comply with those obligations in a more sophisticated data environment. That is a question for ICO guidance, not a question about ICO powers, enforcement levels or something like that.
Q779 Stephen Metcalfe: Good morning. Just following on from Graham’s initial questions, and recognising that AI is a foundational technology, you said that 6% of staff within the Department are working directly on AI. Is there any upskilling or training programme for the other 94%? Everyone needs to know about this technology. As you are the Department charged with looking after it, we would expect all your staff to have some sort of knowledge.
Sarah Munby: Yes, absolutely. It is an excellent point. If I may, this is not just about AI. AI is only one of the technologies in which we want to see upskilling across both the Department and the wider Civil Service.
You might know that there is an initiative this year to ensure that every civil servant across the system gets seven hours of training related to data. That is the One Big Thing initiative. We are making sure that we implement that in the Department. That is a really important foundation, but we also have a whole range of different levels of skill-building on these issues, including apprenticeships, which can be at postgraduate level for our staff.
However, it is also important to come back to what the Secretary of State said about bringing in truly cutting-edge expertise. We need a balance, where everybody is sheep-dipped in the basics and we also have people who have a background outside the service and who are really close to the cutting edge of this frontier technology.
Michelle Donelan: That is one of the things that our Department has done differently. We really have brought on board a number of experts, including into the task force that is now the institute. We have also started an expert exchange programme. We are going to make sure that we are getting civil servants out into industry a lot more, so you have that two-way street and you are updating skills.
We do not just rely on our in-house expertise. If you look at the summit, for instance, we were really clear that we needed civil society, academics, experts and scientists. We are taking the same approach across the board. We have an expert advisory panel to our task force that is now our institute, with people like Yoshua Bengio, who is world-leading and known as one of the godfathers of AI.
We have that same approach across all the areas that we represent. We want to be a Department that is really listening and engaged. These technologies are emerging so quickly that, unless we have that approach, we will not have a skill set that is really up to date.
Q780 Stephen Metcalfe: That is right. Did the UK Government and the Department take that approach with reference to what other countries have been doing, or was it standalone: “We will just look at what we need”? I am particularly thinking about, and would be interested in your views on, what the Americans have said, with the White House Executive order at the end of October. Of course, the EU has published its AI Act. I would like views on both of those and on how you came to the decisions you have.
Michelle Donelan: It is really important that we do not see this in silos with each country doing its own thing. Every country will have a slightly different approach. That is expected, but we were very clear—that is why we created the summit in the first place and convened the world—that AI does not respect geographical boundaries.
There needs to be some synergy in our approaches, and we need to collectively gain a better understanding of the risks, especially for frontier AI, because it presents the biggest risks as well as the biggest opportunities. That is why we carved out that path at the summit. There are going to be future summits, which have already been announced, first in the Republic of Korea and after that in France. We got those agreements at the summit, including the Bletchley declaration.
When it comes to the domestic policies that countries have taken on board, I see them as complementary. It is no coincidence that the White House published its Executive order in the same week as the summit. To quote its team, we are working in lockstep together. It is also no coincidence that it announced that it would have an institute, just as we will have an institute. The two can complement one another.
We are slightly different in our approach to these things—we have made no secret of that—because we want to be fostering innovation. We want to be the home of jobs in AI. We are already the third-largest market for AI. We want to retain that and do even better, but we want to do it in a safe way.
Q781 Stephen Metcalfe: I should have made a declaration earlier. I co-chair the All-Party Parliamentary Group on AI. It is something I have had an interest in for the last seven years. I will just put that on the record.
The EU has taken a slightly different approach, as you said. It has taken this “level of risk” approach. I think the Government were right not to bring legislation forward in the King’s Speech. I was quite pleased about that because, from all the discussions we have had, it is not clear yet what we are trying to legislate. Everyone has been running around going, “Legislate, legislate”, but what? People are not giving us that answer.
With that said, there must be some good ideas out there. Categorising AI tools by the level of risk is potentially one of them. What is your view on that? There is also this question: if we are not going to legislate—as I said, I think that is correct at the moment—there are those who say that will give the EU and the US an advantage over us in this rapidly changing environment over the next year as new models emerge. Do you agree with that?
Michelle Donelan: On your first point, you are right: we have the high-level information of what the EU AI Act will do. The full thing has not been published and it is subject to change. I caveat my remarks in that regard.
You are right. They are basically categorising sectors based on risk, which is quite a blunt tool and does not allow much scope within those sectors, whereas we are taking an approach that is much more context-based. They are also setting up the EU AI Office, which will regulate AI and be able to do evaluations. Like I said, that is going to take two to three years. This whole thing will have moved on dramatically by that point.
We are not going to be left behind; nor will we lag behind. In fact, we are leading the way at the moment. We have invested more than any other nation in this agenda. We are the only ones who have an institute up and running that is capable of evaluating the next generation of models that is about to come out in the coming months.
We should not see ourselves in isolation. Our policies all interlink and support one another as well. The vast majority of these companies are based in America. It is helpful that they are also shining a spotlight on this issue. That was the whole point of Bletchley and the summit. We wanted the rest of the world to recognise the huge risks and opportunities around frontier AI and for us to all act together in ways that work best with our constitutions and approaches.
Q782 Rebecca Long Bailey: Secretary, when can we expect the AI Safety Institute to be fully operational and to have a management team in place?
Michelle Donelan: We already have Ian Hogarth, who was leading the task force. He is continuing in his role. He has had a fantastic career and he has a deep understanding of AI, including at the frontier. We have a team of well-established experts with the expert advisory panel. As I mentioned before, that includes Yoshua Bengio and people like Anne Keast-Butler from GCHQ. These incredible minds will be able to support on this work.
They have already done evaluations of models. They did demonstrations at the AI Safety Summit. I would like to offer an invitation to the Committee to see those demonstrations in private in the new year, which I am sure you will find very useful and illuminating. They will be in a position to evaluate the next generation of models when they come out.
We will constantly be adding to that team of individuals and developing our skill set. They will constantly be developing their approach and the way that they tackle this, as models change over time.
Q783 Rebecca Long Bailey: Have we begun the safety test work that was announced at the summit?
Michelle Donelan: Yes, that is the evaluation stuff. We have not evaluated the next generation of models because they are not ready to be evaluated yet. We have already evaluated some of the previous models. That was why we were in a position to do those demonstrations that you can be party to in the new year.
Q784 Rebecca Long Bailey: In relation to the leadership team, can you describe what mitigations have been put in place to address potential conflicts of interest? For example, you mentioned Ian Hogarth. There may also be other members of the institute who are conflicted.
Michelle Donelan: I will bring in Sarah in a minute, but this is at the cutting edge of AI. This is about the existing highly capable models or those that are about to come out. People say that you could fit everyone who truly understands this area into one train carriage. That just illustrates how scarce these skill sets are. We are talking about a really specific area.
If you want people who are experts in this field, who have knowledge of industry and academia, and who can bring the right skills to the table, they are naturally going to have some conflicts of interest. Otherwise, they would have had nothing to do with AI, and those are not the type of people you would want. We have to start from that premise, but we absolutely need all of those conflicts to be cleared.
Sarah Munby: I am happy to talk about the process in a bit more detail. We have taken this exceptionally seriously for the reasons that the Secretary of State outlines. A robust conflicts process is necessary to get the right talent in while also protecting the public.
Somebody like Ian, for example, would inevitably have a wide range of holdings in companies. Our principle is that we begin by defining the scope of the task force. Ian, as is on public record, has agreed to divest all his holdings that relate to that scope. By the way, this is at a significant personal cost to Ian. All of those divestments will take place at the value of the holding at the time of Ian’s appointment. He has not been able to gain any financial advantage whatsoever from any of the interactions he has had as part of the task force. I personally sat with Ian and went through every single line of his portfolio. I am absolutely satisfied that we have taken a robust approach.
As you can imagine, we then scale that approach to other members of the task force. It is not identical for every individual because people will have different levels of exposure to policy, for example, but we have built a conflicts capability dedicated to the task force exactly so we can have these conversations in a robust, systematic and, if I may say, pacy way and we can bring people on quickly.
Q785 Rebecca Long Bailey: Will the Department be publishing a declaration of outside interests for any of the leadership team?
Sarah Munby: We would not usually do anything different from what we do in the normal structures for civil servants and public appointments.
Q786 Rebecca Long Bailey: There will not be any documentary evidence, shall we say, of all the divestments you have just explained.
Sarah Munby: It may be that, at the point at which those divestments are complete, we provide further details, but it is not usual to provide an ongoing update on the individual shareholdings of members of our team, not least because, if we did, it would be market moving in some of these cases. If people have large holdings in individual privately held companies, it would not be appropriate to give running updates on where those things stand.
Q787 Rebecca Long Bailey: We just have to take your word for it that there is no conflict of interest, then.
Sarah Munby: That is ultimately how the conflicts of interest process in Government usually works.
Q788 Chair: It is fair to observe that Ian Hogarth is unpaid for his role, is he not?
Sarah Munby: That is right.
Q789 Dr Davies: Returning to the AI White Paper and the Government’s response to the consultation exercise, do you expect that the AI Safety Institute will have a role in that?
Michelle Donelan: It is a part of the Department and it will certainly support that, as other civil servants and officials will. There are sections of that response, without me detailing exactly what is in it before it is published, that relate to the institute, which it will have fed into.
Q790 Dr Davies: In terms of the institute, do you expect its remit to expand over time? That is a question to either or both of you, really.
Michelle Donelan: We have said that it basically has three core functions. The first is to evaluate. We have spoken at length about that. That is really important. Secondly, it has a research function to make sure that we can stay at the top of the pack in terms of understanding the risks. Its third function is information exchange, to support other countries. As I said before, this is very much about us working together internationally, which was at the heart of the summit.
We will constantly have to evaluate the work of the institute and what else it should or should not be doing. Again, that speaks to the heart of our approach, which is to be very agile. This is an emerging technology. It is emerging more quickly than any technology we have ever seen. Therefore, we need to make sure that our systems and processes can keep pace with that and respond to those changes where needed.
Q791 Dr Davies: In the autumn statement, there was further funding for the UK AI Research Resource. I wondered whether you could update the Committee on preparations relating to that. When do you expect the first research project to get underway?
Michelle Donelan: This is basically our compute capacity. We have secured several lines of funding. One of those is the AI Research Resource, which has already been funded. That is going to be based in Bristol and will come on stream in the summer of next year.
We are just working out the details of how that will be accessed. We are talking about increasing our computing capacity by at least 10 to 100 times. It will be game-changing. There is also going to be a facility in Cambridge that will come on stream within a few weeks. We also have the £900 million exascale programme, which is going to be based in Edinburgh.
In the autumn statement we announced additional money on top of all that to turbocharge our compute capacity. We all keep coming back to this. It is important that we have the compute capacity to be leading the way. You will also have seen that Microsoft has just announced a sizeable investment here in the UK, which again helps support our compute capacity.
Q792 Dr Davies: In terms of that supercomputer capacity, how will we stay up to date with technological advances?
Michelle Donelan: Compute capacity is just one aspect of AI. I have heard it described as the AI quartet: one factor is compute; another is algorithms; another is data; and the fourth is skills. We are trying to make sure that we are investing in all of those, including in skills, where we have invested £290 million since 2014.
We are not going to miss a trick, if you like. That is why we are also investing in compute. We are doing that with taxpayers’ money but also working with private industry.
Q793 Dr Davies: Should the investment in compute be viewed as a precursor to the UK establishing a sovereign large language model?
Michelle Donelan: No. Investing in compute gives us more flexibility and capacity, but our priority needs to be making sure that we are deploying AI appropriately throughout our public services so that the general public can seize the benefits of this. That is the whole reason why we need to grip the risks.
This is transformative. It already is in our NHS. Just a few weeks ago, for instance, I saw research into how surgeons can utilise AI to perform more knee surgeries in a day, which will help cut waiting lists. We already know that it is utilised in hospitals to detect heart disease and prevent strokes. This technology is already in our public services. We are talking about being able to intensify and amplify that and make sure that we get the benefits to the public.
The Cabinet Office, with No. 10, has set up an Enterprise Incubator Unit, which will be looking at this very question. It has been hiring additional people to facilitate that. We will be supporting that with the expertise that we have gathered in this area.
Q794 Chair: What is the purpose of this publicly provided compute capacity? Is it to make Britain more attractive to big players like Microsoft and Google? In effect, will it subsidise them and encourage them to be here? Is it the opposite? Is it there to rebalance the playing field so that smaller players are able to access compute resources so they can compete with those with bigger pockets? What is the intention?
Michelle Donelan: Just to be clear, it is not so the taxpayer can fund the likes of Google to have access to compute. This is about enabling researchers to have access. We will look at the exact mechanics of how smaller businesses will be able to access that. We are working through all that and we will publish some details.
Q795 Chair: In our interim report, we talk about the access to compute challenge due to smaller players not having the deep pockets. This is squarely about that.
Michelle Donelan: Yes, exactly.
Q796 Chair: You said you are thinking about the access requirements. I assume, therefore, that you will be excluding those that have adequate resources of their own and making it available to those that need help.
Michelle Donelan: We are going to come forward with the exact details of how this will work, but it is what it says on the tin: a research resource. It will do what it says it is going to do.
Q797 Chair: When you say “research resource”, is this for academic research or for small commercial developers?
Michelle Donelan: We are looking at how exactly this will work. We will publish further details on it, outlining exactly how that will work.
Q798 Chair: Will that be in the White Paper response or separately?
Michelle Donelan: It will be separate.
Q799 Graham Stringer: In September, this Committee went to the United States and talked to tech companies, academics, the private sector and politicians about AI. My view—I am not sure whether it is shared by the Committee or not—was that most of the people we spoke to were quite relaxed about the development of AI.
The exception was the academics at Harvard and other places, who were not worried about the technology itself but were very worried that the tech companies are controlled by a small number of very clever people who are libertarian and have scant regard for democracy. You have thought deeply about AI. Would you agree with the analysis of the Harvard academics?
Michelle Donelan: No, absolutely not. I regularly meet with the leading AI companies and their leadership teams. They have been very uneasy about the current situation. You could describe it as a democratic deficit; you could describe it as them marking their own homework.
The proof is in the pudding: they voluntarily signed up to that landmark agreement at Bletchley Park to allow for pre-deployment testing. When I asked them to publish their safety policies, they did, which was the first instance of pure transparency.
In addition, we keep working with them because we want to make sure not only that the Government, the EU and the US are taking action in this space but that the companies are continually raising the bar on their safety policies. That is why I have called for them all to adopt, as a minimum, something called responsible capability scaling. Your characterisation is not one that I have seen.
Q800 Graham Stringer: It was not my characterisation.
Michelle Donelan: No, your second-hand characterisation, then. I was in America after you, a few weeks ago. I was speaking to academics at Stanford. That was not their interpretation.
Q801 Graham Stringer: This country has had a very poor record on productivity over quite a long time. I am not trying to make a party political point on that, but we have had zero increase in productivity. Your Department is contributing to the public sector productivity programme led by the Treasury. When do you expect that to deliver? When will we see the impact in increased productivity?
Michelle Donelan: I will bring in Sarah on this one.
Sarah Munby: It is probably going to vary because there is a whole series of initiatives across Government working on the topic of how you use AI to improve public sector productivity, some of which have been established for some time and predate much of the discussion we are having here, and some of which will be in the early stages and will take longer to come through. It is not possible to give a single answer to that.
Our colleagues who are responsible for digital and data implementation across Government—AI is the next phase of that ongoing work—would say there have already been significant productivity improvements from those efforts in particular areas. We are likely to see that continue. It is going to be one of the absolutely core planks of that productivity review, as you rightly say.
Michelle Donelan: Looking beyond AI, we are on track to meet our commitment to invest record levels of research and development funding, over £20 billion a year by next year, which we know is a key driver of increased productivity.
Q802 Graham Stringer: I have been in politics long enough to have seen word processors introduced into offices, which were going to massively increase productivity. If you walk through any factory, business or office now, there are probably two or three computers at people’s elbows. Yet we do not see any increase in productivity in the national stats.
You can see an increase in productivity in one particular industry—you have outlined how AI can improve productivity in medicine; there is no disagreement there—but we never seem to see it come out in the final GDP productivity figures. Why will AI be different from this huge increase in computing power that we have seen throughout the economy, which has not effectively changed productivity?
Michelle Donelan: This technology will be truly transformative. The opportunities that it presents are arguably bigger than we have ever experienced before. If you just look at the scale of the change, four years ago large language models could barely string a coherent sentence together. They can now pass the Bar exam; they can pass medical exams. Who knows what the next set of models are going to be able to do?
That just highlights how they will be able to turbocharge everything that we do. The key will be making sure that we are securing adoption throughout not just the private sector but the public services, as we spoke about before, and that we are using this in the right way so that it does increase our productivity and efficiency.
Q803 Chair: To pick up on one of the questions that Graham put to you, he relayed a characterisation that a lot of the players in this space are concentrating on the tech, do not have much interest in public policy and take a rather libertarian approach. You rejected that as being not in your experience. You said, to the contrary, that some of the leading players are very keen to engage with public policy and voluntary commitments.
In the words of Mandy Rice-Davies, they would say that, wouldn’t they? If you are a large player that has made some advances, blocking the path for others to follow is something that incumbents tend to do, as competition law, which Ms Munby had responsibility for in a previous incarnation, recognises. Are you alive to that risk?
Michelle Donelan: Yes. First of all, we do not just engage with those that are leading at the frontier. That is evident from the approach that we took in the summit. The businesses that were represented ranged from five employees up to thousands. We also had civil society experts from across the world at the summit. When we are developing our policies, it is really important to listen to that cross-section of views.
This is a concern that was presented to me quite early on when I took the role. I worked hard to engage with that cross-section of individuals and make it really clear that we were not pulling up the drawbridge and stopping everybody else from getting to that same level. We were talking about the safety mechanisms, policies and mitigations we need at the very frontier, not to stop everybody from getting there, but to highlight things that companies need to do more of. That includes responsible capability scaling, access and transparency to enable us to do evaluations, ensuring that we have a better understanding of the risk, updating our mitigation policies and working with those companies.
We definitely should be really clear that it is not about pulling up the drawbridge and stopping innovation. It is the opposite. In fact, our whole policy in this country is pro-innovation. It is about supporting industry and supporting job creation.
Q804 Chair: Accepting that that is absolutely the intention—
Michelle Donelan: It is also the policy.
Q805 Chair: Yes, and the policy is under development, as you were saying. You described the number of people that you have in the Department. Do you have any concerns about, and what can you do about, the fact that inevitably the largest players have many times more people working to influence you than you have to respond to them, and many times more than the smallest players have? How do you deal with that imbalance?
Michelle Donelan: We deal with it in a number of ways. First of all, we have brought some members of civil society into the Department. It is not just industry that we have brought in. We have also brought in people from those smaller businesses. We engage regularly with them and use them as sounding boards, for stakeholder engagement, et cetera.
As Ministers, we make sure that we are responsible about the time we dedicate across the whole AI ecosystem. It is the same with officials.
Q806 Chair: I happen to know some people who work for your team. It is worth saying in public that you have an impressive record in drawing some really excellent people from the private sector into public service.
Let me ask one further question on this. One of the big questions for policy-makers, on which different companies have very strong views, is the question of open source. Some very large players say that it would be very dangerous to allow open-source access and therefore access should be restricted to trusted providers. Some other big companies say quite the reverse: to have full contestability and innovation, open source should not only be allowed but may even need to be mandated.
How are the Department and the Government going to resolve this dilemma?
Michelle Donelan: You are quite right. That is certainly how the debate is currently framed. We need to continue to do more thinking on this. It has not been dealt with by the EU AI Act, as far as I can see from the headline information we have been given. It is not only something we need to think about; other countries need to as well. It is something that we put on the agenda at various points of the summit for discussion.
We need to reframe this debate, because at the moment it is seen as very binary. Open source is either a very good thing or a very bad thing. I was speaking to Irene from Hugging Face, and she represents the view that I share, which is that surely our concerns should be around the capability. Sometimes, depending on the capability, you will be less concerned if it is open source. If the capability presents more risk, you will be more concerned.
We need to be more mature in our approach to the conversation and therefore in our policies responding to it. This is very much a nascent policy area in every country at the moment. It is one we are continuing to explore further.
Q807 Chair: Insofar as Britain has a distinctive take on this—it is fair to describe the tenor of your evidence to this Committee and your statements in public as being pro-innovation—it would be reasonable to infer that open source is more likely to encourage innovation than something that is very restrictive. Is that a reasonable reflection?
Michelle Donelan: We are not, as a Government, anti-open source. We recognise the huge benefits it can bring, not just in innovation but in terms of equality in AI and the concentration of power, which are other concerns.
We should be less binary in our approach to this and focus more on the capabilities, which are at the heart of the issue. We need to explore this topic further. That goes back to what I was saying before. This is why it is so important that we have brought the experts on board and are really delving into some of these issues and the risk agenda.
Q808 Chris Clarkson: I would like to turn to a subject that nobody around this table has really thought about today, which is elections. We are all aware that, for example, a few months ago there was some audio purporting to be the Leader of the Opposition engaged in a sweary rant. That was a deepfake. There is a decent chance that there is going to be some AI involvement in the next election, whenever it comes, whether that is in the form of deepfakes or generated content. Has your Department given any consideration as to whether the Electoral Commission requires any further power to deal with that?
Michelle Donelan: On elections in general, people can already use Photoshop, for instance. AI will enable much more sophisticated deepfakes. It will make it easier for people to believe that they are authentic, and for people to produce them at home without requiring really expensive software. It amplifies an existing risk. It is not an entirely new risk.
This is something we have been concerned about as we approach an election, as has every other country, which is why I personally made sure that it was on the agenda at the summit and that we spoke about it with our allies, who are equally facing elections. We will continue to work with them on our approach and our learning in this area.
We are also working very closely with the leading tech companies. While tech presents some of the problems here, it also presents some of the solutions. There are things like watermarking. There is also technology that can detect whether something is AI-generated or not. There are debates over whether these will be robust enough or whether they can be jail-broken. If you put that aside, those could be some of the solutions. We are also in talks with social media companies as to what agreements we can get for when an election is called.
The core of this work is being led by the Security Minister in the Defending Democracy Taskforce. He will further examine our current regulations and legislation in relation to the Electoral Commission, which already has a great deal of power in this space. That is certainly a topic that he will be leading on.
Q809 Chris Clarkson: Is DSIT actively involved in the Defending Democracy Taskforce?
Michelle Donelan: We have a seat at the table.
Q810 Chris Clarkson: Are you getting adequate input into it in terms of your remit?
Michelle Donelan: Yes. Not only do we have a seat at the table, but I regularly meet with the Security Minister. There is a lot in our agendas that crosses over, looking at social media in particular and other topics. We regularly have stocktakes. I have one next week.
Q811 Chris Clarkson: Going back to the piece about regulation that we discussed earlier, what other organisations will have to play a role in ensuring there is a robust way of monitoring where AI is used in electoral campaigning? You have that convening power as a Department. You are looking at how we are going to regulate this. The general direction of travel is that we will enable existing regulators.
Do you have adequate bandwidth to ensure they can fill the knowledge gaps, or will this require a certain level of industry input as well?
Michelle Donelan: It needs some industry input as well, which is why we are working very closely with social media companies. That is the way in which these AI deepfakes will spread extremely quickly. We will need them on board to be able to act quickly.
We are also working with industry, as I said a moment ago, to ensure that, as and when the solutions to these problems are developed—things like watermarking and AI technology to detect whether something is AI-generated—we are in a position to deploy those, including on elections. We are staying on top of this agenda.
Every country is facing this issue. It is not, as I said, a new risk. It is amplifying an existing risk. In general, that is what AI does. We published risk documents ahead of the summit. We were the first in the world to do that. One was produced by GO-Science, our leading scientists, and one was produced by our intelligence services. They summarised the risks, but they articulated that AI is amplifying the existing risks.
Q812 Chris Clarkson: Will you be encouraging platforms to follow Meta’s example and require advertisers to flag when AI is used in content?
Michelle Donelan: That is only part of the issue that is of concern when it comes to elections. The biggest threat is not, I would argue, an advert but a crude image or video that is shared multiple times between individuals, rather than formally advertised and paid for on a platform. Things will go viral without that mechanism. We need to make sure that social media companies are looking at this in the round.
Q813 Chris Clarkson: I completely agree with you. It does have an amplifying effect. For example, if Tracey photoshops a picture of me kicking a puppy, it is not as potent as a video of me kicking a puppy, not that I want to kick puppies. I am more of a cat person.
Rebecca Long Bailey: Do you kick cats?
Chris Clarkson: It depends on what colour rosette they are wearing.
Coming off the back of that, is there space within the Defending Democracy Taskforce to set up some sort of regulatory framework for how this is going to work? Is that going to be the eventual outcome of the task force, or is it going to come up with some flavour of recommendations for what people need to do? How prescriptive is it going to be?
Michelle Donelan: I cannot predetermine the exact end goal of the Defending Democracy Taskforce, but this Government are taking this extremely seriously. We have set up a whole task force dedicated to it, led by the Security Minister. He is supported by my Department. There are many workstreams on this agenda.
That is why I put it on the summit’s agenda. That is why I am talking to my counterparts on it. When I went to America on my departmental visit a few weeks ago, it was one of the topics I was certainly talking to them about as well. We need to work together with our allies on this.
Q814 Chris Clarkson: I completely agree with you on that. The US, Canada and various other countries are having elections, as well as us. Is there any shared learning from those discussions already?
Michelle Donelan: There certainly is in terms of our approaches, and we need to continue to share that learning and make sure that we can grapple with this issue together.
Q815 Stephen Metcalfe: There is no doubt that this is a big risk and that it will play a part in the next election. I completely agree. It is amplifying an existing risk. A lot of what can be done could already be done, but this technology allows it to be done at scale.
Michelle Donelan: Yes.
Q816 Stephen Metcalfe: The other part of it is that, for any of this to work, we have to be susceptible to the output of AI. What measures or steps are the Government taking to make the general public, us, more informed about the risks of being misled by AI? Is there an awareness campaign?
We grow up with certain technologies and we become comfortable with those, and then new technologies come along that have the potential to mislead us. We need to be more sceptical. How can we create a more sceptical public who do not believe everything they see or hear?
Michelle Donelan: First of all, you have to be quite careful about that. I have read studies suggesting that, if you go too far on that agenda, people turn off all news. You have to inform people and enable them to decipher which sources are more likely to be credible than others.
It is difficult when it comes to AI because it can look indistinguishable from verified sources. One of the things we put within the Online Safety Act was a clear workstream around media literacy, tasking Ofcom. This is something we will continue to double down on.
I know industry is trying to do more in this space as well. We are starting from a younger age so we can better inform the next generation upwards. As it becomes more and more sophisticated, this is something we are going to have to continue to revisit.
Q817 Katherine Fletcher: Returning to the measures to mitigate the potential risk of electoral fraud, I just want to push you slightly on timescales. I grant you that, as this technology evolves, we will require evolving solutions and flexibility. For example, we have local elections coming up in May. There are some quite big elections across the country. It strikes me that this is an opportunity to test some mitigations before any general election would be called.
Can you update us on what is going to happen when? The worst-case scenario would be that we have some really great solutions, but they are implemented two months after the next general election when Keir Starmer has been caught kicking a puppy. Do you see what I mean?
Chris Clarkson: That will not be AI.
Michelle Donelan: What is you guys’ obsession with kicking puppies? No, I know what you mean.
Q818 Katherine Fletcher: There is a genuine fear that British democracy is going to get subsumed by some of our enemies who seek to undermine us. Just look at some of the fake news that is knocking around about the Gaza conflict at the moment.
Michelle Donelan: I cannot speak for the exact timeframe of the Defending Democracy Taskforce, because I do not lead it; the Security Minister does. What I can say is that we are working with social media companies to make sure that something will be in place relatively soon.
Q819 Katherine Fletcher: What does that look like?
Michelle Donelan: I am fully aware of the timeframes that we are working to in this country and the potential risk here. Nobody has a silver bullet answer on this topic. As I said before, some of the potential technology solutions are around watermarking and AI detection software. There are many critics of those, who suggest they will never be produced to a level that is beyond jail-breaking.
There is no silver bullet, but the Government are taking this extremely seriously. We have workstreams and are working at pace, along with our international counterparts. We do have the Electoral Commission as well.
Q820 Katherine Fletcher: Is there any chance of seeing some trial efforts to mitigate before May’s elections, for example?
Michelle Donelan: When you say “trial”, what do you mean?
Q821 Katherine Fletcher: I mean something being delivered in the real world as opposed to an assessment or a strategy.
Michelle Donelan: We are working to deliver something with social media companies that will be effective in this space. The Defending Democracy Taskforce has many other strands and initiatives that will be pursued as well.
I cannot speak to the timeframe of that because we do not lead this work in its entirety. It is led by another Department, in essence; we are supporting it with our networks and relationships.
Q822 Chair: Just finally on this point, do you expect that to be in place for the next general election or will the election be some sort of trial?
Michelle Donelan: I expect that by the next general election we will have robust mechanisms in place that will be able to tackle these topics. Absolutely, yes.
Chair: Thank you very much indeed.