Science, Innovation and Technology Committee
Oral evidence: Governance of artificial intelligence, HC 945
Wednesday 25 October 2023
Ordered by the House of Commons to be published on 25 October 2023.
Members present: Greg Clark (Chair); Aaron Bell; Dawn Butler; Chris Clarkson; Katherine Fletcher; Stephen Metcalfe; Carol Monaghan.
Questions 539-652
Witnesses
I: Dame Melanie Dawes, Chief Executive, Ofcom, and Will Hayter, Senior Director, Digital Markets Unit, Competition and Markets Authority.
II: John Edwards, UK Information Commissioner, Jessica Rusu, Chief Data, Information and Intelligence Officer, Financial Conduct Authority, and Kate Jones, Chief Executive, Digital Regulation Cooperation Forum.
Witnesses: Dame Melanie Dawes and Will Hayter.
Q539 Chair: The Science, Innovation and Technology Committee is now in session. Continuing our inquiry into the governance of artificial intelligence, our session this morning will be looking at the existing regulators in the sector.
We are very pleased to welcome Dame Melanie Dawes and Will Hayter. Dame Melanie is chief executive of Ofcom, the telecoms regulator, and was previously the permanent secretary at the Ministry of Housing, Communities and Local Government; she and I served together there, I should disclose. Will Hayter is senior director at the Digital Markets Unit at the Competition and Markets Authority. Thank you both very much indeed for joining us today.
Perhaps I could start with a question to Dame Melanie. The Government’s approach, as set out in their White Paper, is to make use of existing sector regulators rather than to create a new AI regulator. As the communications regulator, Ofcom is obviously very much in scope for that. As an organisation, do you feel equipped to take on the regulation of AI?
Dame Melanie Dawes: Thank you for inviting me this morning. The short answer is yes, but clearly there is a huge amount of uncertainty as to how this latest wave of technology will disrupt the markets we regulate and the services we regulate on behalf of consumers, and what that will mean for the risks and opportunities. I think we have to remain very open to needing to change or to add new functions across the regulatory landscape. But, broadly: yes, I do think that.
Why do I say that? Because, fundamentally, Ofcom’s underlying statutes—the things that Parliament has asked us to deliver on—are technology-neutral. They are not dictated by the type of technology that is being used to power those services. That means that as those technologies change and adapt, our role fundamentally to oversee those markets on behalf of consumers and citizens is still there and we can adapt our approach accordingly.
Q540 Chair: You will need expertise to understand what is being done with AI and its proposed developments. Do you have a dedicated team of people working on AI in Ofcom at the moment?
Dame Melanie Dawes: Yes. We are doing work—and have done quite a bit of work already, which we presented to you when we wrote to the Committee a little while ago—across all our sectors. Actually, there is disruption and innovation going on, from telecoms and how you manage networks right through to our new duties for regulating social media, where it is very obvious that generative AI is going to change the way those services work.
It is very much a cross-Ofcom exercise to look at what GenAI and large language models mean. We have put in place a work programme across the organisation, co-ordinated by our strategy team. It is not about having one team, really; it is very much about a cross-cutting effort that is co-ordinated in one place.
As you say, we do need these skills. We have always needed to keep building new technology expertise over the years. We began building a small team of about 15 people working on data science and machine learning about five years ago, and overall what I would call our AI expertise is now up to about 50. We also have quite a lot of people with experience in things like data analytics, for example. We have quite a strong function there, and I think that serves us well, but there is a lot of change going on and we need to stay abreast of that.
Q541 Chair: So you have 50 dedicated to AI at the moment. How many employees does Ofcom have?
Dame Melanie Dawes: That is 50 dedicated to data science and machine learning, which includes some of the newer forms of AI. We also have people who understand things like computer vision, which lies behind a lot of the new technology around facial recognition. There are quite a lot of streams of expertise.
Our total headcount at the moment is around 1,350. That includes about 350 people for online safety, which will grow a little bit more, but is now more or less stable.
Q542 Chair: That is quite a small number in what is a large organisation, befitting its large responsibilities. Do you think that it is enough? You said that it is growing; if you are to be one of the principal regulators of AI, what would you regard as an appropriately sized team in an organisation of 1,300?
Dame Melanie Dawes: It is 50 people on data science and machine learning. We have other technology expertise on top of that and on online technologies—things like computer vision, age estimation and so on. We also have a lot of technical expertise in our traditional regulatory remit—things like spectrum engineering and fibre and mobile technologies. It is quite a broad set of expertise that we are managing inside Ofcom, and that is quite an important part of the skillset we bring to bear.
But we are not an enormous regulator, I agree. It is very important for us to build partnerships with our fellow regulators, but also with experts outside, with academics and so on. I am not sitting here saying that we have a major resourcing problem and we need to grow this. It is not a huge amount of people, but I believe that it is enough for now.
Q543 Chair: You say that you are not a large regulator, but 1,300 is quite a lot of people—it is bigger than some Government Departments. All the evidence that we have taken says that AI poses multiple challenges. I guess our prospective concern is whether you have enough people with the right experience and capability to regulate and understand the vast numbers of people in the organisations that you are regulating and to ensure that you are not bested by that. If we are to take an approach that trusts existing regulators, how can you provide assurance that you are going to be up to it, in terms of the number and the capability of the people?
Dame Melanie Dawes: On our online safety remit, which I hope will start tomorrow when the Bill receives Royal Assent, we have scrutinised our numbers very closely with the Treasury and the Department. As I was saying, we have around 350 people right now; that will grow a little bit more, but we do need to stay very active on those numbers, because this is a hugely ambitious regime to implement. There is a lot of risk and a lot of uncertainty, but we have sized it for now.
On the rest of our remit, we have had a flat cash budget cap from the Treasury for many years now. I think at some point that will start to create real constraints for us. We have become very good at driving efficiency, but if the Government were to ask us to do more in the field of AI, we would need new resources to be able to do that. But as far as our existing remit is concerned, our current resourcing is broadly adequate right now. Part of where the Government are coming from is that we have capacities and capabilities that can be expanded and built on. Ofcom has shown over the years that we have been good at doing that, but fundamentally our statutory responsibility is to regulate the industries that are set out in statute as the job that we have to do.
Q544 Chair: What do you see as the regulatory challenges in AI that you will have to respond to as a regulator?
Dame Melanie Dawes: AI has been powering change for many decades. Indeed, it has created some of the services that we are about to start regulating, like social media and search, and online gaming and pornography as well. That disruption continues. I think anyone in the industry would say that this is a moment where generative AI in particular is attracting huge investment. Everybody is thinking about what it means for their business.
Chair: What are the specific regulatory challenges?
Dame Melanie Dawes: What it means for us is that there is a lot of change, right across our remit. It means, for example, that it is much easier for bad actors to scam us, because they can create more plausible scams. It means that fake news is easier to create and promulgate and so on. It also creates some opportunities, for example for better content moderation on social media using large language model technology. It is across our whole remit.
Q545 Chair: Let’s take the misrepresentation commonly known as deepfakes. You will have seen our interim report, in which we set out 12 challenges for governance, one of which we called the misrepresentation challenge, or deepfakes—in other words, people passing off content that purports to be the words of someone but is actually fictitious. Is that something that Ofcom would regulate?
Dame Melanie Dawes: We have certainly already been working with our broadcasters to find out what they are doing to navigate their way through this and make sure they are not promulgating fake videos, for example, by accident. I don’t think they would do it deliberately, but they could easily get caught out. You can use technology, to some extent, to start to tackle this in future.
Q546 Chair: What about beyond broadcasters? You license broadcasters and you have powers there.
Dame Melanie Dawes: Yes, it’s a very specific remit—you are right.
Chair: What about the millions of people who could, in an amateur way, produce deepfakes? Is it Ofcom that will be responsible for regulating that?
Dame Melanie Dawes: We do not have any statutory remit more broadly for mis- and disinformation. We do, in the Online Safety Bill, have a new responsibility for making sure that social media and other platforms take steps to tackle foreign interference in elections. That is a very specific new rule that comes in tomorrow that we will be overseeing, but apart from that, we do not have a broader remit on disinformation.
Q547 Chair: This, as you know, is of interest to the Committee. You said in your written evidence to us that you do not favour a statutory approach—that the approach of using your existing powers is adequate. But you have just told us that you do not have any powers to prevent deepfakes, other than if they are put out by a broadcaster or people you license. So shouldn’t you, in furtherance of your remit and the challenges, actually be telling Government that you need these powers?
Dame Melanie Dawes: Just to be clear, what I am not saying is that all our legislation is fit for purpose and will encompass every possible risk or new problem that is going to emerge over the next few years. I am definitely not saying that. What I am saying is that with our existing remit we can absorb the new challenges of AI, because we regulate sectors and services rather than technology. But I strongly support the Government having a policy function to scan where there may be gaps in the regulatory landscape.
With something like deepfakes, the whole question of disinformation has been much debated through the passage of the Online Safety Bill, and in the end Government and Parliament decided not to describe it as an explicit harm that needs to be tackled, particularly for adult users of the internet. There are some provisions that will allow us to bring more transparency to the big platforms’ efforts to tackle disinformation, and that can cover deepfakes. There has very recently been a big conversation about how far to go and, for example, the need to balance freedom of expression in particular in that regard.
Q548 Chair: You say that you approve of the Government’s ambition to have a non-statutory approach that allows the horizon to be scanned and to look at what is happening, but if you have no powers to act on it, surely that is impotent.
Dame Melanie Dawes: We can act in some areas, where Parliament has already asked us to oversee particular services and sectors. The Online Safety Bill is a really big and ambitious new set of powers for Ofcom, which we will receive tomorrow, I hope. But where Parliament has not yet decided that we should be regulating—regulating is a big deal; it involves using tools to interrupt markets and change commercial decision making—what we do quite often in Ofcom is scan that horizon, do policy work and recommend to Government where we think change can be made. We have done that recently on public service broadcasting. We are looking at the way social media feeds can undermine people’s ability to detect fake news. We are active in some of these future-facing spaces, but without a statutory responsibility we clearly cannot start to regulate something that we have not been given a power to regulate.
Q549 Chair: This is one of the points that the Committee made. Things are proceeding at a rapid pace here, which is why I am surprised that in the written evidence you have endorsed a non-statutory approach. Isn’t it rather premature to say that you should do this without statutory powers, when you have said in your first five minutes of evidence to this Committee that you do not have statutory powers to suppress deepfakes, apart from with broadcasters?
Dame Melanie Dawes: It is a decision for Parliament whether there should be new regulations against disinformation, which would include deepfakes. We are doing quite a lot of work, by the way; we have a small technology team on computer vision based in Manchester who are looking at watermarking, which is one of the potential future solutions to help to tackle deepfakes. There is not going to be a silver bullet, but we are investing in that as a piece of future-facing work.
I wonder whether we are slightly talking at cross-purposes here; my apologies if that is the case. There may well be new risks that need to be regulated in future, possibly quite soon. At Ofcom we try to keep abreast of those with our policy work and in close collaboration with the Government, but if there are new areas the Government would need to legislate.
But that is a separate point from my support for the Government’s non-statutory approach to the AI principles, which is a rather different consideration. There, I think they are right to set out some principles that allow us all to co-ordinate and have a common language, but if you put those on a statutory footing right now, I am not quite sure what that would do to solve the problem of deepfakes. I think you would always need a sector-specific effort to change the law in order to give a regulator a new set of responsibilities. I think they are two slightly different points.
Q550 Chair: We will shortly have the last King’s Speech of this Parliament, so isn’t the point that taking the additional statutory powers you might need for these gaps requires legislation? One of the purposes of our inquiry is to guide and advise the Government on what they need to do—we do not doubt their ambitions and intentions—to keep us at the forefront here. Would it not be appropriate, given your purview of the communications sector, to say, “These are the gaps that we have, and these are the gaps that need to be plugged in legislation,” rather than what you have said, which is “We will wait for Parliament to tell us what we should be doing”?
Dame Melanie Dawes: We do forward-looking work to support Parliament in working out where the gaps might need to be plugged next. The media Bill that we are expecting in the next Session is very much a response to Ofcom’s work looking at public service broadcasting and at how we need to update the regulatory framework to reflect the fact that AI now means that people are given recommendations on what to watch online through Netflix or other streamers. That means that our public service broadcasters are no longer prominent on our TV screens and other screens, and that is an issue if we care about public service broadcasting. We have done a lot of work on that. It is absolutely a response to technology-driven change in the industry that requires the regulatory framework to be updated. That is what the Government will bring forward, we expect, in a couple of weeks’ time. That is a good example of what we do at Ofcom.
Another piece of work that we are doing right now is looking at how people are not very well able to navigate the news that they get from their social media feeds. They are more likely to go into a rabbit hole and more likely not to be able to identify fake news. We are doing some research on that and will come forward next year with recommendations as to where legislation could fix that gap. Primarily, I think we are going to recommend more transparency around the algorithms and how they work.
We are quite active in this space, but that is different from whether the AI principles should be put on a statutory footing, which is a slightly different part of the forest, if you like.
Q551 Chair: A lot of people reflect on the regulation of online harms and think that we took a long time in getting to it. Actually, things are more difficult because patterns have been developed and businesses have been established before there was a regulatory regime to assess them. We have the opportunity to do things differently for AI, but isn’t what you have described likely to lead us to follow that same playbook and be catching up later on, rather than trying to equip you with the powers now?
Dame Melanie Dawes: I think the response today is very different from what it was 20 or even 10 years ago. Twenty years ago, when the modern internet was being born, there were no principles. What you see today is not just the Government’s AI principles, but a huge effort for all the sector regulators to update the legislation that we are using. We have the Online Safety Bill for Ofcom; the Digital Markets Bill, which allows the CMA to look at the competition impact of the big gatekeepers; and the refreshing of the data legislation, which I am sure my colleague will be able to talk about later. The Government are acting to update legislation right now, as well as looking in a cross-cutting way at what the principles need to be.
I also think that when you look forward towards new and emerging risks and opportunities, so much of the effort has to be international. It has to be about international standards, international co-operation and agreement between Governments. That is where a lot of the heavy lifting will need to be done.
Q552 Katherine Fletcher: Let me just granularise this slightly. The Online Safety Bill is coming into force tomorrow, fingers crossed. What is AI doing or generating that under that Bill you will—fingers crossed—be given the powers to regulate? Could you give a layman’s example for the Committee?
Dame Melanie Dawes: Artificial intelligence, basically, is the modern internet. It would not exist without that ability to analyse data. If you think about something like a social media feed, that is generated by AI for you on the basis of your preferences, history and everything else that might be feeding into that algorithm. What new technologies, and generative AI specifically, are beginning to drive is innovation by the platforms in new tools like chatbots. If you are on a social media feed, increasingly you see that there might be a chatbot that you can talk to. That might be on whatever topic you want, and it might be to help you through a problem you may have, but it clearly creates risks that you might be drawn down a rabbit hole, because the AI—
Katherine Fletcher: The chatbot is telling you to go and do something nasty to yourself.
Dame Melanie Dawes: Exactly. We know already that all this technology needs testing a lot more before we can say that it is safe. Some of those exchanges with generative AI-powered chatbots can lead to some quite dark places quite quickly, because the data on which the bots have been trained can drive them to operate in a certain direction. All of that is just very new.
Q553 Katherine Fletcher: As a regulator, how will you assess that? I was struck by your comments to the Chair that you were minded to think about mandating the transparency of algorithms, if I heard you correctly. As we have heard in previous evidence, the problem with AI is that nobody knows what the AI is doing. By its very nature, it is a black box. I am interested in whether you would basically have to have evaluation, problem spotting, reporting and “Can you take this down, please?”, or whether you can, with your existing powers, get further up the food chain.
Dame Melanie Dawes: I think we can get further up the food chain. The Online Safety Bill requires all the services in scope to assess their risks. If you introduce a big change such as bringing a new chatbot on to your service, you will have to do a new risk assessment and share that with Ofcom.
We will then set out codes. We are going to consult—literally within a day or two of the King’s Speech—on our first set of proposals for tackling illegal harms, because we really want to be quick in getting the Bill implemented. We will set out in our codes what we think are appropriate mitigations that you can take to address the risks that you have found. Then, for the largest platforms, the Online Safety Bill requires transparency on how they are meeting their wider terms and conditions, which is where they will be required to deal with misinformation, if doing so is part of their policies. There is quite a big toolkit here for us to get into what the platforms are doing and respond to change as it happens.
Q554 Katherine Fletcher: In simple terms, if—I don’t know—BBC News’s new chatbot starts recommending that somebody reads a proscribed terrorist organisation’s homepage, BBC News will give you the risk assessment and will say that it has thought about it, and then you will have the power of sanction if that risk materialises.
Dame Melanie Dawes: Yes, and we will set out codes as to what we believe are appropriate measures, for example for social media platforms to make sure that people do not stumble across terrorist material.
Katherine Fletcher: Or have it shoved at them.
Dame Melanie Dawes: Exactly.
Q555 Katherine Fletcher: I just want to return to the idea of the black box of the AI. Is there not a risk that you are just kicking the problem to the companies by saying to them, “You need to do a risk assessment and have some transparency on what your algorithm is pushing out”? If the algorithm is in a black box, they can just turn round to you when something bad happens and say, “Whoops—we didn’t know.”
Dame Melanie Dawes: A company that says, “Whoops—we didn’t know,” will need to explain to us why it did not know. The Online Safety Bill is very thorough and very ambitious and is ultimately aiming to change that culture in the industry so that companies do the risk assessments, take care, have proper governance and, particularly for the biggest platforms, have accountable people in their business at a senior level. Mistakes will always happen, and we are not going to be taking down or requiring take-down of individual bits of content. We are requiring them to have appropriate systems and processes in place and, in the case of the bigger companies, to be more transparent.
Your point on the algorithms is absolutely right. The Bill does provide for us to require audits of specific algorithms. This is an area of work where the DRCF—the Digital Regulation Cooperation Forum—is very important for us. The regulators you have before you this morning have come together to work together, and how we audit algorithms is a really good example of something that we are trying to do together as one, rather than all doing it slightly differently, because it is a central part of the future regulatory toolkit.
Q556 Katherine Fletcher: That makes good sense. It almost comes back to the point that the Chair made in his opening questions about resources. AI is forcing us to work in mysterious and different ways, and in part it is forcing co-operation across the regulators of different sectors.
I promise we will come to you, Mr Hayter, and I will ask the same question of you shortly, but let me ask Dame Melanie first: can you get the right expertise in, or are you being forced into working together because of the scarcity of the expertise that can read the algorithmic code and go, “That one’s going to start pointing to someone with swastikas”?
Dame Melanie Dawes: From an Ofcom perspective, we have flexibility over how we use our budget. We have used that to recruit some really excellent experts on the technology side. Our overall complement of technologists is pretty big for an organisation like ours. People are coming to us because they really care about the mission, and this is a place where they think they can get change to happen over the next few years. Interestingly, cyber-security skills are the hardest thing to find in the market; if you speak to any company, it will say the same. It is not easy, but we are managing it.
For me, it is very much a both/and, not an either/or. Three years ago, four regulators took the decision that we needed to collaborate even more intensely than we had ever done before, because there are so many things that we need to work on together. That might be horizon-scanning at the strategy end about where technology is going, but we also sit and compare notes on what it is like to work with particular companies; we use our statutory information exchange rules to allow us to do that, and it is quite informative. We are finding that collaboration works at every end of the spectrum.
Q557 Katherine Fletcher: Fair enough. Before I come to Mr Hayter, I just want to chase one thing down. Part of the Committee’s role is to make this accessible to anybody who is watching. There was recently quite a nasty fake about Keir Starmer; it was an audio clip, and it was very good. Addressing it relied on Members of Parliament passing the word among themselves on WhatsApp groups saying, “Don’t fall for this. It’s fake. Don’t share it.” Most people got that message before they had amplified it.
What, if anything, can Ofcom do right now to address that issue? As you have mentioned, there is a fear of hostile interference with anything from the “Strictly Come Dancing” result to the next general election, which is very possible with these technologies.
Dame Melanie Dawes: You are right. The Online Safety Bill will require the biggest platforms to be transparent about their terms and conditions. It will not mandate them to have terms and conditions that address disinformation—Parliament decided not to go that far—but they all have terms and conditions on disinformation, so for the first time Ofcom will be able to hold them to account for delivering on those. That is quite a big step forward. It will take a while for us to get all the secondary legislation and codes in place, so we will not have those tools in place before the next general election, but the Bill is quite a big step forward in some respects.
Q558 Katherine Fletcher: So right now nobody in the country can mandate the taking down of that fake from social media.
Dame Melanie Dawes: No. The other important part of this—which, again, we will be working on with the platforms—is that they have much better user complaint mechanisms. A big part of this is to allow you to flag that something has gone wrong, to be listened to and to know what is being done to respond to that.
I think we should all be very worried about deepfakes, but quite a lot of the videos going around that are deliberately misinforming people about Gaza and Israel are just old videos from previous conflicts that are being claimed to be about this conflict. This is an old problem, but one where there is a new power to make it worse.
Q559 Katherine Fletcher: I agree. Mr Hayter, in terms of large language models, AI has been around for a while, so how is it affecting your work at the Competition and Markets Authority?
Will Hayter: The broad answer to that is across lots and lots of our activities, from competition enforcement, consumer enforcement, mergers and markets to the forthcoming digital markets powers that we are expecting under the Digital Markets, Competition and Consumers Bill.
There are a range of terms here. There is AI very broadly. Actually, that is a subset of algorithms, which are a broader term. We produced a paper in 2022 about the whole range of different harms that can emerge from algorithms, recognising that there are lots of benefits to be had in terms of efficiency, innovation and so on from some of these technologies. What we have specifically done recently is this piece of work on foundation models, which also incorporates the large language models concept. We have thought there, both from a competition and a consumer point of view, about what might be some more positive outcomes, and some more negative outcomes, from these technologies. That is where we came to a set of proposed principles to try to guide the market towards the more positive outcomes, but we are trying to inform ourselves, through work like that, on how these technologies are affecting markets in all kinds of scenarios, using many of the same kinds of skills that Dame Melanie has already talked about in the Ofcom context: behavioural scientists, data scientists, data engineers, digital forensic specialists and so forth.
Q560 Dawn Butler: Mr Hayter, if the advancement of AI is at 10, where do you think the CMA is with regard to regulating it all?
Will Hayter: We are certainly not aiming to regulate AI. We are thinking quite carefully about where specific problems, in terms of competition and consumer protection, might emerge, driven by AI. As I said, that might be on the competition side if this technology contributes to positions of market power. It might be on the consumer protection side if this results in consumers being misled. As I said to your colleague, there are a whole range of different kinds of effects—some positive, some negative—that these technologies could have. I would probably hesitate to put a number on it.
What we were trying to do with the recent review was do better at understanding the market as it is developing, rather than having to come along later and work out what has happened. That applies more broadly, in terms of our horizon-scanning programme, across a whole range of technologies, which, again, we also do jointly with our DRCF colleagues. We picked out this set of technologies because they seem so likely to have significant effects on markets, but we need to have some humility here. This is a market that is developing week by week, month by month. We are trying to keep up, as other people are too. We think we have now given ourselves a good starting point and a framework for thinking about how the market might develop, and we are doing more work over time to try to build upon that.
Q561 Dawn Butler: I hear you. Is it a matter of keeping up, or is it a matter of catching up? That is what I am getting at, because it is already so advanced. How have you previously accounted for other uses of long-established AI in your regulatory activity?
Will Hayter: The broad term “AI” could pop up in all kinds of contexts. In fact, I was looking back at when I gave evidence to a House of Lords Select Committee on AI in 2017. That was the year that the transformer technology that underpins foundation models was being invented. That was in the specific context of digital comparison tools, price comparison websites and auto-switching services. Even then, you could see the hints of how some of these machine learning techniques might be used. Now, six years on, we are seeing a much more advanced level of those kinds of technologies through the various generative AI services. The reason we spent some time on that in the review we have just been doing is that it looks to be such a general-purpose technology that it might have an effect on all kinds of markets, potentially—whether in tech-driven markets or more broadly.
Q562 Dawn Butler: It is difficult, because it is moving at such a pace and we are all playing catch-up. As you said, “AI” is a very wide term and encompasses so many things, but so much is happening that it is difficult to catch up, so we have to legislate around it. Will the Digital Markets, Competition and Consumers Bill give you enough powers to respond?
Will Hayter: We certainly think it would help an awful lot, both on the—
Dawn Butler: But does it give you enough powers, or would you like more powers?
Will Hayter: We think the Bill is in good shape, is flexible and contains the right concepts. It is technology-neutral. It does not talk about AI, but it does not talk about other technologies either; it talks instead about broad concepts. On the digital market side, this concept is strategic market status, consisting of substantial and entrenched market power and strategic significance, which could apply to an AI-driven market or another type of digital activity. As Dame Melanie said, the kinds of technologies you are talking about here are underpinning all kinds of markets and will apply very heavily in search, mobile, social media or any of the other kinds of markets that we might consider under the digital markets side of the Bill. There is also the rest of the Bill, which is about the CMA’s competition and consumer protection responsibilities. It’s really important there, particularly on the consumer protection side, that those powers are strengthened to allow us to take action, and that is what the Bill looks to do.
Q563 Dawn Butler: If I gave you the Bill right now and said, “You can add a couple of bits in there,” what would you add?
Will Hayter: I am genuine in saying that the Bill at its introduction was very consistent with both the genesis of the digital markets framework in the Furman review and our advice to the Government under the Digital Markets Taskforce in 2020. We really think it is in good shape. There are, of course, specific points of detail. For example—if you’re really pushing me, which I think you are—there are points about the ability to investigate algorithms that apply on the digital markets side of the Bill but are not in the competition and consumers side of the Bill. But it’s a Government Bill; all kinds of trade-offs are being considered as part of that.
Q564 Dawn Butler: We can push Government if you tell us what you need. That is the heart of this probing. We are all on the same page: we all want this to work in a way that is ethical and does not entrench bias or persistent danger. We get that information from you, and that informs what we do next; it informs the report that we write. That is why this is so important. Thank you, Mr Hayter.
Dame Melanie, you talked about transparency. Where does this transparency start from? Does it start from the organisations that have already started, or does it start from now? Where does your transparency start from for organisations?
Dame Melanie Dawes: The Online Safety Bill, if it receives Royal Assent tomorrow, which I hope and expect it will, does provide for Ofcom to be required to produce reports on the industry as a whole. That is one form of transparency, and we will use our information powers to gather data so that we can publish information that gives the public more—
Q565 Dawn Butler: But transparency from today doesn’t help us to understand what the information or data within the black box is built on. So is it retrospective? Does it go further back? Do you have the power to go further back?
Dame Melanie Dawes: We do have the power to investigate algorithms and to understand how they are working. It may be worth unpacking what that question means. The things you have to ask are these. What is this algorithm trying to achieve? What is it there for? For example, many algorithms are there to give you a really great social media feed so that you will spend lots of time on it. That is the first question: what are the algorithms aiming to achieve? Secondly, what data are they using? How are they using your personal data? In the case of generative AI, they will almost certainly be using data that they have been trained on; that will be determining what they do. And then what is the impact of the algorithms? What are the outcomes of this?
Q566 Dawn Butler: The data that they have been trained on is fundamental to all these large language models, right?
Dame Melanie Dawes: Yes.
Q567 Dawn Butler: What can you do about that—in very simple terms?
Dame Melanie Dawes: What we can do is ask the questions that I was just outlining of the platforms about specific algorithms that we think matter. Take, for example, the algorithms that feed teenagers’ social media feeds. The Bill is very clear that platforms are required to provide an age-appropriate experience for under-18s; that is one of the central pillars of it. Even where content is legal, it mustn’t be harmful to minors. That takes you straight to the algorithms that are serving up that content to teenagers, so we will be able to ask those questions. What are you trying to achieve? What data are you using? And then we will be requiring in our codes appropriate systems and processes that we think are needed in order to reduce harm from those—
Q568 Dawn Butler: That’s not working very well at the moment, is it?
Dame Melanie Dawes: Well, it hasn’t started yet.
Q569 Dawn Butler: Yes, but in terms of what social media companies do at the moment and what we can ask them to provide. So my question is still about data. Talk to me about data—just data. What can you do about the data? What do you think you will be able to do about the data that these systems are built on?
Dame Melanie Dawes: Specifically, we will be focusing on the outcomes, rather than on the inputs, which would include the data, so—
Q570 Dawn Butler: That is problematic, right? If you are not worried about the input, then you are not worried about what it is built on, and that is problematic.
Dame Melanie Dawes: Our focus will be on what actually is harming users online. That takes you to the content that is being served up. But then we go back from that—the Bill allows us to do this, but it is not starting until tomorrow—to ask, “What are the algorithms that are serving up that problem?” Then, we can go back to, “What is the data that is feeding those algorithms?” So, if we feel that we need to, because we have a problematic outcome, we can go right back through that chain—we absolutely can. But our focus will be on what is harming the public at the moment. And you are quite right: this is not good enough at the moment; there is not enough transparency.
Q571 Dawn Butler: But the problem is that you might not see the harm until it is too late, because of how the system actually works.
Dame Melanie Dawes: Parliament has set out pretty clearly the harms that we are concerned about. We have done a huge amount of research on this, and we will be coming out in a couple of weeks with what we believe are the systems and processes that will create change for the public. You are absolutely right that some of the harm will change and migrate. The Internet Watch Foundation has published research this morning showing that child sexual abuse imagery is even more rife online, because generative AI is powering it to an even more troubling extent than before. So, we do have to keep adapting, and we will look at data, but as part of the chain of how the systems and processes that are serving up services to us at the moment are built.
Q572 Dawn Butler: So, basically, the systems that have data at the moment will just be left as they are? But thank you; I am out of time.
Dame Melanie Dawes: No, we will be requiring change.
Dawn Butler: Thank you both very much.
Q573 Chair: Mr Hayter, in answer to one of Dawn’s questions you were talking about whether the Bill before Parliament has all the powers that you need. One of the Government’s statements in their White Paper was that there was going to be a gap analysis to see whether, in the case of AI, there were any gaps—any kind of deficiencies—that could be corrected. Have you conducted that analysis in order to inform the later stages of the Bill?
Will Hayter: Yes, in effect. I can only really answer that question regarding our competition and consumer protection remit but, as I said to your colleague, we think that the Bill is in genuinely good shape to be flexible enough, being couched in terms of economic concepts of market power rather than of specific technologies.
Q574 Chair: So in the AI foundation models report that you produced just a few weeks ago, with some high-level principles, you are satisfied that the Bill before Parliament gives full ability for those principles to be translated into the powers that you need?
Will Hayter: First, I would say that it is the Government’s Bill; it is not through Parliament yet. We have to see how it eventually reaches Royal Assent, but then we are ready to implement that as it comes into play.
Q575 Chair: No, that is not my question. Once it has Royal Assent, it cannot be changed. The question is that when you have a Bill before Parliament and you have published a report with high-level principles, they clearly need to be translated into practice. You have a legislative vehicle before you. It is obviously essential that you communicate whether there are any gaps now, before we all vote it through and you then turn around to this Committee and say, “Well, sadly we don’t have the powers.” Are there any gaps that you have identified, and are you communicating those clearly for the remaining stages of the Bill?
Will Hayter: With, I suppose, the slight caveat that we are still trying to understand this market as it develops, as I have mentioned—that is what the report is trying to do—and that the Bill is yet to go through, we feel very confident that the Bill gives the right flexibility to make it possible to handle the market power that emerges in digital markets. That could include, as I said, AI-driven markets. There is an important improvement on the consumer protection side as well. We are working closely with Government on the Bill, and, as that goes through Parliament—I have mentioned probably the one specific point, but—
Q576 Chair: You have, and it is very useful, but are there any others? You have produced this report in good time and, quite reasonably, it starts at the level of principle, but things then need to be translated into practice. You have a vehicle here; have you made an assessment of whether applying the conclusions—the analysis—of that report would benefit from any change in the legislation?
Will Hayter: Aside from that one specific point, I don’t think there are others, but I hesitate to profess perfect foresight.
Q577 Chair: You are the expert on this. We are relying on you. You are the Competition and Markets Authority; you are responsible for consumer protection and competition. You have been invited to look at whether you have the powers for this. You have a Bill before Parliament. I don’t think one can be passive about this. There is an opportunity to say, “Are the powers adequate or not?” and if not—you have given one example to Ms Butler—you can say what needs to happen. I think the whole of Parliament would be all ears. What is your response to your own report and the current drafting of the Bill?
Will Hayter: I repeat: we thought the Bill was in really good shape at introduction. It was well balanced, contained the right checks and balances, and was flexible enough to look at AI-driven markets—others less so—in the digital markets space. It contains important improvements on the consumer protection side as well. There is a lot of work for us to do in translating the Bill, once it reaches Royal Assent, into guidance, assessments of strategic market status, and potentially requirements that could give force to some of the principles that we talked about. The principles also refer to broader competition and consumer protection law, which all businesses need to stick to, and we are looking to enforce those in AI-driven markets just as we do elsewhere.
Q578 Chair: The Bill is something that you could influence. Of course you will have work to do once it attains Royal Assent but, if I may say so, I am not hearing from you a very active appraisal of whether there are any gaps there. That surprises me, given that not only have you got a ready-made Bill available for this, but we are in the last year of a Parliament. There seems to be a kind of sluggishness, which is surprising.
Will Hayter: I am sorry if that seems surprising. I am genuinely not trying to be evasive. The fact that we can be so positive about the Bill as it was introduced reflects years of working with the Government on the underpinning proposals. An awful lot of work has gone into the Digital Markets Taskforce advice that I mentioned, and, in the preparation of the Bill, working with officials on policy instructions and so forth, and giving evidence to the Bill Committee and the House of Lords Select Committee. I am not concealing anything when I say that the Bill, if it were put through as introduced, would put us in a very good position to take forward this new framework on digital markets.
Q579 Stephen Metcalfe: My question is away from the Bill. Listening to you, this is a rapidly developing area and this is complex technology. How easy or difficult is it to recruit people with understanding of the technology in what must be a very competitive AI market?
Dame Melanie Dawes: As I was saying earlier, we have been encouraged by how people want to come to Ofcom right now, particularly to work on online safety, because they see the opportunity. We have built some really great teams, including in some really quite granular technology expertise. So far so good, I would say, from Ofcom’s perspective. That is partly because people look at Ofcom and see that they are going to have agency and be able to get things done. I don’t know about the CMA experience.
Will Hayter: I think it is a challenging environment. Some of these people are the most hotly sought-after people in the recruitment market in tech—data scientists, data engineers and so on. Like Ofcom, we make a strong pitch in terms of the interest and impact of our work, and the public interest motivation that lies behind what we do. We find that that can pull in some really good people. There is always the risk that those people see bigger salaries elsewhere and move, but we think that we are doing reasonably well as long as we keep pushing hard on that effort.
Q580 Stephen Metcalfe: Do you have to pay a premium to attract these people away from the tech sector into the regulatory sector?
Will Hayter: You will appreciate that with civil service salaries, you can’t pay a premium compared with the tech sector. There are various allowances under the civil service HR arrangements to allow for particular specialisms to be reflected in salary improvements, and we take advantage of those where we possibly can.
Q581 Stephen Metcalfe: So it’s not a barrier. You are getting the people you need to be able to regulate this.
Will Hayter: I would always hesitate to be complacent. It is a tough challenge both attracting and retaining people, but we think we are doing okay so far on that front across the range of skills.
Dame Melanie Dawes: I think it is much harder on civil service salaries, to be honest. I spent 30 years working within those pay ranges. It is very difficult, not just for technology specialists, but increasingly for anyone with real expertise or ability. The salaries are uncompetitive.
At Ofcom, we are not paying that much more. Sometimes it is just about having a little bit of flexibility. Quite a lot of people with fantastic skills are joining us from research institutes and academia. It is not a race towards tech industry levels of remuneration. We are finding it is about relatively small balances, but I do think it is very hard on the civil service.
Q582 Aaron Bell: Mr Hayter, as you look to how we are going to regulate the next wave of technology, do you think we got it wrong before, with the first wave, with Google search and with social media? Do you think the UK got it wrong with its regulation?
Will Hayter: A large part of the rationale for the digital markets part of the DMCC Bill is that the traditional tools have not proved very effective at handling some of these fast-moving markets and the ways that market power can build up so quickly. That probably makes the short answer: yes.
With work like the piece we have done on foundation models and on the framework, we are looking to move more quickly and be more effective, moving away from the ex post enforcement model to more of an extended regulatory framework where there is this acute market power, to enable us to set requirements up front and try to head off some of the harm before it happens.
Q583 Aaron Bell: Recently, with the Microsoft Activision deal, you have taken a different view from other regulators. Is that because you think you have a better understanding of where we want to go with this stuff, or is it genuinely just a philosophical difference between you and other countries’ regulators?
Will Hayter: I would hesitate to talk about philosophical differences. We always look at the evidence. We have to look at the UK market, and we have a specific legal framework that is different from the framework in the EU or US. We take the decision that we think is right based on the evidence in front of us and within our particular legal framework.
Q584 Aaron Bell: Under the new regime in the Digital Markets, Competition and Consumers Bill, your unit will be able to designate firms as having strategic market status. That can happen practically overnight in this space—someone comes up with a new technology, suddenly everybody is using it and downloading the app, and it is becoming relevant. How quickly do you think you can react in the future world, and say, “This thing that was launched only three weeks ago, we have now decided is strategic”? Is that realistic or is it going to require months before you get to that stage, even though you are trying to be quicker?
Will Hayter: Good challenge. A statutory time limit of nine months for a strategic market status assessment is proposed in the Bill. That might sound like a long time. Once you consider the kind of evidence that we would need to build up in order to make an assessment, the kind of consultation needed, the fact that we have placed a great premium on a very participative, collaborative approach with all firms concerned, you can see it is going to take the nine months in many cases.
The point also applies the other way round. If a new service springs up and, in an unforeseen way, challenges an existing SMS designation, we would want to remove that SMS designation quickly, if the market power has been somehow undermined.
Q585 Aaron Bell: We are a bit short of time, so I would like to get both of you to talk about co-ordination and how you work with your fellow regulators. We are going to hear from the Digital Regulation Cooperation Forum next. Could you both briefly describe how you co-operate with fellow regulators on AI, and what role you think the Government should have going forward in encouraging and ensuring regulatory co-ordination and cohesion.
Will Hayter: The Digital Regulation Cooperation Forum is clearly really important here. I am sure Kate will give you chapter and verse on that shortly. I personally have been involved in DRCF since its inception. It is really important, in terms of specific set-piece projects, like the work we have been doing on algorithmic auditing, but also in the broader culture and collaboration it has helped inculcate.
There are lots of connections between our four organisations at all levels. Whether it is CEOs, my level, or teams working on specific things, we all understand throughout our digital regulation activities that we need to talk to one another, sharing expertise and attempting to be more than the sum of our parts. I really do think that is genuine.
Q586 Aaron Bell: I am glad to hear that. There is always a danger, though, that these things sound worthy and they are a good talking shop. Can you give us specific examples of things you have learned from other regulators, or that you have picked up and applied inside the CMA, through this forum?
Will Hayter: Absolutely. We have done a lot of work, particularly with the ICO, on the intersection between competition and data protection. That has gone through a number of phases. We did a broad report on that intersection in spring 2021, which talked about the potential synergies and tensions between competition and data protection issues. I think that was a really important thing on both our sides for increasing our collective understanding of that connection.
That has actually borne fruit in a particular case that we have, the Google Privacy Sandbox case, under the Competition Act. Google announced its intention to remove third-party cookies from its Chrome browser. That was a privacy-enhancing move, at least in principle, but we were worried that it might cause competition problems, because Google itself would not be subject to those same changes in the same way. We worked very closely with the ICO through building that case to the acceptance of commitments from Google for us to be able to oversee those changes. That co-operation is happening week by week, month by month, as we go through the oversight process of those changes from Google.
Q587 Aaron Bell: That is very helpful. The same question for you, Dame Melanie: have you co-operated through that forum or in any other ways, and what specific learnings has Ofcom picked up from other regulators?
Dame Melanie Dawes: Let’s go back to what AI means for regulation. It means that everything is moving faster and that technology is driving the change. A lot of what we have been doing together is trying to stay abreast of all that change—we have been horizon-scanning. We followed up a recommendation from a House of Lords Committee that we set up a joint programme, which we did a couple of years ago. That has helped us all, because if you do it together then you can do it quicker and engage with the academics together rather than in four different ways. Whether that is getting on top of technologies like GenAI, Metaverse or Web 3.0, or whether it is understanding how algorithms are used, there are new challenges here that we have been able to get much faster on because we are acting together. That is one part of this.
In the middle for me, AI means that we do need some new regulatory tools. To Ms Butler’s point about data, the ability to analyse and audit algorithms is a new thing for regulators to have to do, as is understanding the data that is feeding into that. We have co-operated on that, rather than doing it individually. Some of my colleagues have been doing this for longer than Ofcom has. We are getting the powers to audit algorithms tomorrow through the Online Safety Bill, I hope. Again, that is a piece of work where we have been building one capability—one shared understanding. Some of this will be about building a private sector market to be able to do this effectively for the platforms, with proper guidelines and standards.
Finally, by sheer dint of the leadership effort we have put into this, we have built connective tissue in our organisations that is stronger than it was before. At the most operational end, an example for me is the work that we are doing already regulating platforms like Snapchat and TikTok, through the relatively limited but already-in-existence video-sharing platforms regime. We have been working really closely with the Information Commissioner’s Office, because they are also looking at the data issues around those platforms and what that means particularly for children through the Children’s Code, which they are responsible for.
We are finding that there is work to do together across the breadth of our responsibilities, and it is a very deep partnership, but it will go further in future, I am sure of that.
Q588 Chris Clarkson: This question is to both of you, but I will start with Mr Hayter. You have both expressed support for the Government’s principles-based non-statutory approach to regulation. I am interested to know what you think are the advantages to that approach versus, for example, the EU’s fairly heavily regulated model. Do you think it puts the UK at an advantage having the existing model and adapting it, or do you think it leaves us behind? Do we need a specific AI regulatory model, or are we doing the right thing?
Dame Melanie Dawes: That is a big question.
Will Hayter: Yes, it is. Again, to set the scope of my answer, we are focused on competition and consumer protection. I am conscious that there are all these other big safety, security and data protection questions that are being thought about elsewhere. Lots of the thinking behind the EU’s AI Act is about those. From a competition point of view, a point we made in our recent report was that it is important, even where regulation is needed, to make sure that regulation itself does not become a problem for competition, because it can, if done badly, create barriers to entry and expansion for smaller firms if the only firms that can cope with regulation are the really big ones.
Q589 Chris Clarkson: Would you say that there is an enthusiasm for the regulatory model from some of those larger firms for that reason?
Will Hayter: There has certainly been speculation to that effect. We all, when thinking about these markets, need to be alert to the kinds of risks that might play into the hands of bigger firms.
Dame Melanie Dawes: I think the Government’s approach is the right one here, for two reasons. First, there is a huge potential benefit from the latest wave of technology and huge potential for innovation, and we really need to be mindful of that. We have in Ofcom as part of our fundamental law the need to support innovation and investment. If you are too heavy-handed at this stage, there is a real risk that you do not get the benefits that we could all achieve through this.
As I said at the very beginning, I think that regulation is the most effective when the laws that drive it are technology-neutral, because that is what drives flexibility. A prescriptive approach that says this or that technology cannot be used in this or that circumstance risks being behind the curve in terms of innovation. It is better to look at the outcomes and look at the services, and then understand how the technology is driving that, rather than to regulate the technology itself. That is why I think the Government’s overall approach is the right one.
That is a separate question from whether there should be a statutory framework at some point in some respects, but I think it is right for now not to reach too quickly for the regulatory pen. It is absolutely right that one of the risks of regulating is that you favour the incumbents, because they are the ones with the resources to cope with all our information requests.
Q590 Chris Clarkson: Our interim report suggested that a regulatory gap analysis be done to make sure you have all the tools at your disposal to regulate this new area. I completely appreciate the point you made about being technology-neutral—I think that is the right approach—but do you think you have the full suite of tools at your disposal to regulate this area, or do you foresee a situation where you will require additional powers?
Dame Melanie Dawes: There are tools, and then there are responsibilities. One of the new tools is the ability to audit algorithms, and we are receiving that in the online safety world. Do we need to apply that tool to other parts of Ofcom’s remit? Possibly. We are doing work now to see whether we need more transparency in particular, and perhaps the ability to audit algorithms around social media in areas that are not covered by the Online Safety Bill; it will be more a question of news and media. It may be that we need to apply those new tools to new sectors in the future, and we are doing some work on that.
The second question is whether there are harms, problems or issues that need to be solved that are not currently provided for. We have had a conversation about fake news. Parliament debated the question whether disinformation should be included in the Online Safety Bill very intensely over a number of months recently, and concluded, for reasons of freedom of expression, not to do that for now. But these are questions that we have to remain abreast of and remain open to. It is about distinguishing between what you are trying to regulate for and how you do it—both are important, and you need to get both of them right for effective regulation.
Q591 Chris Clarkson: I am interested in how the auditing of algorithms works in practical terms. I am not a scientist—I’ll put my hands up: I’m a lawyer. If you put an algorithm in front of me, it might be proprietary code that belongs to Google or Microsoft. How do you analyse whether or not it is doing something that is likely to lead to a harmful or prejudiced outcome or an untoward event that is not intended? It might be designed to give you the best social media feed ever, but what it is actually telling you is that fascism is fantastic and will help you lose weight. How do you audit that? What people do you have for that?
Dame Melanie Dawes: You might find that my fellow regulators have been doing this for longer, particularly the Financial Conduct Authority. One of the things that the DRCF is helping us with at Ofcom is that some have gone before us, and we are now doing some of this work together on how you answer these quite difficult questions. We as the DRCF have published quite a bit on how you ask those questions: what is the algorithm for? What data is it using? What are the outputs?
On online safety, the key thing for us would be: what is the outcome? That is what the platforms are not currently looking at. They are looking at what goes in and whether it drives attention and therefore helps them drive advertising revenue, but they are not looking at whether it then results in very inappropriate experiences for teenagers—getting intensely drawn into self-harm, for example—or whether it leads to things like child sexual abuse material or terrorism being widely available in ways that content moderation is not able to pick up. That is why the focus on the outcomes and what it is like to be a user online is going to be central for Ofcom.
We will be talking a lot to children and teenagers about their experiences, because they often know exactly what the harms being driven are. Then we will work back from that through the systems and processes to have a really granular conversation with the companies about what needs to change.
Q592 Chris Clarkson: You mentioned the DRCF. Do you think that is properly resourced to do what it needs to do? Would you see an expanded role for that?
Dame Melanie Dawes: We keep that very open among us. We currently have, I think, 18 people—Kate will be able to confirm whether I have that number right. That is the size of the unit. We have gone out there to find a really high-calibre chief executive in Kate Jones, whom you will hear from in a moment. That is just the central unit, though. When we are working on our individual projects, we draw in resource from the regulators themselves, where the expertise still sits, so the central unit is not the only resource.
We also have some budgets that we share on elements of research or events and so on. As I say, I would be surprised if we do not do slightly more in future rather than slightly less. It is something that we debate quite often. There isn’t really a different set of views across the regulators. It is more about where it is effective to pool our resources and how we should do that.
Chris Clarkson: The joys of regulating a moving target.
Dame Melanie Dawes: Yes.
Q593 Chris Clarkson: Changing to the AI safety summit that will take place later this year, what involvement have both of you had? What outcomes would you like to see from it? And a curveball question: do you think AI safety is the right name for it? Or should it be AI opportunities?
Dame Melanie Dawes: The summit is taking place next week. We will have to see how it goes.
Q594 Chair: Are you invited to it?
Dame Melanie Dawes: I was, yes. Unfortunately I can’t go, which is extremely disappointing, but I was invited to the day that the Department is organising around its particular remit. We have been involved and seconded someone from Ofcom to the Government’s team to help them understand some of the technology and industry issues that they need to have a granular grip of.
It is about opportunities as well as risks. Because we are regulators, we often end up talking about the problems that Parliament asks us to solve, but there are huge opportunities, particularly in sectors like healthcare and education, potentially in creativity, and in some ways in news. It is just that there are a lot of risks as well. I think that what the Government are trying to do is to get all of that on the table next week, but some of the big, problematic potential risks around the technology getting into the wrong hands need to be tackled internationally. Having a summit to bring people together with the industry is a really constructive step forward, and the UK is leading on it.
Chris Clarkson: I’m all for us leading. Mr Hayter?
Will Hayter: On the AI safety summit next week, we are not part of the main summit. That is because that is very much a frontier safety-focused discussion, as the Government has described, and we are focused on competition and consumer protection. You will be aware that there is an AI fringe around that, which is a whole host of events. I did one yesterday and we are going to be appearing in a couple of capacities at that next week. The main summit, it seems to me, has spurred a broader set of discussions around it, which themselves are very helpful. That seems very positive.
We are also with you on the need to balance opportunities and risks. That is exactly how we framed our report in terms of more positive and more negative outcomes, trying to push towards the positive to realise all those benefits. That is the kind of balance that we have been trying to strike.
Q595 Chris Clarkson: What do you view as an optimal outcome from this? What should people come away from the summit with?
Will Hayter: It is probably more a question for the Government on the actual summit. We are waiting and seeing as well, a bit like Dame Melanie. From the combination of the summit and the broader discussions, we can be building greater shared understanding of those safety issues, some of the issues that we are flagging, and other aspects of risks and opportunities that will be discussed at some of the other events, and all of that is to the good.
Chair: Thank you. We have a couple of brief supplementaries, starting with Carol Monaghan and then Dawn Butler.
Q596 Carol Monaghan: May I take you back a little to the talk about civil servant salaries and not being able to match the tech industry? Surely for something as important as this, there should be special requirements or interventions. Would you welcome that? Is it something that you have asked for, Mr Hayter?
Will Hayter: This is clearly a civil service-wide challenge. These skills are not things that the CMA is looking for uniquely; it is the same in a number of Departments. There are some central allowances and specific allocations for particular skills, which we have absolutely tried to push to the maximum, if you like, in trying to secure the right people.
Carol Monaghan: But if you are not matching the tech sector, you are not going to—
Will Hayter: It is never going to be realistic to match tech sector salaries.
Carol Monaghan: Why not?
Will Hayter: Our budget would have to multiply by 10 in order to hit that benchmark. Also, I do not think that we should be—it would probably not be a responsible use of public money. A really important public interest focus drives people in the CMA and elsewhere in the civil service, and that can go a long way. It does not always necessarily make up for the shortfall, but it certainly attracts some really good people. People might see this as a long-term career in the CMA or the civil service, or they might see it as coming in to Ministries, spending a couple of years to learn an interesting set of other skills and going back out to industry. That is all to the good. Again, it is not straightforward, and sometimes money talks.
Dame Melanie Dawes: I left the civil service three and a half years ago, so I can perhaps look back at it with a little longer perspective now. I think that salaries matter but, as I was saying earlier, I do not think we are far away. The very best machine-learning experts will be designing amazing new services to offer to the public, and a lot of what those people want to do is to create and produce services. The people who are interested in the public policy concerns around safety and risk, however, are very interested in coming to work with regulators in whatever form. That is a slightly different type of person, who has a lot of expertise but is also interested in the public policy issues. We are succeeding in attracting those people.
More generally, I do not think that the UK is good at getting that flow between the private and public sectors. Civil service salaries are definitely a part of it—not just in data; in fact, the Cabinet Office has put quite a lot of effort into building a data profession in recent years and into paying a bit more for those skills—and that has got a lot worse in the past 10 or 15 years.
I also think there is something about culture. People are not always clear what the role is in a Department, but if they come to a regulator, they can see a little more what our role is and that there is something specific that they will be able to do. Some of this is also about more movement between Government and regulators, and generally edging our system up a bit.
Q597 Dawn Butler: I have a quick question—I might have misheard. There was a focus on outcomes, but the problem with large language models is that when you focus on outcomes, it has already happened. Through our investigations and our interim report—as Mr Clarkson said, it is a moving target—it is evident that we cannot regulate how AI is used, but we can regulate what AI is used for.
Dame Melanie Dawes: Yes.
Q598 Dawn Butler: Is that your focus? For example, facial ID—it is going to be used, and we can regulate how it is used and what it is used for, so is that your focus?
Dame Melanie Dawes: Yes—I would almost take it one step away, though. So from an Ofcom perspective, biometric facial-recognition software can be a helpful tool for preventing children from being able to access pornography, for example, which is one of the things that we will be doing under the Online Safety Bill. There, I would say that the thing that we are most focused on is the outcome of children not being able easily to access porn. Facial-recognition technology can be one of the solutions to allow that to happen, but it needs to be done in a way that protects people’s data. The outcome is almost even further away; the outcome is to keep kids off that material.
Q599 Dawn Butler: What if we do not legislate about what it can be used for? Facial recognition, for instance, does not identify black people very well. If you do not regulate what it can be used for, it will be too late if you are just focusing on the outcome.
Dame Melanie Dawes: Yes, exactly; I completely agree with you. There is a lot of bias in some of these technologies, and that is partly because they have not been tested properly and partly because the data they use is not adequate. When we come forward in December as Ofcom—to give you a concrete example—with our requirements for porn sites to properly age-verify at age 18, we are going to say that in some circumstances, facial recognition technology can be adequate, but it needs to be effective. One of the tests of effectiveness and fairness is that bias is dealt with and the datasets are improved upon.
Some of what has been going on in the last few years is that new things have been introduced into online services without that testing. That is where I agree with you that the outcome has happened and been in place for ages before anything has actually been done about it. However, what the Online Safety Bill will do is require much better governance and risk assessment. It may sound quite boring, but these are actually processes that mean that safety is up front and testing is done, so that when you actually give it to the public—particularly children—you can be more confident that you are not going to have a poor outcome in the way that we have experienced before. I hope that explains where we are coming from a bit better than I did earlier.
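[To illustrate the kind of effectiveness and fairness testing described above, the following is a minimal sketch of one common check: comparing a face-verification system’s error rates across demographic groups. It is purely illustrative; the group labels, threshold and data are hypothetical and do not represent any Ofcom, ICO or vendor methodology.]

from collections import defaultdict

def error_rates_by_group(records, threshold=0.8):
    # records: iterable of (group, similarity_score, is_same_person) tuples.
    # Returns per-group false-match and false-non-match rates, the two headline
    # error measures used when testing face-verification systems for bias.
    stats = defaultdict(lambda: {"fm": 0, "fnm": 0, "impostor": 0, "genuine": 0})
    for group, score, same_person in records:
        s = stats[group]
        if same_person:
            s["genuine"] += 1
            if score < threshold:       # a genuine pair wrongly rejected
                s["fnm"] += 1
        else:
            s["impostor"] += 1
            if score >= threshold:      # an impostor pair wrongly accepted
                s["fm"] += 1
    return {
        g: {
            "false_match_rate": s["fm"] / s["impostor"] if s["impostor"] else None,
            "false_non_match_rate": s["fnm"] / s["genuine"] if s["genuine"] else None,
        }
        for g, s in stats.items()
    }

# Hypothetical usage: large gaps between groups would be the kind of bias the
# witnesses describe, prompting better training data or a different tool.
sample = [("group_a", 0.91, True), ("group_a", 0.40, False),
          ("group_b", 0.62, True), ("group_b", 0.85, False)]
print(error_rates_by_group(sample))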
Dawn Butler: I could delve a bit deeper, but we do not have time.
Dame Melanie Dawes: We could maybe follow this up in conversation.
Q600 Chair: We have to leave it there. I have just one final question to Mr Hayter from the CMA. In our report, we note that there is a big debate about whether open source should be either mandated or refused. There are risks that people attach to it. Does the CMA have a view as a competition regulator on open source?
Will Hayter: We did comment on open source in the report and the view that open source can be an important part of supporting effective competition, because it can reduce some of the barriers to entry, so that innovators can come and build on it without having to make such a great up-front investment themselves. There is clearly a debate to be had because a lot of the worries about safety and security and so on—
Q601 Chair: Who has that debate? Who decides what the doctrine is? As a competition specialist, you pointed out that there are some advantages in terms of lowering barriers to entry, but if there are safety aspects, who decides whether the UK should favour or suppress open source?
Will Hayter: That is a broader Government decision or discussion. Partly through this report, and also through discussions we have been having with the Department and so forth, we have been trying to feed in some of these viewpoints about the importance of open source. We are not going to be the out-and-out champion of open source to the exclusion of all other considerations, but we do think it is important to keep in mind the potential benefits of that way of setting up the technology.
Chair: Thank you very much. You have both been very generous with your time. We have run over slightly, but that is because your evidence was particularly important and rich. Thank you, Dame Melanie Dawes and Will Hayter.
Witnesses: John Edwards, Jessica Rusu and Kate Jones.
Q602 Chair: I will introduce the next panel as they take their seats, if they will forgive me for doing it in that way. I see joining us at the table John Edwards, who is the UK’s Information Commissioner, having begun the five-year term of this post in January 2022. Previously, he was New Zealand’s privacy commissioner, and he has advised the Committee on previous occasions. Thank you for coming. I am glad to welcome Kate Jones, who has been referred to already. She is the chief executive of the Digital Regulation Cooperation Forum and has been in that post since May of this year. I am also pleased to welcome Jessica Rusu, who is at the Financial Conduct Authority, where she is its chief data, information and intelligence officer. Thank you to all three of you for coming to give evidence today.
I will start with a question to Mr Edwards. Privacy is obviously a very important matter when it comes to AI. You are the principal, if not the only, privacy regulator in the UK. How have the development of AI and, in particular, large language models influenced the work of the ICO?
John Edwards: It is an area that we had been focused on long before I took up the post—the ICO has been working in this field for 10 years—and we saw the developments that have swept across the economy coming, and we have been prepared for them. Our approach has been to ensure that we are communicating to all parts of the supply chain in AI—whether they are developing models, training models or deploying retail applications of them—that, to the extent that personal data is involved, the existing regulatory framework is already there and applies. That requires certain remediations and the identification of risks. There are accountability principles, transparency principles and explainability principles.
So it is very important that I reassure the Committee that there is, in no sense, a regulatory lacuna in respect of the recent developments that we have seen in AI. We have issued guidance on generative AI, we have issued a number of blog posts, and we have guidance on explainability, which is co-badged with the Alan Turing Institute. So I believe that we are well placed to address the regulatory challenges that are presented by the new technologies.
Chair: Thank you very much. I am going to go to my colleagues, starting with Carol Monaghan, who has some further questions on this, and then to Stephen Metcalfe.
Q603 Carol Monaghan: Mr Edwards, may I ask you a little about the Data Protection and Digital Information Bill? The Bill will give your office new powers. Are you set up to engage properly with those new powers?
John Edwards: I believe so. We have an extensive project to implement the Bill, and we have been preparing for that for quite some time. That is right across the organisation. I believe that from day one we will be ready to deploy the new powers that we are given to regulate this area.
Q604 Carol Monaghan: May I ask you specifically about this? One of the new powers that you will have relates to facial recognition surveillance cameras. Just to give a bit of background for those who might be watching, a Surveillance Camera Commissioner and a Biometrics Commissioner were appointed in 2012, I believe, under the Protection of Freedoms Act. They were then merged in 2021 into a Biometrics and Surveillance Camera Commissioner. That whole role is now going to come under the auspices of your office. What expertise does that require, and is the expertise there?
John Edwards: I believe so. We have already done extensive work in relation to biometrics and facial recognition technologies. I should give a little background. The ICO is a whole-economy regulator. We regulate in relation to all personal data, and that includes every kind of human activity that you can conceive of. The cameras that are already deployed in non-police contexts are subject to our remit. In fact, we already have oversight of police deployments of live facial recognition. We work with police services that are trialling those technologies to ensure that risks are identified and that they are trialled in a responsible way and in ways that are likely to identify any systemic problems, such as the kinds of bias and discrimination that we see our colleagues in other jurisdictions identify.
Q605 Carol Monaghan: In previous sessions of this inquiry, we have had quite detailed evidence that shows that facial recognition technology, and the AI algorithms that process it, have intrinsic biases and that wrongful identifications have been made. How will you deal with that, given that we are losing a role that was specifically looking at that issue?
John Edwards: It is important to emphasise that we have a number of roles, and there are a number of expectations of those who deploy these technologies. We can require that certain risk assessments are undertaken and require that those are produced to us. When you describe a misidentification, there is an outcome-based accountability, and we can look ex post and review whether there was a discriminatory factor or whether the technology is not fit for purpose.
However, what we are increasingly doing is requiring the accountability ex ante, so that we are ensuring that the technologies that are being deployed have been thoroughly tested, are fit for purpose, do not design in discrimination, are trained on a representative—
Q606 Carol Monaghan: At the moment, we know that they are. We also know that these have been rolled out to a far greater extent and they are being used far more widely than they have been in the past, so it seems like a strange time for the Government to decide to abolish the role of the commissioner that is set up specifically to look at that area. Do you not have any concerns about that, because the public have concerns about it?
John Edwards: Speaking candidly, I think it is stranger to separate those functions out, because we are, as I say, a whole-economy regulator that is equipped to look at all ways in which personal data is used and which may be prejudicial. So in my view, it does make more sense that that is a centralised function.
The challenge that we have, which I think you are alluding to, is that as a whole-economy regulator, we have constant resource-allocation decisions that we have to make, and some of those are demand-driven, which we cannot really control. So where we allocate resources in one area, they are not available in another, and we need to retain some discretionary capacity for emerging technologies, for example, as well as the technologies that are here and present.
However, in relation to absorbing the functions of the surveillance and biometrics commissioner, we will ensure that the resources are transferred across and that those are allocated in the organisation in a ringfenced kind of way.
Q607 Carol Monaghan: You will be aware that a report has been produced by Professor William Webster and Professor Pete Fussey; it is not yet in the public domain, but it has been produced for the Home Office. They highlight a number of concerns. They say that the Bill proposes to remove the legislative need for the Government to publish a surveillance camera code of practice, which will have major implications for everybody who uses a surveillance camera.
They also raise other concerns. They say that under the Protection of Freedoms Act 2012, the Surveillance Camera Commissioner and Biometrics Commissioner were required to lay a report annually in front of Parliament. That has all been removed. Surely, when we are looking at this, transparency must be absolutely critical. If we are to get the public on board with things like surveillance cameras, surely we need to have more transparency, not less.
John Edwards: I would agree with that, and I think the ICO is positioned to provide that transparency. But it is interesting that you say, “If we are to be on board with surveillance.” The UK is the most surveilled country in the world, I think—
Q608 Carol Monaghan: That doesn’t mean that the public fully support it.
John Edwards: I mean, it’s interesting—I think there is a greater level of comfort with public order kind of surveillance in this country than there is in other jurisdictions.
But it is certainly important that there are oversight mechanisms, and the ICO will continue to be an important oversight mechanism for the deployment of surveillance cameras, and particularly, the incorporation of new technologies, such as live facial recognition. There, I think we need to be very careful that a meticulous data protection impact assessment is undertaken before technologies are deployed, and that risks are identified, because there is a risk in the rush to market in these areas that vendors upsell the benefits and downplay the risks. So I think the accountability really needs to come at that point of commissioning.
Q609 Carol Monaghan: Surely, then, we need the code of practice.
John Edwards: There is provision for codes of practice under the UK GDPR, and there will be under the new Bill.
Q610 Aaron Bell: Thank you, all three of you, for coming. I wanted to talk high-level: the UK Government’s approach and the White Paper, and where we are going, and what you think—I will ask you all individually. You have all expressed support for what we are trying to do, in terms of the principles-based non-statutory approach to regulation. Do you think there is some risk that we may fall behind the EU in particular? Let’s say the EU brings in its Act. I don’t happen to think that is the right way to do it, but is there not a real risk that that becomes a de facto standard and we fall behind? In that light, is there anything that you would like to see us do in the legislative space—very narrowly rather than trying to take the EU’s approach—in the field of AI? I will start with you, Mr Edwards.
John Edwards: I believe that there are real risks in regulating for specific technologies, because the landscape changes. The legislation that my office administers is technology-neutral; it is principles-based. And when you look at the principles set out in the Government’s White Paper, they map very closely to the principles that underpin the UK GDPR and which will remain under the Data Protection and Digital Information (No. 2) Bill. They are accountability, transparency, explainability, security, fairness, which is really important, and accuracy. These are existing regulatory requirements, and we have the ability, and have exercised the ability, to require those deploying AI technologies to demonstrate the extent to which they have complied with these regulatory requirements.
Kate Jones: Shall I explain a little now about the DRCF?
Aaron Bell: Please do.
Kate Jones: You have heard a bit about us already. Essentially, we are, as Dame Melanie described it, a connective tissue between the four regulators who are here before you this morning. It was created because the work of the regulators in the digital economy is coming closer together in ways that you have heard about. There are both areas of interplay and areas where it makes sense to do things together because we all have similar interests, such as algorithmic assurance, which has been discussed already. I think, in talking about the DRCF and its value, it’s quite interesting to think about the counterfactual, if it didn’t exist. In fact, you can see that for yourself if you go to Brussels—as I did a few weeks ago—where there isn’t any equivalent. There has been quite a bit of digital legislation, but there isn’t yet any connective tissue bringing it together, and there is quite a bit of frustration in some quarters. So the UK is really ahead in having the co-ordination mechanism of the DRCF. I think that is serving us well, and others are now adopting similar models internationally.
As regards your question, it follows that the DRCF’s view is simply the view of the four member regulators. You are hearing from them individually. I would simply add, on your question about getting ahead of other jurisdictions, that if businesses are operating in the UK, they will have to comply with UK law whenever it is enacted. So in a way it doesn’t matter who goes first. Just as with other regulatory regimes, businesses will have to comply whenever the law comes in.
Q611 Aaron Bell: But is there not a risk that we end up having to comply—because people are selling to the EU—with whatever the EU does? And if the EU does something that we don’t think is wise, do we end up with the worst of both worlds: having to comply with that and with whatever we decide ourselves—and, indeed, with what the Americans decide? I will come to the summit in a minute, but presumably the point of the summit is to try to get some co-operation and co-ordination on this.
Kate Jones: As you say, international co-operation is very, very important in this space.
Jessica Rusu: On the work that I do with firms, they are very open about giving us feedback on an outcomes-based approach to regulation. Overall, through the work that we have done on the AI discussion paper—we will be publishing the feedback statement from that tomorrow, I believe—there is a very positive and welcoming approach to that outcomes-based approach. Through the work that we are doing, as you can see and as Kate has just said, with the DRCF, there is a lot of collaboration both domestically and internationally. I have spent quite a bit of time with my European counterparts on the topic.
Q612 Aaron Bell: Our interim report recommended that regulators conduct a gap analysis to see whether there are any additional powers that they would need to implement the principles outlined in the White Paper. Have you done that, and have you identified any gaps, or are you content with your current suite of powers?
Jessica Rusu: I believe, from an FCA perspective, we are content that we have the ability to regulate both market oversight and the conduct of firms. Over time we have done quite a lot of work looking at algorithms—for example, looking at firms’ assurance of cyber-security and algorithmic trading—so we are quite confident that we have the tools in the regulatory toolkit at the FCA to step into this new area. I am thinking in particular of the consumer duty, but also the SMCR.
Q613 Aaron Bell: There is a huge amount of uncertainty because it’s future technology, but what do you see as the biggest risks in your space? We heard from Dame Melanie about scams, and that is one that you must be very focused on.
Jessica Rusu: I spent a long time as a data practitioner and a technologist before coming to the regulator, and I think it is important not to get overly focused specifically on AI. AI has been evolving for quite some time—it is part of machine learning and advanced statistical methods. What we should be concerned about is our overarching digital infrastructure, our reliance on the cloud and those large providers, and cyber-security and other risks as they emerge. It is quite important that we don’t overly focus on the technology itself, but take a step back and look at how those risks are playing out in the market. If we look at the role of social media, the change in consumer behaviour, a lot of services moving to digital, to the cloud—we have to look at all of that in the round.
Q614 Aaron Bell: Mr Edwards, the same question to you about the gap analysis. Have you conducted a gap analysis on the back of the White Paper, and are there any additional powers you might want?
John Edwards: We did not specifically do a gap analysis against the White Paper, but we have been working closely with DCMS and then DSIT in relation to the Bill. Had we seen any gaps, we would have been advocating for those to be plugged with the Bill. We are quite happy with where the Bill has landed in terms of the powers, and while I think there is a really important task in having a whole economy gap analysis, we haven’t identified regulatory gaps in our oversight framework.
Q615 Aaron Bell: Thank you. Ms Jones, what role has your organisation had in planning the summit next week? Are you going to be represented at the summit yourself, or will individual regulators be?
Kate Jones: I am not going to be at the summit myself. I think you heard that Ofcom—Dame Melanie—was invited to the summit. As the summit focuses primarily on frontier risks, we have been less centrally involved in its organisation, although as Will Hayter mentioned we are very much involved in some of the fringe events—I am at a fringe event tomorrow, and we have a DRCF fringe event next Thursday, where all of the regulators will be represented, in order to talk about our work and explain what we do to a wider audience. As Will said, bringing in that wider audience and everything around the summit has been really very interesting. We have also had constructive discussions—each of us and collectively—with DSIT as they shape their work. That has been ongoing as well.
Q616 Aaron Bell: Do you think frontier AI is the right topic to centre this summit on? Regarding the question that Mr Clarkson asked earlier about it being called a safety summit rather than an opportunity summit—do you think we have got that right?
Kate Jones: Yes. The focus of the summit is that set of risks, which have been in the news a lot. There has been a lot of concern about them domestically and internationally, and there is perhaps a lot of work to do on them internationally, so seeing the Government take that up is a positive thing. It falls slightly outside much of the remit of what we are doing, as I say.
Q617 Aaron Bell: We have already discussed international co-operation, but from the regulatory perspective generally, are there specific outcomes you would like to see from this summit—anything that would enthuse you and make you feel that we were getting to a better place?
Kate Jones: The more international co-operation and shared understanding of risk we can have, the better. In the meantime, and in parallel, we absolutely need to continue with responses to the White Paper, implementation of the White Paper—as you are hearing all of the regulators doing. I think that is really important domestically.
Q618 Aaron Bell: Mr Edwards, are you represented at the summit?
John Edwards: No.
Q619 Aaron Bell: The same question I put to Ms Jones—what would you like to see in terms of an outcome from the summit?
John Edwards: Closer international co-operation is really important. I am here fundamentally to reassure the Committee that, in relation to existing applications which are available to be deployed, there is a sufficient regulatory framework. We don’t know what will come next—we don’t know what the next generation of AI will produce—so having a forum which can ready itself for what that might be has to be valuable. I think having some international principles must be a useful contribution.
Q620 Aaron Bell: Finally, the same question to you, Ms Rusu. Has the FCA been involved in preparing for the summit? Do you have an invite? What would you like to see from it?
Jessica Rusu: I do not have an invitation. I will be going with others to the fringe events. On your question about whether the focus is right, I think it makes sense in the first instance for the international community to come together and focus on the risks. We do a lot of work on global topics, for example, with the Global Financial Innovation Network, which is a group of more than 80 regulators, and we recently focused on one such topic, which is greenwashing. We often find that global problems require global solutions and global collaboration, so it is a good thing that that group is coming together to focus on those topics.
Chair: Thank you very much. Back to Carol Monaghan and then Stephen Metcalfe.
Q621 Carol Monaghan: Thank you very much. Ms Jones, if I could come to you again on this issue of the kind of oversight—I think that you said the threads pulling it all together—is that central co-ordination of AI regulation something that should or could be done by DRCF, or should a new body be set up specifically for that?
Kate Jones: I’m sorry: what do you mean by central co-ordination of AI regulation?
Carol Monaghan: Looking at, I suppose, AI regulation that all different bodies would need to adhere to.
Kate Jones: AI, of course, is not a sector; it is a technology, among other technologies, that is used in existing sectors, and to which existing cross-economy regulation applies. I think that the strength of the approach that the Government are taking in the White Paper is that it allows each of the existing regulators to draw on their existing strengths and expertise in regulating AI, and then the DRCF provides a complement to that by, in the case of the four centrally involved regulators, drawing them together. It is not about co-ordinating everything that they do but collaborating on some things that they do and providing coherence where their remits come close together. I do not see a role for the DRCF to become a sort of AI super-regulator, sitting above the others.
Q622 Carol Monaghan: Is there a place for its membership to be expanded?
Kate Jones: I think that there are various things to say about that. The first is that we do quite a bit of work with regulators that are not members of the DRCF. We chair a roundtable of regulators who come together to talk about their shared experiences in the digital economy, including AI. We also bring other regulators into specific pieces of work. For example, we recently did some work together on fairness, which is one of the principles in the AI White Paper, and we brought the Equality and Human Rights Commission into that specific piece of work. There have been other examples as well, where we have worked with regulators as DRCF-plus, as it were.
On expanding our membership, the door is not closed to that. It would be for the CEOs of the existing members to decide on any expansion. However, there are a couple of things that I think they would want to assure themselves of: first, that there is enough intersection between the remit of any joiner and all four existing members—that might be the case with one or two, but it is perhaps unlikely to be the case for smaller regulators, which might intersect remits with one, but not all four of us—and, secondly, that the value that we would get from collaboration would outweigh the increased cost of co-ordination, because co-ordination does come with a cost. Those, I think, would be the considerations if there were applications to join.
Q623 Carol Monaghan: Can I ask the other two witnesses whether you are happy with the current structure of the DRCF or whether you would welcome its expansion? We will start with Mr Edwards.
John Edwards: I am happy with the current structure. It provides flexibility to expand, either on a project basis or, if there is a business case, for a further permanent membership. If you look at the Vallance report, which came out earlier in the year, I think that Sir Patrick Vallance identified up to 27 or 30 regulators that an AI innovator may have to interact with. Now, if that was your use case, that would be an unwieldy permanent body to set up and establish. However, I think that the way in which we have established the DRCF enables us, as Kate has said, perhaps to bring in a medical-devices regulator for a particular use or case to talk about what the regulation would be. Kate, I don’t think you mentioned the digital hub, did you?
Kate Jones: Not as yet, no.
John Edwards: We have received funding from Government, through the DRCF, to deliver a multi-agency advice service to provide a kind of sandbox function for innovation. That will be substantially AI, but not exclusively. It will also allow us to identify and bring in regulators whose remit a particular proposal touches, providing a one-stop shop for those industries.
Jessica Rusu: To add to what Mr Edwards said, the FCA has had a sandbox in place for over nine years. We have had innovation services and expanded that into our regulatory sandbox and our digital sandbox. The digital sandbox is the one that we will be leveraging to support the work described. We have hundreds of datasets and thousands of APIs. We stand ready to support any type of TechSprint or AI-related activity.
Q624 Carol Monaghan: Finally, are there any international examples of regulators coming together that we should draw on the experience of to get best practice applied here?
Kate Jones: On bringing together regulators within a jurisdiction, as far as I am aware, the UK was the first to set up a DRCF or an equivalent to it. As I mentioned, however, that model is now being followed in one form or another in various other jurisdictions. We also get a lot of expressions of interest by other jurisdictions, which are interested in what we are doing and are keen to see whether they might be able to do something similar. To support that, we have recently established the International Network for Digital Regulation Cooperation. We had the inaugural meeting a few months ago. That included the Netherlands, Ireland and Australia, and we now have Canada joining the network. We hope that it will grow.
John Edwards: May I come back on that? At the ICO, we are members of a number of international networks. It is very important that we co-ordinate with our international colleagues. The G7 has a data protection subgroup, and we meet to coincide with G7 meetings. This year, we had a joint statement across G7 regulators in relation to generative AI.
The Global Privacy Assembly brings together all privacy commissioners and data protection authorities in the world, with something like 130 members. We convened last week in Bermuda and settled on a resolution on AI in employment. These are valuable connections to set some international standards on how such technologies are deployed in different industries.
Jessica Rusu: I have touched on this already, but we do quite a lot of international collaboration. We chair the Global Financial Innovation Network, which has over 80 global international regulators. The work that we do together is substantive: it covers AI, greenwashing and crypto. Through that work, we have moved forward quite a few international approaches and solutions. We also chaired IOSCO and convene with many global regulators. For example, they come frequently to share approaches and supervisory technology, as well as AI approaches. There is quite a lot of collaboration.
Q625 Chair: I have some supplementaries for Ms Jones. On the Digital Regulation Cooperation Forum, who decided what its membership should be at the outset?
Kate Jones: I was not there at the beginning, but as I understand it, initially three of our members—the ICO, the CMA and Ofcom—came together and started to talk about establishing a body of that nature. The FCA joined slightly later—
Q626 Chair: At the invitation of the other three.
Kate Jones: At the invitation of the other three, absolutely.
Quite early on, the terms of reference were agreed for the body. We have quite a set of documents essentially setting out our constitution and how we are funded and resourced across the four regulators.
Q627 Chair: It is odd, isn’t it? It is good that regulators talk to one another, but for this to be the country’s foremost interface between regulators and to be a kind of private members’ club by invitation and veto is rather eccentric.
Kate Jones: I have never seen it as a private members’ club. That is a new analogy to me—
Q628 Chair: Essentially, membership is by invitation, and as I think Ms Rusu said, you can be vetoed by an existing member if you want to join.
Kate Jones: Any regulator could apply for membership. As I mentioned, we are often talking with other regulators.
Q629 Chair: But they haven’t applied to join?
Kate Jones: At the moment, we have not had any other applications for membership. But as I say, the key point here is whether any other regulator would actually have enough in common with the work of our specific regulators to make it worth coming together.
Q630 Chair: Well, let’s think of some; let’s take the Medicines and Healthcare products Regulatory Agency. Even this morning, healthcare has been referred to as one of the most positive aspects. There is a lot, whether in discovery or the use of patient records, that interfaces with Mr Edwards’s work. There is a lot in the health space; why would the MHRA not be part of your discussions to co-ordinate the approach to AI?
Kate Jones: We would absolutely be open to discussion of that, if there were interest in doing it. But I think that the question would be whether the MHRA has enough of an intersection with each of our member regulators’ remits—
Q631 Chair: Who judges that?
John Edwards: May I come in, Chair? It is important to remember that each of the regulators maintains bilateral relationships with other regulators in the area, so the DRCF is not the sole place where this collaboration takes place. We might well work closely with the MHRA regarding how it approaches an issue of regulation in using patient data, for example.
Q632 Chair: In so far as the forum is looked to, to produce or certainly to influence a coherent approach to AI regulation in the UK, that is a bit different from your day-to-day transactional relationships. If that is what, perhaps, a voluntary initiative has become, should it not have a more strategic determination of who is relevant to that, which might include the MHRA?
John Edwards: Well, it may or may not. These are conversations that we have all the time. What is the marginal value of having the MHRA as a member of a body that has a certain number of cross-cutting issues that apply to all four members? The more you add, the fewer connecting points you have across the whole body, so—
Q633 Chair: It seems to me, as a layman in this, that the Financial Conduct Authority has a lot of responsibilities, of which a sliver relates to AI, and one might say the same for the MHRA. It is not that AI dominates everything that the FCA does; it is the financial conduct regulator.
Let us take another example: the Surveillance Camera Commissioner. This Committee has taken a lot of evidence on the use of surveillance, and AI in that. A lot of the questions that Ms Butler has been very effective in pursuing, regarding things such as bias, seem very relevant to that. Why do they not make the cut?
Kate Jones: It is important to remember that the DRCF is not a specifically AI body, nor does it have any remit beyond its member regulators.
Q634 Chair: Should there be a specifically AI body, then, if that is not the DRCF’s core purpose?
Kate Jones: I think the challenge in setting up a specifically AI body would be in how that would interrelate with the way all of the existing regulators are themselves looking at AI in the conduct of their existing statutory functions, whether that is DRCF members or non-DRCF members.
Q635 Chair: That is a bit of a Catch-22, isn’t it? If you are saying that we want to take an approach of empowering existing regulators and encouraging them to come together so that they can co-ordinate, and so there is coherence within that, but then take the view that they are not automatically part of it, so they cannot be co-ordinated because they are not part of the discussions, how can that be resolved?
John Edwards: I would take the counterfactual and say, “Well, we haven’t had regulators knocking down the door.” If you have heard that the MHRA believes that it needs a place at our table and has not got one, and that there is a public policy deficit for that reason, we would be really interested in looking at that. However, the DRCF is not just an AI-policy co-ordinating body; its remit is across a number of areas of common interest among the four regulators. It provides a forum for co-ordination across things like the Google third-party cookie deprecation issue that Mr Hayter spoke about.
There are cross-cutting issues that touch on the ICO and the CMA, where we will be seeing the same questions. We will put out a joint statement soon on foundation models. Again, the CMA has a legitimate interest there and has done some preliminary work. Some of the options have implications for the uses of personal data, so it is helpful for us to work closely on it. On online safety, the ICO and Ofcom have to be working absolutely cheek by jowl, because there is an overlap sometimes between content harm and data harm. These are not necessarily AI issues. The work of the DRCF provides us with a forum for identifying those cross-cutting issues between the four of us.
Q636 Chair: Ms Jones, the Government describe in their White Paper a prospective need for a central body for horizon-scanning and developing expertise. You do not think that that should be the forum.
Kate Jones: As I understand the White Paper, the Government suggest that they will take on the central co-ordination—
Q637 Chair: But you prefer that? It takes one part of your function and overexpands it.
Kate Jones: I think it is slightly different from our function, to be honest. For example, one of those central functions is monitoring how well the system is working. That certainly would not be a function for the DRCF; that is clearly a governmental function. We can break it down a bit, function by function. But there is clearly scope for both things, with the Government providing the central functions to all regulators, and with the sort of work that we are doing, where our remits overlap on the digital economy as a whole, including AI, in bilateral ways as well as between the four of us.
Q638 Stephen Metcalfe: I am conscious of time, and I know that Ms Butler might want to come in with some supplementaries to my questions. Briefly—specifically to you, Jessica—the FCA has presumably been regulating the use of technology for a long time, and AI in particular. Can you tell us how, historically, you have set about regulating the use of AI and whether the emergence of large language models has changed that practice?
Jessica Rusu: The FCA is technology-neutral, so we do not regulate a specific technology; we regulate the conduct of the firms that are within our perimeter, which is always changing and expanding. We have a vast waterfront. On our regulation of the firms, the firms will provide their own assurances and potentially a skilled person report. If you think about cyber-security, which I mentioned earlier, or aspects of trading algorithms, those are things that they will self-certify through the SMCR and the conduct regulation.
Q639 Stephen Metcalfe: Okay. In regulating the firms and their activity, how much do you as an organisation need to understand how they are creating or doing that activity? Or do you just seek their reassurance?
Jessica Rusu: There is the data, technology and innovation approach, but bear in mind that I am a technologist as opposed to a supervisor. From a supervisory perspective, what we would want to understand is that the firm has a solid risk-management approach. When you are creating an algorithm or thinking specifically about algorithms, you would sometimes need to have what we call a red team approach: you design a model that has a second line of defence to certify the model, you understand both the inputs and the outputs and you are able to point to a robust model risk-management framework. Those are the kinds of things that we would be asking firms to demonstrate. That does not change if the algorithms become more complex. You need to have accountability and responsibility for any area of the firm that you are responsible for. Whether you have insourced the technology or outsourced it, you are still accountable for what it does and its impact on consumers.
Q640 Stephen Metcalfe: Accepting that the technology is becoming more advanced and widespread, and that—I don’t know if it is fair to say this—there is generally an understanding that how the machine learns is a black box, does that approach still work? You can test a system when you know how it functions, but how do you test it when you are not sure how it might react in myriad different circumstances?
Jessica Rusu: I have some experience with this. I can potentially lean on my previous experience at a fintech, where I was responsible for building AI models. You absolutely can control what goes into a model. We can take credit risk management as a specific example. There is a lot of very robust information that can be put into a model, and you need to understand what those inputs are and ensure that they are of high quality. Then of course, in terms of the outcomes of the model, we say it is a black box, but data scientists do have methods—this is called XAI, or explainable AI—and there are also open-source possibilities, which I think you touched on with the previous panel. There are ways to identify which variables have had the most influence on the model’s output. So we say it’s a black box, but there are techniques within the data science community to look inside the black box.
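[One concrete example of the XAI techniques mentioned above is permutation importance: measuring how much a model’s accuracy degrades when each input variable is shuffled. The sketch below is illustrative only and assumes a generic fitted model exposing a predict method; it is not the FCA’s method or any firm’s actual risk framework.]

import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    # model: any fitted object exposing .predict(X); X: (n_samples, n_features) array.
    # metric: a "higher is better" score such as accuracy. Returns one importance
    # value per feature: the average drop in score when that feature is shuffled.
    rng = np.random.default_rng(seed)
    baseline = metric(y, model.predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])    # break the link between feature j and the target
            drops.append(baseline - metric(y, model.predict(X_perm)))
        importances[j] = np.mean(drops)  # a large drop means feature j matters a lot
    return importances

# Hypothetical usage with a credit-risk model and a validation set:
#   importances = permutation_importance(fitted_model, X_valid, y_valid,
#                                        metric=sklearn.metrics.accuracy_score)
# Ranking features by these scores shows which variables drive decisions,
# which a second line of defence can then challenge.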
Q641 Stephen Metcalfe: And that is what you would examine: how they had potentially trained the black box?
Jessica Rusu: The FCA does not audit particular firms’ models. We would ask a skilled person, if it were necessary, to provide a report on that.
Q642 Stephen Metcalfe: My last question is therefore this. An increasing degree of expertise is needed to understand how, potentially, these systems work. Do you have access at the FCA to the people you need? Are there any barriers? What could we do to improve availability?
Jessica Rusu: I am very fortunate: I have had the opportunity to establish a new division in the last two years. We have grown the data, technology and innovation division to approximately 500 skilled people. We have, in particular, 75 in the area of advanced analytics. We are also growing our Leeds digital hub, in the north. We have invested quite a lot in skills at the FCA, not just within my division but across the entirety of the FCA. You might be familiar with our transformation programme, which aims to make us data-led and proactive. We have done quite a number of things to upskill not just our own teams, but all the teams across the FCA.
Q643 Katherine Fletcher: In our job, a lot of the time we do inquiries about problems that were caused by groupthink—individual teams of people coming at something in the same way. We are politicians; I note from your backgrounds that two of you started your careers as lawyers—forgive me, Ms Rusu, I don’t know what your background was, but I want to make a generic point, not a specific one.
Could you individually, just in a sentence or two, set out the mechanism you have for concerned, knowledgeable individuals to raise an issue? I think the truth that comes out in lots of these AI hearings is that none of us really knows what all the problems are; it’s so new. It strikes me that there is a need for contingent lines of communication, so that senior executives or senior regulators cannot get caught by groupthink and told it is all right. If I were a young Dawn Butler coding in C++ and I felt that I had found something very concerning, how would I raise that with you?
Jessica Rusu: Personally, I think diversity in teams is incredibly important. You need a wide variety of perspectives. Whenever you are building something—
Q644 Katherine Fletcher: I understand that, but what steps do you have in place to ensure that that information gets to you? I don’t want to use the word “whistleblower”, because that’s a little bit too strong.
Jessica Rusu: In terms of sources of information that come into the FCA, we have, for example, web scraping, if it’s about online harms, if it’s about scams—we receive all sorts of information—
Q645 Katherine Fletcher: But if I were a concerned and skilled individual and I had a problem and wanted to raise it with you—I used to work in the City myself, pre the 2008 financial crash—how would I do that, mechanistically?
Jessica Rusu: We would receive whistleblowing reports. You can phone our supervision hub if you have a particular concern about a firm, and then we would take action on that intelligence. We have a vast variety of intelligence sources.
Katherine Fletcher: That is fantastic. Ms Jones, how would you get that information in your ear?
Kate Jones: We are simply a co-ordinating mechanism, so it is a little bit different for us. When it comes to thinking about what we are going to work on next, we are planning this year to run something called an ideas lab across the four regulators, where anybody can raise ideas of things that we should work on.
Katherine Fletcher: Superb. Mr Edwards, finally.
John Edwards: I really value the role of civil society in this area—patrolling the boundaries and drawing things to our attention. We have had things brought to us by Big Brother Watch, the Open Rights Group and Baroness Kidron’s 5Rights Foundation. They will do research, identify harms and put things in front of us. I won’t specify which group because I can’t quite remember, but someone came to us and said that they were worried about the way AI is deployed in local authorities and the Department for Work and Pensions. We went in, had a look and found that those concerns were misconceived. In fact, there is adequate human oversight; there is not automated decision making.
Katherine Fletcher: It would be fair for the man or woman in the street to want to make sure that there was a way of engaging with this highly august regulatory committee. Thank you for setting those out.
Q646 Dawn Butler: On that point, Mr Edwards, you responded earlier to Ms Monaghan’s question on facial recognition by saying that part of your job is to find times when it has failed to be used in a responsible way. Have you found any times when it has been used in an irresponsible way?
John Edwards: We worked with one organisation—I think we publicised it—called Facewatch, which provides a commercial product. We found some deficiencies in the way it was offering its product and ensured that they were remedied.
Q647 Dawn Butler: What were the deficiencies and how were they remedied?
John Edwards: Sorry, I have not come properly briefed to talk about the specifics of a particular case, but there is information about that on our website, ico.org.uk.
In terms of the matters that Ms Monaghan raised, particularly police services’ deployment of live facial recognition, we have worked with police forces in Wales, for example, to ensure that trials of live facial recognition are proportionate, that there are adequate evaluation and monitoring mechanisms and processes in place, and that there is a thorough review after the fact. We have been satisfied that they have been conducted responsibly and proportionately.
Q648 Dawn Butler: Are you satisfied with the court cases that have been pursued in regard to some facial recognition cases?
John Edwards: I am not sure which court cases—I am not sure whether I would be satisfied or not satisfied with a court case. I take the guidance of the judiciary very seriously.
Q649 Dawn Butler: You also said that people are used to being surveilled quite casually. There is a very big difference between people being surveilled and people being wrongly identified, as has happened with some facial recognition. What we are doing to our judicial system, which you have just said you take really seriously, is turning innocent people into guilty people. They are now having to prove that they are not the person they have been identified as through facial recognition. What are your views on that?
John Edwards: I have seen that in other jurisdictions. There is a book that has just come out called “Your Face Belongs to Us”, by the New York Times journalist Kashmir Hill, which looks at a number of facial recognition technologies and identifies instances in which people have been wrongly identified and have been detained as a result. I am not aware of specific instances in the UK of people being misidentified and detained as a result.
Q650 Dawn Butler: You are not aware?
John Edwards: I am not.
Q651 Dawn Butler: I thought you said that you had been in communication with Big Brother Watch. It has a whole document on that.
John Edwards: As I said, I am not aware of instances where the technology has resulted in people being misidentified and—
Dawn Butler: Wow. That has come as a really big surprise, because even in the evidence that our Committee has received, that has been highlighted. I am really quite flabbergasted at that. I don’t know what to do with that, Chair.
Chair: Perhaps Mr Edwards could write to the Committee and we could have an exchange of correspondence about it.
Dawn Butler: Thank you very much.
Q652 Chair: Can I end with another case that may be illustrative? In the case of Snapchat, with which you will be familiar, a provisional notice has been served because it was thought that the platform’s use of AI would put the privacy of children at risk. It is interesting for us because it is one of the early examples of enforcement using your powers. I think it has the chance to make representations, so it is provisional, but what was it that caused you to act? What were your concerns about it?
John Edwards: It is really important that I emphasise and caveat that it is a provisional notice and we should not assume that Snapchat is in breach. We have told it that that is our provisional view; it has the opportunity to make representations; we will make a formal decision having received them.
Having said that, we were concerned that there were risks in relation to the processing of young people’s data that had not been sufficiently identified. There are concerns about the level of transparency in that product. We took the step of publicising the provisional notice because I think it is very important to send the market a message that it needs to be accountable for the risk assessments it does before deploying. At this stage of a new technological deployment, there is a rush to market. We want to ensure that corners are not cut in ways that put people at risk, as has perhaps happened in other deployments, such as social media.
Chair: Thank you. The Committee will take a close interest in this. It is notable that this is one of the early uses of existing powers explicitly in relation to AI. It is interesting that you felt moved to do that, but also that you consider you had the powers to serve that notice.
We have gone over our time, but as I said earlier, that is because the subject matter is so important and so interesting. I am very grateful to Mr Edwards, Kate Jones and Jessica Rusu for joining us today.