Treasury Committee
Oral evidence: AI in financial services, HC 684
Wednesday 7 May 2025
Ordered by the House of Commons to be published on 7 May 2025.
Members present: Dame Meg Hillier (Chair); Dame Harriett Baldwin; Chris Coghlan; Bobby Dean; John Glen; John Grady; Dame Siobhain McDonagh; Lola McEvoy; Dr Jeevun Sandher; and Yuan Yang.
Questions 1 to 92
Witnesses
I: Jana Mackintosh, Managing Director, Payments and Innovation, UK Finance; David Otudeko, Director of Regulation, Association of British Insurers (ABI); and Amandeep Luther, Artificial Intelligence Lead, Association for Financial Markets in Europe (AFME).
Examination of witnesses
Witnesses: Jana Mackintosh, David Otudeko and Amandeep Luther.
Q1 Chair: Welcome to the Treasury Select Committee on Wednesday 7 May 2025. Today we are looking at AI in financial services. The Government seem to be telling us AI is the solution to everything but there are huge implications in financial services. This is our first foray as a Committee into how this might affect businesses, consumers and the financial sector more generally.
I am delighted to welcome Jana Mackintosh, who is Managing Director, Payments and Innovation, at UK Finance. David Otudeko is the Director of Regulation at the Association of British Insurers—welcome. Amandeep Luther is the Artificial Intelligence Lead at the Association for Financial Markets in Europe, sometimes referred to as AFME. A very warm welcome to you, Amandeep. I am going to ask Chris Coghlan to kick off.
Chris Coghlan: Thanks so much for coming in this afternoon. We are all very worried about the UK productivity crisis in particular. Ms Mackintosh, how confident are you that AI will boost the productivity of financial services?
Jana Mackintosh: Thank you very much for the question. Good afternoon, everyone. I think there is a lot of evidence from the years of consideration, exploration, pilots and early adoption of AI that can lead us to feel quite confident in saying that there will be a positive impact on productivity. Some of the recent estimates have shown that productivity can be improved by up to 30%.
Q2 Chris Coghlan: In financial services?
Jana Mackintosh: In financial services. It very much depends on the use case and on the organisation and the application thereof within that context, but I think we can be quite positive about that.
Q3 Chris Coghlan: Do you know how long those productivity gains are likely to take to materialise?
Jana Mackintosh: The productivity gains of 30% that I am talking about are for the use cases that we have already seen being implemented. If you see scale adoption and greater use of the technology you may very well see that gradual increase in terms of its breadth but also potentially in terms of its impact.
Q4 Chris Coghlan: Should we expect generative AI to be a major step change on that?
Jana Mackintosh: It certainly has had a significant impact. I think we have seen predictive AI being adopted earlier; we have been exploring that for longer. Generative AI is maybe a little bit more nascent in its application, so as we progress with that, most certainly.
Q5 Chris Coghlan: Are you expecting any particular breakthroughs or I presume it is quite hard to forecast?
Jana Mackintosh: It is a little bit hard to forecast. With all technologies we always consider whether there will be some form of breakthrough use case that forces mass adoption at scale. At this time what we are seeing is a gradual adoption of the technology as we become more comfortable with understanding the technology, understanding the risk and implementing it. Over the 10 years that we have been looking at this it has been gradual. We can assume that there may well be an uptake.
Q6 Chris Coghlan: Mr Otudeko, there is a very high percentage of firms using AI in the insurance sector—apparently 95% according to the Bank of England. What makes the insurance sector so well placed to adopt it?
David Otudeko: Hello everyone. First, we are an innovative sector that embraces technology, and we are also very data centric. That is the opportunity that AI presents: how we interrogate and structure data and drive insights from it.
To Jana’s point about generative AI versus predictive AI, I would say the insurance and long-term savings industry has been using predictive AI, and advanced models of that description, for quite a while, primarily focused on data analysis and trying to identify trends.
When we talk about generative AI, for me it is a bit of a step change in terms of adoption, and you try to understand the risks before you go headlong into adopting it. The 95% figure is focused on existing technology that we are aware of.
Q7 Chris Coghlan: Presumably it is the point where you can probably no longer be competitive in the insurance sector unless you use AI?
David Otudeko: It will be a factor, but I cannot hand on heart say it would be the only determinant of competitiveness in the sector, whether locally or internationally.
Q8 Chris Coghlan: We have also received evidence that fintech firms are quicker to adopt compared to traditional firms because they are more agile, they have fewer legacy systems, and they have a culture of innovation. In your experience would you agree with that?
David Otudeko: I would. Fundamentally the business model of a fintech hinges on the use of technology, so I would expect them to use technology considerably more than your traditional insurers. The 95% adoption figure proves that this is clearly a sector that is looking to innovate, but also to use AI safely and, as I mentioned earlier, to make sure that we understand the risks fully before we adopt. That is what we are seeing in the generative AI space and in the gradual approach to its adoption in the insurance industry.
Q9 Dr Jeevun Sandher: Mr Luther, I want to talk to you about the ways in which AI may change employment within the financial services sector particularly around the kinds of tasks that may be carried out. What tasks do you see in the financial services sector being—I was going to say “automated”, but that seems a little bit old—done by generative AI or any kind of AI in the future?
Amandeep Luther: Thanks for the question. If I may, I will take a moment to give some context to some of the answers that I give today. I am the AI lead at AFME. As the Chair mentioned, AFME is the Association for Financial Markets in Europe. Our members are the largest global and regional banks that have a presence in the UK or Europe. Collectively we encompass approximately 90% of European issued debt and 80% of European issued equity, so I guess I come to it from that lens. Our work primarily focuses on advocating for the adoption of AI within financial services, and within that we speak with regulators, governments, and large and small tech firms to help banks get the information that they need to move forward with the technology.
Going more directly to your question, it is important to make the distinction—which we have done to a certain extent already—about what we mean by AI. Nowadays in conversation, AI typically tends to refer to generative AI, which is technology that has only really been in existence for the last two and a half years. It is pretty nascent. In terms of where AI adoption is with banks, and how it is changing and may change employment and skills in the future, I think we must look at it in the round.
Generative AI is more of an iterative technology. It had a breakthrough moment with the public perhaps two and a half years ago, but in terms of how banks have been using it, it has been around in various guises for a while. What that means is that it is not quite the step change in the wholesale market that it may appear to be to the general public.
From a skills and employment perspective, we have seen that gradual upskilling among our workforce for the last decade or so to incorporate technologies such as this and, while there is certainly an element of the new technology that does require perhaps external skills to be brought in or further upskilling, we are not seeing a dramatic shift there.
In terms of the employment profile or the staffing profile in the wholesale market now we are not seeing a dramatic shift that we can attribute directly to AI. It is fair to say that if we look forward to the next two to five years we imagine we will start to see perhaps a change in the level of skills that are in demand, and I can imagine that certain roles may change. But at this point it is difficult to see a radical change to the staffing profile in the wholesale markets.
Q10 Dr Jeevun Sandher: Looking forward then in terms of having new skill creation as the new task creation from AI that you can do, do you think that new task creation is going to outweigh some of the older tasks that would no longer be done because AI is now helping to perform those?
Amandeep Luther: That is a good question. It is hard to quantify whether it will outweigh existing tasks. If I may clarify, is your question referring to whether there are certain types of roles or skills that may become obsolete in light of the new technologies? Is that what you are referring to?
Q11 Dr Jeevun Sandher: Yes. Thinking about the automation revolution, people had computers, and they could do different things, so you did not have to sit there with a slide rule. Excel does it fast. It meant that jobs changed but jobs were not destroyed, because people became more productive. What happened in the middle of the labour market was different.
The question I am getting to is: are there enough new tasks being created inside the industry going forward that you see or is it that AI is going to do so much and there will not be that much of a demand and the employment will fall in the sector as happened in farming 200 years ago, for example?
Amandeep Luther: From a wholesale markets perspective I would not expect to see the dramatic shift that you describe there. We have a history as an industry of incorporating new technology in a gradual process. There has been a gradual electronification of markets over the course of the last 20 or 30 years, and AI is probably just a natural evolution in that process.
There are certainly roles that we see already today, and activities that are being performed by staff—typically the more procedural and repetitive tasks—where AI and generative AI have certainly made things quicker and easier. But for the most part, at the moment, what we see is that this has freed up staff to work on more value-added tasks, as opposed to making the role redundant in its entirety.
Again, it is important to clarify that generative AI, in particular, is very nascent and the advancement is extremely rapid, probably in a way that we have not seen technology advance in recent times. It is very difficult to make a prediction several years out but at this point from a wholesale markets perspective we are certainly seeing it as something that augments our existing workforce as opposed to changing fundamentally the roles and responsibilities within organisations.
Q12 Dr Jeevun Sandher: When it comes to these new tasks and the skills required for it, how are you finding the training to help your employees? On the other side, how do you feel government are working with you to help create those skills?
Amandeep Luther: I will take each of them in turn. In terms of how our members are looking to upskill their employees there are two elements to it again. There is an element where this technology is moving at such a rapid pace that the knowledge and skills are being developed outside banks to a great extent.
Where that is the case, our members are working as hard as they can to ensure that they can attract those individuals with that knowledge to work in the sector, to enable us to inherit some of that knowledge and spread it throughout the organisations that we work with. That does not take away from the existing employees that we have in the organisation; their domain knowledge is extremely valuable. To that end our members engage very actively in upskilling existing members of staff.
Again, this can take different angles. There are some staff who may just need a general awareness of the technology—what the limitations are, how it can be used and so on—and that is one level of training, which most firms are now rolling out almost across the board to all employees. There are other employees who may be involved in developing these tools, and for them more technical training is being provided, either internally or through external certification and so on.
That is certainly an active area. We have definitely seen an uptake in hiring into the sector—job creation in the wholesale market sector from AI in the last couple of years—and we are definitely seeing an increase in the training that we provide of a more technical nature, which perhaps five or six years ago we may not have been doing.
In terms of government support for firms engaging in that upskilling, at this point I think it is fair to say that we have not needed to take on significant government support. That support is there when we need it, in conversations such as these. At this time the conversations are of a reasonably technical nature, such that there are providers and skills out there that we can adopt within firms.
Q13 Dr Jeevun Sandher: My final question is about graduates and non-graduates. Non-graduates have lost out a lot over the past 40 years. Manufacturing disappeared; it was done by machines and benefited people, I suppose, sitting in this room going forward, but non-graduates lost out. About a third of those in financial services are non-graduates, and they are either going to need uptraining or will end up leaving the sector altogether. How do you think we should ensure that non-graduates can remain engaged inside the sector as it changes with AI?
Amandeep Luther: That is a great question. I think AI, and generative AI specifically, is unique as a technology in that it is very general in its nature and very general in its application. While you are right to point out that graduates may be in a good position to capitalise on AI, I think that is really in the much more technical areas, where perhaps graduates are involved in engineering, data science and maths and their knowledge would be very applicable to developing some of these models.
What is unique about AI is that these models, being generally applicable, can be used by somebody who is very non-technical. That has become quite key to the wide adoption of the technology. A lot of our member firms have seen an example in software engineering. Software engineering has typically and historically been a very technical profession, involving somebody who had to have a degree at a minimum, potentially even a postgraduate degree, and a knowledge of maths and science and so on.
With some of the newer AI technologies, anybody with very little—in fact, to some extent no—technical knowledge can start to build software. We have now started to see quite a new world, one that would have been unimaginable five or six years ago, where somebody with no technical knowledge can now start to specify to some extent how they want software to behave, which works for their personal use. We are seeing that quite unique situation here where both the graduate and the non-graduate can benefit from the technology.
In terms of going back to your original point about job creation I would fall back to my previous answer. We are still not seeing among our members a significant shift in the roles and responsibilities within firms, but we are seeing that adoption of AI across the board and it benefits all employees, whether graduate or non-graduate.
Q14 John Glen: Mr Luther, can I turn to the culture of the workplace and how AI might impact it? There has been some evidence that suggests that AI is capable of looking at things such as harassment, bullying and, I imagine, probably performance issues generally. What do you think about the extent and scope of the impact of AI on behaviours, and what is likely to evolve in that area?
Amandeep Luther: The first thing to point out is that the monitoring and surveillance of employees in the wholesale market specifically is a regulatory requirement. Trading in particular is one of the most tightly regulated areas of financial markets, so we have surveillance in place by regulation to detect areas such as market abuse, misconduct and so on.
Q15 John Glen: Yes. That is transaction related. I am talking about the stress levels and the wellbeing issues.
Amandeep Luther: Again there are probably two elements to that. Yes, there is transaction surveillance. There is also surveillance around communications and such like. When it comes to the almost emotional surveillance that I think you are getting at, detecting someone’s behavioural patterns, it is an area that your Clerk mentioned may come up in the hearing today, so I canvassed several of our member banks a couple of days ago to ask them about this point. None of the wholesale banks that I spoke to reported using technology of that nature.
There are a very clear set of guidelines around how any kind of monitoring or surveillance can be put in place and there are very clear privacy controls and privacy committees that must be satisfied before any technology is put into place. This technology generally is ring fenced around business platforms, business hardware and business activities. At this point in time none of our firms that we work with are either using any kind of emotional detection using AI or have any plans to implement surveillance of that nature.
Q16 John Glen: I guess there are some applications for people who are not in the workplace at the moment—people with disabilities perhaps, people who are partially sighted using text-to-speech and so on. Do you see any scope or evidence from your members that this is an area where they are considering new options for recruitment?
Amandeep Luther: That particular scenario I have not seen any evidence of, but if I may I will give an opinion to the extent that I am able to. As I mentioned earlier to your colleague, we have certainly seen that generative AI is enabling individuals with different skills to perform roles that they were not able to before. It certainly is the case that AI, generative AI, text-to-speech technology, is coming on in leaps and bounds.
To the extent to which that enables individuals who may have not been able to perform a role before but are able to now due to the technology, our members would certainly be willing to explore that. It is something that from an employment and recruitment perspective they are always open to. I see no reason why anybody would be held back from working in financial services because of a disability like that. It is certainly where AI could help.
Q17 Chair: What about the other witnesses?
David Otudeko: I am happy to give a view. I am proud of my industry and our diversity credentials and the fact that we take diversity and inclusion very seriously. I would see members jumping at the chance to give people who are qualified to do jobs the opportunity to do those jobs and not stop them from seizing those opportunities because of disability or anything else that might be listed under the Equality Act. It is definitely something I would say that my industry would jump at the chance of adopting if the opportunities and the use cases do present themselves.
Jana Mackintosh: If I may add, when it comes to recruitment, emotional monitoring, emotional manipulation, those are very high-risk applications and use cases when it comes to AI.
Q18 John Glen: You mean it is not a reliable tool to use to monitor and evaluate people in that regard?
Jana Mackintosh: Yes. It is not well understood enough or developed enough for us to feel certain that there are not any risks associated with deploying that confidently and comfortably. We do not see—certainly in the use cases that we have explored across banking and payments—that those are being used as use cases.
The technology and the adoption have been focused predominantly on either well understood areas such as risk management and fraud detection or emerging areas where there are low risk use cases with human monitoring over those use cases. I think some of the use cases that you are talking about we certainly have not seen being adopted in that same sense.
Related to that, in terms of understanding vulnerable customers, we now comply with the consumer duty that has been implemented, so there is an obligation on financial services firms to ensure that they understand their customers. I think AI can be a useful tool in helping to ensure that we better understand customers, provided we manage the risks of keeping that data and information private and confidential and ensure it is used in the right way. That is an area where it can be helpful, and it can also be helpful to vulnerable customers in terms of the enhanced products and services that they can potentially receive through the application of AI.
Q19 John Glen: Is there not a risk that, given the profiling that you can now do both of your customers and of your potential recruits, notwithstanding the obligations of the consumer duty there is going to be a huge temptation for large organisations to say, “We do not want people with these characteristics” and they can be screened out without anyone knowing about it? How do we give confidence that the industry will not be doing that?
On the issue of consumers, of course the cost to serve across the whole range of customers can be quite a significant range. There must be an enormous temptation to say, “Well, we know people with this profile, these characteristics, this history will be more difficult so we can prioritise them differently”. Do you not recognise that risk?
Jana Mackintosh: Like many of the risks that you encounter when you think about the adoption of technology, and in particular of AI, many of these risks are not new. I think the application of AI can certainly amplify them in the way you are talking about. As Mr Otudeko said, we feel confident that the industry is acting in the best interests of its employees and customers. Applying a technology tool such as AI should not change that risk culture, the kind of behaviour that you would expect within the organisation.
There may be greater temptations, but within the risk frameworks that you employ to recruit and to manage employees the tool itself as a technology should not change the culture and behaviour of an organisation.
Q20 John Glen: I must just press you one more time on that. It should not, but we also know it clearly could, to gain competitive advantage in what can be, in parts, a very intensely competitive industry. What proactive safeguards, assessments, studies or work need to be done to ensure that you do not have to do it after some sort of crisis or scandal blows up and a whistleblower tells us?
Jana Mackintosh: Yes. It is right to make sure that we understand and can manage some of these more extreme, high-risk cases in the right way. There are ongoing conversations with regulators in terms of the compliance under consumer duty or other regulations that would be applied in the same way. Hopefully you would be able to monitor and detect some of those risks as they emerge, not after the fact.
You can also be quite proactive. If we think about the way that the regulations have evolved in Europe, they have prohibited some of these use cases. If, as an economy and as regulators of society, we feel that some of the technology should not apply to certain use cases, you can go to the extreme and make sure that the regulatory framework, or the way that we work with regulators, considers those use cases—especially the high-risk use cases—up front.
Q21 Lola McEvoy: We are going to move on to algorithmic trading and generative AI. I thought we could start off with Mr Otudeko. Where do you put yourself in terms of being a sceptic or a zealot for the adoption of more algorithmic trading, more bots trading on our markets?
David Otudeko: Thank you for the question. I am giggling because I read the evidence given by Mr Woods and Tanya Castell and I thought to myself, “I am definitely going to steal Tanya’s response”. I would put myself on the fence, because I said at the very beginning that my industry is very pro-innovation and we can clearly see the opportunities that AI presents, and we want to embrace that, not just for the benefit of our business models but also for the benefit of our employees and the benefit of our policyholders.
I am also a risk person, and I am alive to the fact that you need to monitor the risks, specifically of technologies that are emerging, and for something such as AI that evolves very quickly. It requires you to be on your toes in terms of how quickly you can identify those risks and how you mitigate them. For me, you almost have to keep your eyes on both ends of the spectrum. I would say I am at both ends. It is a classic sitting-on-the-fence answer, but it genuinely reflects what I think about the topic of AI.
Q22 Lola McEvoy: With AI, particularly generative AI, in our trading markets and the future of more bot trading what is your analysis of the risks versus opportunities that that brings?
David Otudeko: It is probably not one that sits squarely with an insurer, but again, the point I made at the beginning about a gradual adoption of generative AI is that it lets you understand the risks before you take the thing on. You do not want to be addressing the issues after the fact. You want to understand them before you implement these technologies.
There are a lot of elements across insurers’ business models, around things such as investments—slightly off topic—where you must understand the end-to-end risks before you place investments. I would almost go as far as saying that with technology such as AI you must feel comfortable that you understand the risks before you adopt it. Again, it is about risk management and balancing the opportunities versus the risks presented by the technology.
Q23 Lola McEvoy: Jana Mackintosh, in your analysis are you a zealot or a sceptic of specifically algorithmic trading and the use of AI in that?
Jana Mackintosh: Again I would say that when it comes to statistical techniques being used in trading, those are very well known and have been used for a very long time. There will naturally be an evolution of some of those algorithms within trading, and technology will aid those. They are being explored, and rightfully so, but I would agree that adoption comes with all the other measures, considerations and risk management that will have to be built into it. There are certainly more complex machine learning techniques that, combined with human oversight, will provide benefits to the sector and to trading.
Q24 Lola McEvoy: Can I push you on that? What benefits to the sector would we get?
Jana Mackintosh: You do get efficiencies in terms of transactions, speed, intelligence gathering, the way that you inform decision making and risk management that comes with that. This is all information that you can gain by employing AI. When it comes to some of these use cases that have already embedded some form of machine learning or statistical techniques, the benefit of AI just means that you can potentially perform a task faster and with more accuracy.
Q25 Lola McEvoy: There is machine learning, and then there is specifically bots trading—replacing, to go to Dr Sandher’s point, existing jobs with AI-generated traders. I think we are all up to speed with the productivity benefits in some of the back-office processes, but in terms of taking those transactions there are some serious risks. Do you think we have the regulatory framework right in this country to mitigate against them?
Jana Mackintosh: At the moment we do not see those being employed. We do not see that being used. It is an ongoing area of research with the Bank of England, and we are supporting them in making sure that we consider that before any financial services firms consider deploying those.
Q26 Lola McEvoy: They are not being used in this country at the moment?
Jana Mackintosh: Not as far as I am aware.
Q27 Lola McEvoy: Mr Luther, I will come to you because I think you might have something you want to add on this. First, where do you put yourself, sceptic or zealot?
Amandeep Luther: I think it would be hard for me to claim I am a sceptic given the use of AI in financial services and AI is actively being used in trading today. What I would say is it is important initially to clarify what we mean by algo trading and what we mean by AI. Often I find that the two can be conflated in conversations such as this. Algo trading has been around for decades.
In the late 1980s and early 1990s we had a computerised order book so there was the ability for you to leave an order at the exchange to be executed without any human involvement whatsoever. That has been around for a long time. Now technology has developed outside of our sector over the last 20 or 30 years. The sector has continued to adopt that technology and move along that journey. What that means is that machine learning, traditional machine learning—the non-generative AI variant—has been in use in algo trading for the last decade certainly, potentially even up to the last two decades.
Q28 Lola McEvoy: For clarity on that point, that is where somebody operates it but is using a machine for efficiency and to make it quicker—basically using a computer to trade?
Amandeep Luther: Not exactly. I will give some context. As computer technology has improved—and that is both hardware and software—it has enabled more inputs to be provided into a trading decision. In the world before computerised trading, a human being would take inputs from reading the news, watching what is happening in the world, stuff that they hear and would take a decision on whether they wanted to buy, sell, hold and so on. As computer technology has improved we have been able to add additional data points into that computation process.
What AI enabled the sector to do a decade ago is to introduce new data points that we perhaps were not able to use before or to analyse that data much more accurately. That does not necessarily mean that there is a human operating the machine, but the human writes the code for the machine, tests the code for the machine, makes sure that the code does what it says it should and what we intend it to do, and that enables that data to be analysed pre-trade. That is the key thing there. This technology has been historically used pre-trade. It has not been used in the decision-making process when it comes to deciding whether to trade or not. That still is a very deterministic decision.
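To make that separation concrete, here is a minimal sketch; all names, figures and thresholds are purely illustrative rather than any firm's actual system. A model scores the pre-trade inputs, while the decision to trade remains a fixed, auditable rule.

```python
# Illustrative only: the model enriches the inputs, but the trading
# decision itself is a deterministic, reviewable rule.
from dataclasses import dataclass

@dataclass
class PreTradeSignal:
    symbol: str
    sentiment: float   # hypothetical model-scored news sentiment, in [-1, 1]
    fair_value: float  # hypothetical model-estimated fair price

def decide(signal: PreTradeSignal, market_price: float) -> str:
    """Deterministic decision rule: fixed thresholds, fully auditable."""
    edge = (signal.fair_value - market_price) / market_price
    if edge > 0.002 and signal.sentiment > 0.3:
        return "BUY"
    if edge < -0.002 and signal.sentiment < -0.3:
        return "SELL"
    return "HOLD"

# The AI stage supplies the data points; the rule above commits the trade.
print(decide(PreTradeSignal("XYZ", sentiment=0.5, fair_value=101.0),
             market_price=100.0))  # -> BUY
```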
I will give some clarity to my answer there. As I am sure this Committee is very well aware, trading is one of the most highly regulated and highly controlled areas of banking. We have regulations in place for almost every element of every second of your day on the trading floor. To give an example, just under a decade ago a new regulation called MiFID II came into force. MiFID II had a whole bunch of regulations, one of which was around algorithmic trading, and there was a very clear set of rules of what firms had to do to engage in it.
I will read you a couple of them just for context. There are rules around governance frameworks; senior management oversight and accountability; pre-deployment testing; live monitoring while the algorithm is running, with alerts generated; kill switches; and circuit breakers, such that if certain scenarios arose the system would switch off or refer back to a human being.
These all served to effectively create almost a perimeter around the arena of trading. Any activity that happens within that arena must conform with those regulations in order to cross over that perimeter, whether that decision is taken by a machine or by a human being. So when it comes to the use of AI in algorithmic trading, that same threshold must be crossed. As traditional AI has been used in the past, as generative AI is starting to be used and may be used in the future, you cannot put that system live without satisfying the regulations to cross over that threshold into that arena.
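As a rough sketch of the controls just listed—live monitoring with alerts, and a kill switch that refers back to a human—the following is illustrative only; the limits are hypothetical stand-ins for a firm's calibrated thresholds.

```python
# Hypothetical sketch of live monitoring, alerts and a kill switch.
class AlgoMonitor:
    def __init__(self, max_orders_per_min: int, max_position: int):
        self.max_orders_per_min = max_orders_per_min
        self.max_position = max_position
        self.killed = False  # once True, the order gateway accepts no orders

    def alert(self, msg: str) -> None:
        print(f"ALERT: {msg}")  # in practice, routed to human supervisors

    def check(self, orders_last_min: int, position: int) -> None:
        if orders_last_min > self.max_orders_per_min:
            self.alert(f"order rate {orders_last_min}/min exceeds limit")
            self.killed = True
        if abs(position) > self.max_position:
            self.alert(f"position {position} exceeds limit")
            self.killed = True

monitor = AlgoMonitor(max_orders_per_min=500, max_position=10_000)
monitor.check(orders_last_min=650, position=2_000)
assert monitor.killed  # the algorithm is switched off pending human review
```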
Q29 Lola McEvoy: From what you have said, you think we do have enough regulation to be able to move with the technology advances in this space. You said that where certain scenarios arise there is a kill switch. What sort of scenarios? Can you speculate, with the introduction of more robot traders, what that would mean?
Amandeep Luther: To clarify the terminology, when we are talking about robot traders certainly in the wholesale financial markets at this stage that is still pre-trade activity. The robot trader in your description, in the way that technology is currently being used and is planned to be used in the near future, is not going to make trading decisions in and of itself without any human supervision. That is something where the technology is not quite at the level required to be deployed in that manner.
Certainly, the generative AI technology allows us perhaps to incorporate data sources that we were not able to use before or to analyse it in a way that we were not able to do before just because it has a much greater understanding of text-based data than it did before. In terms of the overall frameworks, the frameworks are technology-agnostic. As technology evolves it is incumbent on organisations to ensure that they can meet those regulations to deploy that technology.
Chair: Last question, please.
Q30 Lola McEvoy: Sorry, I am just really interested in this subject. In a recent speech a member of the Financial Policy Committee, Jonathan Hall, expressed concern that AI trading could be incentivised to amplify shocks because of where it gets its source material from. Do we have enough regulation in this country to prevent that, and what is your analysis of how big a risk that is for us in the UK?
Amandeep Luther: That probably ties into the second half of your question, which I did not answer, around kill switches and circuit breakers. The short answer is yes. The regulation as it currently stands, in terms of how our members operate, we would deem sufficient, given that it is technology-agnostic. The reason it works is that, as I mentioned earlier, regulations such as MiFID II, and certain other regulations around model risk, govern how we ensure that the models deployed by our members conform and do what they are meant to do.
Part of that involves defining scenarios under which the system would either alert a user and carry on running or would switch off entirely and require the user to take over. These could be scenarios where a piece of information is incorporated by the model and the system has determined that a certain course of action needs to be followed, whether that is buy, sell, hold, whatever quantity and so on. There are threshold controls within that process, which operate constantly on a real-time basis to monitor those movements.
For example, a stock exchange would have a control around the extent to which a price can move within a certain time. If the price moves too quickly within that time and meets the threshold, the circuit breaker will be triggered. The circuit breaker essentially means that that security can no longer be traded. The theory behind that is that it allows the market to cool down and assimilate the data, and when trading can be resumed in that security things are likely to be calmer. These controls exist both at the exchange level and at the individual firm level as well. That entire system is designed to work together to try to dampen and reduce the effects of any scenario where your trading could potentially move in one direction very rapidly.
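A simplified sketch of that exchange-level control might look like the following: halt trading if the price moves more than a set percentage within a set window. The 5% move and five-minute window are chosen purely for illustration.

```python
# Illustrative circuit breaker: halt if price moves > max_move in window_s.
from collections import deque

class CircuitBreaker:
    def __init__(self, max_move: float, window_s: float):
        self.max_move = max_move        # e.g. 0.05 = a 5% move
        self.window_s = window_s        # look-back window in seconds
        self.history: deque = deque()   # (timestamp, price) pairs
        self.halted = False

    def on_price(self, price: float, now: float) -> None:
        self.history.append((now, price))
        # Drop prices that have fallen out of the look-back window.
        while now - self.history[0][0] > self.window_s:
            self.history.popleft()
        oldest_price = self.history[0][1]
        if abs(price - oldest_price) / oldest_price > self.max_move:
            self.halted = True  # trading in this security is suspended

cb = CircuitBreaker(max_move=0.05, window_s=300)
cb.on_price(100.0, now=0.0)
cb.on_price(93.0, now=60.0)  # a 7% drop within five minutes
assert cb.halted             # the market pauses to assimilate the data
```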
Q31 Dame Harriett Baldwin: If I may very quickly, Chair, a question for Jana and your members. We have heard in the past from policymakers about their concerns about the “quantum apocalypse”, when quantum computing is used to unencrypt everything and causes a massive systemic risk to the financial system. Do your members believe that artificial intelligence has the power to prevent that from happening, or is something that increases the risk of it happening? Just a quick one!
Jana Mackintosh: A quick one. Again, like AI, we have been exploring quantum for quite a while, so the risks of quantum cryptographic vulnerabilities are very well known. We have considered that for a while. There are quantum-safe or post-quantum technologies that you can employ to protect yourself from that risk going forward. Firms have been looking at their own vulnerabilities.
Q32 Dame Harriett Baldwin: We should not have to worry about it at all, then?
Jana Mackintosh: There are of course interactions with AI where similarly the application of the technology can speed up processes. We need to be aware of that risk and the combination between AI and quantum, but the underlying fundamental risks remain the same.
Chair: I call John Grady.
John Grady: Thank you, Chair. I need to keep this plugging along, so if you can keep your answers short.
Lola McEvoy: It’s a huge subject!
Q33 John Grady: It is a huge subject. The first question is to Mr Luther. Do firms understand the counterparty risk of AI providers? By that I do not just mean solvency. Do these counterparties have idiosyncratic risks such as the departure of a core team that would be very damaging to ongoing provision of a product?
Amandeep Luther: I will answer your question in two points. First, around whether the sector has a handle on those firms and, secondly, how we reassure ourselves of that fact. The answer to the first part not surprisingly is yes: we do deem that our members have a very strong handle on the third-party providers. The reason for that is, again—from a technology perspective, irrespective of the technology provider, whether AI or otherwise—there are very strict regulations around the use of any kind of third-party technology vendor.
That involves significant due diligence on that vendor, significant contractual arrangements in place with the vendor to ensure elements such as transparency around data, transparency on how models are trained, transparency around their operation, their hiring processes and their solvency to an extent as well. These are contractually bound terms.
To your point about whether the departure of a core team in a vendor could jeopardise the operation within the bank, again that is a core part of that process. The resiliency of the vendors is again a core part of that due diligence process undertaken before engaging with any vendor. That process also has a contextual element to it. If you are using a vendor for a very small, minor part of the business that tolerance may be lower than if you are using that vendor for a very critical part, a very critical, serious part of the business in which case you would take a very different view of it.
Q34 John Grady: Just very quickly, Ms Mackintosh and Mr Otudeko, is that similar from your perspective and does it reflect your understanding?
Jana Mackintosh: Yes, certainly. When it comes to third-party management, and especially critical third parties, there is a lot of ongoing work within the sector to make sure that the contractual relationships fully embed and consider the risks associated with the introduction of new technology. I do agree with that. It does extend across other critical third parties and the third-party suppliers we contract with.
David Otudeko: I completely agree. Mr Woods mentioned the critical third-party regime that the regulators have recently introduced. They have also recently introduced operational resilience requirements more broadly. That would cover third parties. There have been long-standing requirements around outsourcing and the need to conduct due diligence, as colleagues have mentioned, on third parties in the sector.
It is important to highlight that, while we all think that the current regulatory environment works for the risks of now, that is no substitute for vigilance and proper risk management within firms, in terms of their ability to identify risks as they evolve and to have emerging-risk frameworks so they can see things coming down the track before they hit; that remains critical. It might be all fine now, but that is no substitute for maintaining vigilance on AI as it evolves.
Q35 Chair: Are risk officers in banks or financial institutions capable of monitoring this risk? I am not being rude about them, but it is a very different skill set.
David Otudeko: It really is. The way I think about it is your traditional risk management process does not change off the back of AI risks. The elements of the process that are altered are how you identify those risks and how you mitigate those risks. For those two elements within the process, you need specific skills and to refresh your skills regularly to be able to identify those risks appropriately and identify the appropriate mitigants, which would be a combination of technical knowledge—someone who knows the IT and AI very well and can let you know about it—versus someone who knows the business very well and the applicability of those mitigants within the business.
I would say that those two elements of the risk management process are important to get right but, to your point, it is important also to ensure that you have the right skills to identify those risks. For that you need to constantly look at your skills base within any organisation. Fundamentally, I do not think the risk management process changes, and that gives me a little bit of comfort in that we do not have to come up with a fundamentally different process for how we manage risks generally.
Q36 John Grady: When it comes to risks, they ultimately sit with the board of directors and the senior managers of the firm. They are the ones who are legally liable and responsible. A lot of these models are black boxes and very complicated. How confident are you that the directors and senior managers of the insurance firms in the UK, for example, really understand the limitations of the models and the risks involved such as opaque decision making and hallucinations?
David Otudeko: There are three elements to that. First, and I do not think anyone has mentioned the Senior Managers and Certification Regime so far: there is individual accountability and responsibility on certain individuals within a firm’s leadership to be able to identify and manage these risks properly—I am talking about the SMF or the chief risk officer. It is essentially on their head if they do not do it properly, so they have quite a big incentive to make sure that they understand these risks before they go ahead into adopting AI. What was the second part of the question?
Q37 John Grady: How confident are you that boards and senior managers, boards of directors in particular, in firms understand this stuff?
David Otudeko: It was about the explainability and the fact that some of these models are quite opaque. This again speaks to the regulation around model risk management that the Prudential Regulation Authority is introducing for banks and has also pointed the insurance sector towards: before you adopt these technologies, you need to understand how they work.
Q38 John Grady: I know, and you are sort of answering the question, but gently I guess the question is: at present, pending the PRA’s work, do directors and senior managers understand what they are using in this very advancing, complex, technological world? I am humble enough to say that I do not fully understand all this stuff. Do directors understand it?
David Otudeko: I confidently say that for the technology that has been adopted now, yes. For technology that is emerging, such as generative AI, that probably points to the reason why firms are taking a softly-softly approach to adopting it: you need to learn and understand the risks fully and be able to explain that to quite a few different stakeholders before you adopt such technologies. Again, I am not going to sit here and say it is a one-and-done when it comes to learning about technology. You need to constantly refresh your knowledge and bring it up to a place where it is in keeping with the technology of the day. I would say yes.
Q39 John Grady: Then the other question I would ask is on the outputs from these models, which comes back to the theme that Mr Glen picked up on. If I am looking for insurance, how is the industry doing at testing the outputs of these models to ensure that they are giving fair and justifiable answers—for example, that the insurance premium for someone with a mental health condition is reasonable and sensible? How does the industry go about testing the outputs of these models to see that they are consistent with the consumer duty?
David Otudeko: Two things I would say. We have talked about AI, but we have seldom mentioned the fact that you are not going to get rid of human beings in the channel in terms of checking the outputs of these models. There will always be a human being at some point checking what a model gives as an output.
Secondly, I know I mentioned this earlier, back to model risk management. Historically in the insurance sector perhaps we thought about model risk particularly in the context of capital models and so on, but these are now very applicable and live when it comes to AI models. There is an element of good model risk management around model validation, which is you fundamentally checking the outputs of the models. If you have a good model risk management policy, validation of the outputs of the model is a key element of that. That really speaks to your point in terms of the results.
Q40 John Grady: If I wrote to the top 20 insurers in Britain and said, “Please can you provide confidentially”—because it would be confidential—“your model risk management policy?” would each of those policies pass muster?
David Otudeko: I would hope to God they would because Sam Woods wrote to the CEOs of the insurance sector in his 2024 business plan saying exactly that: “You need to pay attention to what we have asked banks to do on the model risk management side and essentially get your houses in order”. So, I would hope that they would, yes.
Q41 Lola McEvoy: To follow very quickly on that point, because the Chair is going to tell me off, it could be that if there is a load of AI firms watching this they are thinking, “We could have robot traders. We could have more technology but at the moment we do not have anyone to translate it to the people who would be responsible for taking the risk on behalf of the consumers”. So in a nutshell, as far as I am concerned for constituents in Darlington, that means that the regulation is working well. Would you agree?
David Otudeko: I would go as far as saying that. I would say do not take on technology that you do not understand, so yes.
Q42 Chair: Jana Mackintosh, you are nodding?
Jana Mackintosh: Yes.
Q43 Lola McEvoy: So cybersecurity: another fun topic for us to dip our toes into. Maybe for Ms Mackintosh, how much has AI boosted the ability and capacity of cybercriminals?
Jana Mackintosh: The technology is available as much to us to benefit from and use as it is to those who want to use it in the wrong way. Within cybersecurity and operational risk management, again, when it comes to financial services we have great frameworks and regulations to help us think that through, especially over the last five years. There has been a big focus on operational resilience, with the new operational resilience policies that came into force this year.
Firms have taken additional steps to make sure that they understand the processes, the vulnerabilities and the impact on their businesses and consumers, and they have put mitigants in place to address those. The introduction of AI again amplifies some of those risks that you can think of. If you think about cybersecurity in particular in this context there are a couple of different angles to consider.
One is the vulnerabilities introduced into the banking sector when we use AI and the systems that it is embedded in. Like any operational matter, it is about making sure that those systems are safe and secure alongside the provision of services across the bank, so making sure vulnerabilities are not introduced when we adopt the technologies is important when it comes to cyber. That is one area.
The second area is around being able to deploy the technology in a proactive, offensive way to make sure that you can have greater information and threat intelligence available to defend against those threats coming in. As you have rightly said, the same technologies will be available to those that would want to use them in the wrong way.
Q44 Lola McEvoy: We could use this new technology to protect ourselves against the increased cybersecurity attacks from AI in the first place?
Jana Mackintosh: Yes.
Q45 Lola McEvoy: That is interesting. Can you tell us about a few more scenarios for a bit of colour for the Committee? What sort of vulnerabilities are we talking about? Give us some scenarios of where we could be at greater risk from cybersecurity threats.
Jana Mackintosh: The use of AI does rely on a large amount of data, so storing that data and using it within systems makes it attractive to criminals who want to extract that data and potentially use it in different ways. There is a vulnerability in terms of that information. Security is important when it comes to the introduction of AI into those kinds of systems, to make sure that they adhere to the same level of scrutiny and security that is needed.
Vulnerabilities in systems when it comes to AI could be in terms of the introduction of new products and services. You may not necessarily yet have the same level of market experience and so criminals may use that to enter the systems to exploit new products and services or offerings, which would increase fraud or risk within the system. That is important to understand. Those are two examples.
Q46 Lola McEvoy: Mr Luther, did you have anything to add on that point—cybersecurity and what effects you can envisage generative AI specifically having on cybersecurity threats?
Amandeep Luther: Again, there are probably two sides to it. One is that AI unquestionably makes it easier for bad actors to perform their role, but AI also certainly gives advantages in combating those challenges. AI is, to some extent, both the perpetrator and the solution.
What we have seen in the wholesale financial markets is probably a slightly different lens to the consumer world, in that to commit an offence and make an attack in the wholesale markets typically would involve breaking through several existing security perimeters within a large investment bank, which typically is a harder thing to do than breaking through to an individual consumer. It is an active area that our members are currently looking into.
It is something that I am very much aware our members are putting a lot of resources into looking into, but it is not an area where we have seen a huge increase in attacks as a result of generative AI or generative AI-enabled attacks at this stage. What I would say, however, is that that is not to say that the industry is complacent. It is very clear that this is an area that could be a risk to the sector.
Another area that is also being investigated is around to what extent the new AI models themselves could be compromised by bad threat actors. We are not just talking about somebody using generative AI and creating a deep fake profile of themselves to fool a system but to what extent can the systems themselves be fooled.
There is a lot of work going into that, both from the LLM manufacturers, around guard rails and so on, and in research being done within academia and within firms to understand the models and how we can ensure that the data within the model is ring-fenced, such that a bad actor is not able to exploit the model and gain access to information that they should not.
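To illustrate the kind of ring-fencing being described, here is a minimal sketch in which a caller's entitlements limit what can reach a model and the output is scanned before release; the entitlement tiers, documents and pattern are all hypothetical.

```python
# Hypothetical guard rails: entitlement filtering in, leak scanning out.
import re

ENTITLEMENTS = {"analyst-1": {"public", "research"}}   # who may see what
DOCUMENTS = [
    {"text": "Q3 market commentary ...", "tier": "public"},
    {"text": "Client account 12-34-56 ...", "tier": "restricted"},
]
LEAK_PATTERN = re.compile(r"\b\d{2}-\d{2}-\d{2}\b")    # e.g. account codes

def build_context(user: str) -> list[str]:
    """Only documents the caller is entitled to can reach the model."""
    allowed = ENTITLEMENTS.get(user, set())
    return [d["text"] for d in DOCUMENTS if d["tier"] in allowed]

def release(model_output: str) -> str:
    """Scan the model's output before it leaves the ring fence."""
    if LEAK_PATTERN.search(model_output):
        raise ValueError("output blocked: restricted data detected")
    return model_output

context = build_context("analyst-1")  # restricted document never reaches the model
assert all("12-34-56" not in c for c in context)
print(release("Summary of Q3 commentary."))  # passes the outbound scan
```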
Q47 Lola McEvoy: Is cyber risk arguably AI-neutral, because while the risk goes up so does the technology to mitigate against the risk? The issue that we have is whether the investment is going into risk-protecting and mitigating technologies at the same level as it is going into risk-creating technologies. Are you confident then, in your sector, that the development of technology and skills on the risk-protection side is happening?
Amandeep Luther: Absolutely, yes.
Q48 Chair: Ms Mackintosh is also nodding.
Amandeep Luther: Yes. Without question it is something that firms are looking at very actively.
Q49 Chair: Before I bring Mr Dean in, it seems that we have a bit of an arms race here. We have the hackers using their generative AI to get into the system. You have your system trying to stop it effectively. Is there a risk that you could lose that arms race?
Jana Mackintosh: It is not a new risk. I think that is a risk that we face every day. Again, the introduction of AI may bring with it speed or slightly new risks to consider but ultimately that is an arms race we have had for a while.
Q50 Chair: Are you going to win it?
Jana Mackintosh: Yes.
Chair: You have to sit there and tell us that, I suppose. This is partly why we are looking at this because it has a lot of alarming aspects to it as well as a lot of benefits. Mr Dean.
Q51 Bobby Dean: The discussion before takes my mind back to a discussion we had with the FCA last week. One of the things they highlighted was the lack of co-operation from some of the tech giants, Meta in particular. It takes me back a bit further in our discussions today, about third-party providers. I know you have said there is nothing new about having critical third-party providers, but I guess most of the time they were probably ones that were within the regulatory reach of government and international co-operation.
We now have tech companies that are so big and so powerful that they do not necessarily have to be that responsive to government demands. If you look at their ruthless acquisition strategies, are you concerned at all—perhaps this is a question for Jana Mackintosh from a British industry perspective—about the concentration of these technologies in a handful of companies and their resistance to compliance with regulation?
Jana Mackintosh: There are two things I would say. First, having greater involvement of the tech companies and the social media platforms across financial services—so that we can manage risk, improve fraud protection and protect our customers—is absolutely essential. We have been talking about that for quite some time. As you move into greater digitalisation, greater digital access and greater digital use of payments and finance, the role played by the firms that supply and form part of that value chain as it evolves is essential. I think there is always more that we can do to make sure that they can be held accountable and comply with the regulations that manage the risk that they introduce.
In terms of concentration risk, a lot of the AI models are open source in the way that they are deployed so that means that firms can use those and deploy them within their closed environments. Many of the different deployments and models out there allow firms to then enhance and own those on their own platforms. You can in part mitigate the risk because of the open-source nature of the technology. When it comes to critical mass adoption that you will see, scalability of some of the models and so forth, the way that you manage your critical third parties will become increasingly important. That will have to evolve as we see greater adoption of AI.
Q52 Bobby Dean: I accept that is where the technology is now. I guess that is why I mentioned a ruthless acquisition strategy. We are not just talking about social media. The likes of Alphabet, Meta, and Apple are all going to want to buy whatever they anticipate is going to be the best technology in this field and I wonder if that is a specific worry of yours or if you think that is just a normal worry that you have always had.
Jana Mackintosh: It is probably a normal worry. I think the way that Big Tech and other technology firms participate across financial services brings the same considerations and risks.
David Otudeko: On the concentration risk issue—not to put words into the PRA’s mouth in terms of why it does things—that is the very reason we had the critical third parties designation regime: so that you are able to identify that these third parties are critical, where there is a significant amount of concentration and where they wield quite a bit of power, and bring them under the purview of the regulators in terms of how they operate. That is important and that is what the whole designation was about.
Fundamentally for me, if I was an insurer looking to take an AI system from someone, part of my own due diligence would be about making sure there is not a significant concentration there where, if that party fell over, I would be in trouble. There is an element of safety from regulation. There is also an element of safety from the processes and steps insurers take before they make decisions about outsourcing or doing business with third parties.
Amandeep Luther: If I may add to that, there are two parts. First, from the wholesale markets’ perspective, we have very strict processes that govern how we can deploy AI. There is usually a risk committee that meets to discuss any proposed use of AI, and the concentration risk and the vendor risk are elements that go into that conversation. What that means is that not every proposal gets approved by that committee; it can be rejected if it is deemed to be inappropriate or deficient in one of those areas.
What that means is that the model you describe may not be used in a critical activity because of risks such as those. It may get used in a peripheral activity where, if the connection to one vendor goes down or that one vendor gets compromised, it does not affect a critical part of the business. Therefore, the choice of vendor for a particular use case is a key decision in that assessment process.
To echo Ms Mackintosh’s point earlier on, there is an element of dilution of that concentration—if I can use the term loosely—and the open-source models change things to some extent. We are also seeing a decrease in the cost of producing these models, which means that more models will be produced.
There is certainly analysis that has been done in academia around using small language models rather than large language models, which increases the models available to choose from. Open-source models are also increasingly approaching parity in quality and output with the more commercial offerings, so when you add it all together you already have a dilution effect.
The last element, on top of that, is that the use of AI is not just dependent on the model itself; it is also dependent on how a firm chooses to use that model. You are taking a firm’s data and putting it into that model, performing some kind of analysis and getting an output. So the firm’s data that goes in is a factor, and the processes you are using it for also become a factor.
There is a natural dilution, so firm A and firm B could be using the exact same model but for different things, with different data and different outputs. From a concentration point of view, that is dilution again. That is not to say that this is a risk we should not keep an eye on, because it is certainly an area of concern to our members. We do not want to sleepwalk into a situation where we are affecting financial stability because everyone makes assumptions.
Q53 Lola McEvoy: On that point, is there a scenario where firms might buy technology in order not to use it, so that nobody else can use it—things like some of the models that are advancing at the moment? You talked about the open-source models; I am not sure they will be up for sale in the same way. What is your speculation on that for your sector?
Amandeep Luther: In terms of whether our members could consider buying technology so that—
Q54 Lola McEvoy: The advances are happening so quickly: whether you would buy a technology out so that nobody could use it, if it were going to undermine stability or the whole model’s use.
Amandeep Luther: That probably delves more into the commercial strategy of members. That probably exists in every aspect of their business. Whether a firm would choose to buy out a vendor just to take it off the market is difficult for me to comment on. It would be a commercial decision.
Lola McEvoy: I do think it would be difficult in this instance. Given that you have so many different models, and there are emerging models and greater competition, I think it would be difficult to make that strategy effective.
Q55 Yuan Yang: I would like to ask the panel about the interaction between AI and consumers. In particular, I am thinking about the impact on consumers of AI being used in ways that inform the decision to grant certain services, the price and conditions on which those services are granted, and the spectre of bias.
We have had submissions to the Committee from those who argue that in fact AI can be a tool for correcting and checking human biases, since we know that humans are also biased. We have also had many submissions from those who have done research into the ways in which AI could introduce biases that are more difficult to inspect because of the opacity of the black box: the AI models underlying them.
I would like to hear from each of you about the conversations that your members have had between each other and with your associations, and what main concerns they have raised when it comes to the introduction of bias through AI. I will start with Mr Otudeko.
David Otudeko: It is something my members are certainly aware of in terms of the ability of AI to amplify bias. That points to the importance of ensuring that the dataset on which the AI is making decisions is as complete and accurate as you can possibly get it.
I go back to something I said earlier around regulation. If AI is integrated into an insurer’s business model, it means holistically you have to comply with the consumer duty. It is not a case of, “I am using AI, so as a result I do not have to comply”. If it is built into your business model you have to comply, and that includes the outputs and the decisions you make as a result of using AI. It is not distinct from other elements of the decision making around how an insurer decides on price, value and so on. As we know, the price of insurance products is driven by quite a few other things aside from the use of AI.
Think about the introduction of AI: could that have an impact on policies long term? To consider that kind of question, you need to think about the cost, the amount of capital expenditure you need to put up to purchase AI in the first place. Then you need to think about the impact it will have on the individual elements of the insurance lifecycle before you even begin to think about the impact it can have on price. Fundamentally, if it is integrated into a business model you have to comply with the consumer duty, which means that your outcomes have to be those that benefit customers and do them no harm.
Jana Mackintosh: Mr Otudeko covered biases, but all the checks and balances will be essential in ensuring, as the technology evolves and there is greater reliance on and use of it across the board, that it does not create singular behaviour in the market. We still need to challenge the data and outputs in the way that we talked about before. As we rely on these models, and the models train further on the data, there is a risk that those biases are amplified, and also that we come to rely on AI to spot them, when we need to make sure we still have human oversight of the outputs being created.
Within financial services, we are incredibly customer focused in terms of the products and services that we provide. It is an extremely competitive market, and so you do need to get it right. The one thing that you never want to undermine is the trust of your customers when it comes to managing their money, keeping them safe and making them feel safe. The moment you lose that trust, it is virtually impossible to regain. There is a lot of caution and care that firms take when they deploy new technologies, in terms of how consumers will perceive them and how they will affect the relationship firms have with their customers.
You can work through all the benefits that come with that for consumers: the personalisation of services, greater access to services that they might otherwise not have had as vulnerable customers or otherwise, and potentially improved financial inclusion across the board. You do not want to allow the risks to limit the benefits that society and customers can receive, but again you need to make sure that we retain that trust.
The other thing I will say is that, like a lot of the technology being deployed across financial services—whether AI, quantum, open banking or some of the new forms of money and programmability—for consumers a lot of this happens behind the scenes. These are technologies that help improve operational activities or product offerings. Consumers want to see good outcomes. They want to see a product that serves their needs. They want to see a product that is competitive. They want to see a product that can address whatever need they have. That is what we sometimes need to get back to when we talk about the interaction of these technologies with consumers.
You deploy these technologies for a reason, and that reason in financial services is making sure that you can provide a better product to your customers.
Q56 Yuan Yang: Thank you, Ms Mackintosh. Can you speak directly to the conversations that your members are having about the risks of bias in AI? What concerns are they raising?
Jana Mackintosh: All the ones that we have mentioned. It is front of mind. It is a risk that is very well recognised, and so within all the processes, procedures and risk management that we deploy, it is about managing those biases and making sure that you can deploy further testing and quality assurance. All of those are risk mitigants that will help you address biases.
Q57 Yuan Yang: Thank you. Finally, Mr Luther?
Amandeep Luther: From the wholesale market perspective, there are use cases that AI is suited to and those that it is not suited to. When it comes to bias and hallucination, there are active concerns that we would have—again, referring to the processes I mentioned earlier—around assessing any use case for deployment within one of our member firms. That is an area that would be considered. Where hallucination may be a concern, a use case is unlikely to be approved in that scenario.
When it comes to bias, bias is slightly different, in that it can come from the underlying model, how the model has been trained and the data that goes into that model. It also depends on how you are going to be using that data. When it comes to the wholesale market, certainly from a trading point of view, biased data tends to yield bad trading algorithms. If you want to be trading, you cannot kid yourself by feeding bad data into your model so that you can trade in a certain way. It probably would not behave in the way that you would want it to anyway.
Biased data, and data selection more generally, is an active area where members are very careful—even pre-generative AI—when it comes to using or training any kind of model to perform any function within the wholesale markets specifically.
Q58 Yuan Yang: Thank you very much. Ms Mackintosh mentioned the behind-the-scenes nature of these new models. One of the concerns that academics expressed in their feedback to us was about the transparency of these models.
I think in particular of a 2019 case involving the Dutch tax collection authority, which was a huge scandal over the use of AI in trying to detect fraud. It was a scandal that had been going on for about six years before it was detected and made public by journalists. It had led to a huge amount of bias, in that families with certain protected characteristics were accused of fraud when in fact many of them were innocent. It led to suicides. It led to families breaking up and so on. There was a six-year gap between the start of that model being applied and that being revealed to the public.
Under the current regulatory regime, banks have to make many things transparent, confidentially, to regulators. Is there a need for an increased level of transparency, given the increased opacity of these models? If so, what kinds of transparency and data sharing do you think would help customers and regulators feel confident in these new models? I will start with Mr Otudeko again.
David Otudeko: I was thinking about the level of public disclosure insurers have to undertake as a given. There is a document called the “Solvency and Financial Condition Report” that insurers have to produce. It is typically on their websites, and in it they talk about their business models, their capital levels and so on. If I was adopting AI and it was going to materially change how I did business compared to before, I would put that in my SFCR, which is something that would be available to policyholders to read so they could understand how I planned to adopt AI going forward.
I am not a tech expert by any stretch of the imagination; I struggle to upgrade the iOS on my iPhone half the time. But I am aware that there are models out there that you can use to explain the outputs and how AI makes decisions and comes to conclusions. Again, it is a question of risk management for me. There is a question of disclosure, and I think there is an opportunity and a means by which you can do that via some of the public disclosures that insurers have to undertake. There is also a question, as I say, of proper risk management around being able to explain how the model reached its conclusion.
As part of the Financial Conduct Authority’s recent sprint on AI, I know that there was a bit of discussion about how you keep the consumer in the loop when insurers are taking on AI. It is an ongoing discussion, and it is clearly a valid one, where we need to reach a sensible conclusion on how we bring consumers along with us on the journey as we adopt AI and innovate from a technology perspective.
Q59 Yuan Yang: When it is difficult, as in most cases, for a human to explain how a model arrived at the output that it has arrived at, in those cases do you see an argument for sharing the underlying data, the underlying model itself and even source code with regulators, Mr Otudeko?
David Otudeko: I am pretty sure the regulator would ask for it, so almost out of necessity you would have to share that with the regulator. If I was a chief risk officer in a firm and something was going on—I mean, it is a fundamental rule of how you engage with the regulators that you tell them everything they would reasonably expect to know. If I was using AI in a major way, as a regulator I would expect to know that, so I would share it proactively.
Q60 Yuan Yang: Even before things started to go wrong, would you expect to declare your model and data at the beginning?
David Otudeko: I think you would have to. While the regulators are technology-agnostic and will not favour you taking one sort of technology over another, again if you were using something in a transformational way that impacts your business model you would have to explain that to the regulators.
Q61 Yuan Yang: Is there any disagreement on the panel with that view, Ms Mackintosh or Mr Luther?
Amandeep Luther: I would not disagree, but I would also clarify that this is already actively happening today. I have personally been involved in a situation where the regulator has walked in and asked the bank to explain how a certain model worked. In that situation, you are literally opening up the source code. An independent third party is also engaged, and it is the job of that independent third party to sit next to the engineer and go line by line through the code to understand how it works. The regulator has that power today to ask firms to be completely transparent about how any aspect of the business operates.
Q62 Yuan Yang: The models today—it seems that, in response to something happening or triggering an alert for a particular firm, the regulator goes in and looks. Do you see a move to a pre-warning system where the regulator has much more awareness of the models being used, as opposed to the current problem-solving approach of going in after an issue is detected?
Amandeep Luther: From the wholesale markets perspective, if I go back to the method of regulation that I mentioned earlier, part of that involves a compulsory annual audit of your algo trading platform. That has to be performed by an independent audit function. That is already in place today. Certainly, in the wholesale market you have to perform that function, and you have to be able to evidence that you have performed it. Again, it is not just concerned with one part of the platform; it is a multidisciplinary audit, done on a compulsory basis.
Q63 Dame Harriett Baldwin: This question is for Ms Mackintosh. This Committee has always thought that our constituents struggle to get financial advice—and I put the emphasis on advice—without paying quite a high price for a full personalised piece of work. Probably over 90% of our constituents have to rely on guidance. Do your members see a scope for artificial intelligence to bring down the cost or potentially have an artificial intelligence version of a financial adviser that would be able to help more of our constituents at an affordable price point?
Jana Mackintosh: In short, yes. Personalisation is one of the benefits that you can see from the deployment of the technology. Whereas before it may very well have been quite resource intensive, taken a very long time or only been possible for certain specialists, the careful and appropriate introduction of AI into some of those use cases can certainly enhance personalisation.
Q64 Dame Harriett Baldwin: Are any of your members planning to do that now that the regulator is looking at innovation in this area?
Jana Mackintosh: That is certainly one of the key benefits and outcomes they are exploring across a number of different use cases that they are looking at.
Q65 Dame Harriett Baldwin: Can anyone else elaborate on anything they have heard of in this area where artificial intelligence actually helps our constituents make wiser financial decisions?
David Otudeko: As Ms Mackintosh has touched on, the applicability of AI and the benefits it can offer there are clear. For me, it needs to be complemented by appropriate regulatory changes, so the work the FCA is doing on the advice and guidance boundary review is really important to make sure that those who are currently unable to get that kind of support and advice are able to take advantage of it. I see AI helping that process. It is something that I definitely think can be applied there, but it has to be coupled with regulatory changes that make it possible to actually get advice that is not financial advice in the typical sense.
Q66 Dame Harriett Baldwin: Anything to add, Mr Luther?
Amandeep Luther: On the wholesale market side—obviously the consumer side I will leave to Jana and David—it is a different kettle of fish when it comes to advising our customers. Typically, the conversations tend to be much more bespoke in nature and are difficult to automate. What we have seen is the use of AI to inform staff ahead of those conversations.
Some of our member firms are using AI to do things such as summarise a salesperson’s previous interactions with a client or previous conversations that may have been had around that area. That enables the salesperson to then have a much higher-quality conversation with the client, to ensure that we can better satisfy their needs. As a sector, we are not yet at the stage with the technology where we would feel comfortable deploying it fully autonomously to have a conversation directly with one of the corporate clients.
Q67 Bobby Dean: Following up on Dame Harriett Baldwin’s point, I was part of a roundtable of lots of fintech firms the other day and they are quite excited about the potential to offer personalised advice in an affordable way. One of their concerns was actually that some of the larger institutions in the sector are pressing the go-slow button because they are quite intimidated by these challenger brands emerging and want the opportunity to be able to catch up.
I heard in your answer, Jana Mackintosh, about how you feel that we need to go at a careful and appropriate pace. Is this something that you are aware of? Do you think they have a fair case when they say that the larger institutions are slowing down the challenger brands in the fintech sector?
Jana Mackintosh: I am not entirely sure that is a fair characterisation of the behaviour that you see from the larger financial institutions, as we have talked about today. Within financial services, and banking in particular, there is a very strong compliance culture, a very strong focus on risk management and a very strong focus on customer outcomes. The introduction of something like that, which will have a direct impact on the outcomes your customers see, expect and receive, will affect the relationship that customers have with the firm from a financial services point of view.
Back to the point about trust: the moment you break that trust, the moment you explore something that will make your customers feel uneasy, it will be of concern to them. I think caution needs to be taken in terms of how any risks are introduced through that. It is not necessarily being held back because of competitive pressures. I would say that it is being held back, linking back to the conversation that we are having, around introducing it in the right way, understanding the risks and—
Q68 Bobby Dean: If I could just challenge slightly, I feel that in much of our discussion so far today, the response from the panel in general has been, “We are coping well with the emerging new technology. Most of it is not new. The regulation is there. We are adapting. We are moving with the times.”
But when we come to this conversation specifically, it seems that most of the benefit is going to be to a new set of companies and it is, “We need to be really cautious. We need to be careful. We need to make sure that we do not harm consumers by going forward with it”. Can you understand why that might be confusing to the Committee?
Jana Mackintosh: I can see why you would challenge that perception. I do not see it contradicting previous conversations. A lot of the introduction of the technology in financial services happened through partnerships, and in many ways the smaller firms, the fintech firms—as in the conversation we had earlier—can develop the technology a lot faster. They are more nimble. They have the newer platforms and technologies to work on. The larger financial institutions have the resources, the capabilities, the risk management and the regulatory culture.
Those two things need to complement each other in the right way, and that does happen. We do see it happening. There are great partnerships where firms come together and provide those services. I also think that fintechs, big techs and non-traditionally regulated financial institutions have a different culture. They have a different risk appetite. Marrying those up in a responsible and measured way needs to be considered carefully.
Q69 Bobby Dean: I am going to move on to some of the concerns around privacy. There seems to be a bit of confusion in the industry now about how much GDPR, the consumer duty and the Equality Act interact with AI, versus it not being a problem at all. We have read that in some of the reports. Mr Luther, may we come to you first? What is your impression of how GDPR, the consumer duty and the Equality Act interact with the privacy concerns that might emerge from AI?
Amandeep Luther: I sound like a broken record on this. It is again a very key part of the appraisal process for any use of AI. I mentioned surveillance earlier, for example.
Any use of AI for surveillance would have to go through a privacy assessment by the firm to ensure that the data being collected is appropriate and is used only for its intended purpose. It is not possible for a firm to escape its obligations under GDPR simply because it is doing a different activity. Privacy is front and centre, and it is something that firms cannot really avoid complying with.
Q70 Bobby Dean: Do you think existing legislation is holding back the adoption of certain uses of this technology, from your impression so far?
Amandeep Luther: That is an interesting question: is it holding it back? I think it is fair to say that any element of regulation slows down innovation. That is a broad statement, but it applies here. The question is whether the regulation is appropriate and whether we are happy to slow down innovation for this particular reason. I think our members’ view would be that privacy is a very important element that we should not step aside from or ride roughshod over.
It is fair to say that privacy regulation is probably slowing down the adoption of AI to some extent, as it is technology in general. The question really should be whether we feel that is the right thing to do. Our members’ view would be that it is, because managing our clients’ money and assets is a serious obligation. Ensuring that things are done in a compliant manner, so that data is kept secure and private data is collected only where it needs to be and used for its intended purpose and that purpose only, is critical to our operation.
Q71 Bobby Dean: Then to you, Mr Otudeko. The scope for this in the insurance industry is quite huge. We have read about how you could tailor insurance policies down to areas that have more 20 mph roads, for instance, if you had access to that data, and all kinds of other characteristics of customers. Do you feel the existing regulations are holding back potential innovations in the insurance sector, potentially the ability to lower premiums for people if you were able to be more specific?
David Otudeko: No. I say that because I am actually a fan, as I mentioned earlier, of the almost technology-agnostic approaches that the regulators are taking to AI and innovation more broadly. For me, the crucial thing is proportionality: are they actually covering the right areas, and are we making sure that we understand what the requirements are, because they are a little bit fragmented.
You have the SM&CR coming from the Prudential Regulation Authority and the Financial Conduct Authority. You have GDPR requirements coming out of the ICO and so on. It is about making sure that insurers and adopters of AI understand the regulatory requirements across the piece. Therefore, I would not say that regulation is currently hindering innovation, no.
Q72 Bobby Dean: To take a specific example, do you think that insurance companies—health insurance companies, for example—should be able to see things like your Apple health data on your phone, in a safe, secure, anonymised way perhaps? Should they be able to tailor insurance policies down to that personalised level of data?
David Otudeko: Again, under GDPR there are strict requirements in terms of access, people being able to give the insurer access and actually consenting to that data being used.
Q73 Bobby Dean: It would have to be explicit permission you think?
David Otudeko: Yes.
Q74 Bobby Dean: If a technology was developed that allowed you to do that at scale, you think that would be a step too far?
David Otudeko: I wear one of those devices, so I would want to be asked to consent before someone used my personal health data for anything else. We also need to look at the opportunities here, because, to use my personal device as an example, if you see me doing a few more steps than other people exiting the room, which you should, it is because I am trying to hit my 10,000 steps. That is because I am aware of the health benefits of movement. There is an element of behaviour change for the insured, in terms of their own personal risks, that we need to look at as part of the opportunity that this wearable technology provides as well.
I go back to the protections under GDPR in terms of the data that is being used, the security around that data, which needs to be commensurate with the sensitivity of that data, and various requirements there. I know that the FCA is also doing a bit of work with industry to make sure we understand what is required of us as we use data, what regulation requires and so on. There is a lot of activity around that question. Again, fundamentally, I like the fact that the regulators are taking a technology-agnostic approach to AI and innovation in the sector.
Q75 Bobby Dean: In the car insurance industry, you could have a black box fitted to your car—and obviously the person consents to that—and that often ends up lowering their premiums. What if we were able to develop similar schemes in other areas of insurance, like health insurance, and maybe make sure that you take 10,000 steps a day to keep your insurance premiums low? Is that a world of technology that you would be comfortable with, provided that there was consent?
David Otudeko: I touched on this a little earlier. I think that there are a lot of things that impact the price and the premium paid for an insurance product, and a lot of that transcends technology. Purely from an economic standpoint, the cost at which I procure said technology is an important determinant of how I service the products and how I offer the products to the customer.
Currently my understanding is that those technologies are still quite expensive, and you need to use them in a substantial way across the insurance life cycle and reduce the cost of servicing an individual policy before you even start to consider the implications on premiums. Then fundamentally, even if all the stars align and you get to that place, it will be a decision for an insurer to make. While not speaking for the sector, what I can say is that if the opportunity presented itself, especially in the environment we find ourselves in, why would I not try to reduce—
Q76 Bobby Dean: Maybe in the future.
David Otudeko: Maybe in the future is the way to put that, yes.
Amandeep Luther: If I may add a little to that, on the point earlier on: does the regulation hinder innovation? The answer is that it may slow things down. There are additional hoops that you have to jump through.
You are still not prohibited entirely from engaging in a course of action, but there are hoops that you need to jump through to ensure that you satisfy certain regulations. Some of the scenarios that you described earlier on—without going into the nuances of them—may not be prohibited activities under the current regulation but, in order to execute them, you would need to ensure that you collected data in the right way and used it in the right way.
Jana Mackintosh: In the Bank of England survey that I highlighted, one of the considerations, especially for smaller firms thinking about developing some of these technologies, was the cost of that compliance, which ultimately impacts your business model and the commerciality of the investment that you want to make. That is another thing to consider in thinking about this.
We have many active conversations with the ICO and the FCA around how to evolve the guidance, which comes back to your question on regulation as well. Although we feel that financial services regulation is fit for purpose for deploying the technology, we also recognise that there will be areas where guidance needs to be developed, and this is certainly one area that we are looking at.
Q77 Chair: Thank you very much. Some quick fire questions from me. We have touched quite a bit on regulation, but I am interested to know if you think that any change is needed in the way that AI is regulated in your sector. Let me start with Mr Luther and work across.
Amandeep Luther: From our members’ perspective, the regulations we have covered are generally fit for purpose. What our members have found is that when it comes to some of the larger global banks, there is a patchwork of regulation in different global jurisdictions. To the extent that the UK is able to engage in those conversations to either create a harmonised approach or declare equivalence to certain financial regulations, that certainly would be helpful.
If you take the example of a huge regulation, the EU AI Act—which is currently in the process of being implemented—is a horizontal regulation and applies just as much to a global bank with a significant governance infrastructure in place as it does to a tech provider who is using this for the first time. To the extent that the Government can engage in these conversations and try to create that more stable and level playing field, that would be an approach welcomed by our members.
Q78 Chair: Ms Mackintosh, is there anything you think should change?
Jana Mackintosh: Well, I certainly agree that the financial services regulations are fit for purpose. We have a strong framework to deploy the technology in a responsible manner. We do support the pro-innovation approach.
Again, across the board, with government, with the Treasury and with regulators, we are having an active conversation about the balance between regulation and innovation when it comes to managing risk. Striking the right balance is absolutely essential to ensure that the benefits can be realised: the competitive environment, competitiveness, growth and productivity that we talked about before. We are very actively engaged in making sure that we can develop a regulatory framework and support that innovation in the right way.
Q79 Chair: Striking the right balance is an easy thing to say, but quite hard to deliver. You talked about using AI to understand customers, and it may have been Mr Luther who talked about the two ways this can happen: how firms use it, but also how and what data firms put into it. Those are all areas that are potentially ripe for regulation, and there has been all sorts of debate over many years, in this place and elsewhere, about the use of data. We just touched on it then. Do you have any view about where the balance should be?
Jana Mackintosh: Yes. That is obviously something that you need to keep considering. I do not think choosing that balance is a conversation you can have once and be done with. It needs to adjust. It needs to evolve.
Q80 Chair: Once something is out, once the horse has bolted, that is it. If firms start using certain personal data, or using that data to understand customers and, therefore, perhaps pitch products to them in different ways, change their insurance profile or use their credit information differently, you cannot pull it back once it is out there, can you? Do the regulators need to do anything different to stop anything bad happening?
Jana Mackintosh: The one thing that we are doing, which we certainly see in other jurisdictions as well, is very close collaboration. There needs to be a very close relationship between the regulators and industry so that we are all informed, we understand the evolution, we understand the risks and we are not waiting for them to crystallise—we are actively mitigating them. The collaborative structures that we see—the consortium that the Bank of England is setting up, the testing environment that the FCA is setting up—are all really good initiatives to make sure that we do this in lockstep.
Q81 Chair: It is interesting because, if we take a parallel model, HMRC now requires that if you are trying to sell a tax avoidance product, it gets involved early and makes everybody pay upfront the tax they might have to pay. It is trying to get ahead of the curve because of the problems that it has had with the marketing of tax avoidance products. Do you think there is anything that the regulator should do to try to get ahead, or is that even possible in such a fast-moving area of technology?
Jana Mackintosh: It comes back to the opening discussion we had about regulation and saying that the financial services regulation is fit for purpose. We do have those regulations already in place. I do not think there are gaps or absences that are of concern to anyone. That is certainly the current position and understanding of the application thereof.
There will be areas where greater guidance and greater regulatory involvement will be needed in the future as the technology evolves, but it is not a case of starting from scratch when it comes to the application of this. We do not think there are gaps in the regulations needed to manage those risks adequately across financial services.
Q82 Chair: Mr Otudeko?
David Otudeko: I would agree on the point about interoperability of domestic and international standards. I have talked about model risk management quite a bit in the room, so that is probably an area for insurers that the regulators might want to look at and I know it is already on their radar.
The bigger point about regulation, which colleagues touched on, is opportunity cost. Clearly, regulation is quite expensive to comply with, which is why the Government have launched the initiative to try to reduce the overall burden of regulation. The fact is the cost to comply with regulation could be money that could be deployed for innovation, whether in AI or in other areas of the insurance business model. That is the bigger thing to tackle from my perspective.
Q83 Chair: There is also cost in removing regulation, isn’t there?
David Otudeko: There is.
Chair: You don’t want to be swinging backwards and forwards.
David Otudeko: You have to get the balance right, and the point about stability is a really important one. Just to reference what you were saying earlier, the fact that AI is evolving so quickly means that, if you had a regulatory regime that tried to keep step with every evolution of AI, you would have a very unstable regime because it is moving very fast. As a result, having something that is principles-based and outcomes-focused is a priority for me.
Q84 Chair: Do the rest of you prefer principles-based rather than very prescriptive regulation? [Nods from Mr Luther and Ms Mackintosh.]
Ms Mackintosh, we touched on this a bit when I mentioned the tax avoidance parallel model, but in your evidence to us you were very clear that you were not in favour of the UK adopting the EU AI Act; I wondered why that was.
Jana Mackintosh: It is a very different model from the one that we are considering in the UK, in as much as it is across-the-board legislation that is quite prescriptive. The model that we have here, which is sectoral, allowing AI to be applied in use cases within a regulated sector, gives you the flexibility to manage the innovation and, potentially, its limitations. The flexibility it gives you in terms of being outcomes-based and principles-based means that, in a space where you have nascent technology that is evolving quickly, you can keep up and embrace the changes that are necessary.
If you introduced legislation like that in the EU, which sits across the board, we know that it takes time to amend. It takes time to evolve and can, in and of itself, place strains on sectors where the same application of the regulation would not necessarily work in the same way.
Q85 Chair: Do you have a problem with this—going back to the tax model I was highlighting? It may be that not everybody follows the EU AI Act, which, as I say, I did not follow until we were preparing for today. It requires companies to register high-risk AI models. Just to be clear, you are against that?
Jana Mackintosh: No. Understanding different levels of risk associated with different use cases is important. As we have talked about before, there are certain use cases that are low risk in which you can explore and implement those.
Q86 Chair: What about the ones that are high risk? Where there is a high-risk model being developed, are you absolutely dead set against that? Because that is what the model is in the EU AI Act.
Jana Mackintosh: No. The EU AI Act allows you to identify those as high-risk models and then put certain mitigants and controls around them. It is entirely appropriate for you to know which of the use cases and models are high risk. The classification of them, and the potential treatment or guidance where they introduce different risks, are all appropriate, I think. Being able to apply those high-risk use cases within financial services, within sector-specific regulation and AI adoption, is helpful, instead of the way that the EU AI Act addresses it across the board.
Q87 Chair: In summary, you are saying that we are doing the model by sector?
Jana Mackintosh: We are doing it by sector, yes.
Q88 Chair: Mr Luther, Mr Otudeko, anything you want to add about the EU AI Act?
Amandeep Luther: The EU AI Act is an area that our members are actively grappling with at the moment. Maybe I could take a moment to give some context on what the Act is about. The EU AI Act is a horizontal regulation that applies across the EU. I mentioned earlier that it applies to a bank as much as it would to a dating app or anything else using AI. It requires firms to classify any model into one of four categories: whether it is prohibited, or it is high, medium or low risk. If it is a prohibited activity, you cannot use it, full stop. That really covers things such as social scoring, behavioural and biometric data collection and things of that nature.
When it comes to financial services, the high-risk category at the moment, on the current interpretation, is only relevant to areas such as credit assessment, employment screening and algorithmic trading. It is a very broad category to define in that way, but in order to comply with the regulation firms would be forced to rank all their algorithms and put them into the relevant category. If you are deemed to be high risk, as you mentioned earlier on, that does require you to register that system in a database, obtain a CE mark for that algorithm and comply with a whole set of requirements around documentation, record keeping, transparency and so on.
While our firms are actively working to comply with that regulation, it is fair to say that principles-based regulation is something that we would be more in favour of. It tends to be more technology-agnostic and allows the regulation to scale as technology develops. The problem with risk-based regulation is that, as the risks change and the use cases change, the regulation needs to be updated, and that becomes a challenge.
The last point would be around financial services, as we have talked about at length today. There is significant governance already in place. The interaction between a horizontal regulation, which is very broad, and the very deep sectoral regulation that is already in place is causing challenges. That is an area that we are actively working on.
Q89 Chair: The pace at which it is changing makes it hard to change the rules around it. I understand that. David Otudeko, is there anything you want to add on the EU AI Act?
David Otudeko: I agree with everything that has been said. It is important also to remember that within the financial services regulators you have firm-level supervisors who are able to take a very firm-specific view of the risks a firm poses to the regulator’s objectives. It is important not to lose sight of that specificity, coupled with, as you say, principles-based regulation and the utility that could bring to the system.
Q90 Chair: Finally, we touched on it before: we have the FCA and other regulators in front of us regularly, but others have commented on the black box. The detail of it is hidden away because it is often commercially sensitive. Do you think the regulators will be able to keep up with looking into that and be fit for purpose? The best developers of AI, the generative AI, the most developed end of it, will be moving at a pace and altering at a pace. It is going to be quite hard to keep track of. Do you think that the regulators are up to it or, if they are not, will they be? I will start the other way round, with Mr Otudeko.
David Otudeko: Going back to Mr Woods’s evidence the other day, I think it is important for them to have the right skills to be able to keep up. It is also important for them to place a level of reliance on the skills within insurers themselves because, when it comes to things like AI, I do not think there is a monopoly on wisdom. It has to be a collaborative effort in terms of trying to mitigate the risks AI poses, so it is about them having the right skills. Are they up to the task? There are extremely clever people within our regulators, so I would say they definitely are, but they also need to realise—
Q91 Chair: They are watching you, Mr Otudeko.
David Otudeko: I am hoping they are. It makes my next interaction with them a lot easier. But, yes, they need to collaborate with the industry and recognise that within industry there is the capacity and capability, and people who have those skills, and it is about collaboration across both. We have talked about the amount of collaboration going on between industry and the regulators. There is a lot of it, so long may that continue.
Jana Mackintosh: We talked about skills development earlier and about the impact the technology has. Sometimes we think about AI, quantum, cloud or new forms of money like crypto and assume that it is just about the technology, but it is not. Across the board, whether you work in business development or sales, customer management, regulation, governance or board directorships, there is a level of training and understanding that we need to have of what the technology is, the risks it introduces and what that means for our specific roles. The same applies to the regulators. They have a strong skill set, and we are very lucky in the UK to have very strong, capable regulators that engage with us frequently and collaboratively.
From a policy point of view and a supervisory point of view, the regulators do not have to be engineers or fully understand the models to be able to understand the risks that they will introduce to customers and financial services. The same level of challenge, education and training that applies to all of us in the sector will similarly apply to the regulators.
Q92 Chair: I think we touched on this before, Mr Luther. Anything to add briefly to what you have heard?
Amandeep Luther: Probably just very minor points. I would echo what my colleagues have said. There are two elements to this. One is keeping the door open to industry and having these conversations, such as the session we are having today. The Bank of England has newly set up the AI consortium. The FCA has set up the AI lab and the series of events associated with it.
We have also seen significant upskilling in the FCA in particular. There has been a lot of hiring of very technical staff in the FCA to help them build up this function. I think that is the right way to go. If you want to regulate a function, you have to have the skills in house to do so. Certainly, what we have seen in the UK has been a move towards that, both in the open-door policy and in the internal upskilling of staff.
Chair: Thank you. We could go on forever, but we won’t. We are going to be seeing banks in the next couple of weeks, so we have lots of work to do in this area. I suspect it is going to be a running theme for Parliament.
Can I thank our witnesses very much indeed, from my left to right? That is Amandeep Luther, from AFME, Jana Mackintosh from UK Finance, and David Otudeko from the Association of British Insurers, the ABI. Thank you very much indeed.
The uncorrected transcript of this session will be on our website in the next couple of days and we may produce a report. We are yet to decide what we do with all the evidence that we are receiving. Thank you very much indeed.