
Justice and Home Affairs Committee

Corrected oral evidence: New technologies and the application of the law

Monday 19 October 2021

11 am

 


Members present: Baroness Hamwee (The Chair); Lord Blunkett; Baroness Chakrabarti; Lord Dholakia; Baroness Hallett; Lord Hunt of Wirral; Baroness Kennedy of The Shaws; Baroness Pidding; Baroness Sanderson of Welton; Baroness Shackleton of Belgravia.

Evidence Session No. 5              Heard in Public              Questions 68 - 82

 

Witnesses

I: Professor Sandra Wachter, Associate Professor and Senior Research Fellow, Oxford Internet Institute, University of Oxford; Dr Liam Owens, Founder and Chief Executive Officer, Semantics 21; Mr David Lewis, former Deputy Chief Constable of Dorset Police and former ethics lead, National Police Chiefs’ Council.

 

USE OF THE TRANSCRIPT

  1. This is a corrected transcript of evidence taken in public and webcast on www.parliamentlive.tv.


 

Examination of witnesses

Professor Sandra Wachter, Dr Liam Owens and Mr David Lewis.

Q68               The Chair: Good morning and thank you to our witnesses for coming to talk to us about new technologies and the application of the law. We have a number of questions for you and there will be a transcript, which you will have the chance to check after this morning’s session. As it is quite likely there will be more to say than we can fit into the time we have available, if there are other things you would like to put in as evidence in writing following this morning, we would be very grateful for that.

I will not ask each of you to introduce yourselves at length. We have Dr Liam Owens, founder and chief executive of Semantics 21; Professor Sandra Wachter from the Oxford Internet Institute, whom I heard the day before yesterday on the radio talking about some of the things I know we will be covering; and David Lewis, former deputy chief constable from Dorset and the former ethics lead on the NPCC.

Dr Owens, I know that you want to say something at the start to make your own position clear as somebody from industry.

Dr Liam Owens: My Lord Chair, thank you for inviting me to this inquiry and to provide evidence. It is an honour and a privilege to be here. Today, I provide evidence based on my expertise and experience as founder of Semantics 21, a company created to tackle the lack of innovative technologies used by policing against child sexual abuse material—CSAM. I was recognised with a Queen’s Award for Innovation and I hold a PhD in artificial intelligence and decision-making in digital forensics.

Semantics 21 is the only company in the world dedicated to victim identification. We believe that every victim matters. With over 15 years of experience working in digital forensics, it is clear that the challenges or blockers faced in the UK by both our investigators and expert tool providers aiming to deliver the best public protection are unique, overwhelming and unnecessary. The UK was once considered a leader in digital forensics, whereas now many senior UK officials have stated we are far behind countries such as Canada, slow to react to new trends and even slower to engage with new technologies such as AI. I believe we can summarise the three main blockers, and would welcome the opportunity to expand on—

The Chair: Can I ask you to pause there? I thought you were going to say something like, “I’m not speaking for the whole of the private sector”. Some of the things I think you were going to say will probably come up through questioning.

Dr Liam Owens: Yes. Would I be allowed to finish or would you prefer it if I stop?

The Chair: You will get plenty of opportunity to put the points.

Dr Liam Owens: Thank you.

Q69               The Chair: Let me start with a general question, and, if you like, I will come to you first. As regards procurement, when a public body is purchasing technology and there is little experience on the part of the purchaser, what would be an ideal process? What are the milestones? How can there be any equality of arms? Let me come to you first, Dr Owens, and then I will ask the other witnesses.

Dr Liam Owens: Of course. It is a difficult question. One of the big difficulties with procurement of artificial intelligence, especially in digital forensics, the area I work in, is that it is such a niche market in terms of the requirements around the digital forensics or the AI to be used. It really needs to be novel and specifically designed for the intended purpose. For instance, if we are looking for a classifier to try to automatically find CSA material, that works very differently from, say, a real-time AI watching and tracking people on CCTV. It is very difficult in that sense.

We also need to recognise that AI is still in its infancy. It is still niche and still needs a lot of training and a lot of work around it. The ideal way would be for a police force or police forces to highlight very clearly what they are looking to achieve and then for AI developers, if that is relevant—AI is not a solution to everything—to be able to work on a result, or something that can actually deliver that, rather than there just being a generic AI for every solution.

Professor Sandra Wachter: One of the things that was just said is very important. From the get-go we have to be very clear what we want technology to do. Technology can help us to make decision-making faster and cheaper. If the aim is just to replace human decision-making to find an answer to financial pressures, or to ensure that we get rid of a backlog, that is a different type of trajectory from hoping that the technology will help us to make better decisions. Then we have to have a discussion around what “better” actually means. Depending on where you fall on the spectrum, the milestones will be different, as will how you perform and validate the model you have.

Regardless of where you fall on the spectrum, there are always three issues that you will need to deal with. There will always be a data protection issue because you always have to work with data. An algorithm is useless without data. You always end up with biases in the dataset, because there is no such thing as unbiased data, and you always run into questions of explainability. Very often, the black boxes are quite hard to understand. Those milestones have to be addressed wherever you fall on the spectrum.

Having said that, a lot of people are fully aware of those challenges, but we have not fully grasped how to tackle them. My team and I have done some research in the past looking, for example, at the bias tests currently being proposed around the world. Most of them were developed in the US, for example, and we wrote a paper on bias preservation in machine learning where we showed that the majority of them—13 out of 20—do not live up to the standards of equality law in this country and the European Union. It is very important when those systems are being applied in our society that we test whether they live up to our standards.

The Chair: Is training for the purchaser something that a purchaser should think about before even getting into investigating what is available?

Professor Sandra Wachter: Yes, it is definitely important to train the person who will be deploying the system and to get outside advice from people, because you cannot expect a person who is making decisions, for example in the criminal justice sector, to have full understanding of all the laws, all the ethics, all the computer science and all the equality issues in our world. We need to draw on expertise from other people because we cannot expect one person to do all that.

David Lewis: I am no procurement expert, so I will not go into a lot of detail about procurement processes. My observation, having led a number of change programmes, is that nailing down the requirements at the beginning is the key element. The difficulty we often have is that, of course, if you are purchasing nationally, you have 43 to 50 different entities all disagreeing on the exact nature of the requirements, so a strong SRO (senior responsible owner) who can guide and lead that is absolutely vital.

The other observation I would make in this space particularly is that there needs to be, as part of that requirement, some testing with the public, or at least explicit acknowledgement of a public interest test, so that you are working it into your thinking: is this in the public interest? What will the public think about this? In what way is what we are doing proportionate? We need to think about bringing in independent advisers and expertise, as Professor Wachter said. We have lots of experience of doing that in different fields, but we need to mobilise it in policing generally, so that we bring in the use of ethics committees and supporting bodies, and properly test and check before we go on to procurement.

The Chair: Thank you.

Q70               Baroness Sanderson of Welton: What are the main hurdles to purchasing or selling new technologies? Are those hurdles necessary and are they proportionate? Dr Owens, I think you were heading that way, so could we go to you first?

Dr Liam Owens: I was, yes, thank you. We can summarise the three main blockers that commercial agencies, my company and I run into. The first is the unwillingness of our police forces to break the current status quo and embrace the change of new technologies to provide better protection for our public. There is a lack of incentive and support from the Home Office, particularly around victim identification and reducing backlogs, which we suffer with quite badly in this country. I believe also that the decision-making is either biased or unqualified at senior levels, which has resulted in tens of millions of pounds of taxpayers’ money spent poorly on solutions that are unfit for purpose or projects that have little value. There is unwillingness to change despite failure.

We certainly find day to day that many of our forces are drowning in backlogs. It is almost a case of better the devil they know: “This is how we have always done it and this is how a lot of our forces do it. We will carry on doing it that way, because as long as everyone else is drowning with us, we’re not below”. We appreciate that there are many standout forces that try to take that on head first, and we are very lucky to work with them, but I think the biggest thing is breaking the status quo and engaging with new technologies.

Baroness Sanderson of Welton: Do you mean that the decision-making is biased from the separate police forces? You mentioned the Home Office. Where do you see this bias and in what form is it coming in?

Dr Liam Owens: I would not like to point a finger.

Baroness Sanderson of Welton: No, I mean generally.

Dr Liam Owens: Everyone has the best intentions—I would like to make that absolutely clear—especially in the area we work in, to want to stop it, but there are so many overwhelming factors that stop police forces from taking new innovations such as the ones we bring to the table. I do not know whether that is poor decision-making or unsure decision-making. My expert colleagues on my left mentioned some of the reasons behind those. There is a complete lack of support or incentivisation to do it.

We could compare ourselves with Canada. We work very closely with almost every force in Canada, and they brought in, as some of the panel may know, the Jordan decision, which meant they have 12 months from seizure of a device through to gaining a guilty verdict in court. If we brought that in here, it would be disastrous for our courts because more often than not there are 12, 18 or 24-month backlogs in our forces.

In Canada, they engage with new technology, such as ours. Of course, we are not the only solution, but we may be part of it. They took the chance and engaged with new technology and now they have done it. Forces here in the UK—we spoke with many of them before we joined—have gone from a 14-month backlog down to 10 weeks of backlog through using new technologies, including our own. It is about taking that leap and having support from relevant people to allow them to take that leap, and encouraging them to do so.

David Lewis: The difficulty with this market is that it is simultaneously not a big market and very complex. That probably militates against new entrants and favours existing providers. There is also a difficulty in that many senior officers are not particularly confident in this area and, therefore, do not necessarily act in it with the sort of dynamism and confidence that I think Dr Owens is alluding to. We have some difficulties there, and I would certainly advocate training for senior officers so that they become more confident and literate in this space.

It is tricky, because you need procurement processes, and you need some regulation and some healthy cholesterol in the bureaucracy to make sure that the processes work. Suppliers are used to dealing with the public sector in a structured way, but the lack of coordination and complexity is problematic. I think possibly the Police Digital Service will offer us an opportunity in procurement and setting up some frameworks that will work successfully for entrants such as Dr Owens’s business.

Professor Sandra Wachter: It is a very interesting and important question, because it asks whether we can still apply the laws in the same way as we used to. In my experience what is happening is that technology is completely disrupting the law, because the laws that we have, for example data protection law and non-discrimination law, were designed to keep humans in check, not algorithms. The way algorithms operate, what motivates them, how they group people, how they decide about people is so fundamentally different from how a human does it that you cannot easily translate one law to a technology and hope it will have the same result.

As an example, non-discrimination law is based on the idea that you should not treat somebody unfairly based on a protected attribute that they possess. The thing that algorithms are very good at is finding correlations between neutral data and protected attributes. That could be very interesting. You might say that an algorithm found that people who like the colour red are very creative. What a great finding. It could also find out that people who like the colour red are more likely to be gay, and all of a sudden you have created a proxy that is very unintuitive. A human would never think the colour red correlates with sexual orientation. You put that in an algorithm and all of a sudden you disadvantage people without being aware of it.
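Note: the sketch below is purely illustrative and does not form part of the witness’s evidence. It shows, with invented data and invented variable names, the proxy effect described above: a model that is never shown the protected attribute can still score the two groups differently, because an apparently neutral feature (here, “likes the colour red”) happens to correlate with that attribute.

```python
# Illustrative sketch only: a proxy feature reproduces group disparity even
# though the model never sees the protected attribute. All data is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected attribute (never given to the model).
protected = rng.integers(0, 2, n)

# An apparently neutral feature that happens to correlate with the attribute.
likes_red = (rng.random(n) < np.where(protected == 1, 0.8, 0.2)).astype(int)

# Historical outcome labels that already disadvantage one group.
label = (rng.random(n) < np.where(protected == 1, 0.6, 0.3)).astype(int)

# Train only on the "neutral" feature.
model = LogisticRegression().fit(likes_red.reshape(-1, 1), label)
scores = model.predict_proba(likes_red.reshape(-1, 1))[:, 1]

# The model was never told the protected attribute, yet its scores differ by group.
print("mean score, group 0:", scores[protected == 0].mean().round(3))
print("mean score, group 1:", scores[protected == 1].mean().round(3))
```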

The thing that you need to do is test, test, test. That is really crucial. As I said, most tests that are out there do not live up to the standards that we have in the UK and the European Union. Again, my team and I have developed a bias test that is now being used by Amazon that allows you to test for those types of biases: figuring out if something is wrong with the data, going in, tweaking it and getting better decision-making. But you need to do something. If you do not, it will just discriminate against people without you being aware.
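Note: again purely illustrative and not part of the evidence, and not the specific test the witness refers to. The sketch below only shows the general shape of a simple group-disparity check on model outputs, with invented data; real bias tests are more involved and have to be measured against the applicable equality-law standard, which is the witness’s point.

```python
# Illustrative sketch only: a basic group-disparity check on decisions.
# This is NOT the test referred to in the evidence; data and names are invented.
import pandas as pd

# Hypothetical decisions (1 = favourable outcome) alongside a protected group label.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "selected": [1,   0,   1,   0,   0,   1,   0,   0],
})

# Rate of favourable outcomes per group, and the gap between the groups.
rates = df.groupby("group")["selected"].mean()
disparity = rates.max() - rates.min()

print(rates)
print("disparity between groups:", round(disparity, 3))
# A large gap is a prompt to go in, investigate and tweak, not proof of
# unlawful bias; the legal standard it is tested against matters.
```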

The second problem is that data protection law was created to keep nosy humans in check, not to learn our secrets. The way an algorithm learns our secrets is very different. Data protection law is interested in not singling out people, not figuring out who you are, not figuring out that I am Sandra Wachter, who lives in Oxford and has a dog. That is the type of thing that is protected. Algorithms can learn everything about you without knowing who you are. They are not interested in you as a person; they are interested in how you look in relation to other people, so I learn about you by looking at other people. I do not need personal data. I do not need to identify you.

Data protection law is only concerned with personal data, but the harm is the same. When you purchase data technology, the law might tell you that everything is fine from a data protection perspective, but in reality it is not, because technology is very different from humans. To keep that in mind, to make a good risk assessment based on data protection and equality, you need different people working together to assess what is at stake.

Baroness Sanderson of Welton: Thank you. I have a very quick follow-up question. You briefly mentioned this when you said that the best technologies are those tailored to the specific problem that you are trying to address on both sides, purchaser and seller. One of our experts said that the development of new technology tends to be driven by the availability of the data rather than the problem that needs to be addressed. Do you think that is happening too much? You have both mentioned the best technology, but how much is the new technology being driven by the solutions we need rather than the data that is available?

Dr Liam Owens: I do not quite agree with that, if I am honest with you. Of course, we have to have the data to be able to train, but there are other ways to create AI that do not rely purely on data. It can look at patterns as well. Of course, I appreciate that patterns come from data, but I believe what we have shown is that we can create what is required by our end-users. We work very closely with people to create what they require. We do not create something and try to sell it in. We work with them to work out what they need.

The other thing I may not have made clear—apologies—is that there is a difference between reactive and proactive AI. Proactive AI is where the live CCTV streams, as I walked through the House of Commons, try to recognise who I am with facial identification, whereas almost every force we work with in digital forensics is reactive. Something has already happened and we have the data offline. We are not spreading it on the internet or trying to find live people on the internet. We are talking about filtration of data.

There are definitely some companies that are developing AI based on the data they have, and I appreciate that data is probably the biggest challenge we have as experts. However, for a lot of the work we are doing reactively, we are talking about filtration. The key point is that I do not believe AI is at the point where it can replace a human. I do not believe it is advanced enough or unbiased enough to be able to make a decision as an investigator would. We always say—all our staff everywhere—that AI is a filtration mechanism. It is a way of trying to find the key intelligence first. We are not saying, “Do not look at any of that”. We are saying, “Please look at this first, because that’s where your victim may be. Rescue the victim today, not in six weeks’ time”, for instance.

Baroness Sanderson of Welton: That is helpful, thank you.

The Chair: Baroness Hallett wants to come in and I have questions on the contractual arrangements. Lord Blunkett, would you like to go first?

Q71               Lord Blunkett: I have a couple of quick observations. First, we have been bedevilled by presumptions since the dawn of history, including crime writers who seem to believe that if someone’s eyes flick left they are lying. We build these things into our presumptions.

The second point is that the Home Office, from almost 20 years ago when I was there, is bedevilled by not having the expertise, knowledge or confidence to be able to deal with these issues, even with the police national computer. Good luck with the Police Digital Service.

The follow-through on that, which the Chair has referred to, is partly the fear of who will be held to account for failure and, secondly, the lack of knowledge of who will be held to account. With technology, either the misuse of it or the failure of it is something we are particularly interested in. The follow-through with product liability is who will be held to account, and for how long, for this massively changing field where things happen, as you described earlier, with good faith and turn out to be entirely wrong. Unless we can get that right, there will be a block on using the best technology because of the fear of it being very badly misused. Could the three of you have a crack at that?

The Chair: No pressure.

Professor Sandra Wachter: I am happy to start. It is a very good question, because the system will fail. It is not a question of if but of when. It is very important to figure out who will be held accountable, but to know that you need to know when the system has actually failed. There needs to be a good understanding of, “How can I evaluate if the system is working as intended?”

A lot of people, when they sell their goods, talk about accuracy rates and tell you that it is accurate and that for 80% or 90% of the time it is working as intended so you can trust it; it is not failing. When people say that, we really have to question what they mean by accuracy and how they arrived at that conclusion. The idea of accuracy stems from how the technology works in itself. Machine learning and AI, regardless of how it is being used, works in the same way: it looks at the past and tries to predict the future.

That makes sense, so in our setting we believe we have some ground truths about the world. We have data on who is being stopped and searched, who is being arrested, who is being charged, who is being sent to prison, and how long they are sent to prison for. We have ground truths about the world—at least that is the idea—so we have some estimation of what the future will look like. If somebody else comes into the system, we compare them to previous offenders and we judge them in the same way. We make claims about accuracy.

That would be fine if we are very happy with the way we have made decisions in the past but, unfortunately, explicit and implicit bias creeps in at various stages of the process. You are more likely to be stopped and searched when you are a person of colour than when you are white. You are more likely to be sent to prison when you are a person of colour than when you are white. You are more likely to have fewer resources available to get you off if you have been charged. We cannot talk about ground truth so much, and we have to be very critical when people say that it is working at a high accuracy rate.

The second point has to do with technology itself; you have no idea if it is working as intended, because you do not have access to the counterfactual universe. You only have access to the data of the people that you release on parole, not the ones you actually detain. You do not know if you made the right decision and you will never know. This is not to say that you cannot make claims about accuracy. It just means that you have to be very critical when people throw around statistics from a technical perspective, because there are very clear limitations. Secondly, just from a social perspective, it is incredibly hard to predict whether somebody will reoffend or will commit a crime. It is inherently difficult to predict to begin with. Whenever people say, “It is reliable, it is trustworthy, it will not fail and it has a high accuracy rate”, I would caution and make people reflect on those notions.
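Note: the toy calculation below is purely illustrative and not part of the evidence. It makes the counterfactual point concrete: a risk score that carries no signal at all can still look roughly 70% “accurate” if accuracy is measured only on the people who were released, because the outcomes of those detained are never observed. All figures and names are invented.

```python
# Illustrative sketch only: why accuracy measured on observed outcomes can
# mislead when outcomes exist only for one side of past decisions.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

risk_score = rng.random(n)           # a model's predicted risk (pure noise here)
reoffends = rng.random(n) < 0.3      # hypothetical ground truth, 30% base rate

pred_reoffend = risk_score >= 0.5    # model flags the higher-scoring half
released = risk_score < 0.5          # past practice: only low scorers released

# Accuracy on the cases whose outcomes we can actually observe (released)...
acc_observed = (pred_reoffend[released] == reoffends[released]).mean()
# ...versus accuracy over everyone, which the real world never reveals.
acc_everyone = (pred_reoffend == reoffends).mean()

print(f"accuracy on observed cases: {acc_observed:.2f}")   # about 0.70
print(f"accuracy on all cases:      {acc_everyone:.2f}")   # about 0.50
# The score has no real signal, yet it looks fairly accurate on the data we
# can see, simply because most released people do not reoffend.
```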

David Lewis: There is a lot in that. Part of the problem is the lack of nuance in the public debate on the use of technology and artificial intelligence. We all seem to think that it is a panacea and it will deliver us something that it may not necessarily be able to. There needs to be some acceptance of that.

Having said that, you need really clear boundaries in your contractual arrangements. The answer to the question, “Who should be accountable for the misuse or failure of technology, and how long should the relationship between purchasers and vendors last?” is, “How long is a piece of string?” It depends on the individual piece of technology and the relationship between the purchaser and the vendor. One of the things we need to be seeking is long-lasting partnerships, even co-creative partnerships, in which you work as a police force, for example, with a trusted partner who at least shares your values, working in a collaborative partnership. That seems to me the way ahead, not a finger-pointing approach.

In short, thinking the question through, you are probably talking a maximum of four years for a contract that could then be renewable, a minimum of two, but you need to be able to incentivise people to come into the market, going back to some of our earlier conversation, and that relationship is the all-important part.

Dr Liam Owens: I think AI is the future—we all appreciate that—and we cannot be afraid of the wind. It is coming and we need to engage with it. We are still taking baby steps. It needs to be nurtured and to be watched closely, but we need to use it where we can to try to help ourselves. Ultimately, it needs to be human validated. We cannot just let AI go rogue. At this point, it should be for filtration and to help speed up processes, but it still needs an expert, a semantic being, a human being at the end of the process who ultimately makes the decision.

AI can make a recommendation, but a human should still be there. I am sure we can use a robot to fly a plane around the world, but we still have pilots in the cabins, because ultimately there still needs to be a human at the helm. I believe that is true for AI as well. AI should be used for filtration and to work alongside expert officers who bring semantic intelligence into the frame, but it should not replace them.

This may come up in a later question. We talk about a blame game. We will always get things wrong, but we learn from our mistakes. As a doctor that is what I was taught. We stand on the shoulders of giants to get to where we are today. We have to make mistakes to learn from them and we cannot be afraid to make mistakes. If we wait for AI to be perfect, it will never happen. We have to use what we have and learn from the mistakes we make and keep building forward. That is what we have done since the dawn of time, and that is how we should treat this new technology. We should encourage AI. It does not need to be a blame game, but in critical areas there has to be human validation.

The Chair: Is there an industry standard contract developing? Are there guarantees? Are there competitive tenders? Several of us around this table would naturally pick up a pen when a draft contract is put in front of us, but how open to negotiation are these things? Within the industry, as I say, are standard forms of contract being developed?

David Lewis: I would say there is a degree of expertise being developed on procurement in the industry. You will find that large parts of policing are organised into regional procurement hubs. The eastern region has one, as does the southwest. We would expect the Police Digital Service to undertake digital procurement, and to do that with a degree of expertise. Across blue light services there is blue light procurement. There is a range of procurement expertise developing. I imagine that the contracts will vary depending on the technology that is being applied, but as to whether there is a body of skill, yes, I would say there is.

The Chair: And on the side coming from the industry?

Dr Liam Owens: It is a difficult one, because—I keep coming back to this—AI is so new. You can have an AI where you run it on a case and the accuracy is wonderful, but you can run it on a completely separate case and the AI can get it all wrong. To establish a benchmark is very difficult.

The Chair: I was not thinking so much of a benchmark in the accuracy. It is unusual for a purchaser to say, “I want to buy something from you. Here are the terms on which I’m going to buy it”. It is usually the other way round. I was just wondering about that. This may be a completely irrelevant question.

Dr Liam Owens: Some forces come to us and say, “This is what we expect”. With other forces, we say, “This is what we can provide”. It depends on who we are working with across the world.

Q72               Baroness Hallett: I want to go back to the lack of national procurement, because I was really troubled by the idea of several bodies. Even if you combine them into a region, you are still talking about several bodies potentially reinventing the AI wheel.

One issue in particular that has concerned me for a long time, given my background as a judge, and as a barrister who prosecuted and defended in cases of alleged rape, is the examination of electronic devices. We all know that complainants feel that this is a great intrusion into their privacy and that it takes far too long, but we also know if we have defended in this kind of work, as I have done, that sometimes there can be very relevant material there.

The question is how you strike the balance. I would have thought that for all police forces in the country the balance is obvious. It has to be a full, effective and fair examination of the device from the point of view of the complainant and the point of view of the suspect. Why cannot all the police forces get together and agree on those requirements, so that we can develop an algorithm that can shorten the misery for both the complainant and the suspect?

Baroness Kennedy of The Shaws: Hear hear.

David Lewis: I probably ought to have a crack at that first. I think you have put your finger, Baroness Hallett, on the nub of the police governance problem. There are 43 individual police forces around the geography of the UK, plus various other law enforcement bodies, and very little coordination. Without straying too far on to one of my own private hobby horses, it has become somewhat worse over the last ten years or so with the advent of PCCs (police and crime commissioners), because they hold the purse strings in each force. The governance is not unified, and the Home Office has stepped away a little from policing—I am now talking personal opinions, so I should probably make that clear—since the advent of PCCs, and since Lord Blunkett’s day.

I think you are right in that that fragmentation of approach bedevils a number of elements of policing, so why should it be any different in procurement? Some of the things I have been talking about mean that people are starting to come together to procure together. The advent of a police digital strategy, a national strategy, and the advent of the Police Digital Service mean that we will see greater coordination. What you say is absolutely right. There should be a single standard; for example, the Forensic Capability Network and forensics generally are trying to develop ground truth data and apply uniform standards across the country. Those are all innovations that I think will be positive for the future and, hopefully, we will learn the lessons from the past that we talked about before.

Professor Sandra Wachter: I think you are absolutely right. One of the reasons why we are not there yet is that we do not even have clear standards of what it means to be a good data scientist. We are not even fully at the end of the discussion as to what we expect of a person creating an algorithm. We have very clear ideas of what a good lawyer or judge, doctor or psychologist looks like, but we have none of that for the people actually creating those systems. My colleague, Brent Mittelstadt, said that it is no surprise that we are going in this direction, because the ethical conscience has not been evoked yet and a lot of people are not aware of the responsibility they are currently bearing.

A lot of the discussion in the field is about having more standards for the people actually developing that. When we have a moral conscience for these people, it will be easier to think about contractual relationships when you are selling or providing technology to people or to Governments that is being used to deter people from prison, for example, because that is a big responsibility. We need to start even earlier than that and make people aware of the responsibility they have.

Baroness Kennedy of The Shaws: Gosh. I am saying gosh, because it seems to me extraordinary that there are no ethical standards for data scientists and that there is no ethical code by which you should practise, which would be the same sort of thing that lawyers, doctors and so on have to apply to their work, and that most scientists in fact apply to their work.

The Chair: We will come to that shortly.

David Lewis: To set the record straight slightly, there are ethical standards for everybody operating in policing, and a very clear code of ethics. The Forensic Capability Network has also developed its own, off the back of that, to make it much more bespoke for forensics—an ethical code for forensics. Ethical standards are very much applied to the way in which people operate.

Baroness Kennedy of The Shaws: You are talking about the police operating, as distinct from the scientists who are developing the codes and so on. That is the distinction we are making.

David Lewis: That is a reasonable distinction, although it is quite a heavily regulated sphere. The forensic science regulator lays out quite clearly, with an ethical underpinning, the standards that should be in place. There are some standards. It would be wrong to leave you with the impression that it is completely bereft of standards.

The Chair: Baroness Pidding has a question about regulation, so shall we go on to that?

Dr Liam Owens: Did you want me to answer the question as well?

Baroness Hallett: I was hoping for an answer from all three witnesses.

Dr Liam Owens: I could not agree with you more; there is no need for us to have the backlogs we have. I created Semantics 21 exactly to address that. There is no need for our courts system to have to wait for so many years. There is no need for victims to continue to be abused, or for justice not to be served. On the flip side, there is no need for people to be deemed potential paedophiles, for instance, when the evidence shows there is nothing there and we have made a mistake, yet all of a sudden they have lost their lives and potentially their family. I could not agree with you more.

Why can we not come together? That is something we are striving to achieve. We have multiple incentives to try to achieve it. Unfortunately, forensics needs to evolve. As forensics suppliers and as police forces using forensics, we need to break the status quo. We have not really moved on for ten years since I first went into forensics. We are still doing the same things. We are still not really achieving the aim.

We need to evolve and take the chance of engaging with AI and engaging with new technologies. Let us give it a go. If it does not work, no one should be held accountable. Everyone is scared of that, from what we are seeing. Everyone is scared to take a leap, say, in changing a piece of software. Many of the forces we work with have taken that leap and changed, and they said, “Wow, we’ve gone from a 14-month stack to a 10-month backlog”. Others are scared to take that risk, because they are already at 16 months or 18 months, and if they change and suddenly end up at two years, someone has to be held accountable.

On the flip side, you say, “Why can’t we create an AI?” I think we are expecting too much of AI at that point. The AI to do a whole police officer’s job is not there. It is there to assist. If there was accountability on why a force has a backlog of 18 months or two years, and if there was the incentivisation and support from the Home Office to say, “You’ve got a two-year backlog. These haven’t. Why haven’t they? What are they doing differently? What can we learn from them?”, it would be great.

Unfortunately, we and some of our forces have been called an anomaly because of what we have done, but when you do it with almost a whole country in Canada, it is not an anomaly. There needs to be support and we need to take away the risk of people feeling they will be held accountable if it does not quite go right. You can monitor these things all the way through, with a three-month assessment or a six-month assessment. It does not need to be two years or four years; it can be done quite quickly.

Q73               Baroness Pidding: Going back to what we were discussing earlier, I am interested in your thoughts on guidelines and regulations. Which do you favour? What form should they take, and who should set them? Perhaps we could start with Mr Lewis.

David Lewis: You need some form of guidelines and regulations, first and foremost. I know you are interested in the Office for AI guidelines, and those guidelines for AI procurement look sound and are already being adhered to by many bodies in procurement in policing, so I think that would be the way to go. From my perspective, there is a difficult tightrope to be walked. Regulations have to have teeth and exert control, otherwise the market acts in a somewhat chaotic way, but you do not want to restrain new entrants, innovation and competition. We need to see regulation as an enabler, not a barrier, because it is inhibiting to organisations coming into a sector when there is no regulation and no clarity.

Some of the Home Office and DSTL[1] research demonstrates that forces lack confidence about procurement, and they lack confidence about compliance with regulation. We need to find a way of having regulation that is not a dead hand but is not so lax that organisations do not comply. The issue with regulation in this space is that there are so many regulators. Somehow we need to rationalise the number of regulators so that there is not a constant crossover of regulators and confusion about how they operate.

Specifically, in respect of a couple of other questions that have been asked, yes, wherever it can be it should be cross-sectoral. I do not think there is any point in focusing on one sector, unless that particular sector requires a specific response. As to whether you go UK or international, we have some UK-wide guidance, and I think we should stick with that, but it would be helpful if it was not too divergent from international regulation for Dr Owens. The only other issue with the international piece is that it would probably slow things down if we tried to regulate internationally as opposed to UK-wide.

Those are my thoughts. I have rambled slightly, I am afraid, but hopefully it helped.

Professor Sandra Wachter: The question of hard and soft regulation is important. You usually have to answer that depending on what the object of regulation is. Here we are talking about criminal justice, and that is one of the most important high-risk areas I can think of. Only to have soft regulation would be irresponsible. I would be strongly in favour of having hard rules that are enforceable. I would not be too worried about the market, in the sense of losing people who would only enter the market if there were no rules, because perhaps those are not people we want to work with. Perhaps that is also not the right way to think about regulation to begin with, because good regulation does not hamper innovation; it inspires innovation, and gives a road map for ethical and responsible innovation. It is a very different way of thinking about regulation in that regard.

As to whether it should be sector specific or general, certain things with AI will always be the same. We will always have a data issue, a bias issue and an explainability issue. Certain standards on that should be applicable regardless of where it is being used, but then broken down with higher or lower regulation depending on the sector. Different risks arise in advertising than in health and in criminal justice.

As to whether it should be national or international, in general it is very wise to start on a national level, as with all laws, because what equality, fairness and transparency mean will depend on culture, jurisdiction, tradition and history. It is the same way in which human rights developed; they started locally, but at some point they started to wander off and form a global network. I am hoping for that in the same way, and that we start off with certain values that we share here and then have dialogue with other countries, because I think we have more in common than sets us apart. International human rights are a wonderful example of that.

Dr Liam Owens: Guidelines versus regulation is a difficult one. I think there should definitely be guidelines, sitting as I do on both a research and academic front and as a commercial entrant. The problem we would have as a commercial entity trying to innovate and create new things is that no one has done them before. If we suddenly try to regulate innovation, innovation does not work. We need to let innovation grow, and based on the outcomes of that look at how and if regulation is required. Regulations developed ahead of that would stifle innovation.

As Mr Lewis said earlier today, the marketplace in digital forensics is shrinking, and the competition there is shrinking. That means that we have fewer innovative products to try to help our forces, fewer ways to step forward, and commercial organisations are more likely to go elsewhere than to support our UK forces.

Of course, there have to be some kind of guidelines. As a member of the public, I would hate to see red tape getting in the way of our police being equipped with the correct tools to do their jobs effectively and provide the very best public protection. I would definitely go for guidelines, but as for them being sector-wide, the guidelines in British forensics—I have to go back to where I know—for live tracking of people walking around have to be very different from the guidelines for post-event looking through media that is offline. It would not even work at a sector level. It almost has to be individualised, so it would be very difficult.

I think I represent many of the commercial entities out there, and in either step we would welcome the opportunity to work with those creating the guidelines and regulations, because certainly, in my view, currently that is not happening. As a commercial entity creating innovation, we are not even part of that and are just told, “Oh, guidelines are coming”. We are not even brought into the mix, so it is difficult. I think there has to be an open forum for everyone to be a part of.

Q74               Baroness Shackleton of Belgravia: I have a very quick question. I was slightly puzzled by how you can be half regulated. It sounds like being half pregnant. Surely, you either are regulated or you are not.

David Lewis: I did not mean to give the impression of half regulation. I think there should be regulation and you are quite right, but part of the difficulty is that the regulation has to be sophisticated enough to keep up. The classic example is Baroness Hallett’s mobile phone download example, where PACE[2] gave the police enormous powers to download people’s information off phones, and the regulations and law did not keep up for quite some time.

Ironically, the metadata, if you wanted to identify who was using a phone and who they were phoning, required quite a sophisticated application, and the legislation on that was much tighter. It is a bizarre mixed playing field that we need to clean up, and then it needs to keep pace. Otherwise, it becomes a dead weight and you get the problems that Dr Owens described.

Q75               Lord Dholakia: May I come back on the matter of procurement? To what extent do purchasers and vendors comply with the AI procurement guidelines? Where guidelines are not complied with, what are the reasons? There seems to be a difference of opinion. Some of you are talking about regulations, and you, Professor, are talking about hard law. What is the alternative, and what should we be following?

Dr Liam Owens: I printed off the guidelines that are currently available and brought them with me—the “procurement in a box” guidelines, et cetera. A lot of the guidelines are more about industry purchasing a product as opposed to what we, as commercial vendors, provide to the police forces, or whoever. I read through the guidelines and felt they were very broad, which I think they have to be in this area, and non-specific. I do not know how easy it would be to create a guideline for something so leading-edge and so very new.

It would be better, in my view, for forces themselves to establish guidelines based on their requirements. A force, for instance, may be prioritising—that may be the wrong word, and I apologise if it is—knife and gun crime, and therefore need to acquire AI that can look for knives and guns. Others may be looking at CSAM and abuse. I think the guidelines need to be about what a force’s main objectives are. I do not think the current guidelines really hit the mark for commercial suppliers.

Professor Sandra Wachter: As I am an academic, I cannot really say whether industry is complying with them; nor can I say if the police are complying with them. I can only talk about what the guidelines say and how I think about them. I have looked at the current guidelines and I think they are good, but they are not good enough, because they just talk about having to set the objective, having to engage stakeholders, and thinking about the impact on data protection issues, and on equality.

That is all good: the ten commandments of good regulation, if you will. Of course, they are good, but what is a person supposed to do differently on Monday? That is the interesting thing. That is so much harder to do, and vague guidelines will not be enough. You need to come up with criteria to help people make value judgments on a daily basis, because it is so complicated and cannot be broken down into ten little guidelines.

David Lewis: I agree with what has been said. I highlight that there are the Public Contracts Regulations 2015 as well, which are widely adhered to. Where you have places with expertise such as the ones I highlighted earlier, they are very well adhered to. The area where there might be more in the way of problems is the ad hoc development of contracts through local arrangements and personal relationships. I am certainly aware of issues where people who have a personal relationship with an organisation have said, “What can we do to help you? Let’s have a look”.

Even if it is without prejudice, that is a much more ad hoc and perhaps less compliant way of procurement, which muddies the waters a little, but it is bound to happen when there are so many different organisations, and lack of experience perhaps, certainly at senior officer level, of how to manage procurement processes. All in all, where it goes through the proper routes, it is widely adhered to, but there will be pockets of poor practice.

Q76               Baroness Shackleton of Belgravia: The committee has been told that some vendors have made unproven claims about products. Some of the evidence on this has been a little bit more nuanced. For example, if they are given AI in relation to a study that has been culled from arrests as opposed to a general study, they may give different weight to the intelligence that is produced. It is not deliberately misleading. It could be misleading, or had they known all the facts they would think differently.

How can public bodies be confident in the scope, quality and legality of the technology they are procuring? As long ago as my law school days, we had the Sale of Goods Act of 1893—not quite when I was there—but it was amalgamated into the Sale of Goods Act 1979. Goods had to be fit for the purpose for which they were sold, and if they fell short of that there was a remedy. Do you foresee that that is the gold star you would be looking to achieve to make sure that the potency of what is being used is up to standard? We will do that bit first and then go a bit further.

Dr Liam Owens: That is an excellent question. I completely agree with your comment. We have seen a lot of falsified—to a point—data. We have seen a lot of commercial vendors making claims about the accuracy of their AI or their projects, which we know, and I know, coming from an educational background, is not achievable to that level. People say that they can detect child abuse material at 97% accuracy. That just will not happen, not with the quality of data. There is no accountability behind that.

There are even projects run through the Home Office where millions of pounds have been awarded to a contract where in 2018 someone stood up and said, “Our police force will never grade another image”, and yet that project, up to a point, failed. The outcome did not do that and today we still grade images, but there is no accountability for why the project was allowed to go so far. Where were the checkers all the way through? We talk about the three and six-month checks. Why did we not see that? Why did we not do something about it?

We consider ourselves a very ethical company and very open and transparent in how we do our testing. The lack of accountability has to be stopped, because police forces do not always have the time to go to the nth degree to do the testing. As suppliers, we have to be open and honest. We have to be held accountable for our accuracy, and currently I do not believe there is anything there. You mentioned the Sale of Goods Act, and that is probably the only thing we fall under. We have stuff in our contracts where, if police forces do not like it, we refund them. That is the honesty of what we are. We adhere to that as a UK company. I am not sure how it works outside.

There has to be some regulation on that. Regulation may be the wrong word, but there has to be some accountability behind it. I think that is as much as I can say: there has to be accountability. Forensics has been overtaken by marketing, from my experience. Marketers lead digital forensic companies to do what they are doing at times and, therefore, they will take something they do not understand and shout a number that they do not understand. That 97% may be in a controlled test over 100 people. You see it all the time in shampoo adverts—X% of so many people agree. We do not do that, and we should be doing that. We need to be sharing data, I think. There has to be transparency in this. That is all I can add.

Professor Sandra Wachter: It is a very valid point. A lot of people are hoping AI can do things it absolutely cannot do. A lot of people are claiming that they are able to make AI do a thing that it is not possible to do. People will say, “Don’t worry about bias. We have debiased that dataset”. When somebody says that, it is probably a lie. The only way to debias data is to debias humans and collect the data that they are producing. That is the only way to get rid of bias. When people are claiming those kinds of things, we have to be very critical.

It is the same when they talk about accuracy. In reality, we are always talking about a prediction. That is something in the future; not now, and perhaps not ever, will I know whether I made the right decision. Making claims about accuracy can be quite irresponsible. There is a disconnect between what I am predicting and the thing I am using to predict; I am always using a proxy for it, because I do not know the future. When people throw around claims that something is high accuracy and has a lot of unbiased data in it, I would be very critical.

From my perspective, if a public body is thinking of purchasing data technology, it should use its purchasing power to demand access: “Show me the work. If you can do that thing, show it to me”. If you cannot, it is a red flag. I understand that sometimes perhaps there are not the internal resources or time available, but there are opportunities to work with independent researchers who are not on the payroll of anybody and have no stake in the result. You can have access to the data, test it, validate it and reproduce everything that has been claimed, and then you can make an informed decision whether the thing is fit for purpose or not.

Q77               Baroness Shackleton of Belgravia: May I add to your misery of questions, Mr Lewis? In relation to the police, do you think it would help if the procurement was centralised through the Home Office, which is particularly pertinent to you, or do you think that would be a mistake?

David Lewis: I will answer the general question first. This is not the only area where sales and marketing run ahead of the operations folk and leave them to pick up the pieces of what has been promised. I have certainly been part of procurements where vapourware was being marketed to us and all we had was a few screen grabs of a project that did not yet exist. We can manage that. We can put in safeguards and break clauses. If you have a robust user acceptance testing process and a robust payments structure and you take the contract in chunks—we talked earlier about three and six-month chunks of work—I think you can manage to make sure that you get what has been promised to you. There is plenty of good experience around that.

My personal view on centralisation is that there probably should be more centralised procurement. I have already alluded to some of the procurement that is going on, and it needs to be tied together more overtly. We are right in the middle of that now. There is quite a lot of innovation going on in some of the things I talked about on forensics. For instance, the Forensic Capability Network is based on a number of pillars, one being science but another being commercial. There is a whole load of work going on now to improve it, and I think that the centralised bodies I have talked about should make an impact if they are allowed to run.

Should it be the Home Office? I am not sure, because I think there is an accountability problem then. The Home Office is a separate body from policing, perhaps more separate than it used to be. Therefore, if you are spending police forces’ money, you probably need to be accountable to the police forces and not the other way round. That is the challenge.

Baroness Shackleton of Belgravia: A policing body would be a more appropriate Court of Appeal?

David Lewis: I think so, yes.

Baroness Kennedy of The Shaws: Really?

David Lewis: A policing body, yes.

Baroness Kennedy of The Shaws: Without independent persons on it, too?

David Lewis: I never said that. I think there should be a lot of independent scrutiny of the way police make those sorts of procurement decisions.

Q78               Baroness Hallett: The World Economic Forum recommends that Governments ensure that AI decision-making is as transparent as possible. Personally I prefer the word “open”, but there we go. How can this be achieved? For example, it has been suggested that this committee should recommend a public register of algorithms used in the justice system. What are your thoughts?

Dr Liam Owens: I think it would stifle innovation.

Baroness Hallett: A register would stifle innovation?

Dr Liam Owens: It could. It is good to have an approved register, definitely, but there also has to be room for innovation and new technologies to come in. It goes back to the question about centralisation. When things get centralised, in my experience it stifles newer things being able to come through, because they do not have the resources or the funding behind them. Any centralised procurement, or a register that costs a lot to get on to, or is difficult to get on to, or involves a lot of politics to get on to, could make it more difficult for newer technologies to come into the market. Everything we use today was a new technology at one point.

The other problem you rightly mentioned is transparency in the commercialisation. IP, of course, would be another big one: how much as commercial vendors would we need to reveal about IP? It would be interesting to look into, and we would certainly engage with it, but my bigger concern straightaway is about stifling innovation. We are one of the most innovative countries in the world and I would hate to see anything that might slow it down.

Baroness Hallett: The other side of it is whether the public are entitled to know what systems are being deployed on them every day.

Dr Liam Owens: I agree. There has to be accountability. You would be striking a balance and it would be a very fine striking of that balance. There would have to be an expert body, maybe from academia, that would review it. I am not sure the public would completely get the whole AI system, if that makes sense. I am trying to be really polite. There would have to be an intermediary. We should trust that our Government have our best interests at heart, and I do, and we should trust that they choose the best way of doing it. It is not necessarily about being completely open with the public but about showing them how it works, almost watering it down for the public. There needs to be trust in the Government that they have done everything they could to make sure it is right.

The Chair: So that it is clear in the record of these proceedings, you mentioned IP, and I take it by that you mean intellectual property.

Dr Liam Owens: Yes. Apologies.

The Chair: That is fine. It is just so that the record is correct. Shall we continue?

Professor Sandra Wachter: I very often hear the idea that regulation or transparency stifles innovation. I disagree. First, it is very important to differentiate between research and deployment, because you can research openly and freely and nobody will stifle that. The question is when you enter it into the market. They are two different things and innovation is not being stifled. In my personal experience, from the research I have done, I do not think having rules on, for example, explainability stifles innovation.

A couple of years ago, when there was a lot of discussion about whether algorithms are explainable, a lot of people, mainly in the private sector, said, “No, it can’t be done. It would stifle innovation.” I felt that was an interesting task for me to take on, and my team and I developed a tool that lets you understand what algorithms are doing in a way that does not stifle innovation. A year on, Google, Amazon, IBM and Vodafone are using our explainability tool. It is possible. It is not stifling innovation and you can have meaningful transparency and accountability. You just have to think outside the box.

As you rightly said, the public have a right to know if algorithms are being used to send them to prison. In relation to that, I have done some research on bias testing. We say that you should make the test results of the bias test public. You should show what type of method and criteria you have been using and how it has affected certain communities. The public have a right to know. I do not think that transparency will deter innovation. I think it will attract people who have good intentions.

As to what should go in the public register, I think we can draw inspiration from the AI Act that is currently being developed in the European Union, where they are toying with the idea of having a public register, with things such as the training data, the model, the testing being done, and oversight mechanisms. All those types of things could go on a public register. It is now being discussed in Europe, where nobody thinks it would stifle innovation.
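[Illustrative sketch, not part of the oral evidence: one simple form of the bias testing described above is to compare favourable-outcome rates across groups and report the disparity between them. The Python sketch below uses invented group names and figures; it does not describe any tool mentioned in evidence.]

from collections import defaultdict

def selection_rates(records):
    # records: (group, favourable) pairs, e.g. whether parole was granted
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        favourable[group] += int(outcome)
    return {g: favourable[g] / totals[g] for g in totals}

def disparity_ratio(rates):
    # ratio of the lowest to the highest favourable-outcome rate; 1.0 means parity
    return min(rates.values()) / max(rates.values())

# hypothetical decisions, purely for illustration
decisions = ([("group_a", True)] * 62 + [("group_a", False)] * 38
             + [("group_b", True)] * 41 + [("group_b", False)] * 59)
rates = selection_rates(decisions)
print(rates)                   # {'group_a': 0.62, 'group_b': 0.41}
print(disparity_ratio(rates))  # roughly 0.66, i.e. well short of parity

Publishing figures of this kind, alongside the criteria and method used, is the sort of disclosure a public register could require without revealing how the underlying model works.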

David Lewis: I am in support of publishing the algorithms that are being used. I think there is a matter of degree to be debated. It was recommended by RUSI, the Royal United Services Institute, in its police use of algorithms report in 2020. The National Police Chiefs’ Council adopted it at the end of last year with a recommendation I made to catalogue and maintain an up-to-date record of police use of technology, particularly algorithms.

It is a massive task. That is the only challenge, because there are a tremendous number of algorithms and a lot of technology being used. The more you put around that, the more challenging it is. The important thing is to make it meaningful and to catalogue the particularly controversial algorithms. If you are going to start somewhere, you need to think about whether you want to publish the algorithm itself, how meaningful that is in reality, and some of the other things that are proposed—the training data and impact assessments, potentially. It depends on how much work that involves, because that could become an industry in itself.

It is an absolute yes. It is really important to be transparent and open and for the public to be able to see what is being used. It is then just a matter of what practically can be delivered and to what extent it is meaningful to the public.

Dr Liam Owens: May I clarify one point? I completely agree on transparency. Professor Wachter has just made an excellent point. What I was trying to say about research is that, if I were a new vendor coming on the market with a brand new algorithm, our forces would not have the time to do the research required to get it to the level where it could be put on to a register.

If we had a register that was very rigid, saying, “It has to be on here to be used”, our forces could not engage with that new technology. It would not be used and could fall by the wayside. I was trying to make the point that it depends on how it is deployed, so that we make sure that we do not discourage it. Our CSAM classifier is an excellent example. If we had not been able to research with forces across the world, because it was not on a register, it would never have been developed and be in use today. Unfortunately, we do not have the time to do all the research and R&D to get it to a level where it is on some kind of list to be used. There are occasions, such as live forensics, when it would have to be on some kind of register, I appreciate that, but for post-event work, such as filtering data, I do not believe it needs to be.

Q79               Baroness Kennedy of The Shaws: I want to ask Dr Liam Owens, who claimed that his concern was the stifling of innovation, if he can think of any innovative process where we missed the bus because of regulation. Can you give me an example of anything that you feel was missed?

Dr Liam Owens: It is difficult without it almost looking like I am trying to defend what my company is trying to achieve and some of the innovations we have, but I can give an example we are going through currently, which is the global alliance database. In this country, we have CAID—the child abuse image database. I am a strong supporter of CAID. CAID cost nearly £40 million as a project and it hosts about 110 million records of known child abuse imagery. The idea is that it reduces the need for our forces to have to look at the same images again.

It is a wonderful innovation, but there are many difficulties with it. For instance, having to pay an American organisation to be able to gain access to that database as a vendor makes it very difficult to become compliant with it. That stifles innovation for us, because we want to work directly with CAID and make the best use of it for our forces, but having to comply with an American organisation to gain access to a public taxpayer database is very difficult, so we created the global alliance.

We continue to strongly recommend CAID. It is not the be-all and end-all, but it is a great one. We have said that there are forces not only across this country but across the world that want to share that data in a secure manner—hashes, we call them. It means our database sits at about six million, so six to eight times the size of our UK database. There are so many blockers, regulations and unnecessary politics around the CAID database—

Baroness Kennedy of The Shaws: It sounds as if it is money that is the problem, because you are having to pay to get into the American system to bypass the block that is put on it, so it is a financial block rather than a regulatory standards block.

Dr Liam Owens: Yes, but there are other blockers—for instance, having to provide copies of our technology to American agencies that do not have the data protection legislation in place, or the other things we have in the UK to protect us. There are a lot of blockers. That is just one example of some of the challenges we are coming up against that are causing issues.
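[Illustrative sketch, not part of the oral evidence: the hash sharing Dr Owens describes rests on comparing file hashes against a database of known material, so that previously graded images need not be viewed again. The Python sketch below uses a plain cryptographic hash and an invented hash set and folder name; a real system would use whichever hash types and access controls the database in question, such as CAID, requires.]

import hashlib
from pathlib import Path

def sha256_of(path, chunk_size=1 << 20):
    # hash a file's contents in chunks so that large exhibits do not exhaust memory
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def triage(folder, known_hashes):
    # split files into those matching the known-hash set and those needing manual review
    known, unknown = [], []
    for path in Path(folder).rglob("*"):
        if path.is_file():
            (known if sha256_of(path) in known_hashes else unknown).append(path)
    return known, unknown

# hypothetical values, for illustration only
KNOWN_HASHES = {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"}
matched, to_review = triage("evidence_export", KNOWN_HASHES)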

Q80               Lord Hunt of Wirral: Dr Owens, I think this has been a very useful exchange about the importance of allowing innovation to grow. You said before that red tape must not get in the way, and innovation must not be stifled. You have just given an example in answer to the last question of how you need to allow innovation to grow. What is the balance between transparency and commercial confidentiality? Perhaps Professor Wachter could explain where commercial confidentiality sits, in her assessment, but first, Dr Owens, how do you see the balance between transparency and confidentiality?

Dr Liam Owens: We try to be as open as possible. We draw the line at giving anything away that would allow our IP to be replicated by another. That is the point where we are. We would not have any concerns with the public knowing exactly how we work, or a committee knowing how our algorithms work, or, to a degree, academia asking us how our algorithms work.

The line we draw is strictly at the point where, if we reveal any more, we will expose ourselves to people copying our technology, and if we reveal it publicly, regardless of any patents or anything else we put on there, someone will find a small way to change it, or they will go to a country where that patent or anything else does not protect us and replicate it there. We would be open and honest with the public. With police officers and everyone we work with, we explain how all our stuff works. It is where another commercial unit or vendor could copy us that is the line we draw.

Lord Hunt of Wirral: Professor Wachter, where does commercial confidentiality fit into your assessment?

Professor Sandra Wachter: You are absolutely right: a balance has to be struck when it comes to transparency issues and the question of how an algorithm works. When people do not tell you how it is working, it is either because they do not want to or because they do not know. I do not think either is acceptable, especially in the criminal justice sector. If they claim that it is too complex, you might be better advised to use a system that is easier to understand, because accountability is very important. Cynthia Rudin has done amazing research that shows that simple algorithms perform just as well as complex algorithms.

In addition, there are explainability tools such as the one I have developed that make you understand even the most complex algorithms without revealing trade secrets. We came up with the idea of finding a good middle ground that tells you why somebody has to go to prison or why they are denied parole, without revealing all the commercial interests. Again, it was born of the idea that there can be a meaningful way for transparency and accountability that makes you understand what is going on without revealing too much.

When people say that it is just about trade secrets, I do not think that is an acceptable answer, because somebody has to understand what is really going on. The idea that liberty and freedom can be trumped just by commercial interest would be irresponsible, especially if there is a way to find a good middle ground, where you can fully understand what an algorithm is doing and can trust that it is performing as intended without revealing all the commercial secrets to the whole world.
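[Illustrative sketch, not part of the oral evidence: one way to explain a decision without publishing the model is a counterfactual, the smallest change to the input that would flip the outcome. The toy risk model, feature names and figures in the Python sketch below are invented; they do not represent Professor Wachter’s tool or any deployed system.]

def risk_model(features):
    # toy linear score standing in for a vendor's proprietary model
    score = 0.9 * features["prior_offences"] - 0.5 * features["years_since_last"]
    return score > 1.0  # True means flagged as high risk

def counterfactual(features, feature, step, max_steps=50):
    # smallest change to one feature that flips the model's decision, if any
    original = risk_model(features)
    changed = dict(features)
    for i in range(1, max_steps + 1):
        changed[feature] = features[feature] + i * step
        if risk_model(changed) != original:
            return {feature: changed[feature]}
    return None

applicant = {"prior_offences": 2, "years_since_last": 1.0}
print(risk_model(applicant))                               # True: flagged
print(counterfactual(applicant, "years_since_last", 0.5))  # {'years_since_last': 2.0}

The explanation given to the individual is the returned counterfactual (had two years passed since the last offence, the flag would not have been raised), which conveys the basis of the decision without disclosing the model itself.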

Lord Hunt of Wirral: Mr Lewis, where is that good middle ground?

David Lewis: It is definitely the case that as purchasers we should not allow black box solutions where we do not know what is inside the box and are not able to understand the algorithm. That would be one non-negotiable for me.

The other area is where you have something that needs scrutiny but cannot be exposed to public gaze. The police service is quite experienced at bringing people close to the use of sensitive tactics or covert issues, and we can use some of the practices we have developed there: bringing in specialist independent people who understand the technology, are properly vetted and are therefore not going to betray, in this case, trade secrets, to provide a degree of scrutiny that we can have confidence in. That is something we can do, and we can definitely bring people alongside to understand it.

Lord Hunt of Wirral: Thank you.

Q81               Baroness Chakrabarti: This has been a fascinating but absolutely terrifying evidence session, probably the most terrifying of the many interesting sessions that we have had. I am getting the impression of an absolute wild west, with profiteers going for the gold rush and sheriffs doing their best to keep up and making individual contracts with those involved in the gold rush. Meanwhile, we are at a moment in our criminal justice history in this country where trust in the rule of law and trust in policing is not great.

I will start with Professor Wachter, if I may. Previous witnesses to this inquiry have expressed enormous concern that the lack of transparency and accountability, and the reliance on inevitably biased datasets in particular, are seriously affecting trust in the rule of law. As the disinterested party, as a lawyer and an academic, what would you say should be done about that? Should we be going for a pharmaceutical model, where there is prior approval of a product but, as you suggest, some commercial protection for the patent itself, or should we be going more with the traditional criminal justice model, where the algorithm is the code of practice and is to be published, so that there is a traditional rule of law model?

Professor Sandra Wachter: That is a fantastic question, and the way to answer it is to come back to the first question: what do you want from the technology? Do you want it to be faster and cheaper than humans doing the same thing that they have been doing for 20, 60, 100 or 1,000 years, or do you want to invite the new technology into our society to make the world a better place? If it is the latter, I am in that camp, because I hope and truly believe—that is what I am marching for—that technology can be used to make less biased decisions, to make decisions more transparent, and to do it in a way that improves human decision-making. But we have to do something for it.

If we just let it be and let it run wild, we will end up in a situation where we have algorithms that learn our secrets behind our backs without us being aware. My clicks on Facebook can tell you whether I am gay, whether I am black, whether I am a woman and whether I have a disability. Algorithms can do that. Every data point is biased, and that bias will be taken up, replicated and scaled up. It is hard to understand, but if you do not see that as a law of nature, if you see it not as a hurdle but as a to-do list, you can do something about it.

If you say, “We’re going to test for bias, and we’re going to make sure that the criteria we are using make decisions better”, you can use algorithmic decision-making to make fairer decisions. The bias test we have developed lets you do that and lets you make better decisions. The explainability tools we have developed let you understand what is actually going on and make a judgment call about whether you are happy with it. We are standing at a crossroads where we can either do nothing and watch how things get worse, or seize the opportunity and use the technology to make better decisions than humans used to make.

Baroness Chakrabarti: Mr Lewis, you represent the sheriffs in my wild west scenario. Do you not think that, far from guidelines, what we need is hard law? We have hard law about police powers. Why should we not have hard law about police algorithms?

David Lewis: I am a retired sheriff. I am one of those who lives in his farmstead out in the country somewhere, who will come in for this. I cannot therefore speak for all chief officers, but I think your analysis is right in that it is still the wild west as regards technology. You can see that, for example, with big tech and the way we are trying to exercise some governance over big tech.

I know that senior police officers and all police officers take the current state of trust and confidence in policing incredibly seriously. They are all really worried. We know that trust is won slowly and lost quickly. It arrives on foot and departs on horseback. There is a long journey, and I am not sure that legislation and regulation will be the thing that wins public confidence back. I have some thoughts about what could.

I think you have to strike a balance between allowing people to develop and evolve products and having clarity around rules and regulation. I should prefer a regulated approach. We are used to a regulated approach that springs off the back of legislation. That could be achieved, but it is such a fast-moving field that legislation struggles to keep up. A regulator has a better chance of keeping up with the pace of change; therefore, I would probably go with that.

Baroness Chakrabarti: Dr Owens, finally, you have been a wonderful and colourful witness today, but, if I may say so, you sometimes tended in two different directions. On the one hand, you began with a very firm pitch for how much you believe in your company and in your product, and I think you expressed some concern that other companies in the market may be less scrupulous, less vigorous and so on. You used words such as transparency and accountability and then you veered away at a fast pace from any suggestion that we as legislators should do our duty and step in and create a legislative or regulatory framework.

Do you want to respond to that, perhaps with where you think the balance should be struck in the public interest, not just in the interest of profit but in the all-round public interest in innovation in these tools on the one hand and protection of people’s rights and freedoms and the rule of law on the other?

Dr Liam Owens: You summarise me very well. I am sort of sitting on a balance. As someone from academia, I understand why we need legislation, regulation and law on policing, and as someone who works with policing I know the importance of transparency. I find myself stuck at the crossroads, as Professor Wachter just said. It is very difficult; as the questions have gone on, you can see that I have gone to and fro, because there is no simple answer. I think everyone will have their own thoughts about it.

It would be really important to split reactive AI from proactive AI: what is live, something used to make a prediction about someone, or about whether someone is the person we are looking for, for instance, versus reactive AI, where it is more about filtering through content faster. That is where I am stuck at the minute. I feel those two things should be treated differently.

Regardless of them, I go back to the original point that AI is still very new; it is niche, it still needs tailoring. You are completely right: it is the wild west out there, and everyone is trying to jump on the back of it because it is topical, and people are buying into it. I still strongly believe that regardless of whether it is proactive or reactive there has to be human-level validation behind it.

AI should be used to assist police forces. It definitely should be transparent. I want to make that absolutely clear. I believe fully in public confidence, and I appreciate it has been knocked, but it needs to be transparent. The public need to have confidence in what police forces are using and that vendors like ourselves who provide products are doing it for the right reasons. Ultimately, I still feel we are many years away from having AI that can replace the semantic decision-making of a human, and, therefore, human-level validation is needed, and we should still have an ultimate human who makes that decision at the end of the day. I apologise if that does not answer your question.

Baroness Chakrabarti: Thank you.

The Chair: Professor Wachter, you have mentioned a couple of times your explainability tool and I have seen references to it. Is that something that might be demonstrable to us in private?

Professor Sandra Wachter: Absolutely, yes, I am happy to do that.

Lord Hunt of Wirral: That would be very useful.

Baroness Kennedy of The Shaws: It would be very helpful.

The Chair: We have come to the scheduled end of this session. Does anybody else want to ask questions on anything they feel we ought to have covered?

Q82               Baroness Kennedy of The Shaws: Professor Wachter, you made a very interesting comment about data scientists not yet having clear standards set. Is work being done on creating such standards, because it would seem to me that that ought to be happening?

Professor Sandra Wachter: Yes, absolutely, and I am very happy to share a paper by my colleague, Brent Mittelstadt, who was one of the first people who realised that there is a gap. If you go back a couple of years, when the idea of AI ethics started to emerge, a lot of people were comparing it to medical ethics, and he was the first to point out that data scientists and medical professionals are not treated in the same way, so it is a bit of an illusion to expect that they will act in the same ethical way. The patient relationship with a doctor is not the same as the relationship Google has with its clients. It came out of that. He would be a fantastic person to talk to because he is at the forefront of that.

A lot of people are starting to realise that the guidelines we have at the moment, which are not enforceable, might not be good enough, given all the scandals that have happened in the past. The ethical conscience of people is growing, and I can see it also inside industry. The wind is definitely blowing in the right direction, and I am very happy to share that work with you as well.

Baroness Kennedy of The Shaws: That would be very helpful. Thank you very much indeed. It has been a pleasure listening to you.

The Chair: Thank you, all three. It has been a very interesting session. I have been terrified through every session we have had.

As I said, you will get a transcript, and we will be in touch with you, Professor Wachter, if we may, about further explaining the explainability tool.

Professor Sandra Wachter: Thank you so much for the invitation. It was a great honour.

The Chair: Thank you.


[1] Defence Science and Technology Laboratory: an agency of the Ministry of Defence which works to maximise the impact of science and technology for the defence and security of the UK.

[2] Police and Criminal Evidence Act 1984