
 

Communications and Digital Committee

Corrected oral evidence: Digital regulation

Tuesday 16 November 2021

2.40 pm

 


Members present: Lord Gilbert of Panteg (The Chair); Baroness Bull; Baroness Buscombe; Baroness Featherstone; Lord Foster of Bath; Lord Griffiths of Burry Port; Lord Lipsey; Baroness Rebuck; Lord Stevenson of Balmacara; Baroness Stowell of Beeston; Lord Vaizey of Didcot; The Lord Bishop of Worcester.

Evidence Session No. 2              Heard in Public              Questions 11 - 16

 

Witnesses

I: Rachel Coldicutt, Director, Careful Industries; Sally Sfeir-Tait, Chief Executive Officer, RegulAItion.

 

USE OF THE TRANSCRIPT

This is a corrected transcript of evidence taken in public and webcast on www.parliamentlive.tv.

 



 

Examination of Witnesses

Rachel Coldicutt and Sally Sfeir-Tait.

Q11              The Chair: Welcome to our witnesses, Rachel Coldicutt and Sally Sfeir-Tait, who are giving evidence to our inquiry into digital regulation. We are focusing on how the regulators join up and co-ordinate their work; how they look to the future as well as react to issues that are in the public domain now; and the relationship of the regulators with Parliament and parliamentary oversight. Rachel and Sally, thank you very much indeed for coming and giving us evidence today. The session will be broadcast online, and a transcript will be taken.

Rachel Coldicutt is an all-round expert on the social impact of new and emerging technologies. She is director of research at Careful Industries and has held a number of roles in the sector in recent years. She was the founding CEO of Doteveryone, an organisation that came very early to the debate about the responsible use of technology and its societal impact. Sally Sfeir-Tait is the chief executive officer of RegulAItion, a solicitor and a barrister. She has held a variety of roles in the financial services industry and in the sector. We have two excellent witnesses to help us shed light on the issues we are discussing today.

In response to the opening question, add any further points about your background or any initial thoughts that might be helpful, if you want to. Let us get stuck into the first question.

Q12              Baroness Buscombe: Welcome, Rachel and Sally. The first question could almost be: “How long is a piece of string?” It makes me happy in a way, because 10 years ago I had the audacity to suggest publicly that we may at some point need to regulate the online world. I was thinking particularly about young children. I was lambasted from here to Hades, by the media and journalists in particular. We are in a better place now: we accept that there has to be some form of regulation, but we also want to make sure that we do not stifle innovation. What will be the biggest challenges for digital regulation over the next 10 years?

There is a second part to the question, and it is probably better if I ask both parts first, but we can interject. How can regulation keep pace with developments in technology? The two are intertwined, are they not? You cannot really discuss one without the other. Rachel, let me ask you first.

Rachel Coldicutt: I will add that I am a non-exec at Ofcom, but I am not here to represent Ofcom at all. This is something that I think about a lot. We are clearly at a moment when things seem to be moving at pace. Currently, it feels like there is a lot of fairly atomised energy. I will begin by looking at what is happening now in order to think about how to look ahead. We are thinking about the individual harms or consequences that are arising, and looking at very specific technologies, but not thinking about how or why. If we look at what has happened over about maybe the last three months, the rate at which a news story about a harm comes out is probably weekly, if not daily on occasion. This shows that the way to regulate is to think not about what has happened but about what will happen. It is much more important to think about the world we want to live in, the role of technology in it and how technology can make a positive difference rather than constantly going after the bad.

At the moment, no one is really talking about how it ought to be, only about how it is all absolutely awful. Not only is there an imaginative piece there to think about potential, but we should think about what has happened over the last year and a half. Technologies that were created for one thing have been used for another. Zoom has become a school, a dance class and a church hall. If we had taken the approach to regulating that is coming out in the online safety Bill, everybody would be up in arms, because no one had thought about how to regulate online dance classes.

We are not really thinking about the way that technology turns into the things that we need. The first part of that is to look ahead and not continue to look back. Secondly, to your point about how you keep up, it is about creating the pace as opposed to running along afterwards. If we can think about the mechanisms we use not just to instrumentally regulate content, data, privacy or algorithms but to articulate, above that, the kind of world that technology can play a role in and incentivise good, there is a real opportunity there.

Baroness Buscombe: That is incredibly optimistic. I like what I am hearing from you, Rachel. We tend to be very reactive, and some of us are quite concerned that people are going to have quite extraordinary expectations that the online harms Bill will cure the problem. Those of us who know something about this world know that is a big ask and is probably not possible. It is better to think much more deeply about the possibilities, what we want to have and how we can innovate to achieve that, which sounds much more practical.

Sally, as well as those two questions, would you agree with others that, from a legal standpoint, the law is always going to be behind the curve? A mix of law and codes of practice, as other forms of regulation have, is perhaps needed. I have to say I am and have always been a huge fan of self-regulation. May I have your take on it as a lawyer, which may be slightly different from Rachel’s?

Sally Sfeir-Tait: As a little bit of background, I have been general counsel and head of compliance. I have been a regulator and I have been a partner in a law firm, and most of it was in financial services. I come with that background. I transitioned into technology through University College London, where I am a research associate, in 2018. The company that we established develops technology infrastructure. That journey took me down the route of understanding technology in depth and interacting with technologists on a daily basis. I come with knowledge from those two areas.

Let me tackle your third question first, and then I will come back to the challenges. The law has to come from society. Yes, it always has to follow, but society has a duty to start to self-regulate. We see this emerging already in the US and Europe. If you have not found it, the Center for Humane Technology in the US is quite a well-known initiative. From that perspective, I quite like the self-regulating approach as a starting point. Everybody who is involved in regulation gets to understand the business, economic and societal aspects of it, because it is quite complex.

To me, that education process is one of the challenges for the next 10 years. I did not know much about technology. All I knew was that I interacted with my customers, who were banks and regulators, and they would ask me, “How do I implement blockchain? How do I deal with cryptocurrencies?” I was advising them as a lawyer. It was not until I went on that journey of interacting with technologists on a daily basis that I understood how complex technology is, how quickly it moves and, most of all, how interconnected it is.

One thing that most people do not think about, but I think about it on a daily basis, is that the reason deepfakes exist and visual AI is so powerful is that it came out of a need to protect people in the security forces, in MI6 et cetera. A lot of money went into developing it, because it was intended to keep us safe. There were also a lot of images available on the internet for AI to learn from. That was necessary, and it really helped with the development of the technology. Any good development of technology will always have the reverse side of the coin. That is going to be one of the challenges, and it is also about keeping pace.

How do we address that? This is another thing that I always think about. The technology is going to be continuously changing. We need to have, on the side of the regulators, a new breed of individuals who understand technology fundamentally. A procurement process is never going to give you the reality of what is in academia. Keeping academia close by is also something that happens, but we need to find a governance structure in order to ensure that those pockets of knowledge within the regulators and the agencies are shared at all levels, not just at the top. A lot of sharing happens at the higher levels, but not at the level of the troops. The education piece is one of the challenges for the next 10 years. The other side of it is that we have to be careful not to open up too much or to close too much. That is a very difficult balance to strike. Trying to strike it with a wholesale approach is probably not going to get us there, because it becomes too cumbersome to deal with.

Finally, on the macro side, this is not with my technology hat on but with my regulatory hat on. Financial services is quite advanced when it comes to regulation. That is because there are international standard setters. At the international standard-setting level, principles get set and then get implemented. We are not there on the technology side. It is still starting, so everybody is discovering at the same time. In a world of competitive advantage for implementing digital policies—let us call them that for now—this requires a lot of effort. These are the macro and micro challenges.

Baroness Buscombe: Can I come back to one thought that I am having, listening to you both? One form of regulation or self-regulation that to my mind has worked well over a period of years is advertising regulation. One of the reasons it has worked and continues to work is that it comes with a cost. If advertisers fall foul of regulations that they themselves develop, oversee and are involved with, they suffer financially and economically. As well as the need for more people who really understand technology, it is perhaps also important for them to keep close to people who understand governance and regulation. If they work together, it creates quite good tensions on both sides in terms of the need to ensure that you understand the consequences of everything you do. That is one of the reasons why, as I say, advertising is quite a good model.

Sally Sfeir-Tait: A self-regulating ecosystem is a really good way of dealing with it, because the industry self-regulates better. When the right governance is in place, everybody feels the pain and everybody feels the benefit. Instead of having the regulator against the industry, it becomes industry self-regulating. That is definitely a good model, especially to begin with, when it is a very technical subject.

Rachel Coldicutt: It is slightly different, because, for instance, a lot of what the online safety Bill is trying to regulate is online content, but the content is, to be honest, neither here nor there. The thing that matters is the business model behind it and the intent of the business model. Say you have a company whose aim is to somehow change the behaviour of a user, by encouraging them to spend more time on a service, to share their content or to contribute. If what we are regulating is the effect of that, self-regulation will only ever move, particularly the larger technology companies, into different outcomes.

One of the complexities here is the importance of regulating business models and regulating their underlying aims, which sounds fantastic, in a way. That kind of complexity is the difference between advertising and online.

Sally Sfeir-Tait: To follow up on that, speaking of standard setters, I would encourage you to look at the Center for Humane Technology. As a result of the recent Facebook testimony, it published a framework for how self-governance would take place. There are seven recommendations or seven points. One of them is external legislation or regulation that is government-imposed. The other six are internal: they are business model-related, ethics-related, et cetera. There is thinking around that model.

Baroness Buscombe: That is very helpful.

Q13              The Chair: Rachel, I want to come back on the point you made. Baroness Buscombe asked you how we can look to the future and how important horizon scanning is. You made quite an interesting point that I have not looked at before. We have always thought about horizon scanning as looking for future problems and addressing them through regulation. You said, no, it was about looking for future opportunities as well and understanding what technology can positively do for society as well as what problems it may bring.

I see how you apply insight into potential future harms. You take a view as to what needs to be done to address it, whether you need a self-regulatory, co-regulatory or regulatory approach, and how that might work. Have you thought about how that insight is used as part of horizon scanning to identify future positive opportunities, particularly societal ones?

Rachel Coldicutt: There are a number of techniques that one can use. I am running a programme at the moment with people who work in charities and civil society to look forward to 2050 and beyond, and then backcast to now for how to get to those outcomes. If the lens you are using is technology, a lot of the time you are going to be thinking only about technically mediated possibilities. That really comes out to me.

When we look at the government paper, Regulating in a Digital World, that came out in June, there are three categories in there. I have noted them so I do not get them wrong. One of them is about competition; one is about safety; and the last is about a flourishing democratic society. Under that lever, it is all media and a little bit of privacy. It is not really thinking about, for instance, the role that algorithms have in financial harms, the way working lives are changing or how we are educated. It is not thinking about those softer things. I would really encourage you to think about domains such as housing, transport or healthcare and the role that technology plays there.

It is easy to get very hung up on the specifics: “Could an algorithm do this, this or this?” There is a huge industry that exists to make us believe, for instance, that it is possible to achieve a general artificial intelligence. Who knows? Maybe it is not. The more technocratic narratives tend to tell us about what the technology can do, not about what we would like it to do. There is a role for both of those.

The Chair: I agree. That is a very interesting and useful point.

Q14              Baroness Rebuck: It has been very interesting thus far. We are going to be talking a lot about horizon scanning, because my questions are about horizon scanning. We have heard a lot of evidence and acknowledgment that new digital business models can be introduced very quickly and, just as you have said, that legislation trails a bit behind. When we took evidence from members of the DRCF, they talked about needing to scan the horizon more effectively, both individually and collectively, and in cross-departmental teams, which is a new initiative of theirs. Yet this activity is time-consuming and finding the right resource is also challenging. My first question is about that and your perspective on that, and then I will come on to academia, which you have touched upon, and other matters. Sally, shall we start with you?

Sally Sfeir-Tait: Yes, it is time-consuming. First you have to define what you are scanning the horizon for. Are you scanning for changes in technology that are going to potentially impact behaviour? It is an open question, which is therefore going to require a lot of resources. How do you do that? You have to start, pragmatically at least, from a base of having enough people around the room who have a good understanding of certain areas and then you can grow from there.

Unintended consequences are impossible to predict. I would say that very long-term horizon scanning might not be the most useful thing to do. So 2050 is nearly 30 years away. How many years old are Google and Facebook? We could not have imagined teenage suicide as a result of Facebook—we could not have imagined Facebook—when the iPhone started. In the world of digital horizon scanning, we might be aiming for something that we cannot achieve. Let us just put it that way.

Baroness Rebuck: Would you suggest that we should restrict ourselves to two to five years?

Sally Sfeir-Tait: A framework for experimentation would probably gather knowledge at the level of everybody and build the knowledge, so that you can have pockets of knowledge that go and find things out for themselves. Institutionally, we do not have the right governance frameworks in place to encourage experimentation. We think a lot; we do not experiment enough. The experience that you get through experimentation is unique, and it is not one to one. It is not a person who has read a report; it is the entire group that was involved in the experiment that gains that knowledge.

If we do that in a way that forms lots of clusters, again it is an ecosystem way of having lots of people scan the horizon through bilateral conversations and then share that knowledge. I would use experimentation as a mechanism for horizon scanning short term, leading to long term.

Baroness Rebuck: What is the best way to pull all of this together?

Sally Sfeir-Tait: There are pockets of innovation that happen in the public sector. Often they are constrained by time, budget and the individuals involved in the experimentation having full-time jobs and being extremely stretched. In financial services, we talk about the tone at the top. The tone at the top has to be permissive of experimentation and of failure within a contained environment, because experimentation includes failure. Closing a door and knowing that something cannot be done is as important as opening a door.

There is a Deloitte study called Lessons from the Edge, which is about allowing people to experiment while ring-fencing the organisation. It is about giving them a real framework, a budget and the ability to experiment. That would be a way of understanding where technology could go.

Rachel Coldicutt: I have a rather different opinion. Many of the things that happen that we do not expect are a lot more mundane than we imagine. I have been working in technology since the late 1990s. As someone who ran an online community of a couple of hundred thousand teenage girls in the late 1990s, I could have definitely told you that any technology that comes along will expedite the way that people talk about eating disorders and self-harm, but I could not have told you that QR codes would be absolutely everywhere in the last year.

We can definitely speak with quite a lot of confidence about those much larger trends, but we do not know exactly how it will look. There is something about the level of fidelity you need to have to understand the future. It is not knowing exactly what the interface is likely to be. It is not a “Minority Report” kind of world. It is thinking about what happens when you give people access to information and what is likely to happen when you give people a voice who have never had a voice. These are not extraordinary things to think about.

In trying to champion a little more of a formalised approach to this kind of thing, at Doteveryone we created a toolkit that product teams could use to start thinking about the unintended consequences of the products they were making. Very often, you are tasked with making a feature that brings users to the site more often or makes them spend longer there, but you are not asked to think about what the environmental impact of that is or what the impact would be if everybody started doing it.

A lot of the time, technology is understood only through the lens of the business case. There is a huge opportunity. To pick up the start of your question, it is not really hard and it does not take a huge amount of time. Exactly as Sally said, having people who can think about technology side by side with people who can think about governance, in order that those two things can play out together, is the really important part.

Baroness Rebuck: That is interesting. I completely agree with you. When you were looking at your community of young girls, the harms that would come to them through lots of online use were pretty obvious. I suppose I am interested in the digital initiatives that have not come to market yet. We have heard evidence, written and otherwise, that somebody should be scanning venture capital and private equity more effectively—follow the money, if you like—because, in a sense, if the money is there, those new technologies might come to market.

At the other end, Sally, you mentioned academia. I was just wondering whether regulators’ relationship with university research is strong enough. I was reading some of our evidence about Georgetown University’s Foretell platform, which concentrates on scanning the security technology sector. They use AI and they do it at huge scale. That sounds interesting as well, to this point about what is coming next. Maybe you would like to comment on that.

Sally Sfeir-Tait: You can certainly use technology for horizon scanning and following venture capital. A lot of money goes into funding horizon scanning, but it is for the purposes of consumption by specific users; for example, horizon scanning on the change of regulation.

The relationship with academia depends on whether or not programmes are in place. Rachel and I do not disagree, in the sense that you can think about horizon scanning in certain areas effectively, but doing it as an approach that tackles the whole universe of things that could change is not possible.

Rachel Coldicutt: Yes, I would agree.

Sally Sfeir-Tait: My position on horizon scanning is a pragmatic one. Horizon scanning is necessary in sectors of deep expertise when we are looking at certain areas, but it should not detract from doing.

Baroness Rebuck: I am conscious of time, but I have one last quick question. Rachel, I will start with you on this. When we heard from DRCF members, they mentioned a piece of horizon scanning work that was going to invite significant public engagement. They did not define what that was, but it seems to be in the pipeline. Assuming it goes ahead and once it is completed, what mechanisms can you see for feedback to the public—and, for that matter, government and Parliament—in order to inform public policy issues?

Rachel Coldicutt: From the public sentiment work that I have done previously to understand how people think about technology, one of the things we have to come to terms with is that people mostly do not want to have to think about it. We are using technology as a thing to make our lives easier. We are not using it because, every time we swipe right on our phone, we want to understand the underlying mechanisms.

Going back to what I was saying at the beginning about having a vision of what the world can be, there are much more productive things that can be done in terms of asking for and giving people more permission to co-create the future. Public engagement can sometimes focus on questions that are of interest to legislators, but they are not quite the way people understand the world. The disconnect between those two things is very important to resolve. We are at this odd moment where there is quite a lot of theatre of public engagement. What does not really happen as much is accountability over time. I would be more interested in what that accountability looks like.

Lastly, a huge amount of the narrative in the media at the moment, particularly about Facebook—I am not a fan of Facebook, or Meta, as it is now called—is driven by animosity from media companies whose lunch has been eaten again and again. We are not really getting to a constructive story overall. There are a few different levers for change there.

Baroness Rebuck: Thank you very much. That is hugely useful.

Baroness Bull: Rachel may have picked up on some of this, but horizon scanning seems to be about not only scanning for new technologies; it has to be about scanning for behaviour change. My interest is in how much one can scan the next generation for what it is doing. You are very young, but we are all of a certain age, are we not? Is there an assumption that young people will graduate to the platforms that older people use, or will they adapt, tweak and find ways to use the platforms they are using now in different ways?

How much do we need to scan what young people are doing in order to understand the unintended—“consequences” is not the right word—uses of platforms in different ways that they might take forward and use differently as adults in ways we cannot anticipate? That was a rather convoluted question. I hope it made sense.

Sally Sfeir-Tait: That is why my starting point was that horizon scanning is very difficult. In our business, we hire multiple generations on purpose. You would be surprised at the gap between the 40 year-old and the 30 year-old. It is huge. You would be very surprised at the gap between the 27 year-old and the 25 year-old.

Baroness Bull: Of course, you cannot hire an eight year-old.

Sally Sfeir-Tait: No.

Baroness Bull: That is the challenge.

Sally Sfeir-Tait: When I look at my daughter, who is four, I can see that generational understanding is absolutely necessary. The gap is getting bigger and the age is getting shorter. Facebook is for old people. It is a very well-known fact. New platforms are coming out on a regular basis. How do you predict the behaviour of younger generations? I will leave that to behavioural scientists.

Baroness Bull: I am not sure you can predict it.

Rachel Coldicutt: One way to think about it is to think about public space, the way that different generations use it and the way that you segue through different modes of behaviour at different times. One thing that defines a lot of the discourse, particularly about safety at the moment, is a generalised anxiety about what people under 18 are doing, because people who are sitting in rooms like this do not understand it. That has always been the case. That is life.

There is an extent to which there is a kind of magic in not knowing. That is where the change, the creativity and the next thing comes from. I do worry. While there may be all kinds of democratic harms that arise from TikTok’s algorithms and its content targeting, is it not lovely that people are singing songs together? That is not a thing that anybody thought was possible. It is good to lean into the optimistic parts of that, rather than hope to close it down entirely.

Sally Sfeir-Tait: On Rachel’s point about the opportunities around horizon scanning, forgive me, but I need to take you on a bit of a journey. In financial services regulation, it became very obvious that there is a lot of data out there and you need to regulate a world that is heavily financed and international. It is cross-border, and most people do not quite understand it.

From that perspective, there is a little bit of a parallel. We accept that that is okay. It is okay that we have to have financial services regulators that go into the detail; the public do not really get into it at that level. One thing on the opportunities side of digital regulation is that we are talking about regulating systems. Systems are easier to regulate than people. Systems can be regulated by systems. Again, there is a precedent in financial services, which is circuit breakers on the trading floor. When algorithms start shorting stock, the circuit breakers kick in and it stops.

Would it not be wonderful, on the opportunities side, if we started thinking about implementing digital regulation digitally so that we prevent harm from happening? There are mechanisms out there; they exist. We have been working on them at UCL. The AIR platform that RegulAItion developed, which is funded by Innovate UK, is designed specifically to prevent the sharing of data before it happens. There are instances out there. Lots of people in the technology space are thinking about this. Would it not be wonderful to have them at the table?

Baroness Stowell of Beeston: To go back to something that Rachel said a little while ago, you were talking about regulating the underlying aims. I just wanted to probe that. Are you suggesting that it would be possible to regulate aims, by licensing organisations as fit to operate on the basis of trying to deliver on an aim, or were you just putting that out there as a wishful thinking-type thing?

Rachel Coldicutt: If you look at how technology products are made, for instance, lots of companies in the Valley will use OKRs—objectives and key results. For every feature in every product, there is a list of objectives and key results. Those objectives are not tested against anything other than the potential to deliver shareholder value. In a way, that is an extraordinary opportunity for transparency. Almost no other industry atomises its objectives at that level.

Of course, I cannot imagine that there would be a willingness to co-operate at that level of transparency, but it speaks to a general air of pretend naivety and wanting to pull the wool over people’s eyes. The larger companies will say, “We could never have anticipated this outcome, because the objectives have been very clearly laid out”. There is an opportunity there. It is perhaps a less readily understandable opportunity than regulating content, but it is certainly possible to a degree.

Q15              Lord Lipsey: I am sorry, but I am going to lower the tone a bit because I want to change from horizon scanning and the theoretical stuff to the actual practical doing via a body such as the DRCF. We had four representatives before us, who told us how magnificently everything was co-ordinated. They were very persuasive. When I went away and thought about it, I thought, “Here is a body that does not have a chair of its own. It does not have a non-executive chair or anybody; it consists just of representatives of the regulator. It does not have any statutory powers whatsoever and the regulators can do what they have always done. It has a minute staff, which, as I understand it, is entirely drawn from existing regulators. Anyway, there are more people who are not represented on that body who are regulating than there are people on that body who are regulating, starting with the ASA and going through to Z”.

My question really is this: can a system of this kind possibly be effective, even if you give it the right objectives and the right foresight? It is to either of you. Just help.

The Chair: Start with the lawyer.

Sally Sfeir-Tait: It is a forum, not a regulatory body, at least from what I understand. It is a body for co-ordination, not for regulation. With that in mind, it is not open to everybody and not everybody has a seat at the table. It is a forum for sharing knowledge. I have had interactions with it. I will leave it at that.

Rachel Coldicutt: It is probably a very useful administrative thing. To the points that were raised about sharing information at all levels, it is a very positive step towards that. For instance, the sorts of issues that it is looking at in the work plan are very instrumental.

It is very early days. It has been around for just over a year. It feels like a very useful administrative body, but there is a need for something rather more than co-ordination, something that looks beyond the sum of the parts (call those parts data, content, safety, privacy or whatever), not least because, if you are a member of the public, it is not going to get you anywhere. The public still need a front door for redress. They need to know where to go. That is unlikely to be a co-ordinating body at this time.

Lord Lipsey: To what sort of body should they go? Funnily enough, I was grappling with this yesterday because somebody sent me an email out of the blue, without any permission, flogging me a financial product. I could not quite decide whether to go to the FCA or to the Information Commissioner, both of which have a role in this. Should there be a central clearing house for people who wish to complain? Should that be based around the DRCF or be separate?

Rachel Coldicutt: Nobody really loves the idea of there being more and more regulatory bodies. The DRCF could be the base from which other things grow. There is a huge amount of potential and opportunity there, but, in a way, at the moment it is there to do the very detailed job of organising and co-ordinating. A different skillset is needed, which could be built on top of that, to be public facing. To your earlier point, it would need to encompass a larger number of regulators in order to do that.

The Chair: The thought I have on this illustrates a wider part of the problem here. We look at a problem and we think, “That is a content problem”, “That is a data problem” or “That is a competition problem”; “It is for this regulator to deal with”. That is why we have an online safety Bill, which is basically the Ofcom Bill. It is about what Ofcom might do in the area of online safety.

Arguably, competition policy is an important remedy and an important tool. These platforms have become huge, and people do not have any choice but to use them in order to engage with their friends. They have to do it under the terms and conditions set by those platforms. They could create a much safer environment for themselves if they could move away from the platforms’ own terms and conditions and create their own environment. That is about competition policy; it is about envisaging something like open banking. Because we look at it as an Ofcom problem rather than a competition problem, we are not using all the available regulatory tools in a joined-up way.

As well as the co-ordination and co-operation between these bodies in horizon scanning and understanding what is happening, should they not be working together to address these problems using all their tools rather than individually? Should Parliament and government be thinking, “I want to address the societal problems using all these regulatory approaches rather than handing it to one regulator”? That is where it might play a more powerful role.

Sally Sfeir-Tait: There is certainly a vacuum in the regulatory space for all things digital. There is piecemeal regulation. If DCMS is looking at something, it looks at it from the perspective of DCMS. If the FCA is looking at it, it looks at it from the perspective of the FCA. The FCA would not look to take action against a digital platform that had nothing to do with financial services; it would not understand its business model.

There is certainly a vacuum. Negative jurisdiction is where the consumer falls. That is the problem we are talking about. Forums for co-ordination will always be necessary among multiple regulators. If the question is about the vacuum, yes, but let us not go down the route of regulating activities as Europe is. That would be absolutely disastrous for innovation in the UK.

Think about seed-stage companies: just two individuals out of university developing a piece of code while collaborating with a company. If they were asked to register a business, to follow a code of conduct at that stage and to seek authorisation, it would be the death knell for them. At an appropriate time, that could be done through registration or anything else, but that is the point. There needs to be that kind of thinking done independently of the areas that the current regulators already regulate.

Rachel Coldicutt: I would point particularly to the problems with the proposals in the data strategy that is currently out for consultation, in which it is recommended that the role of the ICO be enhanced. The proposals would make the Information Commissioner potentially an arbiter of fairness around issues such as employment law and the use of algorithms in employment, education and finance.

That speaks to the need for a much more sophisticated way of joining together expertise. I would totally advocate that the ICO develops more technical capability. At the moment everything is being diced up very finely, but there are certain issues that are complex and interrelated. We need to think about the way that a technology will roll out in society and about the people who are impacted by it. It will not happen tomorrow that we will wake up and everybody will have a self-driving car or a workplace monitoring programme, but certain people will.

It is very easy to think about horizon scanning as something that is happening in the distance, but we can be looking at the harms that are incurred, particularly to minority communities, and the ways that certain technologies roll out as edge cases. We can use those as case studies to think more deeply about what greater harmonisation between regulatory authorities looks like.

The Chair: Could I ask you to come back on something slightly different? This may not be something that you have given thought to. Both of you are describing judgments on societal issues. Somebody has to balance these issues, work out where the harm is and maybe balance innovation against regulation thresholds. Have you thought about where Parliament fits into this?

Parliament is increasingly giving very broad powers to regulators, because it is the only way to deal with the very fast-moving digital world that we live in. Then Parliament looks at it again in five years’ time and another piece of legislation comes along. Do you see some danger in the kinds of societal judgments that you have talked about being made by regulators rather than by Parliament?

Sally Sfeir-Tait: That is a good question.

Rachel Coldicutt: My answer is “yes, and”. This is an area where we need to look at a more continuing process. I have brought it up a couple of times, but look at the general discourse about Facebook at the moment. If you had spoken to people who work in technology rights at any point in the last 20 years, none of these things would have been a surprise to them. These issues have been arising all along. There is a question about the moment at which a thing is taken seriously.

There is another problem in the departmental structure in government, which does not really reflect accurately the way that technology is a social, economic and political force. It is about not just the role of Parliament but policy more generally, and listening.

Sally Sfeir-Tait: I need to consider the question a little more. Technical committees would certainly be welcome, because the external perspective is also useful. A regulator becomes very focused on what it is doing.

Baroness Buscombe: Right at the beginning, you both talked about the future world that we want to live in. That will not be the focus of regulators, will it? It is not in their DNA. I can say that; I can be a little more blunt. Regulators tend to exist to regulate as opposed to thinking about the bigger picture.

Sally Sfeir-Tait: Financial services regulation does have a precedent. The agenda of sustainability started being pushed by the Bank of England several years ago. It was implementing that and making sure that the right framework was in place for the banks to fund sustainable initiatives. So regulators increasingly play a part in societal issues. Financial stability is also societal.

These are highly technical areas. Going back to horizon scanning, having deep technical expertise and frameworks in place means that regulators can scan the horizon in their area of expertise at a deep enough level to have enough information. That does not mean that you should not also have checks and balances.

Rachel Coldicutt: If we think about the role of military innovation in the technologies we use every day, there was an extent of foresight. There was an extent to which people were thinking about novel uses of tools in the world. This may seem very remote from everyday life, but it is absolutely a part of the phones we are carrying around every day. This thinking is happening. Very often it is happening through a lens of security rather than possibility. There is an opportunity to open it up.

Sally Sfeir-Tait: On the opportunity side, if I am a new technology company and I am not certain what regulation applies to me, I am in a situation of vacuum. Having a go-to body that I can ask what I need to be careful of is positive for innovation. Regulation is not necessarily negative.

The Chair: Something that could guide you through all the avenues, rather than just the perspective of one regulator that you have to keep happy.

Sally Sfeir-Tait: That is it, even for the regulators themselves. Being a technology company, we often talk to lots of regulators. Sometimes things do not fall within their remit and they have to refer to each other.

The Chair: We had better move on. That was fairly interesting.

Q16              The Lord Bishop of Worcester: Thank you, first of all, very much indeed for your evidence. It has been really helpful and interesting. I am particularly grateful to you for making us, at the beginning, concentrate on the positive influence that technology can and does have towards human flourishing and the common good. It is really important for us to remember that.

It seems to me that technology is morally neutral. Like water, we need it to survive yet we can drown in it. It is up to us to make good use of it. The pendulum swings from fear to thinking it is the answer to everything. The regulation thing does tend to keep us, as you say, focusing on the dangers, which is necessary. If it is not in the DNA of regulators to think about the effect, it seems to me that it is the role of Parliament to seek, through legislation, the very best for society as a whole.

If we think about flourishing and the common good, I want us to look a little bit at the international situation. Different societies have different interpretations of human flourishing. I want to ask about the extent to which international co-operation is necessary to effective regulation. When Stephen Almond of the ICO spoke to the committee earlier this month, he pointed out, rather obviously, that digital actors are operating in a borderless digital world, and regulators therefore need to have very solid relationships with our international partners, assuming that everyone can be thought of as a partner. He says this is an area of active pursuit for the DRCF.

I am wondering what you would want to say about the international situation and how whatever develops here needs to relate to it. I am really interested in the analogy you are making between financial services regulation and what might be possible as far as digital services are concerned. Could I start with you, Sally?

Sally Sfeir-Tait: Technology amplifies the international aspect of business. In today’s world, every business is ultimately international. In financial services, money is cross-border; it moves freely. The regulation around data sharing hinders the sharing of information about fraud. Both are necessary: it is necessary to protect the data, but also to achieve the purpose of international data sharing. To have something to compare against, financial services regulation starts out through standard-setters, which are international bodies on which all the regulators sit. Best practice is developed and then implemented across national legislation with some differences.

In the technology space, this is already happening in fintech. Again, this is all my perspective. There is a race to see which economy is going to attract fintech start-ups before others, because that generates GVA. But they still abide by the international standards of regulation. There is a level playing field and it is that level playing field that we need to be talking about. In the absence of a regulator, we do not have a single voice on those matters.

The Lord Bishop of Worcester: A moment ago you said, “Do not go the way of Europe”. That shows there will be tensions internationally as to what is appropriate.

Sally Sfeir-Tait: Certainly, yes.

The Lord Bishop of Worcester: How possible is it to get through those in a constructive manner, as generally seems to be the case with financial services?

Sally Sfeir-Tait: It is through discussions and through international bodies. The EU is also looking to regulate AI ethics ahead of everybody else. It usually leans in that direction. There are positives and negatives. Then it becomes a matter of negotiation and discussion. You find that the forums usually include regulators that talk to each other. You have to have the discussion not at a completely macro level: in the same way as with horizon scanning, you cannot stay very general; you have to go sectoral. At least from my perspective, if you want a constructive conversation it will probably be the same thing; otherwise, you start talking about fishing rights and all of that in a very general way, which does not really get you to an agreement.

The Lord Bishop of Worcester: As far as digital services are concerned, the whole area is in its infancy as compared with financial services.

Sally Sfeir-Tait: Yes, very much so. That is the biggest challenge for the next 10 years. It is in its infancy. We take corporations for granted, but that had to be created legally. We had to create a legal personality for a corporation or a company. That had to happen at one point. In legal circles, we are talking about algorithms having a legal personality so they can have a balance sheet, et cetera. We are really in the infancy of how we regulate and deal with technology in terms of our legal infrastructure.

Rachel Coldicutt: I would come from a slightly different place. It is worth differentiating between the methods and it is absolutely worth joining up in terms of methods. As Sally was saying, this is very hard and very new. The more collaboration there is, the more likely it is that effective methods will be developed.

One reason that, in the UK, we are in a bit of a pickle generally with all of this is that we have inherited many communications technologies from the US, which has different standards and different approaches to the freedom of speech. There has been a certain degree of equivocation over time in terms of what might be regarded as modern and innovative, which we need to take on, and what is potentially, at a legislative level, not appropriate, particularly in the context of freedom of speech.

Collaboration is great, but we have a harder job to do, which is about clearly expressing political will about what it is that we would like to achieve. If that does not happen, everything else is really just running around.

Sally Sfeir-Tait: Speaking about the different roles that a regulator would play versus a Government, you will see that tech companies are the biggest companies in the world. They do not pay enough tax in Europe. When the Government are the ones negotiating, that comes into play. When regulators negotiate around the ethics of an industry, it is the ethics that they talk about and those are the standards that they go into. Again, it is a governance point. Who represents us in those discussions is very important.

Lord Griffiths of Burry Port: This is a bit off-piste. It goes to the opening remark about looking forward and not looking back. I have a headful of the stuff that Baroness Kidron has been putting into my head ahead of the Private Member’s Bill that she is bringing before us on Friday. She says, when looking at technology in general, that one of the most important things is to ask the question: how has this concern not figured in all the legislation we have done thus far?

I am talking, of course, about age accuracy. Given both the evolution of technology and the need to regulate what is emerging, is there a mechanism or a way of asking, “What is not here? What have we missed?” All the legislation on the statute book that affects every aspect of our lives in society is missing the element of age accuracy. Baroness Kidron is trying to get this into our thinking going forward. Is that not a case where the past is informative as we look at how we do things now and in the future?

Rachel Coldicutt: What is interesting here is as much about who is a part of scrutiny. I do not know that it is so much about looking at what has happened in the past. A lot of it is about understanding what is happening now. A lot of effort could be spent on mending theoretical examples from the past, whereas there may instead be different democratic mechanisms to make it easier for society to be a part of scrutinising legislation and policy and holding them to account.

I say that particularly in the current context and that of the last couple of years, in which the policy calendar is very unpredictable. There can be huge clusters of things that come out together, which means it is very easy for those who are outside of Parliament to miss things that are legitimately shared for consultation. Much as I hate to say it, I do not know that I agree with Baroness Kidron.

The Chair: That would be unusual.

Sally Sfeir-Tait: You are talking about the process by which we observe society and then we legislate. That is the normal process and of course you miss things, because that is just how it happens. That is an argument towards having a central regulator for all things digital. Again, forgive me, but it is my background, which is financial services. If you look at the powers of the financial services regulators, they are very broad. They are to protect the consumer. Therefore, they have to develop how they do that. That would be a mechanism for resolving that situation.

Baroness Featherstone: I might have got the wrong end of the stick. It would not be the first time. Rachel, I think at the beginning you said that, really, the all-powerful thing was the business model. Would you want that to be regulated and in what way?

Rachel Coldicutt: At the moment, many of the harms that we see related to business models can be attributed to mergers and acquisitions. There is a monopoly element, absolutely. If we think about algorithmic technologies and the way they are currently rolled out in universities, for instance, for surveillance, there is a question. Is that business model appropriate? Is it ethical? Is it up to the individual purchaser or procurer to make that choice, or do there need to be protections elsewhere?

Baroness Featherstone: I think I am agreeing with you. Carry on.

Rachel Coldicutt: It is a harder lever. It involves a different kind of scrutiny. It involves a different kind of expertise. It is a bit less easy to generate headlines in the papers.

Baroness Featherstone: I know, but in a free-for-all that allows everyone to do what they want the drivers are not necessarily a public good.

Rachel Coldicutt: No, absolutely.

The Chair: I am afraid we need to leave it there. Rachel Coldicutt and Sally Sfeir-Tait, thank you both very much indeed for your evidence. It is very useful and you have given us a lot to think about. You have given us some new areas of thinking as well that we will need to explore further with other witnesses. We really appreciate the time you have given us this afternoon. Thank you.