
 

Select Committee on Communications and Digital

Corrected oral evidence: Freedom of expression online

Tuesday 13 April 2021

4 pm

 


Members present: Lord Gilbert of Panteg (The Chair); Baroness Bull; Baroness Buscombe; Viscount Colville of Culross; Baroness Featherstone; Baroness Grender; Lord Griffiths of Burry Port; Lord Lipsey; Baroness Rebuck; Lord Stevenson of Balmacara; Lord Vaizey of Didcot; The Lord Bishop of Worcester.

Evidence Session No. 22              Virtual Proceeding              Questions 182 - 188

 

Witnesses

I: Professor Lorna Woods, Professor of Internet Law, School of Law, University of Essex; Professor Andrew Murray, Professor of Law, Department of Law, London School of Economics and Political Science.

 

USE OF THE TRANSCRIPT

This is a corrected transcript of evidence taken in public and webcast on www.parliamentlive.tv.

 



 

Examination of witnesses

Professor Lorna Woods and Professor Andrew Murray.

Q182         The Chair: Our next witnesses today are Professor Lorna Woods and Professor Andrew Murray. We are going to focus on the Online Safety Bill, which is expected before Parliament in the autumn, and to which the committee will be giving consideration.

We have two very good witnesses in that respect. Professor Woods was an architect of, and was involved in drafting, the Government’s proposals. Professor Murray has been the committee’s specialist adviser in the past, including on our report, Regulating in a Digital World. Thank you both very much indeed for joining us. The session is being broadcast online, and a transcript will be produced.

Professor Woods and Professor Murray, may I ask you briefly to add any words of introduction that you would like to, and to give a brief overview of your perspective on the Online Safety Bill, before we move on to questions from members of the committee? Let us start with Professor Woods.

Professor Lorna Woods: Thank you for inviting me to give evidence today. You mentioned my role with the Carnegie UK Trust in proposing a statutory duty of care and the emphasis on regulating the systems, rather than the content or focusing on individual items of content on platforms.

To the extent that the Government’s full response takes that approach, I obviously think that it is a good thing. It plays into ideas that I think Baroness Grender mentioned in the previous session about system design and the like, and the impact that that can have on content, content flows and so on.

It is difficult to say precisely what I feel about the Online Safety Bill. Obviously, we have not seen the Bill, and I have a horrible feeling that there is going to be devil in the detail. In particular, I have questions about how the duty itself will be structured—there seem to be a number of elements to trigger that threshold—and about how the sub-elements within a duty will relate to each other. I am aware that some of the questions that we will come on to will probably deal with this. There are also questions about users and non-users, about criminal content versus harmful but legal content and, I suppose, about gaps that may or may not be covered by the scope of the Bill, so there are questions there.

The Chair: Thank you, Professor Woods. You are right: we want to delve into quite a lot of those issues. Professor Murray, welcome—it is nice to see you again.

Professor Andrew Murray: Thank you. It is nice to be back—virtually. For those joining, I am professor of law at the London School of Economics, with 20-plus years’ experience of internet regulation and governance.

My initial thoughts, or my starting point, on the Online Safety Bill—which, as Professor Woods has noted, we do not have the detail of yet; we have the Government’s response and the prior White Paper—are similar to those of Professor Woods. The idea of regulating structurally, rather than trying to knock down every individual mole that pops its head up in that global game of content whack-a-mole, is a very good one, and the idea of requiring a structural response from platforms, as was originally proposed in the Carnegie Trust paper that Professor Woods was involved in, is therefore a strong one.

My worry, which is much the same as the one Professor Woods was suggesting but being slightly coy about, is about exactly how this is going to be implemented when we get the Bill and then when we also get the full codes of practice from Ofcom. I have a worry that, at the moment, the Bill is likely to represent a bias and may attack the wrong type of content-based harms. This is my concern.

I have a concern that the data does not support the claim that there is a greater risk of harm online than there is offline in relation to all the areas that are in scope in the White Paper. One, for instance, is bullying, which is of course very concerning. In their response to the consultation, the Government note that 23% of 12 to 15 year-olds have experienced or seen bullying, which sounds very worrying and sounds like something we need to do something about. However, in the same year, the Children’s Commissioner reported that most bullying occurred in schools, that 62% of bullying was by a classmate, with 37% by someone else at school, and that, in fact, cyberbullying was the least common form of bullying that was reported to them.

This is my general concern: that we are being influenced to believe that there are particular harms online. The data in the White Paper and in the response suggests that these harms are more pressing, when in fact there are equally pressing concerns offline.

The Government response also refers to online media and the distribution of misleading information about Covid-19 and 5G, where the Government say: “Social media has been the biggest source of false or misleading information about 5G technologies”. That is a question of scale, not scope, and I would expect that because of the scale of social media. Of course they will be the biggest source of misleading content; they are the biggest source of accurate content at the same time, because of their scale. This was reported in a paper in the Lancet in January this year, where it was noted that “accurate and reliable information”, supplied mostly by government through social media platforms, was essential in tackling misinformation.

I have a worry that the idea, which is a very good one, may lead us into a lot of effort and time focusing on issues without the evidence to support that they are the most urgent issues that we should be looking at.

The Chair: That is very interesting. There are quite a lot of knotty issues there for us to start exploring. Let us start with one of the knottiest: legal but harmful.

Q183         The Lord Bishop of Worcester: Thank you very much, both of you, for being with us and for those helpful introductions. Can we drill down into one of the concerns you mentioned, Professor Woods—legal but harmful—where the Government defined harm as content that gives rise to a “reasonably foreseeable risk of a significant adverse physical or psychological impact on individuals”? A number of our witnesses have expressed concern at the inclusion of legal but harmful content for a number of reasons, one of which is that, “If the definition of harmful content is expanded, it is more likely that the volume of removed content will go up” and legitimate content will be removed.

May I ask you, first, about your general thoughts about the inclusion of this and how it is done? Perhaps as you mentioned it first, you could answer first, Professor Woods, if you would.

Professor Lorna Woods: I would perhaps take one step back. I have always expressed a concern about seeking to define or delimit the scope of the platform’s responsibility by legal categorisations of content. For the speakers, we would say that is entirely appropriate, because the various rules have been considered with regard to the culpability of the speaker, but they have not necessarily been considered so much with regard to the harm to the victim or to the role of the platform.

One way of explaining this—I have used this quite a bit, so apologies if you have heard this before—and the model that Carnegie UK Trust looked to when coming up with this idea, was the Health and Safety at Work etc. Act. That is about a risk assessment. When you do a risk assessment, you do not think, “How do I stop people getting broken arms?” You look at it the other way round and say, “Oh, that floorboard at the top of the stairs is sticking up. Somebody might trip and really hurt themselves”. You are looking at the features of the environment and at how that adds to or creates the conditions for harm arising. You have a broad sense of what the harm is, but you are not getting down into that detailed categorisation of whether something is one side of the line or another with regard to whether it should be seen as criminal or whether it should be seen as harmful but legal.

The other problem that I have, and I disagree somewhat with Professor Murray on this, is that while we may divide the world into criminal, which implies that as a state we have taken action against it, and harmful but legal, which implies that there is no action, there is actually lots of regulation of content that is not criminal. We have administrative rules and we have advertising rules, for example. That applies offline, so why is it entirely or automatically out of scope online? Equally, we have rights for individuals to protect civil issues: defamation and privacy. Whether they should be in scope or not, I do not think the answer is through asking whether they relate to criminal offences or not.

I have a problem with trying to define the scope by reference to categories that were not designed for that purpose. I think I am sort of touching on your question but not answering it, because I do not like the starting point.

The Lord Bishop of Worcester: Okay. Thank you. You have invited Professor Murray to disagree. Before he does so, may I just press you? You have adumbrated the difficulties but, as you have said, not suggested a way forward. There are those who suggest taking an audience-focused, rather than speaker-focused, approach. Would that be a way through, or are there other ways through that you would recommend?

Professor Lorna Woods: When we first looked at this, we looked at the sorts of harms that existing content regulation sought to deal with and to work from those. Some of those were harms to under-18s, harms arising from hate speech—those sorts of things. I have to confess that I am not sure what the label “audience focused” means, but it sounds to me to be similar to what we were thinking about.

The Lord Bishop of Worcester: It has been suggested that the regulator could impose a duty to allow users to customise what they are shown to avoid certain harms. That is one way.

Professor Lorna Woods: I certainly think it will be good if users have more control over their environment. Certainly for lesser harms or less harmful content, that is a very good way to go.

I suppose this picks up on another point that I would like to make, which is that we tend to think about takedown. Certainly, if you are just thinking about takedown as a solution to the problems, a very broad scope is very worrying in terms of the sorts of content and remit potentially involved; but if you recognise that there is a whole range of other solutions coming out of the design and the tools that are made available, it is not so much about the scope of the regime as about the proportionality of the response. You might say that, for lesser harms, it is more about allowing customisation and ensuring information flows than taking stuff down.

In a way, that is also a way of mitigating concerns about freedom of speech: you are allowing speakers to continue, but you are also allowing people to protect themselves and to exercise their own choices about what they choose to engage with in terms of speech.

The Lord Bishop of Worcester: Thank you. Professor Murray, this is your opportunity to disagree.

Professor Andrew Murray: Thank you very much. I am going to take the opportunity to disagree, but perhaps not for the reasons that Professor Woods might have anticipated. I do not think that the issue is about legal or illegal or about criminality in any sense. My concern is about the locus of definitions and the locus of authority. This is where we have a major issue with this muddy area of lawful but harmful.

Speaking first as a lawyer—as a black and white lawyer—something is either lawful or it is unlawful. If it is unlawful, we as a society have taken steps to control or regulate that. If it is lawful, at least in English common law the assumption is that you are allowed to do it unless somebody tells you otherwise. We are in danger here of creating a bit of a muddy in-between, where we are saying that this is lawful but it is also harmful. It strikes me slightly that this is Westminster passing the buck a little bit and saying, “We don’t want to take responsibility for actually passing primary legislation to define this harmful content, this harmful activity, as something that’s unlawful”.

When we go back to Professor Woods discussing areas such as defamation or privacy, I am very happy for those areas to be covered. We have a Defamation Act, we have a long history of the courts defining what is defamatory, we have the Data Protection Act, and we have a history, at least over the past 20 years, of discussing privacy in the courts.

If you asked me, as a lawyer, to define coercive behaviour, intimidation or disinformation, I would find that much more difficult. What worries me is that if, as Lorna Woods said, the devil is in the detail, and if Ofcom, empowered by the Bill, is required to set codes of practice for platforms to define what is in scope, the locus of authority here will be away from publicly accountable Members of Parliament, and the locus of accountability will not naturally be within the scope of the courts and proper processes, although there would no doubt be judicial review.

My other concern, to go back to Professor Woods’s loose floorboard—and this debate has been had a lot—is that it is something to do with the level of scope involved. A loose floorboard is harmful: we can all agree on that. When you trip, the likelihood is that you will be hurt. The question is to what degree you will be hurt. If you are lucky, you might only have one or two bruises. If you are unlucky, you might break an arm or worse. It is therefore quite right that a duty of care applies to people to ensure that that risk is mitigated as far as possible.

Speech is different. I cannot say that my speech is harmful or not harmful. Speech is something that is subjectively given and subjectively received. Gravity is not subjective. When I fall, I will be hurt. This is a very important distinction. As Lord Judge observed in the famous Paul Chambers case, the question of how a message is received cannot be the distinction of the intention of the sender.

I am really worried that we will get codes of practice designed to help regulators and platforms find their way through highly polarised debates, where people will clearly feel strongly on one side or the other. I am thinking here of things like JK Rowling’s series of tweets which she sent in June last year on trans persons and trans rights and women’s rights. This is highly polarised, and the definition that we are going to have of “harmful” would clearly cover this, because some recipients of that were very clearly harmed by it, yet her essay on this was also shortlisted for the Russell prize.

We have to be very careful about defining harms in a way that has not been well thought out by Parliament, by courts and by judges, and then asking platforms, which have a commercial interest, to try to give effect to this through some form of application of a code of practice. I am afraid—Professor Woods knows this—that I do not agree with her that speech is a tripping hazard. I do not trip over speech—I value speech—and I do not think that we can treat it in the same manner as we do a loose floorboard.

The Lord Bishop of Worcester: That is very helpful and very clear. I cannot help but note that the first approach you took was exactly that taken by Google to the committee. I quote: “If a government believes that a category of content is sufficiently harmful, the government may make that content illegal directly, through transparent, democratic processes, in a clear and proportionate manner”, which is pretty much what you are saying, I think.

Professor Andrew Murray: It is. I had not seen the Google evidence. I am not often in 100% agreement with Google, but on this I would be. Any interference with something as important as speech, which is one of the fundamental principles of our free and democratic society, has to be done fully in compliance with our democratic principles. For me, it would be mostly for primary legislation or for very closely circumscribed secondary legislation, and then for the courts to operate in this area. What we are kind of doing is semi-privatising this. There are good reasons to have regulators involved. They have speed and they have flexibility. The original proposal around the duty of care is, in its essence, a very good one; it is just the application of it, the lack of transparency and the locus of authority that concern me.

The Lord Bishop of Worcester: Thank you very much for that.

Q184         Lord Griffiths of Burry Port: I have to say that, after three years or more of intensive consideration of all these issues, it is a bit perplexing for a layperson such as me to hear such a welter of stuff that has not yet focused in a coherent and manageable sort of way.

I have been reading Professor Murray’s paper, which was written 15 months ago—15 months in this field is a long time—where, again and again, he says that “the Woods/Perrin model is flawed”. Here are the two protagonists sitting in front of us. I would like to think that, as a precondition for feeling that we are making progress on these issues—the Bill is about to be published, after all—the experts in the field, who have considered it in detail and have consulted widely, can offer simple-minded people like me a bit of clarity as we move forward.

My question is about the platforms. They are going to be required, in one way or another, to enforce whatever becomes the law. How confident are we that they have the capacity to do that? Taking stuff down is one clear way of doing it, acting expeditiously to reverse or take down what is considered to be illegal or even harmful. Who makes the final judgment about what is harmful? Professor Murray has just used a word that we have heard again and again in these discussions: subjective. Algorithms will not do it. Even judges might not get it right.

It is such a perplexing area, and you two are here to enlighten people like me with a beam of clarity on these issues. Which one of you would like to go in first to bat on that? Professor Lorna Woods, I heard you clarify things for me on a previous occasion: do it again, please.

Professor Lorna Woods: I am slightly nervous about the idea of private actors being seen as enforcers of the criminal law. I think that is for the police and the CPS. There is a difference, however, between saying that and saying that they should have systems in place to be able either to respond to complaints from users under their community standards or to co-operate with the police in relation to criminal law concerns and takedowns.

Part of the problem is that community standards and the criminal law tend to get blurred. As I understand it, the platforms prefer to operate, at least as regards complaints from their users, on the basis of their community standards, so that they actually have control over that definitional question of, “Is there a problem or not?” Whether they do it consistently or coherently is another question, but, in a sense, in terms of their community standards, what they say goes.

With the UK’s national law or that of each of the states affected by these platforms, the institutions of the state—the police and the courts—must have the final say. That comes back to some of the problems about defining what is reasonable for companies to do by reference to the law, because it is so complex, and I have a concern that, if we say that it is about the criminal law, enforcement gets stuck at that first stage, which is, “Are we in scope or not?”, so we never get beyond that.

As part of a systemic approach, it would be reasonable to have systems in place to respond to police inquiries or police or court orders about content so that, once things are clearly identified as contrary to the law, they get taken down.

There is a proviso there, and I have to confess that I have not sat down and thought this through, but there are obviously rules about criminal procedure and equality of arms, so you would need to be careful that these are not inadvertently undermined by allowing a quick route in—say, by the police—into the platforms on takedowns and the like.

I do not know whether that answers your question.

Lord Griffiths of Burry Port: It moves in the right direction. I will give you an answer later. Professor Murray?

Professor Andrew Murray: I fear that I am going to disappoint your Lordship by mostly agreeing with Professor Woods.

We first have to think about categories. There are probably three or perhaps four categories of content that are the challenge. There is the clearly unlawful or illegal content. That is relatively easy to deal with, and I will come back to it in a second. There is harmful but lawful content. Then there is content that may be harmful to specific groups, such as children. I am perhaps also thinking of my parents and people who are older and less familiar with this technology, who may be more easily taken in by a potential fraudster or suffer similar economic harm, which is outwith the scope of the Bill, I should note, and perhaps should have been there.

There are those three categories, and there is perhaps a fourth one of everything else. The first category is actually quite easy to deal with. If it is criminally illegal, as Professor Woods says it is a matter for the law enforcement agencies. There are processes through which the law enforcement agencies can inform platforms and can inform other hosts of the nature of the content and require it to be taken down immediately. If it is not criminally unlawful but civilly unlawful, as with defamation, there is a similar process, called notice and takedown, which can apply.

One practical thing that may be thought about by the Government is to restrict the application of article 15 of the e-commerce directive, which was never fully implemented in the UK and therefore, post-Brexit, is perhaps slightly up for grabs as to whether it remains part of our legal framework. This is the provision on general obligations to monitor, which says that platforms cannot be required generally to monitor the content that is on their platforms. That could possibly be rolled back by a parliamentary decision to do so.

The problem is that the Bill, to me, focuses mostly on this middle ground: lawful but harmful. This is the most difficult for platforms to deal with, because it goes back to that subjective issue. I will not go into the way platforms moderate, because I know that you have heard from Professor Klonick, who knows probably more than anyone else in the world about how platforms moderate. Platforms moderate by a mixture of human operatives and very simple AI systems. There is the famous white, black, attack model, which catches out chess players all the time on these platforms.

Even where there are human operatives, these are very much A/B systems: they have a playbook that they follow. Does it fall on this side or that side of our standards, our community standards? The problem with most of this content is that it does not fall neatly on one side or the other, and to ask private platforms to play referee seems to me not to be fair or reasonable on these platforms.

As I say, I am no supporter generally of these platforms, but one thing I do have sympathy with them on is that they deserve clarity in the law that they are being asked to apply, rather than being asked to follow a set of principles. I would suggest that we perhaps consider looking at article 15 and seeing whether a general obligation to monitor could be put in place. We could put a lot more money into supporting our law enforcement agencies, who need a lot more resource to deal with these online harms—the criminal ones—and we could put a lot more thought into designing a more A/B-style definition of what is and is not acceptable to us as a society, and then asking the platforms, with clear guidance, to implement what we want them to do through designs and processes where content is not criminal but may be harmful. That would be my practical model, if you will, if we were to take it forward.

Lord Griffiths of Burry Port: Thank you both very much indeed. That was most helpful.

The Chair: We are running a bit behind time, so we will crack on.

Q185         Baroness Buscombe: Thank you both. Of course, there is great pressure from the Government to publish this Bill, but the more we look into it, the more questions we seem to raise. This is all extremely helpful.

The Government have stated that the Online Safety Bill will contain robust protections for journalistic content. What form should these protections take? Can any regime from which journalists need to be protected be compatible with users’ freedom of expression? In a sense, how do we uphold media freedom here, and how would you define “journalistic content”? We are talking here about locus of definitions, and this is a key definition.

Professor Lorna Woods: On journalism and freedom of speech, I think there is a difference, if we can start off with the basics, between the role that the media, or at least journalism, are seen to play in society and that played by Joe Public. The long-standing jurisprudence from the European Court of Human Rights has emphasised the role of journalism in the functioning of democracy and the holding of those in power to account.

Part of what makes journalism so important is journalistic ethics. The court has, I suppose, pointed to an orientation towards truth telling. This is accepting that it will never be perfect and that people sometimes make mistakes: with the best will in the world, there are errors. It is things such as double-checking sources and giving people a chance to comment—all the sorts of things that are journalism 101 in the schools teaching these things. I suppose that it would be looking to those sorts of criteria to decide where somebody is just giving their own personal opinion and where we are looking at journalistic content, if you like.

I suppose the other practical point is that there are regulatory schemes in place for the media. I am using the word “regulation” broadly here. I am not just talking about Ofcom and the top down, but about IPSO and Impress.

Baroness Buscombe: Absolutely. It is a code, basically, which is key.

Professor Lorna Woods: Yes. I think that even the Guardian and the FT, which have not signed up to either of the two regulators, have systems and processes in place where they have complaints systems. Again, that is pointing back to their trying to inform accurately within the best scheme of things. It is a difficult line to draw, but looking to those sorts of things may help us to identify that.

Drawing on some of the things that the gentleman from the CMA said in the previous session, it is perhaps about regulating the process. In a way, this may be for platforms themselves to think about. They may each take a different approach, but they have thought about it and they have a system in place that is deemed adequate, so you are moving away from a direct regulatory bite on the media in that regard.

I would think that it was important to ensure that platforms were aware of the role of journalism and perhaps that they had expedited processes in place for them. That is saying not that journalism should be immune from the law but that, given the importance of a pluralistic media environment, if there are issues with downranking or even takedown, these questions or issues could be challenged and challenged swiftly—that, in one sense, this sort of information is more important.

I do not know whether I have answered your question.

Baroness Buscombe: It is quite difficult, because it is also about compatibility with users’ freedom of expression. It is quite a complex question, in a sense.

Professor Lorna Woods: I suppose there is a question about the right to receive information as well and how that plays in.

In terms of the case law and the Court of Human Rights, there are just not the cases about users’ rights. They have always talked about the audience as being, in a way, a passive recipient of what journalism chooses to kick out, and there has been much less consideration of a user’s right to access and whether we have a right to balanced media coverage or to a range of views and things like that. I think that the law is less clear there.

Baroness Buscombe: That is helpful. Andrew, would you like to add to that?

Professor Andrew Murray: Yes. There are quite a few things that you could say in addition to this. I preface my comments by saying that I am aware that I am in a virtual room of many people who have been journalists or who have worked in the media, so I am at risk of saying some things that may not fit with the media journalism model of things.

Baroness Buscombe: Do not worry—go for it—but be brief, if you could.

Professor Andrew Murray: The first thing to say about journalism is that we should treat journalists separately if they are a properly regulated professional sector. Professor Julia Black and I have written about this in another area, and we have said that, where groups seek to exempt themselves from general regulation, they must have robust self-regulation in place.

The first thing to say is that I am not certain, despite IPSO, Impress and others being there, that that is currently in place in the United Kingdom, so I am not sure whether journalists have the professional framework, as perhaps financial markets or even lawyers or doctors do, to be able to say that they should be given a special exemption from general regulation.

The second thing to say is that the Bill will exempt mainstream journalistic websites. That is clear from the White Paper and the Government’s response. It will not apply to the Guardian online or the Daily Mail online. What we are talking about here is where that content is then taken on to platforms in scope, so it is put on to Facebook or Twitter or elsewhere. It is this area in particular that the Bill is focusing on, where these robust protections will come from.

There should still be robust protections for good journalistic content on these platforms, because so many people get news and information from them. There must be a way to be able to tell good from bad. In this way, I would suggest something like a strong labelling system and an option system that allows people to prioritise, shall we say, news from mainstream news sites in their feeds, which would be very beneficial in robustly protecting journalistic output.

Journalists used to play a vital role in holding power to account. That was because they were the only people with that capacity. They were the only ones who had the presses and the broadcast towers. Today, we find more and more that it is citizens who hold power to account, through the use of smartphone cameras and social media. I do wonder whether we need robustly to protect journalists in a world where we are all playing a part in that role.

I suppose that my question, which I put rhetorically, but if anyone wants to answer it they should feel free, is: why should a journalist with no experience of online social platform regulation be robustly protected when their content goes on to Facebook or Twitter, while I, with my 25 years of experience and expertise in this area, writing a report or a story, will not be so robustly protected? It seems that we are protecting the label of journalist rather than the quality of the content. That is my worry. We need to define very clearly what type of content is journalistic content and how we get this robust protection.

Baroness Buscombe: That is really helpful. The rhetorical question in response is: how do you define a journalist? We could all call ourselves journalists overnight, frankly.

Thank you. I wish we had more time for that, but I think we have to move on. Thank you both.

The Chair: Sadly, we do. I, too, wish that we had more time for that discussion. We may come back to our witnesses at some other time. We still have a few more questions to go, and we are a bit pressed for time, so could we have reasonably brief questions and answers, and we may ask our witnesses to indulge us by submitting further evidence in writing if we run short of time? The next question comes from Baroness Bull.

Q186         Baroness Bull: I want to turn to your views on the Government’s proposals regarding the protection of children. You will know that the proposed framework sets out that the requirement to protect children will apply to all platforms, regardless of their size, but that all companies will have to assess the likelihood of children accessing their services, and then only the services that are likely to be accessed by children will have to take the additional steps to protect them.

The framework further blurs that already tricky territory. We have talked about legal but harmful, which points towards a further category of legal but harmful to children.

What is your view of those proposals? Do you think that compliance will likely mean the use of age verification technologies? If so, what practical problems of implementation do you foresee?

Professor Lorna Woods: I am slightly bothered about the way in which this has been structured. I think we can see it particularly with the rules relating to children, but I have found, when reading the government response, that it is quite an ornate structure, with different subcategories of types of content or types of audience.

I have been wondering whether it all stacks up in practice and how we monitor boundaries. I was thinking that we could distinguish between the acts or mitigation features that platforms actually have to put in place and the process going on behind this to identify which things are in scope in the first place. I am rather worried that, if that process of identifying whether we have children or whether we have harmful but legal content is limited to in-scope content, we end up with the rather circular position that we never look outside what is already defined to be inside, so we cannot check whether the boundaries are right.

I do not know whether that will be sorted out in the drafting of the Bill, but perhaps there should be a general risk assessment, so that you can find out where these categories lie. Otherwise, there is a practical problem in understanding who is subject to what.

I also wonder about the requirement that Professor Murray mentioned about types of harm—physical and psychological harm. These apply generally, so presumably some content that is harmful to children but either causes the wrong sort of harm or harm that is not significant would fall outside the duty of care. I do not know whether that is okay or not, but it should be thought about when we are talking about what the scope actually is.

There is another issue, which is that the full government response distinguishes between users and non-users of a particular service. This makes sense on one level, but it means that platforms have a lower duty to those who are non-users. I just wonder how the system works, if we hypothesise a platform that is used by paedophiles to share information or co-ordinate or do background work, but it is not used by children. Is that in scope? We are talking about adults, and I do not know whether all those sorts of discussions would be caught by the criminal law at the moment, so arguably it could be outside scope. Do these platforms have a duty to the children, especially if the content is not hosted on that platform?

Baroness Bull: That is a very interesting question.

Professor Lorna Woods: It just shows the complexity of it all.

Baroness Bull: I am conscious that we are very tight for time.

Professor Lorna Woods: I should let Professor Murray speak.

Baroness Bull: I am sorry that we must pass on, but I am aware of the clock.

Professor Andrew Murray: I will keep this very short, because I am conscious of the time as well.

Two things can be said very quickly. First, it is clear that the Government do not intend this to reintroduce age verification tools as a blanket approach. They have said as much in their response, and a similar statement has been made in the House by the Minister that this is not the intent. Effectively, part of the Digital Economy Act will remain moribund, for want of a better term.

I do not think it will lead to a requirement for age verification. What it will lead to is age-appropriate design, which is very positive and something that we already have through the age-appropriate design code of the ICO. The age-appropriate design should prioritise the interests and values of the child and their parents through designs that allow parents to set appropriate standards for children for access, timing and those kinds of things. I know that kids get around them, but we can give parents all the tools that they can possibly have.

It is also very important that we do not imagine, somehow or other, that children need to be wrapped in cotton wool at all points. We have a legal duty under the Convention on the Rights of the Child to allow children access to mass media as they reach the age and stage to be able to assimilate it, and to improve and support the development of children.

The devil is in the still-to-be-published detail, but I hope this will lead to a lot more tools to assist children to transition from childhood to adulthood in the very complex society that we have, through a mixture, basically, of education and empowering parents, in partnership with guidance from Ofcom and tools designed by the platforms and the ICO. That is very important. We must not turn these platforms into areas that are safe only for children, because that harms their role as a place for adults to have open and free discussion. Frankly, it would also restrict the ability of the child to develop into a fully digital 21st-century adult.

That is my view on this couple of things.

Baroness Bull: Again, we could go on discussing that for hours, but I will need to hand back to the Chair. Thank you.

Q187         Lord Vaizey of Didcot: We could have a lengthy discussion about my children; they are pretty terrifying.

I want to switch the focus. One of the concerns about any kind of regulation is that you provide help for the big platforms, weirdly, because they can adapt to regulation in a way that smaller tech companies cannot. One of the side-effects of the GDPR, for example, has been that a lot of sites that you could access in the UK and Europe you can no longer access as they are based in the US, because people have simply taken the path of least resistance and blocked access because they cannot afford to or do not want to comply. Do you think that is a risk as we introduce this kind of platform regulation in the UK?

Professor Andrew Murray: I think it is a risk. When you introduce any form of new regulation, in particular regulation that is more principles based and less directive, shall we say, and regulation that is more blanket in form, as this is, as you say, the larger companies are much more accomplished at being able to deal with it. They have the resources to be able to do that.

There are two concerns. One is the GDPR-style concern, where non-UK based small-to-medium-sized enterprises just decide that the UK is not the place to do business and that the cost of compliance outdoes the benefits of doing business here, and we end up seeing some form of GDPR-style blocks, which say, “This is not available in your jurisdiction”.

The other thing that we are in danger of seeing, which the witnesses mentioned in the previous session, is that small to medium-sized enterprises may contract out some elements of compliance to even larger companies such as Facebook, Google and Twitter, which will come along and say, “We’ll provide you with moderation as a service. In the same way that we do software as a service and platforms as a service, we’ll do moderation as a service and we’ll deal with your moderation”. We all then end up with the Facebook universe or the Google universe, as they moderate for a large number of SMEs as well, because they have the capacity to do it.

These are general concerns. To put them into context, when we got Part 3 of the Digital Economy Act 2017 on age verification, the first company to put their head above the parapet and say, “We’ll design age verification for you”, was MindGeek, which is the world’s largest supplier of pornographic content. It said, “Well, it’s to our benefit to be able to do this”. It was then offering to license that. A poacher-turned-gatekeeper scenario could very easily arise in these situations.

There are two concerns: one is geo-blocking; the second is large poachers-turned-gatekeepers for our SMEs.

Lord Vaizey of Didcot: I was lobbied about age verification by the porn industry when I was a Minister, to encourage me to bring it in, but I did not take any meetings.

Weirdly, I am quite in favour of moderation as a service, and I think it is quite interesting that it could spawn a new industry, but I was sort of hoping that it would be the start-ups. You are right, however; it would probably be Facebook. Professor Woods, what concerns do you share about that?

Professor Lorna Woods: I suppose there is a risk, but I would point out that the world is changing and that we are not the only jurisdiction looking at this. There is the Digital Services Act coming through from Europe, and even the States is considering looking at Section 230, which is the—

Lord Vaizey of Didcot: Canada is quite advanced as well, is it not?

Professor Lorna Woods: Yes. I think that Australia is also looking at these things, and there are rules in India, too.

The 1990s environment, where anything goes, is possibly not a valid counterfactual any more. I think there is a risk, but kowtowing to the risk is also outsourcing legislative decisions to external actors. If we do not take action that we think is in the public interest because of this, are we delegating legislative responsibility?

I suspect that I am probably not as negative as Professor Murray on this. I would also say that it might be a good thing to create a space for innovation—perhaps your idea of moderation as a service or safety tech more generally. There is a potential that things might change.

The Chair: We really are running out of time. We have time for a quick final question from Lord Lipsey. Then, I am afraid, we will have to wrap up.

Q188         Lord Lipsey: Before this meeting, I would have taken the view that the penalties should be absolutely vicious on firms that were not doing their job on all this. I was a “Cut their goolies off” man. Having heard this, I would say that the doubts about the offences and about getting it right with “harmful” have made me much less bullish, because of the danger of what are essentially miscarriages of justice hitting people who have not really behaved that badly after all. I would like to hear both witnesses’ one-sentence views on that.

Professor Lorna Woods: I found the question of penalties quite difficult. Whatever the final penalty is, there should be a staged process so that you cannot just jump in with your boots on and start imposing very heavy penalties. We already have models for this. Ofcom has various processes in place in the broadcasting context, where it informs before suggesting that people stop things, before moving on to fines—perhaps that sort of model, I would say.

Professor Andrew Murray: I will just say two things. First, penalties should fit the harm. Where civil harms occur, normally in tort we put people back in the position they would have been in. There is no penal sanction for it. With criminal law, it is a penal sanction. We have something in between, which is civil penalty orders, which are used by Ofcom, the ICO and others, and that is what is in scope here.

We should think about what we actually want to achieve. I presume that we want to achieve compliance, and I suspect that the model of a 10% of global turnover maximum penalty will not actually worry these companies, because they will not imagine that it will ever be used, in the same way as GDPR penalties are not used.

I think that the better way to do penalties for these companies would be to give them a perverse incentive: if they continue to act badly, they will incur a series of small penalties, or death by a thousand cuts, if you will, which gives them an opportunity to consider their action.

There is one way to do that, and I can send this to the committee afterwards. I have a colleague, Dr Martin Husovec, who has talked about the idea of having a kind of penalty system whereby, if a company takes down a post and the person objects to that being taken down, they can appeal to an external third party, which then, if they agree with the person who made the post, imposes a small penalty on the platform. Those penalties accrue over time. He has modelled it to show that this is more effective at changing behaviour than the idea of threatening them with a penalty of 10% of global turnover in one big shot. That would be what I would suggest.

The Chair: Gosh—you have opened up yet another interesting subject that we could talk about for a long time. That is a very interesting issue in itself, and it is linked to the question of whether a regulator that finds a company not to be compliant should be required to specify what that company needs to do to be compliant, or whether the company should go away, come back and then be handed a fine because they have not become compliant.

We have just started yet again on another issue that we could discuss at length. We would welcome any further thoughts that you have in writing. If we cut you short on anything where you felt you wanted to elaborate or where we did not get into the meat of any issues that we were discussing, please put something in writing to us or send us some reading references, if you think that would be useful.

As always, Professor Murray and Professor Woods, we are very grateful to you for your time. You are very helpful to the committee, and this has been a typically interesting session today. Thank you both very much.