 

Joint Committee on the Draft Online Safety Bill 

Corrected oral evidence: Consideration of government's draft Online Safety Bill

Thursday 23 September 2021

10 am

 

Watch the meeting: https://parliamentlive.tv/event/index/d3916789-39c6-4891-9fbb-c5adbcc15a5c

 

Members present: Damian Collins MP (The Chair); Debbie Abrahams MP; Lord Black of Brentwood; Lord Gilbert of Panteg; Baroness Kidron; Darren Jones MP; Lord Knight of Weymouth; John Nicolson MP; Dean Russell MP; Lord Stevenson of Balmacara; Suzanne Webb MP.

Evidence Session No. 3              Heard in Public              Questions 69 - 91

 

Witnesses

I: William Perrin OBE, Trustee, Carnegie UK Trust; Dr Edina Harbinja, Senior Lecturer in Media and Privacy Law at Aston University; Professor Sonia Livingstone, LSE Department of Media and Communications; Professor Clare McGlynn, Professor of Law, University of Durham.

II: Jimmy Wales, Founder, Wikipedia.

III: Elizabeth Denham CBE, Information Commissioner; Stephen Bonner, Executive Director, Regulatory Futures & Innovation, Information Commissioner’s Office.

 

USE OF THE TRANSCRIPT

  1. This is an uncorrected transcript of evidence taken in public and webcast on www.parliamentlive.tv.
  2. Any public use of, or reference to, the contents should make clear that neither Members nor witnesses have had the opportunity to correct the record. If in doubt as to the propriety of using the transcript, please contact the Clerk of the Committee.
  3. Members and witnesses are asked to send corrections to the Clerk of the Committee within 14 days of receipt.

 



Examination of witnesses

William Perrin, Dr Edina Harbinja, Professor Sonia Livingstone and Professor Clare McGlynn.

Q69            The Chair: Good morning, and welcome to this third public evidence session of the Joint Select Committee on the Draft Online Safety Bill. Welcome to our witnesses, both in the room and participating remotely.

Perhaps I could start by aiming a question at each of our witnesses, starting with Clare McGlynn and working along the panel. Do you think the Bill as set out has an effective system for regulating and moderating content online on social media platforms? Do you think it has the powers it needs to do that effectively?

Professor Clare McGlynn: I think the Bill is a good start, but it needs to be revised to be more victim focused, by which I mean we need to give some direct powers to the regulator to order the take-down of harmful content. That would give some immediate, meaningful and direct support to victims, because at the moment it is basically the Bill saying to victims, “You need to sit back, watch and wait, and hope that in a few years’ time, when the codes of practice are in place, something will change”.

I think we need in the Bill to strengthen the criminal law around online abuse because there are gaps—gaping holes, frankly—in that law at the moment. You have heard recommendations about epilepsy, I know. I would highlight things such as cyberflashing—sending unsolicited penis images to someone without consent. There is much discussion about AI, but it is not a criminal offence to distribute deepfake porn and fake porn at the moment, so there are some gaps that could be filled in this Bill. That would then clarify what unlawful, illegal material is.

Finally, the Bill needs to strengthen the reference to the epidemic of online violence against women and girls. Her Majesty’s Constabulary, just over the last week, emphasised that this is an epidemic and needs to be taken as seriously as terrorism offences, so it needs to be priority illegal content, just like terrorism, and treated like that.

The Chair: Thank you.

Dr Edina Harbinja: I would like to thank the committee for the opportunity to talk about this important issue and vital Bill.

As regards your question, Chair, I am not sure that the Bill specifies that the aim is regulating content on the platforms. The overarching aim of the Bill, as I see it, is online safety, plus some other aims, such as protecting freedom of expression and privacy, as well as digital literacy. I am not sure that the aim you have mentioned is possible because the Bill proposes indirect mechanisms to regulate the platforms through risk assessments and terms of service and regulation by Ofcom. 

Clearly, that is not certain, so those are some of my objections. I have indicated them in a written response regarding the uncertainties, proportionality, and clarity in the Bill, the suitability of some of the provisions and concerns around not just freedom of expression but digital and human rights more generally. We can unpick that later if you wish. 

The Chair: Can I respond on one point right now? The question of whether to regulate or not would seem to be around enforcement, because you could say that the Bill creates certain standards and regulations around illegal content—things that Parliament has already made a criminal offence—and then other areas of content that we would legislate for through this Bill. Ultimately, it comes from Parliament and sets standards for the companies, but if you dispute its being a regulator, is it because you do not think it has—similar probably to Clare McGlynn—the regulatory enforcement powers that a regulator should have?

Dr Edina Harbinja: That is one reason, yes, but another is that the Bill is quite unclear as to the nature of the content, the mechanisms and some of the concerns that should not be left to the regulator or the Government to decide, but should be in the hands of Parliament, in both Houses, to decide. I think these concerns are quite significant.

The Chair: Thank you. William Perrin, the submission from Carnegie sets out a lot of areas where you request that clarity be brought to the Bill. Are those drafting issues around clarity, or do you think there are more fundamental questions about the powers of Ofcom?

William Perrin: I thank you and the committee for having me, and I also thank my colleagues at Carnegie. We have a team working on this and I hope I can do their work justice here today.

If we step back—a long way back—what DCMS civil servants and Ministers have done is produce quite a remarkable thing. They have produced, arguably, the first piece of national legislation in the world that properly grapples with social media companies in a broad sense by giving Ofcom powers to seek information, powers to disrupt businesses, and powers to require risk assessments in companies to force them to acknowledge the risks that many of us accept are there. Four or five years ago that was considered impossible. These companies were considered almost to be magical beings, beyond regulation, and indeed that was how companies liked to project their view of themselves to legislators and to civil society. It is tremendous. They have created a regime that engages these companies and drives them through a system of regulation, so that should be applauded. 

What we are seeing already, however, even in the opening exchanges here, are the gaps. Everybody sees the gaps, and we have only had the Bill for two or three months. Can you imagine how many gaps there will be when armies of lawyers from some of the world’s biggest companies have crawled over it for a couple of years? That is because the Government have not quite learned all the lessons, I think, from the long history in the UK of regulating big, broad problems in a fast-changing environment. We learned from the Factory Acts in the 19th century that 150 years of setting detailed rules did not work. Aberfan was possibly the culmination of the horrors of that regime because, in a fast-changing environment that covers a whole range of eventualities, people find a way around the rules, and new problems arise that are not met by the rules.

In the UK, we moved in two or three areas of public life to statutory duties of care to require companies, in a very broadly based sense, to take reasonable steps to prevent reasonably foreseeable harms occurring to certain categories of people. This was the work, of course, that we at Carnegie proposed two or three years ago, and the Government have taken some of that. I believe as well that much of the internet should be completely free for people to do what they want, and that is the joy of it, but there are areas where great harms are occurring, particularly in these large social platforms. In trying to specify and constrain the regime, the Government have created gaps and, in particular, complexity through the very act of constraining.

We have advocated, and will present detailed amendments in due course, that an overarching duty of care should be brought in, but the Government, and indeed Parliament, should be given a proper way of setting what their priorities are for Ofcom within that regime and having counterbalancing guarantees for important fundamental freedoms, such as speech, but also freedoms such as personal integrity and private life and so on. 

The Chair: Do you think there should be an overarching duty of care as well as the specific requirement set out in the Bill?

William Perrin: We do not know yet. I will not try to give a smart answer and say yes, so—

The Chair: A unique response in front of a Select Committee.

William Perrin: My colleague Professor Woods is doing some work at the moment, which we will share with the committee when it is complete, on a number of options for inserting an overarching duty of care. We think the 5Rights amendment is very promising, because the Ofcom Clause 61 risk assessment duty is almost there in its breadth. The trick then, if you want to, is to insert an overarching duty by ensuring that that is driven through into risk assessments, but at the moment the chain is broken. It does not quite drive through.

This is a complex Bill. We have all read lots of Bills, but sometimes this is baffling; you cannot really read it from one end to the other and work out what is going on. It is very tempting to do a simplistic thing and say that you just delete the bit in the middle, put in an overarching duty and then limit that overarching duty carefully, either by delimiting bits of it or by saying to Ofcom, “Here are your priorities”, and then that would lead to a simpler read.

The Chair: Thank you.

Professor Sonia Livingstone: Thank you so much for inviting me, and I am sorry I cannot be with you in person. There are two kinds of areas that I would like us to discuss today.

The first is the scope of the Bill and concerns about what is left out if one just focuses on search and user-to-user services. I know there has been a lot of discussion about pornography, but pornography on sites that do not host user-generated content is the No. 1 concern of children and indeed of many adults, according to research, so it is about including that and a host of other kinds of content that come under legal but harmful, which are unclear or currently not included. I am thinking of content around suicide, self-harm, pro-anorexia and so forth. Such content is harmful when aggregated and targeted by automated, algorithmic processing. Even though individual items of content might themselves not be, or often are not, harmful, it is about the way in which algorithms aggregate and target, and particularly target those who are vulnerable. Picking up on the earlier point made about victims, that requires much more attention.

As some of you may know, I played a role in drafting General Comment 25 for the UN Committee on the Rights of the Child and came to appreciate the importance of process points. One that I really want to highlight today is the question of risk assessments and the importance of strengthening risk assessments, making sure that they are evaluated according to minimum standards set by the regulator, and that risk assessments conducted by service providers are made public. I have more to say about how the risks addressed should be those that arise from evidence through the experience of victims, through cases, rather than, as I think at present, that which the service providers choose to include.

The Chair: On that final point, do you mean that, as new things come along that the regulator determines to be bad, it should request or require that the companies adjust their own risk assessments to take account of that?

Professor Sonia Livingstone: Absolutely. We see risks arise from changes in technology; they arise from changes in social norms and practices; and we see them coming up through evidence. I speak as a researcher, so to have a set check-list or tick-box approach to risk assessments is deeply problematic. It should be, yes, for the regulator to set the requirements against which service providers assess their risks.

Q70            Debbie Abrahams MP: Perhaps I could pick up on Mr Perrin’s comments just now about the duty of care. Why do you think it has not been included in the draft Bill? What are both the pros and the cons of having a statutory duty of care for providers?

William Perrin: It is hard to know why the Government have not gone for an overarching duty, and I can only guess, but I think it is because, as I hinted, they want to take care, quite rightly, not to overregulate the whole of the internet and the whole of social media, and I think that is a good objective. They have gone to the other end of the spectrum and said, “Here are two areas that we care about a great deal, illegality and harm to children”—everyone would agree with that—“and here is one we care about a bit less, harms to adults”, a very weak duty. They have put those three duties in, but they are duties towards particular types of content in the main, not the whole, so there is a huge infrastructure off the back of that to work out what those types of content are, which is complex, and what the risks are.

We think the benefit of an overarching duty is that it is very durable and very long term. If we go back to the Health and Safety at Work etc Act, that duty has been around since 1974. The duty with respect to safety on land was introduced in, I think, 1956 by Viscount Kilmuir—the great Nuremberg prosecutor—and that duty persists, but it does not have a regulator. We think there is durability in these duties, which is particularly important at this time of accelerated change and in this industry above all others.

In such a duty, one needs to have confidence in the regulator’s capability to determine what is harmful and keep up with research, as Professor Livingstone is doing, and to ensure that companies are staying in touch with what is harmful. A general sense of taking reasonable steps to prevent reasonably foreseeable harms is a powerful tool, especially against companies at this scale of operation, where we cannot always know from the outside the extent to which they perceive risks to their customers. Reporting in the Wall Street Journal recently has been very instructive on that. It also gives a great deal of legibility and simplicity, which reduces the regulatory burden, and I think it is a good principle in society that companies should be held accountable for harms that might arise from their operation towards customers. That is particularly the case in this area.

Debbie Abrahams MP: Thank you very much. Can I ask the other witnesses for their comments about that—the pros and cons?

Dr Edina Harbinja: If I may add to the three duties that William has kindly identified, there is also a duty in the Bill to have regard to the importance of freedom of expression and privacy. As for the overarching duty of care, I would disagree that it is the right approach, mostly for professional legal reasons.

I believe, as do many other colleagues who have written about it and have submitted written evidence, that the analogy between the health and safety laws, the common-law duty of care and the duties that we would like to impose on the platforms is inadequate. I agree that we need to regulate platforms and impose certain obligations and duties on them. I just disagree that we should draw analogies with concepts that have been long established in the law by the courts and statutes, because there might be implications from that.

The analogy between, say, tripping on a floor and speech on the internet is completely inadequate, so I disagree. I think the duties in the Bill should be re-phrased to read something like, “Duty to take steps, duty to protect, obligations on platforms and service providers”, et cetera, so as to avoid conflation and confusion with the duty of care.

Professor Clare McGlynn: There is value in having an overarching duty of care just because of its flexibility and being future-proofed. On the specifics, I defer to William’s more detailed technical knowledge, but, in principle, yes.

Professor Sonia Livingstone: In support of the single duty of care—I defer to others on the exact legal phrasing—I want to emphasise the way in which risks to victims do not occur under separate boxes. They interlink and cross platforms. One risk leads to another; they are associated, and some of them add up to collective harms as well as individual harms. I think the shopping list, as it were, of separate risks of harm does not capture the dependencies among them and perhaps impedes by-design, anticipatory solutions that build in risk and rights assessments at the level of a platform or service rather than harm by harm.

Debbie Abrahams MP: Mr Perrin, in your evidence you also talked about the need to have a better understanding of what is harmful, not so much the content side but the actual effect. Thinking about the cognitive effects on children, there is some evidence around the impact of the internet on the hardwiring of the brain, much in the same way as there is an impact on the hardwiring of the brain of children living in poverty. There is emerging evidence about the impact on cognitive processes in adults as well. Do you think that we are just scratching the surface in understanding what is harmful as an impact?

William Perrin: That is certainly the case, and points to the need for prioritisation for Ofcom, otherwise Ofcom will have to set its own priorities. There is a wide range of harms, but regulators in the UK—Ofcom and its predecessor bodies, the IBA and so on; you have excellent evidence submitted from the BBFC—have decades of experience of evaluating harm in the context of media. These platforms are big media distribution platforms that sell advertising against their success in distributing that media. They have some similarities at that structural level with regulation of broadcast, but the approach, given the scale, has to be looking at the systems and processes of the platforms. Are the systems and processes themselves generating harmful outcomes? If I google for suicide information that is not glorification of suicide and then the algorithm comes up with, “He’s interested in suicide. Let’s send him some more suicide information at the same time”, that is where the harm often lies rather than in the specific pieces of content.

We have regulators that are extremely well equipped to assess harm, and companies that are of course in the sentiment business. The social media platforms are in the sentiment business; it is what they do, and they should be acutely equipped to assess harm once they are told to do so.

Debbie Abrahams MP: I absolutely agree with what you said about the regulation and so on, but if we are looking at risk assessment processes and tools, they need to be comprehensively picking up on what the impacts are, do they not? If we are only scratching the surface about what we think are—

William Perrin: Ofcom’s Clause 61 risk assessment process underpins the whole regime. It is designed to be quite broad, but its application in the duties is narrowed to some extent. It is very important in the future of the Bill that Ofcom’s baseline risk assessment for the industry as a whole, and its composition of risk profiles, if that persists, is as broad as is humanly possible.

That is important, but if you go back to, for instance, the UN guiding principles on human rights in business, risk assessment is at the heart of that. It is at the heart of the OECD process on business impact on a wide range of issues, and it is well understood in industry as a tool for assessing risk. We are starting to see signs that risk assessments are going on inside the companies, as media stories are leaked out, and they need to be formalised and come under the clear direction of Ofcom, rather than the industry itself becoming what I might call wilfully blind to risks because they have a messianic mission to do good. You see this quite a lot in Silicon Valley companies: net, we do more good than we do harm. There is a sense that one can wilfully be blind to risks, and that needs to be addressed through a regulatory process.

Professor Clare McGlynn: Could I follow up on that?

Debbie Abrahams MP: Yes, of course.

Professor Clare McGlynn: This is where I would come back to the point about making sure that we identify specific types of harms, such as online harms against women and girls. The trauma from some of those harms is often not recognised, so possibly it is not usually covered in standard risk assessments because those harms are often minimised or trivialised. It is very important that in any of these detailed guidelines there is, as Sonia Livingstone talked about, interaction with victims groups to understand the particular harms and that we identify them as something that you need to consider in your risk assessment.

Debbie Abrahams MP: That is very helpful. Professor Livingstone?

Professor Sonia Livingstone: Yes. I will speak now as a psychologist by training and pick up your points about cognitive effects and the brain. I am sure the committee knows that there is a lively debate going on in the research community about the strength of evidence of various cognitive and mental harms. I am obliged to make the point that there is not enough research in this country and it is not generally brought together. Ofcom’s assessments of risk and harm are largely self-report based, rather than looking at the kinds of clinical and physiological or psychological evidence that you are referring to. We need that evidence base urgently.

A really crucial point for the Bill is that most of that evidence shows that the risk of harm is to those who are in some ways vulnerable already. In kind of Covid terms, [the risk is to] those who have underlying conditions rather than necessarily, as in the Bill, the person of ordinary sensibilities. In other words, multiple factors come together and online harm can be a trigger. It can exacerbate; it adds to other risk factors.

When we are talking about significant harm to those who are vulnerable, we need to see how the internet plays into those other factors. That is why I am very worried about the idea of the person or the child of ordinary sensibilities, because harm can be experienced by very significant numbers of people who are none the less a minority in the population. The analogy might be that most of us are able in fact to withstand Covid but those with underlying conditions are not. Such an argument means that we not only address the underlying conditions but that we also address the virus. That is where we need to see how the vulnerabilities, for many reasons in society, interact with online risks in ways that cause harm in significant ways.

Debbie Abrahams MP: That is very helpful. Thank you so much.

Dr Edina Harbinja: May I briefly come back? I agree that we need more data and research, and this is where platforms need to be more transparent; we need to make them more transparent and allow researchers, for example, to access the data. What the Bill does not include, and what I was going to suggest, is the independent audit that, for example, the Digital Services Act proposal includes, so I agree with that.

On the other hand, I would warn against the notion of the Secretary of State prioritising harmful content, which may have implications for freedom of speech. I worry that this may not satisfy the human rights tests for limiting free speech: that the limitation be clearly prescribed by law and, of course, proportionate, foreseeable and necessary in a democratic society.

Q71            Lord Knight of Weymouth: I have a couple of quick follow-ups from that questioning, first to William Perrin. If we were to have a more general duty of care, should that be something, as with some of the others that you referenced, that could be pursued through the civil courts, and should the Bill clarify that?

William Perrin: Carnegie’s approach to these regulatory proposals has always been to be pragmatic: will it work in practice? We rejected a civil right of action against the duties because we felt it would not lead to a good regulatory outcome; it favours people who have resources to sue, and the courts do not work terribly quickly and are rather overloaded at the moment. The Bill would not, as I understand it, remove people’s ability to sue for negligence after the event if duties had not been met. Indeed, that is the case, as I understand it, with the Health and Safety at Work etc Act where you cannot sue individually for a breach of the duty in the Act, but you can sue for negligence, and then a failure to meet that duty is a factor that will not help the employer in court, if I have understood it properly.

Lord Knight of Weymouth: Thank you, that is really helpful.

Professor Clare McGlynn: There is a halfway house between the two. The regulator in Australia is able to exercise a civil penalties regime, and proposals in Canada would set up a separate kind of tribunal to hear those sorts of complaints. I agree that just a general civil right that any of us could take to the ordinary courts would not be particularly effective, because you basically need the knowledge and the resources, but we could have a specific tribunal and/or give the regulator civil penalty powers as in other countries.

Lord Knight of Weymouth: Thank you. Dr Harbinja, in respect of the alternative approach that, as I understand it, you are advocating of relying more on the courts and the enforcement authorities, do you think they have the capacity to do what you want, going back to what Carnegie was saying about being practical?

Dr Edina Harbinja: No, I do not think they have the right capacities and there need to be investments in those capacities. I agree that an overarching statutory tort, for example, would not perhaps be an ideal solution to regulating platforms in this space. I agree with William broadly on that.

The Bill is unclear about redress and appeal by individual users. There are super-complaints by entities that, again, will be defined by the Secretary of State. That is another loophole in the Bill where I think the powers of the Secretary of State are too broad. There needs to be a mechanism for individual users to appeal and seek redress against decisions made by platforms under their terms of service. Independent redress does not have to come from Ofcom; it can be an independent body or authority, as some have suggested.

Lord Knight of Weymouth: Thank you. I want to come back on scope, but that is all.

The Chair: Darren Jones is joining us remotely.

Q72            Darren Jones MP: Thank you, Chair. My first question is to Mr Perrin, again a follow-up from your original submission. You made the point that government has not learned the lessons about legislating in a fast-moving environment. I am uncomfortable with where the drafting is, but I think what Ministers would say in response is that that is why they have given the Secretary of State powers to update priority content or to intervene with the regulator without bringing back more legislation to the House, indeed without parliamentary approval, or parliamentary debate I should say, in amending those things. What would be your answer to that?

William Perrin: Having worked in government myself for about half my career, I would always advise Ministers to resist temptations that befall them, and there is a very strong temptation here to go intervening in speech in response to political expediency, around the environment of elections, and that just does not feel right. As we have said in our evidence, we think it goes against broad international conventions on how you regulate media. Because, of course, very few countries have put in systems to regulate social media, we take the precedent of regulating broadcast, which is explicitly provided for in the European Convention on Human Rights. The regulation of broadcast and cinema is provided for, but the state is generally required to forbear from direct involvement in content-related decisions and direct involvement in process decisions in the media regulator in a broadcast environment.

I noticed in a masterpiece of subtle drafting in Ofcom’s submission that it very gently pointed out that interventions from the Secretary of State would undermine the legal basis for its taking some decisions and—I would say this, but Ofcom would not—that that would be because Secretary of State interventions would probably only ever be arbitrary, if a regulator was doing their job, almost by definition. One needs to keep a healthy distance. We have made a number of proposals, essentially, to strike a better balance between the Secretary of State’s powers and Parliament’s powers to set out priorities for Ofcom on the face of the Bill, which is a little inflexible, but then to provide a system for Ofcom to respond to changing circumstances, based on research and evidence, and come back to the Secretary of State for an SI based on due process, but without the Secretary of State initiating that, because I think there are too many temptations there for a Government that are best not put in front of them. 

Darren Jones MP: I thought that is what you would say and I am glad to have it on the record. Thank you.

Professor Livingstone, could I ask about the risk assessment comments you made earlier? It is slightly different because the regulatory and enforcement context is very different, but the modern slavery statements are something where companies have to think about their supply chains, they have to write them up and they have to publish them on their website, but it is widely recognised that they are hopeless and do not really achieve anything. Could you elaborate a bit on how we could best mandate the minimum requirements of a risk assessment? Does it need to be on the face of the Bill? Does it need to be in broader powers for the regulator that they can then decide for themselves, or does it need to be in another way?

Professor Sonia Livingstone: Those are complicated questions. I certainly want to argue for risk assessments, but, in a way, I completely agree with the other points made by colleagues here. [It should be] part of, as it were, a human rights risk assessment, a broader risk assessment that, rather than focusing on just harm, identifies risk in relation to the provision of particular services and in relation to the potential costs to other rights of certain forms of regulation or certain forms of safety provision, so that we assess risk in relation to civil rights and liberties, privacy requirements, equality and non-discrimination. I think it should be stated on the face of the Bill. It should be required of Ofcom and, clearly, those minimum standards are needed to ensure that risk assessments are appropriately inclusive of the risks and do not mandate action that tramples on other rights. There need to be minimum standards and there needs to be enforcement.

It is depressing to learn that the modern slavery legislation has not proved effective. It must be within the compass of government to find examples or cases where risk and human rights assessments are effective. But at the moment we can see very clearly that anyone can risk-assess current providers and find them wanting, so the risk assessment must be linked clearly to enforcement powers. In the midst of that is also the question of transparency, because it is hard for the regulator and, of course, the public to understand exactly what is going on.

William Perrin has already referred to the recent evidence in the Wall Street Journal that Instagram knew perfectly well the harm it was doing to vulnerable young girls. That is an interesting test case. The risk assessment by the regulator must have a mechanism for identifying risks that the service provider does not provide information about, or for ensuring transparency so that that can be addressed.

Darren Jones MP: I cannot see the faces of our other two witnesses. If you want to come in or add anything, please do, but, if not, do not worry.

Dr Edina Harbinja: I agree with William’s point about the Secretary of State’s powers and the worries around those powers, especially, for me, the powers to determine priority content, category 1 service providers, exempt services, and the entities who will represent users for super-complaints. I think a lot of that should be under parliamentary scrutiny, especially with regard to priority harmful content. We will probably come back to the notion of harmful but legal content, where I have quite strong views. Parliament surely needs to scrutinise what sort of content is lawful and acceptable and how the regulator regulates and decides what is priority content. The regulator is not a lawmaker and, under the human rights framework, it does not prescribe the law that limits free speech; Parliament does.

Darren Jones MP: Thank you.

Q73            John Nicolson MP: Thank you so much, witnesses, for joining us this morning. Professor McGlynn, can I start with you and return to the gaping holes in the legislation that you mentioned right at the start? One of those gaping holes appears to be cyberflashing. A lot of people tuning in and listening to these sessions will have no idea what cyberflashing is. Could you explain to us what it is, please?

Professor Clare McGlynn: We can think of cyberflashing as the online equivalent of physical flashing—a person in the street. It is the sending of unsolicited penis images to someone else without their consent. We have seen it on the rise in recent years because of the advance of technology with AirDrop, for example. You can be on public transport and, if your AirDrop is open, someone can send you an unsolicited penis image and it can come up on your phone without you knowing. It is also very prevalent in all social media, online apps, et cetera. Some of the data shows particularly young women being affected. The Ofsted review of sexual abuse in schools found a phenomenal number of young girls having to deal with being sent unsolicited penis images on a regular basis.

John Nicolson MP: We know that nearly half of women aged between 18 and 24 have received these images. Who are the people who send the images? In the olden days, the person you would have in mind as a flasher was the proverbial dirty old man in a raincoat. Who are the people who do it today? Who are all these folk who send images to 18 to 24 year-olds?

Professor Clare McGlynn: Your average, everyday person—it is mostly men—is sending images. This is where the technology changes why abuse is happening. You are right that physical flashing is associated with particular types of person, but people are sending these images to cause direct distress and harm because they want to, for sexual gratification and, ostensibly, for humour. A large amount of it is a kind of male bonding among their peers. That is why they share unsolicited nude images as well; they want to share among their peer group, “Oh, we’ve sent them”, et cetera. It is about male bonding really.

John Nicolson MP: It is a curious way to want to bond, I think. The Law Commission has recommended that legislation be put in place to criminalise cyberflashing. However, as I understand it, the problem with this legislation would be that it would require very specific intent, such as an intent to distress or for sexual gratification, but it would not cover people who say that they are just having a joke or they are trying to initiate a romantic relationship. Presumably anybody and everybody, if they got caught by the police for doing this, would say it was a joke, if they knew that was their get out of jail clause.

Professor Clare McGlynn: You are absolutely right; you have identified the drawbacks in that Law Commission proposal. It is very welcome for a start, but it does not cover all types of cyberflashing. We know from the law on non-consensual distribution of intimate images that the threshold of requiring proof of the motive limits people coming forward and limits prosecutions. It is not a motive requirement that you see in most other sexual offences. We do not have to prove a particular motive for sexual assault or rape. Why do we have to do it in these sorts of circumstances?

I think, absolutely, it needs to focus on consent. That is the message we need to send out: if you send a sexual image without someone’s consent, it should be a criminal offence.

John Nicolson MP: What should the penalty be?

Professor Clare McGlynn: I would alter the penalty structure. I would have a lower threshold right at the very beginning because, for me, the law on cyberflashing is not about sending people to jail for a long period of time; it is about setting a minimum standard by which we say that this activity is unlawful, and it is harmful. I would go for a much lower threshold. In the US, for example, some of these are misdemeanours, so it is a fine first, and it can rise. That is where I would start.

John Nicolson MP: Would you like to see us include that specifically in the Bill?

Professor Clare McGlynn: Absolutely, because it then impacts on what are the unlawful and illegal harms that feed into all other aspects of the Bill. Most definitely. It is a gaping hole and it is affecting large numbers of young women particularly.

John Nicolson MP: I have talked about 18 to 24 year-olds, but there is an additional problem, is there not, which is those underage sending these kinds of images to one another? My adolescence was a time of torment and embarrassment, so I cannot begin to imagine what would motivate a boy of 14 to send these kinds of images, but I am told it is quite widespread. How do you deal with children under the age of consent who send these images?

Professor Clare McGlynn: Do you mean if they have sent it to someone without consent, so ostensibly—

John Nicolson MP: Yes.

Professor Clare McGlynn: For under-18s, what we have to remember is that, yes, they are committing an offence because the harm to young girls is significant, but this is why, obviously, I do not think it is about sending those young men to prison. I think it is about much stronger education in our schools that specifically addresses these sorts of issues. It is also about the environment in schools, and that is what came out in the Ofsted review, Everyone’s Invited; schools are not quite sure how to manage these issues. It is almost as though there is a choice between the extremes of doing nothing and reporting to the police, so we need to empower schools to be much more proactive so that when wrongs come before them, they do not have to go straight to the police; they can deal with them and tackle them.

John Nicolson MP: Is it always boys sending images to girls, or is it girls sending images to boys too?

Professor Clare McGlynn: If you are talking about—

John Nicolson MP: I am talking about children.

Professor Clare McGlynn: If it is just nude images generally, girls and boys are sharing nude images with each other consensually. If we are talking about sending, effectively, penis images to someone without their consent, it is boys, mostly.

John Nicolson MP: I kind of guessed that, probably. What do you do with children who are sending nude images consensually to one another?

Professor Clare McGlynn: Obviously, at the moment, under the current law, you would be committing child sexual abuse image offences.

John Nicolson MP: Exactly.

Professor Clare McGlynn: For myself, I think some of the laws around that need changing because that then impacts on how we educate our young people. When the messages to the young people are, “When you send nude images, you’re committing a criminal offence because you might then be possessing or sending a child sexual abuse image”, it makes it much more difficult to recognise that they are doing that consensually sometimes. It also makes it much more difficult to recognise that there can be coercion, because a lot of girls are coerced into sending nude images and they then do not want to come forward because they might have committed a criminal offence. I know that is slightly beyond what we are talking about, but sometimes that underlying law is a problem.

John Nicolson MP: On a related issue, Mr Perrin, if I can move on to you, a lot of people have talked to us about online anonymity. I get the impression just from our questions that members of the committee are kind of torn on this. Speaking personally, I have not really made up my mind. On the one hand, I think the idea that you should be able to hide behind a completely false identity and send abuse or images to somebody is absolutely horrific. On the other hand, if you require people always to say who they are, you have a hugely dampening effect on free speech, arguably. It poses specific problems in countries such as Russia, for example. Is there a hybrid system that might be available where people have to identify who they are when they register, but they can use a nom de plume, in effect, when they are online and users could choose whether or not to react or interact with people who have not identified themselves?

William Perrin: There is a set of factors at play there. First, one must be victim-led. I was very struck in the first round of evidence this committee took that people who were victims were particularly angered and incensed by people who chose to hide or could not be identified through some process, so one needs to be victim-led.

One has to ask whether it is proportionate to identify 30 million to 40 million social media users in the UK, and whether that will deal with the risk. That is one area where I agree with the Government on the use of the word “proportionality”. We need a bit more research, I think, as well as some in-between options that give what we call user defence tools, so that users have the ability, as some have suggested, to select only to see content from people who are verifiable, whether they are using a nom de plume or their real name.

Underneath that is the issue of how people who are in public life and involved in important democratic processes are protected from some of the worst harmful effects of the operation of social media platforms, often by anonymous accounts. The Government have been very soft on that. We put forward a set of proposals to ensure that social media platforms should recognise that elections and the process of election are high-risk periods for the people involved. We know from the Committee on Standards in Public Life report on intimidation in public life and the International Peace Initiatives report on women’s participation in Northern Ireland, which I strongly recommend reading, that intimidation of people coming forward into public life, identifying themselves and putting themselves out there, is having a deleterious effect on the democratic process. That is a bad thing and can be protected against by proper risk management in that system; and, to some extent, issues around anonymity may be part of that. We see dealing with that as a risk factor issue. 

John Nicolson MP: What is the answer? What would you like to see? What is the solution?

William Perrin: I am not an academic but I am going to give you the academic’s answer, which is that we need more research to work out how effective it would be. There are some fairly horrific court cases that have got through the criminal justice system involving senior politicians in the UK, where often real names have been used. If it is not deterring people, is it worth doing? 

John Nicolson MP: Dr Harbinja, as our expert in privacy, what is your take on this? How do we both allow free speech and protect users from abuse?

Dr Edina Harbinja: I think it is absolutely vital to preserve anonymity.

John Nicolson MP: You do?

Dr Edina Harbinja: Yes. Even though there is no common-law right to anonymity as such, as you said, the US Supreme Court, for example, has noted that anonymity is a shield from tyranny in a lot of cases. It is important for the development of children, and Professor Livingstone can talk about that. It is important for marginalised groups, as you have heard in some of the previous sessions. It is important for people who have not come out yet, for example from the LGBTI+ community, and other minorities and communities that may not wish to be identified and whose speech would be stifled.

I think the abuse that comes from anonymous accounts can be dealt with in those platforms, and the platforms need to enforce terms and conditions more strongly, removing bot accounts and anonymous accounts that are— 

John Nicolson MP: Why are they not doing that?

Dr Edina Harbinja: Because, as we have seen repeatedly, their internal enforcement is not strong enough and content moderation is not strong enough, and that is where of course we need to improve. 

John Nicolson MP: How do we force them to improve?

Dr Edina Harbinja: Through regulation, through duties that—

John Nicolson MP: And financial punishment?

Dr Edina Harbinja: Financial punishment as well, yes. 

John Nicolson MP: I have said this before. As a gay man, I have been sent tweets calling me a “greasy bender” and I have reported them repeatedly to Twitter, which writes back and tells me that they do not breach its community standards. I then write back and send the exact wording in their community standards, and some cheery Californian writes back and says, “Thank you. That does not breach our community standards”, and you bang your head against the wall. Of course, they might not know what a bender is if they are in California, but it would not take long to google it and work out why this gay man keeps saying it is a homophobic term, and in fact I did explain to them that it was clearly a homophobic term. Do we just have to hit them with huge fines?

Dr Edina Harbinja: In clear cases of abuse, of course we have laws that can address some of those concerns. In the Malicious Communications Act and the Communications Act, et cetera, there are criminal offences under which that could fall and where users could seek redress. That clearly violates Twitter’s community standards and global community standards. 

John Nicolson MP: They do not care.

Dr Edina Harbinja: That is where I think the regulator needs to step in requiring Twitter to enforce its own terms and conditions, of course using enforcement powers.

John Nicolson MP: Finally, in this quickfire round, I would like to ask you all whether or not you think that disinformation should be included in the Bill. Is disinformation something missing from the Bill that should be included—for example, all the Covid disinformation that we see? Professor Livingstone has her hand up.

Professor Sonia Livingstone: Thank you, and, yes, I think that disinformation should be included in the Bill.

John Nicolson MP: Why?

Professor Sonia Livingstone: Because, as you have just said, with the example of anti-vaxx information, it is dangerous, and it is harmful to the collective more than to individuals. I would say the same about disinformation surrounding elections, where the harm is to our democracy.

At the moment, the focus of the harms in the Bill is very much on individuals, without recognising the collective harms. I think we have been talking about collective harms all the way through, actually. And if I may go back to your previous example of homophobic abuse, we can think about all these kinds of aggregated content, content en masse, that are harmful to individuals and to our collective norms. But, to reiterate an earlier point, it is not about censoring individual pieces of content: it is about recognising and identifying bad actors who are producing disinformation or other kinds of hateful or problematic content, as was just mentioned—those who produce the bots, and who promulgate this content. It is also about the algorithms that amplify and target.

That, I think, links to your point about anonymity because the task is not to find every individual who posts every image or every piece of disinformation or hostile content. On those grounds, I would support the argument for anonymity. The task is about regulating the algorithms that up-rank such content, that take people down pathways in which ever more extreme content is recommended to them and made visible to them, whether it is anti-vaxx, homophobic or hateful to women politicians. It is the mechanisms that platforms manage, whereby this content reaches and engulfs, and becomes a kind of avalanche, that we should be really prioritising in the regulation, and we should focus a bit less on individual pieces of content as they reach individual users.

John Nicolson MP: Mr Perrin, I see you want to come in.

William Perrin: A lot of the debate around disinformation has been quite loose, and there is often a sense that, “Disinformation is something I don’t politically agree with”, or it is a different country’s point of view. Unusually in the UK, we have a series of systems that assess genuine threats to our national security, our public health and public order. They are the intelligence assessment machinery around the JIC and MI5 in central government, the 46 territorial police forces and the 114 directors of public health. It should be a straightforward case that if they assess that there is a threat to one of those factors that arises from disinformation—I know that, during the riots in Northern Ireland some months ago, the police were concerned that disinformation arising from social media was not improving the situation—there should be a mechanism for passing that threat, which has been professionally and neutrally assessed, to the social media companies under regulatory supervision so that it can be acted upon.

John Nicolson MP: That gives them a lot of power.

William Perrin: They have a lot of power.

John Nicolson MP: I know, but that gives them even more power.

William Perrin: At the moment, it is being exercised in an unsupervised way through the counter-disinformation cell in DCMS, which has no statutory accountability whatever. That cell aggregates, for instance, issues around Covid and then passes messages to social media companies where the Government would like to see action. There is no parliamentary scrutiny of that; there is no log kept for inspection of what those messages are. I think this Bill gives an opportunity to tighten up that system, where we have genuinely assessed—professionally assessed—threats whereby disinformation threatens major aspects of our public life, and it could put that counter-disinformation cell on a proper statutory footing to have regulatory oversight over its operation.

The Chair: Thank you. We might need to move on because a number of other Members want to come in, and I know, Sonia Livingstone, that you have given notice that you may have to leave at around 11 o’clock. For the record, if that is the case, I imagine this panel will still be running at that point so, if you leave, we will not regard it as the equivalent of a cyber walk-out from the committee.

Professor Sonia Livingstone: Thank you.

Q74            Baroness Kidron: Chair, I declare an interest. Sonia Livingstone is a close colleague of mine; indeed, I worked very closely on the General Comment, and she is in charge of the Digital Futures Commission that was set up by 5Rights.

With that in mind, I was struck that right at the beginning the Chair asked a question and I think a couple of you said it is a safety Bill, and ever since then we have been talking about content, specifically harmful content. Many of you will remember that in the original government full response it was content and activity. I am thinking now about what Sonia was just saying about it not being individual pieces of content but that it is the algorithm, the amplification and so on.

Do you think the re-badging of the Bill as a content Bill has taken the eye off the ball of the activity that we are trying to grapple with? Does it drag us back to the person who created the content and away from the machine that is spreading it, or are you perfectly content that it be a content Bill? Do you want to go first, William?

William Perrin: The Carnegie position has always been that this legislation should be very much about systems and processes and that it is the nature of the platform itself that often causes harm from quite innocuous content through its aggregation, its fire-hosing of particular issues at certain people or externally. Content is being used in the Bill as what I would call a legible proxy for harm, something that is understandable, so one needs to go back. When we come forward with proposals for an overarching duty, we will be focusing, I think, on how harm arises and whom the harm goes to—the victim focus that others on the panel have commented on.

While the Government have kept in the Bill, in many cases, the focus on systems, processes and characteristics, they are secondary to content. One should start from the harm and say the harm arises maybe from two things: the piece of content and the manner in which it was delivered, accessed or surfaced. We would support moving away from a focus on content to harms.

Our experience over the many years that this debate has been running is that people grasp content—they get it—and of course we already have regulatory regimes that are content focused for television, radio and, to some extent, cinema and advertising. They are mainly content driven with an element of harm on top, so people tend to revert to that thinking rather than saying that we need a slight change of focus, to start with the harm and focus less on the content itself.

Baroness Kidron: Do you see that as a danger? Ofcom, which looks at content in broadcast, is not offering to look at individual pieces of content in relation to this. Do you think the use of the word is confusing and difficult, or do you think it is fine?

William Perrin: No, I think Ofcom is perfectly capable of adapting. It is not a vast leap, and of course, as part of broadcast regulation, the process the companies run—the editorial process—can be part of the evaluation of a problem. We have always suggested that Ofcom should be the regulator in this area because it has a long history of adaptation to accepting new things. It is of a scale that makes it very robust. What did someone say to me the other day? Its processes are drearily effective, and in regulation that is often what you want.

Dr Edina Harbinja: I agree that the focus is on content, and rightly so, because there are very unclear concepts of content in the Bill, in my view. Yes, the Bill aims to provide and set out mechanisms and processes, impact assessments and everything else, but it talks about journalistic and democratic content, which it defines, and legal but harmful content.

When we look at this from a digital rights perspective, where I come from, we need to consider that content and ascertain whether the definitions are right, and I do not think they are for democratic content. If you wish, I can elaborate on democratic and journalistic content; those are conflated. I think they should be merged. They prioritise certain content, such as political content in the democratic content concept. In the journalistic concept, there is a right to redress for users, unlike in democratic content, so again there is a discrepancy.

Content is very important and we need to get it right. This is where I think Parliament needs to be the one that decides what is the content that we wish to regulate, and not the Secretary of State or Ofcom, and what content we should outlaw or regulate.

Baroness Kidron: Thank you.

Professor Clare McGlynn: Systems and processes are absolutely essential. It is an interesting perspective. I want to keep emphasising content—for example, pornographic content and online abuse content—because I feel they are not getting sufficient discussion in the debate. It is a different perspective, but I feel we need to be doing both; we need to be addressing the content and, obviously, the systems and processes for the long term.

Professor Sonia Livingstone: It seems to me that the word “content” is being used in two different senses, and there is some content that the Bill seeks to regulate and other content that it does not but should, such as pornography as discussed in relation to the Digital Economy Act. I agree that those areas of content should be set by Parliament.

Then there is the idea of content which is, as it were, anything recorded on a digital service, which includes the kind of record of activities, interactions and contact, often, for example, between a stranger and a child. It represents a message as in content, but it is the contact that one is really concerned to regulate. That is where I think the emphasis on systems, processes and robust risk assessments is crucial. We should separate the different notions of content and be clear where oversight and scrutiny take place for each.

Baroness Kidron: Thank you. I have a question for William around the safety objectives. We have the safety objectives and then we have the duties. I was interested that you did not say that one overarching duty of care against the safety objectives would be a good way to go. I wonder how comfortable you are with the safety objectives of the Bill, because we have a double layer.

William Perrin: Forgive me if, in having a riffle through this Bill, I get confused between the different lots of objectives, despite having read it many times, so I am sorry.

I thought the safety objectives were a bit better than I had been expecting. There was more focus on process than there was further downstream in the duties, but there was a break in the chain between the safety objectives, Ofcom’s Clause 60 risk assessment and its then composition of risk profiles; and then there was a break between the risk profiles and the company’s own risk assessments.

It is quite right that in the risk assessments, for good regulatory practice, companies are allowed to comply or explain—that is a good way of doing it—but there has to be an objective standard against which they are complying, and that bit, I think, was missing. I think Ofcom also identified that that bit was missing. There are codes of practice as well. You can have great objectives, but if they do not flow all the way down into the final duty, they have not worked very well.

The Chair: I think Jim Knight has a quick question for Sonia Livingstone before she goes, if that is okay. 

Q75            Lord Knight of Weymouth: Thank you very much. Professor Livingstone, you have written interestingly about media literacy in the context of this Bill. How would you like the Bill improved in that regard?

Professor Sonia Livingstone: I would like, in brief, two things. The first is a very clear recognition that media literacy is required for citizens in a democracy. It is not just a matter of knowing how, as an individual, to protect yourself from risks to your safety; it includes knowledge of how to participate actively in a digital world, and I would include children’s right and need to have the knowledge to participate as political actors and civic actors, as well as potentially vulnerable individuals.

Secondly, I find it curious, shall we say, that a communication regulator, a market-focused regulator, is given responsibility for what is essentially a pedagogic task, an educational task. I think it is vital that the Department for Education and the school curriculum, and indeed educational efforts to reach the wider population, perhaps through libraries, are recognised for their role, and indeed mandated to take on responsibility for media literacy.

Lord Knight of Weymouth: Thank you. Anything that you want to send us around how you would like to see the regulator engage with the Department for Education would be helpful.

Professor Sonia Livingstone: Brilliant. Thank you. I am sorry to have to go. I am going in fact to an event on children’s participation in the digital world run by the Irish ombudsman. If I may have a parting shot, it is to ensure that children’s voices are also heard in the present scrutiny process here.

The Chair: Thank you very much for joining us this morning. To complete the rest of the panel, the next committee member is Guy Black.

Q76            Lord Black of Brentwood: Thank you very much. This is actually another content-related issue and comes from something written in the excellent Carnegie document, where you say: “We agree that there should not be double regulation of the traditional media sectors”. I wonder whether you thought we are actually already in that area, because we have supervisory boards and fact checkers who are actually quasi-regulators, but they are not regulating in the way that we recognise as regulation, in that there is no independence, no transparency and no accountability in the processes. As anyone who has tried to complain to them knows, they are often opaque and sometimes, arguably, discriminatory. Do you think that the Bill actually does nothing to tackle that but makes it worse by allowing the platforms to apply standards that they believe to be meeting the codes, and therefore become censors of content?

William Perrin: In our very earliest work, Professor Woods and I were clear that there should not be double regulation at all. Whether one likes it or not, and it is hard to find anyone who does like it, there is a national settlement on how traditional print media are self-regulated. That is where we are, so introducing a new layer when one is trying to regulate in a complex new sector was always going to be counterproductive.

As to the mechanism by which content from traditional media passes through social media and then has a layer of fact check, or algorithmic adjustment or whatever it is, applied to it, we never fully elaborated that, I have to say. I know there is a very healthy debate going on between the traditional media, both companies and the Society of Editors, and DCMS and others, that we have not been wholly party to. If you are going to exclude traditional media from the regulatory impact of this Bill, you need to find a way to pass them through, to some extent, other controls that social media platforms may put on them, but I do not have detailed thinking on how to do that. 

Lord Black of Brentwood: Would one of the issues be the transparency of the algorithms? At the moment, they are not transparent at all, and it is not clear to me that the Bill gives the regulator the power to produce that transparency.

William Perrin: I have great sympathy with the arguments put by the Daily Mail and General Trust group around the algorithmic down-ranking it suffered several years ago rather abruptly overnight with no explanation. For a dominant company such as Google Search to have done that to another company that was competing in the advertising business felt profoundly wrong to me.

The Bill has adequate powers, to my mind, for Ofcom to ask companies, because its information powers are quite extensive, for an explanation of how certain things have happened in certain locations, but that is very different from publishing that or making it available to victims. Some of it, of course, will be commercially confidential—that has not quite been worked through—but I think Ofcom has the powers to get the information it wants; how that is then conveyed out of Ofcom is not yet worked out. Nor indeed is it clear in the Bill, and this is probably germane, how that will be passed to the CMA if there are anti-competitive issues that would be best addressed by it. There is as yet no duty in this to co-operate with the CMA, although that may be something there are negotiations about. We are not entirely sure.

Lord Black of Brentwood: That is probably an area where the Bill still needs to be tweaked.

William Perrin: The relationship between the coterie of digital regulators—you have Elizabeth Denham up later—between Ofcom, the CMA and the ICO, is going to be absolutely vital. It will be a great test of modern management techniques for the leaders of those organisations and their boards to work together in a pragmatic way to achieve the right social outcomes, but I think they should be up to it. It is a fascinating time to be in those roles. 

Lord Black of Brentwood: Are there any other comments?

Dr Edina Harbinja: I agree about the transparency, and algorithm transparency in particular; and, as I mentioned before, I would reinforce inclusion of the independent audit that might be necessary as well. We have information powers and transparency reports by the companies in the Bill already. What we do not have is access for credible research to the data of the companies, and that is one of the reasons why we do not have enough evidence.

William Perrin: It is very unusual that the transparency reporting in a framework Bill is described in extreme detail by the Government, and Ofcom is not allowed to add to that. That is a strange error, to my mind. Ofcom should simply be allowed to add to the list of transparency reporting any other thing it feels it needs to deliver its duties under the Act. Ofcom does not have a track record of over-intrusive regulation, and I do not think it is justified to keep it out of editing the list in Clause 50-something of items to be included in annual transparency reporting.

Lord Black of Brentwood: Thank you very much.

Q77            Lord Gilbert of Panteg: This has been a fascinating session and I think we have come up with some strong ideas, but you have reinforced some of my concerns.

One of the things I am most concerned about is how on the one hand we future-proof legislation and have a flexible approach to regulation, yet we have scrutiny and protection for human rights. I am attracted, Professor McGlynn, by your proposal that we should start with a look at criminal law and illegal content and extend that, probably quite extensively, I would have thought. You raised one or two particular areas, but it seems to me that the laws around harassment could play an important role, as could taking on the Law Commission proposals—some criminal offence around promoting physical harm, for example. I am attracted to those because they bring certainty. It is not just about criminalising the perpetrator; it is about an inescapable duty on the platforms to take down illegal content because it is absolutely clear what it is. I think that addresses Dr Harbinja’s concern about parliamentary scrutiny. It is legislation, and Parliament has decided that those are proportionate.

Then you go on to a sort of set of harms that are less defined and are more context dependent. Sometimes the harm is caused by the platform design rather than by the content itself, or it is harmful to people who have particular vulnerabilities but it is not generally harmful in many ways; it could be innocuous. That is where the various duties come in. I get that approach.

What worries me is how you keep the illegal content up to date and how you safeguard against overapplication of codes and too much take-down of content that is potentially harmful, because platforms have become incredibly risk averse. My thought is that we need to find a role for Parliament—Parliament’s current models do not work. That may take the form of a standing joint committee of Parliament that continuously holds the regulators and the platforms to account and has a sort of horizon-scanning function, where it is looking down the road to potential future harms and trying to get ahead of them rather than react to them. It would ask not just Ofcom but all the regulators in this patch, which include the competition regulators, and probably the financial regulators and data regulators, to identify gaps in the existing legislation and regulatory framework and to act quickly to address those gaps through recommendations to Parliament and government. That seems to me a stronger approach than giving powers to a Secretary of State or periodically reviewing the work of the regulators.

Do you think that approach addresses it, and, specifically on illegal content, as well as the individual measures that you propose, have you seen work on how existing law might be codified and clarified so that it undoubtedly deals with much of this content and what additional measures might be passed into law? That is a bunch of things, but I thought it was easier to put them all together.

Professor Clare McGlynn: First, with the criminal law you are absolutely right that there are proposals from the Law Commission that would deal with some of the online communications and harassment and bring that up to date. There are then a number of other sorts of offences that could be introduced. We have talked about cyberflashing, and we can talk about deepfakes. There is also a review of all intimate image abuse offences, which is very much needed.

While I have the opportunity, perhaps I could add that, if we are talking about pornography, for example, we could also have a criminal offence of falsely representing that you have someone’s consent when you upload an image, to try to reduce the swathes of non-consensual material on porn companies’ sites and to make porn companies obtain the consent of individuals. There is a raft of criminal law. Yes, the criminal law is often not easily kept up to date, although that is a political issue, but I think we could be swifter in updating the criminal law.

In terms of the follow-on design of regulation, you have just—

Lord Gilbert of Panteg: To be clear, do you agree that the benefit of that is not about criminalising individuals but putting a clear duty on the platforms to take this stuff down?

Professor Clare McGlynn: I think it is both. It is a clear duty on individuals that this is a criminal offence; it is wrong. It is also an important message to victims because it says, “We recognise your harm. We understand what has happened to you”, and, in the context of this Bill, it is about making clear what is illegal and therefore what the companies must do.

On your point about general regulation, all of you are more expert on what sorts of processes might be set up for that, but it sounds like a very plausible alternative to the Secretary of State powers, which I have some concerns about, as do others. 

Lord Gilbert of Panteg: Dr Harbinja?

Dr Edina Harbinja: I agree with the point about the illegal content and that it needs to be absolutely clear. As you noted, a lot of the content that is harmful but legal might be overcensored by platforms in their fear of compliance issues. As you heard in the House of Lords inquiry on free speech, Google has itself admitted that it would censor more content if it were unclear, so as to err on the side of caution. I would definitely support the approach of making illegal those harms that we think are significant in our society, rather than having the vague notion of legal but harmful content in the Bill, which is then defined by Ofcom or the Secretary of State, or the platforms, in particular indirect harm, which is difficult to measure.

With regards to future-proofing laws and regulations, I absolutely agree; the law will always be catching up, as we have seen in this space. Think about virtual reality, augmented reality and the Metaverse that Facebook is now building: is the Bill addressing that? I do not think so. It is the same with deepfakes, chatbots of us after we die, or before, et cetera. I support the idea of a standing committee, but I think it is important to consult civil society and user groups in making those determinations.

The Government have, in my view, slightly competing objectives at the moment with regards to the internet—for example, their proposal on “Data: A New Direction”, the ongoing inquiry that has just started and which I am going to respond to with BILETA, the association whose executive committee I am on. There, the Government are trying to liberalise data protection and privacy, and perhaps amplify some of the harms regarding privacy and data protection as a result; we will see what happens. On the other hand, with this Bill, the Government are trying to protect users and potentially impact free speech.

Freedom of speech under the Higher Education (Freedom of Speech) Bill is, I think, another approach that is quite concerning. For example, as an academic, if I have freedom of speech guaranteed on campus and in my university, Aston University, I am not sure whether that speech would fall under journalistic or democratic content in the Online Safety Bill if I say something online. I am quite unsure, as a lawyer, as someone working in this area, as to the implications of different Bills and government efforts, let alone individual users and companies, and that is leaving aside the text of the Bill, which is incredibly complicated and should definitely be simplified.

William Perrin: Lord Gilbert, I think the suggestion for a new scrutiny mechanism across these regulators is absolutely right. That is one of the very strong recommendations from your committee’s report. We already see with the tensions between this committee and the departmental committees that we need a new structure to address that, and it needs to be up to date—on the money, as they say.

We are here of course having this discussion because the criminal law has not worked; the criminal justice system has not worked. It does its best. There was a laudable response from the police to the Euro finals horrors, because they kind of had to, but they do not have the resources; the Crown Prosecution Service does not have the resources; and there is not the capacity necessarily in the courts to work at the scale of the problem.

It is important that the criminal law is in the background with sanctions for the worst possible cases, but in regulating other powerful media, such as broadcast, advertising and cinema, we have had the criminal law in the background and then a regulatory system free to make judgments, whether it is an industry-led one or a government-appointed one that sits underneath the law. The implication of this Bill, of course, is that platforms should act when they have “reasonable grounds to believe” that an offence has been committed, which is in itself taking one step back.

I am a little wary, I think, of criminalising huge swathes of speech in order to back up the regime. I do not think that is necessarily needed, but we will see, if this regime comes in, how the Law Commission proposals, which make this Bill look straightforward to read, actually are manifest. The department, or the Government, because it is a Home Office thing as well, need to address how they are going to deal with a recommendation from your committee, from the Law Commission in the context of this Bill. It has to be seen as part of it.

Whether there is time to bring in the offence that the Law Commission has proposed in the passage of the Bill, I am not really sure, but we need clarity on that from the Government. It is very important.

The Chair: Thank you. Joining remotely, Wilf Stevenson.

Q78            Lord Stevenson of Balmacara: Thank you very much, Chair. I notice that we have been talking very widely and usefully about lots of different parts of the Bill, but we keep focusing, I think, on the way in which the system and the processes will deal with any individual who has a concern or is being attacked, for example, but we do not talk about the collective responsibilities.

There is not time to go into it in huge detail, but perhaps the panel could respond on what they think needs to change in the Bill in order to redress the imbalance between the harms that are targeted on individuals and the harms, which I think we all recognise, that are also going to happen on a collective basis.

William Perrin: To take a polar example, an avowedly racist but not illegal mainstream platform would not be something that I think any of us would encourage. A platform that became known as a place that was full of pervasive racism and, essentially, put off all sorts of people of colour and so on from participating in discussion there would be a very bad thing; similarly with a wide range of other protected characteristics. There is, as witnesses have revealed to you before, a general sense of harm that arises from things that are not criminal, but they reflect broader social issues. Ofcom’s research, which it published yesterday, on broadcast standards said that there had been a marked shift in the general public away from concern about swearing and foul language to concern about discrimination and the way it was portrayed on screen. That is a classic social issue, and Ofcom in the past, on its content regimes, has been charged by Parliament to make judgments about what is acceptable in society.

Those judgments have shifted over time. Ofcom is covered by the Equality Act, so it has to reflect the evolving implementation and interpretation of that, but the Bill does not easily provide for these issues to be addressed should Parliament or the Government want to address them. At the moment, the Government seem to be saying quite clearly that they do not really want to address them—that is fine—but that is a gap, and then one has to decide whether one wants to address that, and indeed what steers and nudges one can give to platforms at that high level. 

Lord Stevenson of Balmacara: Do any of the others want to come in?

Professor Clare McGlynn: I have nothing to add to what William Perrin said.

Dr Edina Harbinja: I do, if I may. Our system of human rights has been based on the individual and the harm to the individual, and in a very diverse society, as the UK is, it is very difficult to determine a collective harm—one size fits all—even for groups, for majority or minority groups. I would be concerned as regards that harm. An example would be that the Bill indicates collective harm, I think indirectly, when it talks about legal but harmful content and characteristics of a group. If a person belongs to a group, the harm might target a particular characteristic of that group.

An example that we have heard before, and that I agree with, would be blasphemy. Does that amount to collective harm and do we want, as a diverse, democratic society in the UK, to bring it within the scope of the content that we wish to regulate? I would disagree. I think we need to be careful when we talk about collective harms because of both the nature of society and the structure and goals of human rights frameworks.      

Lord Stevenson of Balmacara: Thank you very much. Having heard Lord Gilbert’s suggestion of involving a parliamentary process, maybe that is where this might sit as we go forward.

The Chair: Thank you. Dr Harbinja, on that point exactly, where do you stand on the question of racist abuse? A group of people being abused because of the colour of their skin would be a collective harm, but a named, individual person being abused would be a personal harm. Should both be within the scope?

Dr Edina Harbinja:  What would be redress for a group? Would a group go to a court eventually if there is a problem with content moderation? That is my question. How would the group be treated and defined, and is it a definable harm in terms of how the courts would see them?

I agree that these harms are very concerning. Where I would disagree with some of the previous sessions and witnesses is the idea that it is something endemic to the online world only. I think the problem is that it is endemic in society, and some of the conversations that happen not only online but in the traditional media, in politics and everywhere else, cause amplification and entrench and perpetuate the harm. This Bill is not a silver bullet for issues of racial abuse in society at all. We need to work more across the board and have a cross-sectional approach.

The Chair: But is this not the problem with how platforms moderate, say, racist speech now? It is up to a group to determine how they have been harmed, rather than saying, actually, the person posting the content in the first place is clearly abusing others. Therefore, is it about whether that behaviour is consistent with the online safety regime, rather than saying, “We can’t really be certain that anyone is being harmed by it, so we will let them carry on with impunity”?

Dr Edina Harbinja: You will recognise in the Bill, or the Secretary of State or whoever is tasked with defining those harms will recognise, that the harm is important to regulate and, therefore, that the platforms will have the duty to remove it. A lot of that abuse is already outside the platforms’ terms of service and community standards at the moment, and it would breach their terms of service. I think it is a question of enforcement, definitely, and of the speed of take-down and response to that abuse, where user support and other mechanisms might deprioritise the content and deal with it within the service and the platform.

Q79            Suzanne Webb MP: I very much welcome the idea of a committee accompanying the Bill. May I ask a simple question? What would you see as the measure of the success of this Bill?

Professor Clare McGlynn: It would be if immediately there was a resource for victims to get material swiftly and effectively taken down. If you ask victims of most forms of abuse what is the first thing they want, they want the material off the internet, and at the moment it is very difficult, as you know and have heard, to get some of that material taken down. As you know, in Australia they have just amended their laws so that they can order material to be taken down within 24 hours. If you can have a regulator that can order that, it would be a huge success. Then in a few years, in the years to come, we would see a reduction in the amount of abuse online, but that immediate result would be what I would see as a success.

Dr Edina Harbinja: I would like to see the success of society in addressing those concerns generally, offline and online—the parity principle between the regulations that we have talked about, and the Lords and the Government have talked about. We do not want just to remove speech from online while it stays in the traditional world where it boils and creates other societal implications and concerns.

For me, the success of the Bill would be to strike a delicate and important balance between safety and freedom with an open and free internet, because I think there is a danger, and there are legitimate concerns, that this Bill may produce a UK internet that might look very different from, say, the American internet or the internet in some European countries. If we used a VPN or moved or travelled elsewhere, we would see the internet and access to information in a very different light. That is something Parliament really needs to think about when you are thinking about striking the balance between safety and freedom; it is important and delicate.

William Perrin: I would say systems and processes that work in a way that an objective, reasonable observer would say that they have worked. If I make a complaint about something, that complaint is properly addressed and dealt with—I almost fell into the Bill’s language there—to a sense of satisfaction; and there is an objective reduction of the harm that has been caused to victim groups, which can be measured by survey. In Carnegie’s work, we suggested a general harm survey against which platforms are held to account year after year. Ofcom is exactly the sort of organisation to produce that kind of qualitative and quantitative data. Overall, it would be systems and processes that work in a normal way, in a way that we would expect from any other large business or company.

Suzanne Webb MP: Without going back over all the detail, but bearing in mind what you said, do you think this Online Safety Bill is the right vehicle to deliver exactly what you believe should be the success of the Bill?

William Perrin: At this stage in its evolution, it is in a good, strong position. It needs some changes, though, as we all agree, in order particularly—I agree with my colleague—to make sure that there is the right balance of human rights around speech and that the role of Secretary of State is put in its appropriate place.

Dr Edina Harbinja: I think in its current form it will not deliver on the aims and objectives, and I agree with the aims and objectives set out by the Government and in the Bill. The Bill needs to go through very broad scrutiny, and, as it does, it will probably need a great number of amendments regarding, for example, legal but harmful content, democratic and journalistic content, redefining some of the obligations and duties of the platforms’ enforcement mechanisms and everything we have talked about. I think quite significant change is required.

Professor Clare McGlynn: Not yet is my answer. On that, we should not let the porn companies go under the radar. They need to be included even if they then remove their user-generated functions.

Secondly, there is a point about freedom and safety. Women’s freedom of speech is currently constrained because of the amount of online abuse, so we need the safety in order to give us the freedom. They are not a dichotomy in that sense.

Suzanne Webb MP: Thank you.

Debbie Abrahams MP: Can I quickly check that I heard Dr Harbinja right? Did you say that you do not think that hate speech is being amplified online?

Dr Edina Harbinja: I think it definitely is, yes.

Debbie Abrahams MP: It is being amplified.

Dr Edina Harbinja: It is being amplified, but I do not think that online is the generator in itself of hate speech. I think that the underlying problem is the endemic problem in our society, in the offline world.

William Perrin: We have known about the disinhibition effect since around 2004, and platforms are designed to be as slick as possible so that you can post without thinking, without any challenge to the sort of hatred that one might spew. Hearing from victims, there is the sense that the online world really has changed that. I think it is an overly theoretical view to say that they are the same.

Debbie Abrahams MP: Thank you. That is very helpful.

The Chair: In the first evidence session, when we looked at racist speech, we heard the concerns of victims in that panel that social media were giving licence to racist speech and normalising it. That is clearly a matter of concern for us.

That concludes the first panel. Thank you very much to our witnesses.

Dr Edina Harbinja: Thank you.

William Perrin: Thank you.

Professor Clare McGlynn: Thank you very much.

 

 

 

Examination of witness

Jimmy Wales.

Q80            The Chair: Welcome to our witness, Jimmy Wales, the founder of Wikipedia.

This morning, I thought I would have a quick check on Wikipedia to see what it said about the Covid-19 vaccine. There is a very well put together section, describing the evolution, providers and all the rest of it. There was a very good and helpful piece discussing misinformation about the vaccine, and the fact that there is a lot of misinformation out there. What there is not on the Wikipedia page, though, is misinformation about the vaccine itself. Clearly, it is very well policed and very well regulated. There must be attempts all the time to insert anti-vaccine conspiracy theories into it. I was interested in that. Is it an example of effective community moderation, or is it Wikipedia itself deciding that it will make a decision on Covid-19 vaccines and the sort of information it will make available?

Jimmy Wales: That is a fantastic question and a great opening for one of the things that is really important for us to remember as we think about these issues. We often have a model in mind of content regulation and content moderation that I refer to as the lord and the serfs. In other words, the companies run the platforms, the users do whatever the hell they want to do, and the platforms try to keep them under control, to moderate.

Our model is completely different. The Wikimedia Foundation, which is the non-profit charity that runs and operates Wikipedia, has almost nothing to do with that. There are very rare cases where it gets involved, which have more to do with specific safety issues and things like that. We have a community that is incredibly fierce about regulating that type of misinformation. That is by design. From the very beginning, from the early days of Wikipedia, there has been a commitment to designing a social system that drives towards truth and drives away from misinformation/disinformation. It is not perfect. We have all seen stories of errors in Wikipedia, and sometimes those errors persist for longer than we would like.

In general, it is a very different approach. In fact, we are very proud of our work on Covid-19 and coronavirus across many languages around the world. That is really down to a group of volunteers. In English Wikipedia it is largely a group called WikiProject Medicine, a quite fearsome group. They are very strict and they have tools and processes and procedures in the software to give them the power to deal with that. You do not really see that on most platforms.

On most platforms, if you see someone doing something bad, whatever that bad thing might be, effectively, there are three things you can do. You can complain and report it, although as with your story earlier, reporting it to, for example, Twitter often does not end the way you would expect. You can block that person, which helps you a little bit but does not help anybody else, and often does not help you, because the point is not that you are reading abuse about you but that someone is abusing you. Or you can yell at them, which is very popular of course on Twitter.

On Wikipedia, if you see something wrong you can leave a comment and get involved with the community. The more esteemed and more respected members of the community who have been around for a while have the power to block, to ban, to revert, or to lock pages temporarily, all the things that you think would normally be handled by a company. It is a very different model from any other social media.

The Chair: Thank you for that. On the question about Covid-19, clearly in that case the community moderators have made a decision about what they think is useful and helpful and what they think is harmful and unhelpful. They have even gone beyond that; they are warning people about the existence of misinformation. What is the process by which those decisions are made?

Jimmy Wales: There are a few different elements. One of the core basic principles of Wikipedia is the idea of reliable sources. We have always said from the very beginning that Wikipedia is not a wide open, free speech platform. It is not the place to come and just post what you randomly think about anything. In the early days I used to say, “Go start a blog”, and now it is, “Go on Twitter, or wherever you want to go, and post your random thoughts”. It is really about saying, “Here is something that is backed up by a reliable source”. Of course that raises the meta question of what counts as a reliable source, which is an endless topic of discussion and debate in the community. We tend to be old-fashioned about it. We are interested in books published by reputable publishers, academic journals, serious high-quality newspapers, magazines and the like.

For the most part, it is very complicated and very nuanced. There was a bit of a news dust-up here in the UK a few years back when we deprecated the Daily Mail as a source. It was widely reported as a ban. It was not an absolute ban, but that is really my point. It is a nuanced thing to say that the Daily Mail is not a great source for a lot of things, but, of course, sometimes it is. The idea is that you should try to find a better source than the Daily Mail, but if it is a good source and they have backed it up that can be okay.

Of course the Daily Mail, if we are talking about online harms, launched quite an abusive campaign against some of the volunteers who voted against it. I mention that just to remind us that the kind of orchestrated invective against individual people in society is not a problem that comes only from the online world; it can also come from major newspapers.

The Chair: I am sure the Daily Mail will have its own views on that. I know the Daily Mail and General Trust has submitted evidence to the committee.

Jimmy Wales: I am sure it has.

The Chair: As regards community moderation, what happens if the community moderators make the wrong call? Is there an internal challenge on Wikipedia? When you look at the Online Safety Bill, it could be that the regulator may get in touch with Wikipedia and say, “We feel that you are allowing this speech that falls within the definition of harm within the Bill and we want to understand why you are letting it go through”.

Jimmy Wales: If that were to happen, it would be clear evidence that the Bill is deeply flawed and that something had gone very horribly wrong. Of course, we can imagine an alternate universe where Wikipedia is some sort of troll farm that is just spewing out invective and horrible things. That is not the universe we live in.

The question is what happens if the community gets it wrong. As an individual acting in my own personal capacity, as me, I sometimes read a debate in Wikipedia. A typical type of debate in Wikipedia is whether or not a person, say, is notable enough to have an entry in Wikipedia. Sometimes we say, “Look, there are not enough sources. We can’t really write a biography of this person, so we shouldn’t try to have a biography if we don’t know enough about them”. Sometimes I read those and think, “I would have voted differently”. By tradition, I do not vote on such things; I just hang around and chat with people.

Sometimes we have gotten it wrong on a specific issue like that, but not to a sufficient degree that I would say it was an absolute travesty. In all of these issues about content, potentially harmful content, issues around biographies and so forth, the best way to approach them is through kind and thoughtful discourse, where people are chewing on the issues and thinking them through. They are weighing up all the issues, and deciding whether this, say, negative piece of information about someone is sufficiently important to put into their biography or not. Different people can have legitimate different opinions about whether an incident that happened a long time ago is actually important or not. We do not always make the decisions I would make, but we make decisions by a process that I appreciate and very much support.

The Chair: Finally, from me, do you think the Wikipedia approach works because the community is sufficiently large that it has standards for content moderation, and, as you said, it requires backing up by reliable sources? You have defined what you think those reliable sources are. Do you think there is an absence of any of that kind of structure on other bigger social media platforms and that is why we are looking at this now? We are looking at legislation, because we feel their policies are not sufficiently clear and not robustly enforced.

Jimmy Wales: One of the things that differentiates us from other platforms is an area where I feel a lot of sympathy for those other platforms because they have a much harder problem. We are a project to build an encyclopaedia. If we have a page on Donald Trump, the discussion page about Donald Trump is not a discussion page about Donald Trump; it is a discussion page about the article, and how we make the article better.

When you go to edit Wikipedia, there is no box that says, “Tell us what you think. What’s on your mind today?” That is not what we do. If you go to the talk page about Donald Trump and you start ranting against him, or in favour of him, or whatever, people will say, “That’s really not relevant. Do you have a comment about the article? Is there an error or a better source?”—that sort of thing.

If you look at social media—Facebook, Twitter, et cetera—they really are a wide open, free speech platform in a very meaningful and real sense. They are a place for everyone to come and express their opinions and thoughts. I would put myself into the camp that says it is a really good thing for society that we have a place where people can come and post their random thoughts. It turns out that a certain number of people in the world have very horrible random thoughts. We have racists in our midst, and they will make comments and so forth.

I would go a step further and say that I do not think that is any more of a problem than the stereotype I always use—your crazy racist uncle. I always point out that my uncles are lovely, I do not have a crazy racist uncle, but we all know the stereotype, down at the pub spouting off nonsense to his mates. That is a problem, but it is not a problem requiring parliamentary scrutiny. When it becomes a problem is not that my crazy uncle posts his racist thoughts on Facebook, but that he ends up with 5,000 or 10,000 followers, because everyone in the family yells at him and the algorithm detects, “Ooh, engagement”, and chases after that, and begins to promote it. That is a problem, and it is a really serious problem that is new and different.

Following a remark from Baroness Kidron, I was not sure if she was just asking the question or if she agreed with what she was asking, but there is a concern about this Bill’s focus on content rather than on certain activities and algorithms and so forth. Once you get into content, you are in a very tricky and very difficult place, because drawing the boundaries between an annoying opinion, an obnoxious opinion, and an illegal statement of some sort is notoriously difficult, and not the sort of thing we want to be careless about incentivising companies to be super-aggressive about. One of the issues I have with the Bill is certain of the carve-outs envisioned for journalistic content or content of democratic importance. To me, that raises some alarm bells.

In Florida just recently, there was a Bill passed, which will be held unconstitutional in due course I am quite sure, saying that it is not acceptable for social media platforms, and illegal for them, to de-platform or kick off anybody who is an active political candidate. That just means you have filled in some paperwork and submitted it, and now you are completely free of all the rules. That is concerning. Anything that says you have to protect content of democratic importance or journalistic content gets to be very problematic when you are making judgment calls, certainly for something like Wikipedia. If we fell under the rubric of that Florida law, it would be very hard. If political operatives came to Wikipedia and were causing a lot of disruption and we tried to kick them off—we the community or we the Wikimedia Foundation—and found that it was illegal to kick them off because they were running for office, that would be kind of crazy.

The Chair: By your definition, Wikipedia is qualified speech rather than free speech.

Jimmy Wales: Yes, I do not know the particular definition of qualified speech.

The Chair: You cannot say whatever you want and you have no right to do that. You said earlier that it is not a free speech platform, so I would say it is qualified on the basis that it is sourced and checked.

Jimmy Wales: Yes, but, of course, we have discussions and debates. You definitely will find cases of debates where I am quite uncomfortable with arguments that some people are putting forward, but it is part of that chewing process that we find very useful.

The Chair: There is a very good Wikipedia page on the Brandenburg v Ohio Supreme Court freedom of speech case as well.

Q81            Dean Russell MP: Thank you, Mr Wales, for allowing me to ask you some questions. You mentioned briefly politicians and a very high-profile one. I am interested in your views on reputational harm. One of the things that has been interesting when chatting to colleagues in the past few days and talking about Wikipedia is how many MPs have said, “Oh yeah, on Wikipedia there was this entry about me that wasn’t true”, or, “There’s that entry”. I had quite an interesting one where one of my colleagues had been reached out to by a university to invite him to its alumni event, and it turned out it was because on Wikipedia it said he went to that university, and he had never attended. The weight and importance of Wikipedia in people’s minds is that accuracy is paramount. What is your take on how that impacts people’s views when they assume it is going to be correct and it is incorrect?

I appreciate there are moderators, but I must admit in my own instance I have not touched my Wikipedia page because I am scared stiff that as soon as I edit it myself, even though I should be the expert on me, and even though I would put accurate information on it, it is instantly going to attract loads of people to put fake stuff on that I will not know is there unless I check it. I am interested in your take on that. How do you ensure accuracy, and how do you ensure that people can correct misinformation?

Jimmy Wales: First, I applaud you for your wisdom. Getting involved in your own Wikipedia page is seldom a fun thing to have done. We have several different mechanisms whereby people can raise issues. The most straightforward is the talk page of the entry. People can come and say, “You’ve missed this source. You’ve missed this or that”.

This is one of the areas where we think anonymity is quite important. If you come to Wikipedia and go to the talk page and sign Dean Russell MP, “You’ve left out this bit about me”, that will be kind of awkward when—I do not want to keep picking on the Mail—a tabloid newspaper notices it and rakes you over the coals for harassing Wikipedia, which of course you would not have been doing. You can just say, “I’m an interested citizen. Here’s a source. You might have missed this fact”, or, “I don’t think this person went to that university”, and so forth.

In other cases, the right thing to do is to email. If you email us, it is handled by a group of selected editors, all volunteers still, who are trusted members of the community. They look into issues for people to say, “Oh yes, there’s this problem or that problem”. They will, hopefully, direct and help you in some way. I am not sure that process always works to everyone’s satisfaction, because sometimes the problem is that there is a scandal in your past, and it is part of history; you do not like it, and that is tough. I do not know anything about you, so I am not talking about you.

Dean Russell MP: There genuinely is not.

Jimmy Wales: You get the perspective.

Dean Russell MP: Yes.

Jimmy Wales: It is a complex matter, but as a matter of moral principle it is really important that Wikipedia be responsive to concerns by subjects. Many years ago, in 2006, at a conference at Harvard, one of my last royal proclamations before I became a figurehead, when I could make rules directly, was that we have a biographies of living persons policy. It says that, if anything negative about a person in Wikipedia is unsourced or poorly sourced, it should be removed immediately, and then have a discussion on the talk page. The idea is, whether it is right or wrong, if it is negative and does not have a source, it should be pulled out immediately. There are clear exceptions in our rules, so that even the subject of the entry has the right to do that. We should not expect random people who have a Wikipedia entry to understand the intricacies of the Wikipedia discourse process.

Dean Russell MP: May I broaden it out a bit? It is fascinating that you say that. What is interesting is something we have talked about before, and that is immediacy. One aspect that has come up repeatedly is the risk in the Bill that the initial harm done by not taking content down immediately is not solved by the Bill because it is going to take a few weeks or months for Ofcom to look at it, but actually the damage is already done. I was interested in whether you think that your approach is against the grain of other platforms. I have not heard of that particular approach from others.

Jimmy Wales: It is interesting because we can think about this, and we should think about it, not as one-size-fits-all problems of harm. If Wikipedia says you went to that university and you did not, it is not a major, major harm. It is an error that should be fixed, and if it takes a little while that is probably not a great big deal to anyone.

In other cases, if we are talking about non-consensual pornography and someone sees something about themselves on a major porn site—typically, a video that a boyfriend posted without permission—they need a really fast response. A process whereby they can report it to Ofcom that can then launch an investigation sounds quite slow to me. I would propose an alternative in that particular case, because the harm is so extreme and so clear. First, as I think one of your witnesses said, if there are enhanced penalties for lying about having permission, that is a piece of it, but it does not solve it immediately. It could be a notice and take-down type of procedure, to say, “Look, if you receive a notification you must take it down within a short period”. There is a possibility then that someone could say, “No, actually we’re a major porn company and we have signed releases with whatever”. That is a whole other story.

That is the way the Digital Millennium Copyright Act in the US treats copyright notifications. That process works reasonably well. It strikes a balance: copyright holders are able to complain and say, “Look, this is pirated content. Take it down”, and the platforms either have a duty to take it down or to stand behind it, in which case they may become liable. Those kinds of processes can make a lot of sense if we are talking about specific harms.
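For readers who want to see the shape of the notice-and-take-down procedure described in this answer, the sketch below models it in a few lines of Python. It is purely illustrative: the 48-hour deadline, the TakedownNotice fields and the handle_notice helper are assumptions made for the example, not anything specified in the draft Bill or in the DMCA.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Illustrative deadline only; no statute cited in the session fixes "48 hours".
TAKEDOWN_DEADLINE = timedelta(hours=48)

@dataclass
class TakedownNotice:
    """A complaint that a specific item of content should be removed."""
    content_id: str
    reason: str                            # e.g. "non-consensual intimate image"
    received_at: datetime
    counter_notice: Optional[str] = None   # platform's grounds for standing behind the content

def handle_notice(notice: TakedownNotice, now: datetime) -> str:
    """Return the platform's obligation for one notice: remove it within the
    deadline, or stand behind it via a counter-notice and accept liability
    if that claim turns out to be false."""
    if notice.counter_notice:
        return f"{notice.content_id}: kept up; platform stands behind it ({notice.counter_notice})"
    if now - notice.received_at <= TAKEDOWN_DEADLINE:
        return f"{notice.content_id}: must be taken down before the deadline expires"
    return f"{notice.content_id}: deadline missed; platform exposed to liability"

if __name__ == "__main__":
    received = datetime(2021, 9, 23, 10, 0)
    notices = [
        TakedownNotice("video-101", "non-consensual intimate image", received),
        TakedownNotice("video-102", "alleged piracy", received,
                       counter_notice="signed releases on file"),
    ]
    for n in notices:
        print(handle_notice(n, now=datetime(2021, 9, 24, 9, 0)))
```

The design point is the one made in the evidence: for each notice the platform faces a binary choice, either act within a fixed period or formally stand behind the content and carry the risk.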

Dean Russell MP: I would be really interested to get some sort of outline of the process that you use for that immediacy of content. I imagine you have literally millions of pages of content, and I wonder whether you would be open to providing that.

The other question I have, which connects very deeply to this, is about who owns one’s identity online. Even in the relatively light situation of an incorrect university profile, is there a burden of responsibility on platforms, Wikipedia being one, but definitely social media platforms, that, if somebody posts something about you online, you should then be notified that that content is there? I know it is very complicated, but, ultimately, in the instance of, say, a Wikipedia profile, should not the person whose profile has been edited be notified, rather than that person having to go to Wikipedia to see what has changed, or is that already there? More broadly within platforms, for example, if somebody has something put online that has their name and details on, should they not be aware that has happened?

Jimmy Wales: Let me take that in smaller pieces.

Dean Russell: Sorry, I know it was a long question.

Jimmy Wales: At Wikipedia, if you log in and sign up for an account, you have a watchlist. You can watchlist any article, and you can subscribe to changes so you would be emailed every time it is changed. That exists for everyone for anything that you are interested in. A lot of our active editors use that functionality to keep an eye on their particular pet topic, Elizabethan poetry or something like that.
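As a concrete illustration of the watchlist idea described in this answer, the sketch below polls the public MediaWiki API for the latest revision of a named article and reports when it changes. It is a rough, external stand-in for the e-mail notification Wikipedia’s own watchlist provides, not how Wikipedia implements it; the article title, the polling interval and the helper names are assumptions made for the example.

```python
import json
import time
import urllib.parse
import urllib.request

API = "https://en.wikipedia.org/w/api.php"

def latest_revision(title: str) -> dict:
    """Fetch the most recent revision (timestamp, user, edit summary) of one article."""
    params = {
        "action": "query",
        "prop": "revisions",
        "titles": title,
        "rvprop": "timestamp|user|comment",
        "rvlimit": 1,
        "format": "json",
        "formatversion": 2,
    }
    url = API + "?" + urllib.parse.urlencode(params)
    req = urllib.request.Request(url, headers={"User-Agent": "watchlist-sketch/0.1 (illustrative example)"})
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["query"]["pages"][0]["revisions"][0]

def watch(title: str, poll_seconds: int = 300) -> None:
    """Crude polling loop: print a line whenever the watched article changes."""
    last_seen = latest_revision(title)["timestamp"]
    while True:
        time.sleep(poll_seconds)
        rev = latest_revision(title)
        if rev["timestamp"] != last_seen:
            last_seen = rev["timestamp"]
            print(f"{title} changed at {rev['timestamp']} by {rev['user']}: {rev['comment']}")

if __name__ == "__main__":
    # One-off check; call watch("COVID-19 vaccine") to keep monitoring.
    print(latest_revision("COVID-19 vaccine"))
```

Logged-in Wikipedia accounts get this behaviour built in, as the witness says; the point of the sketch is only that a per-article change feed is a simple, well-defined mechanism, which is part of why the broader mandatory-notification idea discussed next is so much harder.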

On the broader question, I think it is very difficult because the number of platforms and places where people might be discussing you is near infinite. Dean Russell is not a particularly unusual name, so if you said to Facebook, “I would like to know any time anyone mentions me, ever”, it is going to have a really hard time complying with that without just burying you in a lot of “Dean Russell from Alabama has gone to the park”.

Dean Russell MP: There is a Dean Russell in Alabama.

Jimmy Wales: I am sure there is. It seems quite difficult for them to do that. In my case, I occasionally, slightly embarrassedly, search my name in Twitter. It is usually fine—people like Wikipedia and so on—but sometimes I find things that are disgusting and alarming. I do not think it would be very easy to mandate that, even for major platforms.

Dean Russell: Thank you.

Q82            Lord Knight of Weymouth: In your evidence, you mentioned, perfectly reasonably, that Wikipedia includes encyclopaedic information around a variety of potentially harmful topics, such as sexual activity, drug use, suicide and self-harm. In the earlier evidence session, it was mentioned that, for some people, accessing that might be problematic and harmful. Under the draft Bill, what are the implications for you of having that sort of encyclopaedic information?

Jimmy Wales: It is very difficult to say because I think, as others have expressed, a lot of the terminology in the Bill is quite vague and therefore you can read it and squint one way or the other way, and see it as quite alarming for Wikipedia, or not alarming at all. It is a very complicated matter. In general, we take great pride in being responsible and thoughtful in trying to deal with complex and difficult information, but we may not always get it perfectly right. I think it would be harmful.

I have described for you the process of discourse, dialogue and debate that goes on within Wikipedia. If there were suddenly a mandate that in a very quick fashion the Wikimedia Foundation in California needed to take top-down action because we had received some sort of notice from Ofcom to intervene in the debate and settle it one way or the other, that would be pretty damaging, I think, for the overall process of Wikipedia and how we go about doing things, particularly if we had to intervene at the level of the talk page in the discussion among the community. It is hard to say. One of the concerns I have about the Bill is what it would really mean for our model of dialogue and discussion.

Lord Knight of Weymouth: In the debates we had this morning, there was a discussion about whether we should try to improve the Bill with a more generalised duty of care and a stronger, more independent regulator who is then regulating on the basis of the risk assessments that companies like yours would carry out. Would it make it easier for you if we were to move in that direction?

Jimmy Wales: I think it would make it harder, in the sense that we had made it vaguer and vaguer and were trusting to a regulator to evaluate things in a way that is very difficult to predict. I would be much more inclined to say what would make it easier would be clearer definitions. For example, we can take a fairly easy case of non-consensual pornography, where we can say, “This is the definition of what it is. These are the requirements for take-down. These are the legal penalties for failing to do the right things”, and so on and so forth, versus a vague, “This content might upset some people. It could be dangerous for people to know certain things”. That gets to be really tricky.

Lord Knight of Weymouth: If the regulator is, essentially, being asked to look at your processes, and as you have described your processes to us they seem perfectly reasonable, and it also looks at the risk assessments you have around particular sorts of encyclopaedic information that might be deemed to be harmful to some people, and says, “This is fine”, what is the problem with that?

Jimmy Wales: It would be great, if they left us alone. Perhaps we could put that in the Bill: “PS. Leave Wikipedia alone”. I would not actually advocate that.

Lord Knight of Weymouth: Is it essentially about your trust in a regulator being able to make the right sorts of judgments?

Jimmy Wales: As a matter of broad political philosophy, I believe that when we are considering legislation, particularly legislation that impacts on and intersects with some fundamental human rights—the right of freedom of expression, the right to share knowledge and the right to know things—we need to be very careful to think about worst-case outcomes. Pick your favourite bad politician from around the world and ask yourself whether you want them to have this power. When, some future 50 years from now, a problematic thing happens, do we want to make sure in our democratic institutions and processes and systems that there is robustness to the winds of change towards tyranny and things like that? These things can overtake any society sooner or later, and we must have the ability for citizens to defend their rights against someone who has decided that it is definitely harmful information to oppose the Government’s budget this year, or something like that.

Lord Knight of Weymouth: Which could perhaps be dealt with by the independence of the regulator from the Executive and the power of Parliament, as we have been discussing this morning, or you might have an alternative solution for us as to how we are to regulate the platforms that we are clearly concerned about. I do not think we are desperately concerned about Wikipedia, but we are concerned about the behaviour and the systems and processes that some platforms have in place to amplify harm.

Jimmy Wales: In some cases, I do and in some cases I do not. I would say that for non-consensual pornography, as I have mentioned a few times because it is the easiest example, there is a pretty clear path forward, and it is unfortunate that the Bill is not addressing it.

In other cases, I think it is very tricky. To tell my own personal Twitter story, which is not dissimilar to Mr Nicolson’s, someone said something horrific about me, accusing me of terrible crimes, paedophilia, and posted it on Twitter. It was very specific as well, not just vaguely, “He’s a this”, or, “He’s a that”. As anybody would, I reported it to Twitter. A few hours later I got, “We’re looking into this”. Great. It is going to take some time. Then a few more hours later, “We don’t see anything here that is a violation”.

I am in a very privileged position because I know Jack Dorsey and several members of the board of Twitter, so I emailed Jack, and he said, “Don’t worry, Jimmy, we’ll take care of this”. I said, “Not really my point, Jack”. It happened again, so I emailed and this time I escalated it, and actually got into quite a good discussion with some people from content moderation. I found out that it is not actually against its terms of service, despite what I view as the plain language of it.

The rationale really has to do with the fact that the MeToo movement has shown that people need a voice to make accusations against people in positions of power. I said, “I don’t agree with your decision here, but I understand that it is a very difficult fine line”. If a minor actress no one has ever heard of begins posting about what Harvey Weinstein did to her 15 years ago, can he just email Twitter and get that taken down? That is a really hard problem, even though we also know that people will be falsely accused and so on. I do not have a simple answer to that. I do not agree with the way Twitter does it, but I acknowledge that what it is faced with is a very complicated and difficult problem.

Q83            The Chair: I want to go back to my earlier question on Covid-19, following on from Jim Knight’s question. Your team, your moderators, who are volunteers, have made a decision that the anti-vaxxers are wrong. I am glad they have made that decision. They have got that decision right, but they are equally free to get those decisions wrong. In that case, the question is whether there should be an independent external body that can say, “We feel that you are giving credence to content that we think causes harm, that falls foul of the guidelines”.

Jimmy Wales: One thing that most of the active and really important Wikipedians would push back on instantly is, “We don’t decide the anti-vaxxers are wrong. We follow what reliable sources say. They will say, ‘Look, here’s the New England Journal of Medicine. Here’s a series of studies. Here’s the NHS. Here’s the CDC in the US. And here is some nutcase’s blog. We are not going to treat those as equal’”. We do not decide those things at all. In many cases, our volunteers, particularly in WikiProject Medicine, are actually qualified to make those kinds of decisions, but, in general, we try to avoid putting ourselves in that position and we just say what reliable sources say.

The Chair: I appreciate that you have made the correct and subtle distinction that “we” is not necessarily the personal opinion of the people working on the content. Nevertheless, they are deciding what they regard as reliable sources of evidence that support it.

Jimmy Wales: That is right.

The Chair: I would contend that they mostly make those decisions correctly, and on Covid-19 they have. The question for us would be what if in the theoretical world they get that wrong and as a consequence they give indirect credence to content that could cause harm. In that situation, should there be the right of an external body to say, “We don’t think you’ve got this quite right”?

Jimmy Wales: It very much depends on the context. If the content is the sort of content that you would ban from being sold in a bookstore, which is quite an extreme level, I do not think it would be illegal. I am British now, but I am originally American, so I think of things in a First Amendment framework where you can publish an entire book saying the coronavirus vaccine is causing deaths and coronavirus is a myth, or whatever nonsense. You can publish that and it is not illegal. I do not know in this country if that would be illegal, but I suspect it is probably not illegal to publish rubbish. If people say, as many people do—let us move a little more borderline—that masks do not work, that wearing masks is foolish and does not work, I think that is an opinion that is wrong and harmful, but it does not rise to the level of something that should be illegal.

Once we begin to say that certain bad opinions should be illegal, at the very minimum we can all agree we have opened up a very difficult topic. As long as we are saying that the standards for speech are the same whether they are digital or non-digital, I am fine with it. I may disagree with the banning of the speech, but at least we should not invoke a regime online that is different from what we accept in other aspects of life.

Q84            Baroness Kidron: I have a quick question, on the back of what Jim asked, to do with the process. You have good processes, and you have just described them. If the regulator had the right to say, “You know what, this is harmful material but you’ve got to it in a reasonable way. Would you mind having a warning on it?”, is something like that anathema to your community?

Jimmy Wales: I would say yes. I think people would be very unhappy about that, but I am just answering in a factual way.

Baroness Kidron: I understand.

Jimmy Wales: Yes, I would think so. Of course, it raises a further question of jurisdiction, and that becomes quite hard as well. The Bill purports to have impact globally, which is, broadly speaking, a very difficult thing. We at Wikipedia are currently banned in China. If you argue that we should follow the laws of every country in the world that says we should, I disagree. In a case such as this it is very hard.

As a part of our structure we have always been very careful to do this. The Wikimedia Foundation, which owns and operates the Wikipedias, is in the United States, and we have local chapters around the world, which are just clubs of users. They have no authority over the website. They do not run the UK Wikipedia, for example. The UK chapter does PR and community events and things like that. We have done that because we know that in various places around the world volunteers or employees might be at risk, for example, in Turkey, where we were banned for a while, until we won in the Supreme Court there. Like many organisations we have that concern, but, unlike many organisations, we do not have a commercial presence in lots of different places, specifically for that reason.

The jurisdictional thing is something that you need to contemplate, not with reference to Wikipedia per se, but with reference to other places. There are random, extremely abusive places online; 4chan is quite famous as a place where a huge amount of abuse is organised and takes place, and they are not going to listen to Ofcom under any circumstances. We should not imagine that in many cases there is the ability to guide them in some way.

Baroness Kidron: How many volunteers do you have?

Jimmy Wales: It is hard to come up with a specific number. There are, let us say, 3,000 to 5,000 extremely active volunteers. They are the real backbone of the community. They are the ones who set policy, who do the most edits, and who are involved in international activities, working with galleries, libraries, museums, and all that sort of thing. There are a further 70,000 to 90,000, let us say, who are fairly active editors—five times a month, that sort of thing. Then there are millions who might edit once a year. If you go on to Wikipedia and you see a spelling mistake, click on it, fix the spelling mistake and hit save, in one sense, you are a volunteer, but you did not even log in. You are just a random person. I would not say you are part of our volunteer community, but we love people who do that. Depending on how you look at it, you could say 80,000.

Baroness Kidron: One reason I ask that question is that a lot of the time when we talk to the tech companies they say, “There’s so much, we can’t manage, there’s no way”, yet you have developed that sort of universe. I am not suggesting that the world should volunteer to keep Facebook and Twitter in line, but I am interested in how you see it.

Jimmy Wales: There is a very interesting nugget in that. One of the reasons why Wikipedia has scaled is the same reason that you can scale from a tiny village to a small town, to a medium city, to London. Of course you have different problems that emerge at different levels of scale, but it scales. With Wikipedia, when we had a very small group of people—50 people editing—that meant we had 50 people looking out for what was going on. If we have 80,000, it means we have 80,000 looking out for what is going on. It naturally scales. It is a community, and communities naturally scale. Of course, we see new and different problems at the level of 80,000 from when there were 20 of us, and I think that is important.

Can we think about how that might apply to Facebook and Twitter? This is where we go back to the feudal model: they are the company and the users are not really empowered to do anything other than complain and yell. I think that is why it does not scale. I am not sure there is a regulatory response to that because it is very complicated, but one solution that those companies should be looking at is how we put more decision-making power into the hands of the thoughtful people in our community.

If we recognise that there is a problem with troll groups, who is out there on the front line fighting them and yelling at them? If it is just the company and the companys algorithms, you are losing that battle, clearly. If you say to trusted users, “If you block the person, it is actually going to have a meaningful impact on whether their content is more widely distributed”, that brings up a lot of hard questions for the platforms.

You would not want a politically biased way of doing that where you say, for example, that only environmental activists are allowed to block people for environmental misinformation. That is a problem because it is one side of a very complex debate. I am not suggesting there is an easy answer for them, but that is where I would begin to look if I were the head of one of those companies. It would be to say, “We can’t do this top down. Even with algorithms we are failing. We are failing to protect even people who personally know members of the board”. This is my point to Twitter: if you are failing me, you are definitely failing a teenager who is being abused by someone.

If we are failing at this, we need to think about how we devolve power. How do we put more power into the hands of more good people to make this a space that we are all proud to be a part of? There are no easy answers, but I think there is something to that.

Baroness Kidron: The tough question is: if they are endlessly failing, is it a good business model? Should we ask something more profound?

Jimmy Wales: I have said to people at Facebook that, if the general public become convinced that you are leading to the destruction of western civilisation, it is probably not good for your business in the long run. I hope I am right when I say that. I think they sense that. I am not privy to anything in particular that goes on there, but I think a lot of people at those companies understand. Whether they are willing and able to actually do something about it is a whole other question, but I think they understand that there is a real problem if you are amplifying misinformation.

What you really want, or what I think you would want as any kind of responsible company, and I am talking about making any kind of product, is to not be big tobacco. You want to say, “I think my customers’ lives are improved by the services that I offer”. For a service that offers community connectivity, family and friends, news and information, you want to say, “Actually, I believe that when people come on to the service their lives are made meaningfully better in the long run”. If you do not have that, you really have to question whether you should change your thinking and change your model.

Baroness Kidron: Finally, and very quickly, do you see a correlation between the Facebook whitelisting and the Florida law that you complained about in your earlier testimony?

Jimmy Wales: Yes, I think there is a real problem. I do not know the answer to the problem so I will just describe the problem. It was fairly clear, and I will just have to be specific. I could be abstract, but let me be very specific.

It is fairly clear that Donald Trump was not following the rules of Twitter on multiple occasions, many times. On the other hand, largely because of Donald Trump, Twitter was in every single newspaper every single day for many, many years. There were front-page stories every single day driven by what nonsense he had said on Twitter last night. I can imagine they would say, although I am not privy to any discussions there at all, “Wow, revenues are up 30% and traffic is up 30%. We have all these new people signed up, and it’s largely driven by the news coverage of Donald Trump saying nonsense on Twitter”. How do they come up with the backbone to say, “You’ve got to stop doing this or we’re going to kick you off”? Eventually, they kicked him off, but at a moment when he was on the way out.

That is a really hard problem. It is not dissimilar, by the way, to problems that we have always had in the news media with questions of access journalism. You want an independent fourth estate to criticise the Government where necessary, but if that means you are cut off from getting the good news and information, you may be a little reluctant. That kind of game playing has gone on since we have had media, and this is similar. Of course, a lot of people are going to have a hard time blocking someone who generates a lot of revenue.

The Chair: Thank you very much for your evidence. Unfortunately, we will have to end it there, but we appreciate your time. Thank you.

Jimmy Wales: Thank you.

 

 

Examination of witnesses

Elizabeth Denham CBE and Stephen Bonner.

Q85            The Chair: Good afternoon. Welcome to the final panel for today’s evidence session. We welcome the Information Commissioner to give evidence. Elizabeth, I was at your very first Select Committee appearance when you were nominated as Information Commissioner, so it is nice to be at what may be your last, as you leave office at the end of October.

As Information Commissioner, you have many of the powers to investigate and regulate based on data protection that it is envisaged Ofcom could gain in moderating the content and systems that run and direct content to people on social media. I would be very interested in your view on the Online Safety Bill as it stands at the moment. Do you think that a regulatory regime has been created that will be robust enough to deliver the kind of enforcement and investigation powers that we want it to have, based on your experience of exercising similar powers?

Elizabeth Denham: I am very pleased to be here. I acknowledge that this could be my last appearance before a parliamentary committee. It has been a privilege for the past five years.

Online harms are real harms, and we have a place, and we have had a place, in regulating data protection, which is a type of online harm. I think the Bill we are talking about today, the Bill that has been tabled, has a lot of the right kinds of provisions and structure to be able to change the behaviour of the largest online platforms, search engines, et cetera. What is good about the Bill is that it contains both upstream regulatory interventions as well as enforcement interventions, to deal with the worst kind of behaviour, when behaviour does not change.

Upstream interventions such as the transparency reports are really important. The risk assessments in the Bill are really important. At the end of the day, when the regulator needs to effect change and deter behaviour, there are robust enforcement powers and actions. I think we all know that we are not going to be able to fine our way out of online harms. It is important to build in the kind of changes in structures and systems in companies that reflect community values. Those are some of the things I think are really strong in the Bill.

The other important point is balance. This committee is looking at the difficult balance that has to be made between freedom of expression, privacy online and regulating content. There is a tug of war between some of the very important values that we have, and it will be important to get that right.

The Chair: It struck me that investigatory powers for the regulator are really important because we have a system so far where the social media companies largely self-report on what they do. Even in Germany where there is content legislation, they rely on self-reporting. If the regulator does not have the power to go in and gather data and information as part of the investigation, it will not be able to hold anyone properly to account, particularly on system failures. Content failures are easier to see because you are aware of a piece of content, but the systems failures that allow that content to spread are harder to see. What are the practical challenges in trying to use those enforcement powers against companies such as Facebook and Google?

Elizabeth Denham: The information-gathering powers for Ofcom in the draft Bill are important, but they need to be bolstered by audit powers for the regulator to be able to look under the bonnet. I think that is really important. I will use an example from our own experience with the age-appropriate design code. As you know, this is a world-leading code that is starting to change the systems and the design and the behaviour of some of the world’s largest companies, not just in the UK but internationally. We have seen some really good changes.

We are carrying out a series of audits on some of the highest risk processing on the big platforms. Because we have the power to audit, we can see what is going on under the hood, under the bonnet. Audit powers are important. Strong information-gathering is important. Those transparency reports, in a way, are like a consumer commitment to how content issues and take-down will be supported by the company. It is all those things together, but I would like to see stronger powers of compulsory audit.

The Chair: Your reading of the Bill is that the auditing powers Ofcom would have are not as strong as the powers you have in legislation today.

Elizabeth Denham: Absolutely. As I say, it is early days with the age-appropriate design code, but we are carrying out a series of audits as we speak on some of the highest-risk processing that impacts children.

Q86            The Chair: This question came up earlier and I think it is an important one. It is a question I have heard discussed for several years. We are going down the approach of having different regulators regulating different bits of the internet: the CMA on competition grounds, the ICO for data, and Ofcom now for content and systems. That approach has been taken rather than having one single regulator that does everything. I can understand why, because as the internet is no longer a discrete thing but almost everything, it is rather difficult to have one regulator for all those things. Do you feel the systems are in place to allow seamless working between the regulators when they have overlapping areas of interest?

Elizabeth Denham: Even without law change in the UK to create a single regulator, which I would not support for all the reasons you have just stated, we have the Digital Regulation Cooperation Forum, which is a collaboration between Ofcom, the ICO, the FCA and the CMA, because we all have discrete objectives but they overlap at times. You have seen some of the work we have done with the Competition and Markets Authority to make sure that privacy and competition work together in the public interest. That is very important.

We need co-operation at the domestic level, and I have given you an example of that, but we also need the ability to collaborate and to co-operate at international level. Because the UK is out in front with its proposals for content regulation, I think it is really important that there are information-sharing powers that Ofcom has with, I guess, soon-to-be other regulators.

The other point that I made in my submission to the committee is that, although there is collaboration and co-operation going on, what is important is that there is a bright line drawn between determinations of privacy by Ofcom and determinations of data protection and privacy by my office. I have suggested to DCMS, and suggested to this committee, making it clear on the face of the Bill that the data protection regulator makes determinations on privacy in the context of, let us say, a super-complaint. I think companies would expect that, and individual citizens would expect to look to the privacy regulator to make those determinations. That is about clarity. We have good co-operation between the regulators. I just think the legislation should be clear.

The Chair: Where would you draw the line on some aspects of that? We have spent a lot of time talking about systems rather than content and whether that needs to be very clear on the face of the Bill. Let us say that content is being directed at somebody because of algorithmic selection by the platform. Will those always be content decisions about the content that has been sent, or are they data privacy questions around inferred use of data and the profiling of users based on their data?

Elizabeth Denham: It is both. Obviously, personal data is used in the delivery of content, and personal data is used if you have algorithms that determine the delivery of content. The content regulator and the data protection regulator will be looking at that very carefully. In the work that we have done with delivery of content to children through our age-appropriate design code and the work we have done on electoral interference, we looked at analytics and algorithms to deliver content to people that sent them down into profiles and filters, and took them away from the public sphere. There, I think you have an intersection. We do not regulate content, but we regulate the use of data in systems that deliver content.

The Chair: In the past, you have raised questions around the use of inferred data and whether inferred data is compatible with GDPR, and whether people have given informed consent. It is used to direct content to people based on accounts they follow and share on platforms such as Instagram. On TikTok, it is even wider than that; it is content directed at you through TikTok, which is based on anything you have engaged with, purely your data profile. Do you have even more concerns around the use of inferred data on platforms like that?

Elizabeth Denham: I do. The 15 standards in the age-appropriate design code, especially when we think about the delivery of content for children and turning off the kind of profiling that we see on those platforms, are even more important. Inferred data is personal data. We certainly found that in our investigations. Stephen, do you have anything to add?

Stephen Bonner: One of the key principles we look for is transparency. It can be very effective to be analysed by these systems and discover new things, but if they are basing it on things that are not transparent to the user, or are things they may not agree with or are not accurate, it is unlikely to result in a positive experience.

Elizabeth Denham: “Why am I seeing this? Why am I receiving this? What does the profile look like?” That is very much data protection.

Q87            Lord Black of Brentwood: As things stand, you have more experience than anybody else in dealing with the platforms. That will obviously be a huge help to Ofcom and in dealing with this Bill. I have two quick questions. First, do you think it will not work unless the codes put forward by Ofcom are statutory?

Elizabeth Denham: In my experience, because we have a number of statutory codes that we are required to draft, which are then laid before Parliament, that is a very good process. Using the example of the age-appropriate design code, the data-sharing code and the journalism code, we have a number of codes that we are directed to draft and consult on by Parliament and then they are laid by the Secretary of State.

The importance of statutory codes of course is that they have to be taken into consideration before the courts. They are not guidance; they are statutory codes. I emphasise that consultation is critically important in drafting those codes. It took 18 months for the age-appropriate design code. We consulted with children’s experts, parliamentarians, parents, academics, and the companies themselves across all the sectors to get the code right. I am in deep discussions with Ofcom about that because it is taking the experience that we have gained from drafting our statutory codes to be considered in its work. Statutory codes are important because of the process and because courts take them into consideration in making judicial decisions.

Lord Black of Brentwood: It would significantly boost the power of the regulator, particularly to move with speed.

Elizabeth Denham: Move with speed, yes. It takes time to draft codes. Even if they are not statutory codes, it takes time because of the consultation.

Lord Black of Brentwood: The other quick point is about transparency. We have heard a lot in the evidence sessions we have had so far about the importance of transparency, particularly in algorithms. Do you think there is enough power in the Bill for the regulator to be able to enforce transparency? I am thinking not just of algorithms but of the advertising supply chain. It is not mentioned in the Bill at the moment, but I know you have looked at it.

Elizabeth Denham: Scrutinising the fairness of algorithms is really important in this regime. It is work that we have been doing at the ICO under data protection. You probably have seen some of the toolkits and guidance we have developed on transparency and algorithms. We have also brought together a group of regulators across many sectors to look at whether there is a set of principles by which we can all scrutinise algorithms, be it transportation or criminal justice. You can see across all sectors that it is an important piece of work.

As to whether there is sufficient power in the Bill to provide for the regulator to look at algorithms, again, I think the auditing tools, compulsory audits, are important. Stephen, do you have anything to add from our technology experience?

Stephen Bonner: The other thing is looking at the outcomes. Transparency is vitally important and is a key part of having control over it, but in some cases knowing why something is happening rather than what is happening may be more important. The Bill covers some of the outcome-focused areas that I think are appropriate. Transparency is great, but it may not be enough.

Lord Black of Brentwood: We need to beef up the order.

Stephen Bonner: Yes.

Q88            Dean Russell MP: I register an interest in that I do some work with the Data and Marketing Association, which I am sure you know. Chris Combemale is an old friend of mine.

I want to ask a question on a couple of very quick points. When GDPR was coming in, I worked with a lot of businesses, and the fear of God was put into them that they would get it wrong. There was a lot of training and work and investment to get it right for those organisations, often small businesses, of course. Do you think this Bill will have a similar effect for the big social media platforms, and that they will go, “We’ve got to get this right or else?”, or might they find ways of getting around it? Obviously, with GDPR, what was required and what was intended was very cut and dried.

Elizabeth Denham: It is a great question. In preparing for the meeting with you today, I was thinking about the difference between this Bill and GDPR and data protection, which hardly came from a standing start. We have had data protection regulation in this country for 30 years. All around the world you can go back and look at 40-year-old statutes that evolved.

There is a rich vein of jurisprudence and tradition on data protection around the world, but for content regulation we are at a bit of a standing start. They are different areas, but many of the large companies are based in the US, and US law is getting stronger. Data protection laws at state level in states such as California look a lot like the GDPR. I see lawmakers around the world looking with envy at our age-appropriate design code and looking at how they can make those changes.

Privacy is getting stronger, data protection is getting stronger, and there are expectations of transparency on the kind of harms that around the world are concerning to lawmakers. I think the tide is starting to turn, and the companies are under scrutiny in ways they never have been before. Some of them are making significant changes—not all across the board but they are making changes. That is not just because the laws are there but because it is what users expect. I see changes in the way they behave. Stephen, do you want to add to that?

Stephen Bonner: Before I joined the regulator, I spent a decade consulting and helping organisations meet things like GDPR. What I learned from that is that often there are three main groups of organisations. There are those that care about these topics, want to do the right thing and will do it even without any legislation at all. We should consider those and make sure the burden we place on them is as small as it can be. There is a small group that do not care and are just in it for some short-term reason, and will exploit and take as much advantage as they can. That is obviously where enforcement and investigation are absolutely essential. I found that the vast majority of organisations have other things on their mind. The goal there is to educate and inform and make it as easy as possible to conform. What we can do with work such as this Bill is move that group from treating this as something on the back burner to making it enough of a priority that they move into the group that cares and wants to do the right thing.

Dean Russell MP: I am interested in the concept of consent. Obviously, GDPR is very much about consent. There are various versions but you opt in to get content. We heard testimony earlier in the very first panel about people being sent vile images without any consent, sometimes via AirDrop, or whatever, and in other ways being fed it through algorithms, through channels. Is there an element of this Bill that needs to take into account a firmer view of what consent is, as in, “I consent to get certain types of images or not”?

Elizabeth Denham: I have not considered whether consent in the context of the Bill would drive good behaviour. I think the focus on accountability, transparency and the systems that have to be in place is the right way to go, because we want the companies to be responsible for online safety, not the individual. I would not want more stress and more responsibilities put on the shoulders of consumers. I would rather see those community standards having to be reflected in the business processes of the companies. That is just my first answer.

Dean Russell MP: That is really helpful, thank you.

Stephen Bonner: I would draw out the point that we sometimes confuse the GDPR and the UK Data Protection Act idea of consent with control. In these cases, what we want to do is give people control. Often, given the power imbalance between the individual user and a giant platform, which you use to gain access to be able to communicate with your friends, take part in social activities, to educate, to learn, you have to sign up, so the idea that you are consenting is questionable. What you need to do is give people control so that they can decide and have autonomy over their decisions.

Dean Russell MP: Thank you.

Q89            Baroness Kidron: I declare an interest as the person who brought the age-appropriate design code into the Data Protection Act.

Liz, I am interested in something you said right at the beginning: online harms are real harms. That echoes a cry we heard earlier, “Please think of the victims”. I am interested to know whether you think it is an omission that the Bill does not give any pathway to individual complaints. I know you get individual complaints.

Elizabeth Denham: It is a very good point because the ICO is both an ombudsman, in that we take individual complaints, and a regulator, when we go in and look at whether companies are complying with the Act. We are also an enforcer. We are a little bit of everything. We use the intelligence that we gather through complaints to drive our more systematic investigations and our big actions. I think it puts a lot on the shoulders of one organisation to take individual complaints as well as being in charge of oversight and regulating the space of content.

If individual complaints could come to a different organisation, that might be a way to go, and then Ofcom could learn from the experience of those individuals, but imagine the millions of complaints for take-down requests that might go to an organisation such as Ofcom. The inclusion of super-complaints in the legislation is important, and the provision of the ability of individuals to come together, or through a civil society organisation, to file a complaint is important. We do not have that in the Data Protection Act, and it has not been brought into effect, but I think super-complaints are definitely a way to go when it comes to finding a way to balance freedom of expression, content and privacy. The super-complaint might be a way to address the David and Goliath problem we talked about earlier.

Baroness Kidron: There seems to be a lot of emphasis in the Bill on automated systems of redress. How confident are you in the purity of those systems, from all the things that you have seen about the datasets on which they are built and the potential failure of automated systems? Do you think there needs to be a bit more emphasis on human beings somewhere in the Bill?

Elizabeth Denham: Human interventions somewhere in that? Do you want to take that, Stephen?

Stephen Bonner: All these cases will have edge cases that are extremely nuanced. The problem with automated systems is that they are fantastic at dealing with things they have seen many times before. It is likely that you will need both: automation to deal with the volume of standard cases, which frees up resource to allow nuanced decision-making where it really matters.

Where we have seen organisations fail is when the volume is so high that the humans are not making good, nuanced decisions, and because they are dealing with repetitive issues they do not have the time to properly reflect and understand, so poor decisions are made in that moment by under-resourced and stressed individuals. Vice versa, organisations that attempt to do everything automatically often embed precisely the kind of discrimination and poor outcomes we are looking to avoid, because the automated systems have been trained with datasets that do not accurately reflect the population using them, or the situation has changed, and the outcomes are not always positive.

Baroness Kidron: Do you think therefore that we should be looking for minimum standards on moderation that are somewhat related to the scale of the organisation in question? It would not be one size fits all, but if you have a volume of material you have to have a higher bar, or at least more people.

Stephen Bonner: We have spoken a lot about the balance between freedom of expression, online safety and privacy. Similarly, there needs to be a balance to allow small organisations to compete, to innovate and to launch new things. Otherwise, if you put too high a bar on the moderation, only the giant platforms can offer services, and we will not see freedom of choice. We will not see the market that allows people who, through transparency, see what is happening to move to a different platform. If there is no other platform, they are stuck. Being proportionate to both the risk and the harm is vitally important. Anything that applies the same restraints you would require on a giant global platform to a small organisation is unlikely to be effective.

Baroness Kidron: My final question is really difficult, so I am not asking for an answer now. Could you help us in how we should be thinking about end-to-end encryption? How should we approach that question?

Elizabeth Denham: In a very nuanced way. This should not be looked at in a binary way—end-to-end encryption, bad. That is not the case. The difficulty is that we have some rhetoric about end-to-end encryption preventing law enforcement being able to do its job. We have rhetoric on one side and freedom of expression advocates on the other side. Obviously, there is a value in private communications. I think we well understand the value of private communications. We also understand the need to investigate, especially child sexual exploitation or terrorism content. We want law enforcement and intelligence services to be able to investigate.

Given that both sides of that equation are important, we have to have a nuanced debate. Law enforcement needs to be asked the question about what is wrong with the tools that it already has to be able to access communications. We need a proportionate discussion with the social media companies because I would suggest that it is only a fairly small percentage of communications that are really at risk.

What are the values of our society? Freedom of expression, private communications, and law enforcement for finding the bad actors and the bad guys; there has to be a nuanced place for that, and only in certain circumstances should law enforcement have the ability to break encryption. We have seen some of these cases in the US. Stephen, do you have anything to add? We are working on a paper on end-to-end encryption.

Baroness Kidron: Great. Will it be ready in time for us? That is the question.

Stephen Bonner: The commissioner has very effectively covered the philosophical element. I am more from the practical side. I spent many years defending systems against attackers, and with strong encryption I could stop baby monitor footage and CCTV footage being stolen and abused. I could stop financial details being stolen.

When I look at a control like this, I think: will it work, what happens when it goes wrong, and is it worth it? On the question of whether it will work, unfortunately, those displaying the most egregious behaviour are running their own platforms. We had a very successful sentencing just this month of the person who ran the world’s largest facilitation of online child sexual abuse material. That was not someone using any of these platforms. They were running a bespoke platform, and it took co-ordination between law enforcement, investigative skill and technical capability to break into that environment and cover that. If we limit the protections for everyone on platforms that these people are not actually using, will it work?

When it goes wrong, I have seen the consequences of insecure data leading to exactly the harms that we are looking to control: children’s images being taken by people who are spying on them. At the moment, there is no good way to add access without weakening that security. Thankfully, there is a safety tech challenge being run to try to balance those two capabilities, but until that can be put in place we see that, where this has been done on other systems, there is misuse of the access being given. There are microblogging platforms where foreign agents gained access; cloud email where foreign agents gained access through this capability; and firewalls that had it built in and then another power gained access and co-ordinated that. Weakening the protections of strong encryption causes harm as well as perhaps reducing it. It is a question of comparing those two.

One of the most powerful things I like to do is listen to the people who are impacted. The concern here is very much around children. Ofcom and the ICO run regular surveys with children about their behaviour online and their concerns. Children online are eight times more likely to express unprompted a concern about their data being stolen than they are about content being sent.

Elizabeth Denham: If it is helpful, we can write to the committee about our position on end-to-end encryption.

The Chair: Thank you. Now, joining us remotely, Darren Jones.

Q90            Darren Jones MP: There are two main questions from me. First, I am interested in the potential conflict between regulators, Ms Denham. You have mentioned content regulation in the context of Ofcom a lot today, and of course your responsibility around data, but the two are inextricably linked, especially on platforms where data is monetised. I want to probe you slightly further on that. Do you think there need to be any changes on the face of the Bill to be clearer about how the ICO and Ofcom will work together on enforcement in this area?

Elizabeth Denham: Yes, I do. My suggestion in the submission that I sent was to make it clear on the face of the Bill that the ICO makes determinations when it comes to data protection and when it comes to our electronic communications regulation. I think that is important. It gives clarity.

The other ask that we have made of DCMS, which came from all the digital regulators, is that we would like the Government to consider giving us duties to take into consideration the obligations of other regulators, and information-sharing gateways. That is really important for us. It might sound like an in-the-weeds legal problem, but we need to be able to share information, because from a competition aspect, a content regulation aspect or a data protection aspect we are talking to the same companies, and I think it is important for us to be able to share that information.

The advantage of the Digital Regulation Cooperation Forum, and the advantage of the UK being a pathfinder in the areas of safety online and data protection and competition, is that we are setting international norms now. Countries around the world are really interested in the fact that the Digital Regulation Cooperation Forum is working in the public interest to make sure that these companies, the size of nation states, are not forum shopping, or running one regulator against another and claiming in the privacy interest that they are going to change third party advertising processes. You can see that it is important that we act together in concert, but we need some of those tools to be able to do that. We need duties to respect the other regulatory objectives as well as information sharing between the regulators.

Darren Jones MP: Is there any inequality of powers in the Bill between what is being given to Ofcom and what you might do in order to pursue the same company, potentially on the same issue but from your perspective from a data protection and privacy angle?

Elizabeth Denham: I would not say that we have to have the same powers, but I think there should be equivalent powers. Again, that prevents a company forum shopping or claiming that they are following content rules, and that data protection does not matter, because there is a 10% of global turnover fine under one regime and 4% under another. Parliament needs to look at the coherence of regulatory regimes. I talked before about the bright lines being important. Equivalence in the kind of powers that we need to be able to tackle these large companies is important. I have mentioned audit powers, and again I think that is important for Ofcom.

The other point is about extraterritorial reach. That is one power that we have in the data protection law that is really important because it allows us to reach across a border and deal with the matter of a company that is based somewhere else. I do not see in the Bill—I could be corrected—that Ofcom has been provided with extraterritorial reach. That might be another thing to look at. We need essential equivalence in powers, especially when we are dealing with the same companies across various regulatory regimes.

Darren Jones MP: That is an important point that we will certainly want to check. My next question is about the powers of the Secretary of State. You may have heard me ask a similar question earlier. I am interested in your views. On the one hand, the department would say that in this fast-moving environment you need the Secretary of State to have the power—the example I gave earlier—to designate priority content without a long consultation period, a debate in Parliament and perhaps even a vote and a statutory instrument in a DL committee; and, on the other hand, which is broadly where my view is, you give the Secretary of State the power to intervene with an independent regulator about what they can and cannot do about certain things. As we know, they change quite a lot when a particular Secretary of State thinks something is important. That does not feel quite right, and I would be interested in your views on that.

Elizabeth Denham: It is very important that independent regulators are independent. Especially when it comes to content regulation, it is critically important that the regulator overseeing content is completely independent of government. I understand that in a fast-moving world there may be a quick solution, but this regulation does not make Ofcom the department of truth—that is not what we want; it is an independent regulator, and that independence is important for companies and citizens to be able to trust its decisions. It is important that it is completely independent of government. It is also important for its codes and guidance.

Darren Jones MP: My last question is slightly tangential. The Government have announced that they want to review data protection policy in the UK now that we have left the European Union, with some implied suggestion that we would want to move away from some of the principles in GDPR. Without a full debate about that in the context of this joint committee, do you think there is a risk that we end up changing the remit or powers that your office has on the one hand, and on the other hand giving Ofcom a set of powers under UK legislation and making it harder to do the job?

Elizabeth Denham: That goes back to my previous point about regulatory coherence and alignment of powers. They may not be the same but independence is critically important. On the Government’s desire to review, and the consultation going on right now to review data protection, I think it is healthy to review legislation, and I welcome that. We will be responding within the next two weeks. My office will be responding to some of the proposals in the consultation paper. The lens through which I will be looking at it will be the importance to the public interest of the independence of a regulator, and whether any of those proposals actually dilute or change the rights that individuals have under the law.

Darren Jones MP: That is useful, thank you.

Q91            The Chair: There is a final question from me. The point on auditing powers is quite important for Ofcom in this context. We have received evidence that suggested that the powers of Ofcom are quite wide-ranging in this context in both extraterritorial reach and the ability to ask for whatever information it wants. I would be interested, whether you want to say now or in writing, in where you think, on the face of the Bill, the powers fall short compared to the powers you have and therefore what additions you would make. I agree with you that it is important that Ofcom in this context should have equivalent powers to the Information Commissioner.

Elizabeth Denham: Is the question, Chair, what I think about the powers that are in the draft Bill in comparison to our experience?

The Chair: Yes.

Elizabeth Denham: I can certainly write a lot of detail about that. The top line is that the information-gathering powers that we have in the Data Protection Act are important, but of course they are reviewable by the courts. We have other inspection powers such as no-notice inspections. We have compulsory audits. There are many other things. I would not do justice to the question by answering it now, so if I could write, I will do that.

The Chair: We would appreciate that. Thank you very much. That concludes our questions for today. Thank you for joining us.