Digital, Culture, Media and Sport Committee
Sub-committee on Online Harms and Disinformation
Oral evidence: Online Safety Bill, HC 271
Tuesday 7 June 2022
Ordered by the House of Commons to be published on 7 June 2022.
Members present: Julian Knight (Chair); Kevin Brennan; Steve Brine; Clive Efford; Damian Green; Dr Rupa Huq; Simon Jupp; John Nicolson.
Questions 1 - 48
Witnesses
I: Dr Edina Harbinja, Senior lecturer, Aston Law School; Ellen Judson, Lead researcher, Demos; William Perrin OBE FRSA, Trustee, Carnegie UK; and Izzy Wick, Director of UK Policy, 5Rights Foundation.
Witnesses: Dr Edina Harbinja, Ellen Judson, William Perrin OBE FRSA and Izzy Wick.
Q1 Chair: This is the Digital, Culture, Media, Sport Select Committee and this is a one-off hearing we have as part of our DCMS Sub-Committee’s scrutiny of online harms and disinformation and the Online Safety Bill. That is a hell of a mouthful, isn’t it?
We are joined today by Dr Edina Harbinja, senior lecturer at Aston Law School, Ellen Judson, lead researcher at Demos, William Perrin, a trustee at Carnegie UK, and Izzy Wick, director of UK policy, 5Rights Foundation. Edina, Ellen, William and Izzy, thank you very much for joining us today. Thank you.
Before I start off, I want to see whether there are any declarations that any members would like to make. I will state that I am the Chair of the APPG for New and Advanced Technologies. Thank you.
I will put my first question to Ellen. One of the major bugbears with this legislation—at least from colleagues—is about freedom of speech and whether or not this restricts freedom of speech. What do you think of that notion, that the Online Safety Bill is effectively damaging to freedom of speech?
Ellen Judson: Definitely. As it is currently written, it could be. Our particular concerns around freedom of expression relate to the heavy focus on content moderation as the primary solution to online harms: looking at ways that content which platforms consider likely to be illegal, or which breaches their terms and conditions, is to be removed or demoted. In practice, I think those could certainly lead to worrying restrictions of freedom of expression by incentivising over-moderation.
Other areas, from the opposite perspective on freedom of expression, are that certain exemptions within the Bill may allow particular kinds of online harm and abuse, and harassment in particular, to continue. I am thinking particularly of the media exemptions, the democratic exemptions and the journalistic exemptions, which could actually lead to the continued perpetration of abuse against people, which then silences their ability to express themselves on platforms as well.
Q2 Chair: Is the view that this clamps down on freedom of expression a little bit simplistic considering that, at the moment, the arbiters of freedom of expression are the social media companies themselves?
Ellen Judson: I agree. The idea that what we have at the moment is a kind of neutral “everyone gets to say what they want”, and that this is introducing constraints that are not already there, is misguided. I think at the minute who gets to say what and who gets to see who has said what is being driven by commercial imperatives.
One of the reasons we are supportive of regulation in principle is that we want to see those metrics being moved away from commercial imperatives and being brought much more into social democratic imperatives. However, the way the Bill is currently structured, the rights and protections around freedom of expression are quite vague and it is quite hard to know what they would amount to in practice and in what circumstances platforms are expected to—
Q3 Chair: Is the fear unwarranted take down?
Ellen Judson: Unwarranted take down but also algorithmic suppression of particular groups on arbitrary bases. For instance, at the minute we know that algorithmic systems, which are trained in a biased fashion, often over-moderate content from marginalised groups, such as LGBT people, from black creators and so forth. I think there is also a worry that, by just focusing on how we can get platforms to be moderating content at scale, those will be replicated if they are not redressed.
Chair: William, I saw you nodding while Ellen was talking.
William Perrin: Thank you, Chairman. I think it is important that we step back and not look at this piece of legislation in isolation, but look at the other areas of society in which speech is a predominant product or vehicle and where Parliament has decided to bring in legislation. Over the decades that has happened in the broadcast media; self-regulation in cinema has been going on since 1913, I think; and there is self-regulation of advertising. All of these operate in ways that are very different to this Bill. They give very strong powers to either self-regulatory bodies or fully independent regulators to make decisions, often about what is harmful but also about a number of other parameters that Parliament has set for them.
I think there is a strong element—and it is natural, I suppose—of tech exceptionalism in saying that these big, powerful media platforms are in some way very, very different to big, powerful media platforms that have preceded them. Therefore, I think there is a strong case for regulating these platforms once they have reached a scale where they are highly influential in society and they cause harm to large numbers of people.
I feel one of the best measures to protect freedom of expression in the Bill would be to row back a little on some of the powers of the Secretary of State—of the Executive—to interfere in what the independent arm's length regulator does and the decisions it takes in its regulation, because the underpinning convention of media regulation in Western Europe is that there is an independent regulator and the Executive does not interfere in its day-to-day decision-making, for very good reason. In the regime set out here, there are a few too many powers for the Secretary of State to interfere in that manner, rather than merely giving strategic direction on important matters.
Q4 Chair: I was about to come on to that particular area. Dr Harbinja, do you think that the powers laid out in the Bill for the Secretary of State to direct Ofcom, in terms of changing codes of practice, are a good basis for public policy?
Dr Harbinja: Thanks for the question. I have written quite extensively on that in my academic papers, and I do fear that Ofcom's independence may be compromised by those powers. I can see that similar powers are creeping into other law reform pieces and proposals, such as the data protection proposal, where, again, the Secretary of State wishes to attract more power than would be appropriate for this type of regulation. When we talk about regulation in any area—including this one—we want an independent, empowered regulator, and powers such as, as you said, designating strategic priority areas for Ofcom are one area of concern. The second, of course, is the problem of designating priority legal but harmful content, which is an area that I am particularly interested in, and in particular designating priority content on grounds of public policy. I see that Members have already proposed amendments to delete the public policy ground, and I would certainly support that proposal.
Q5 Chair: Any changes of direction—changes that Ofcom brings about in the codes of practice, directed by the Secretary of State—then have to be approved by a vote in the Lords and Commons on secondary legislation. Does that not effectively add a layer of scrutiny, which is not obviously there at present, where decisions are being made by social media companies whose only scrutiny comes from their shareholders?
Dr Harbinja: Yes. Where parliamentary approval is clearly and explicitly stated in the Bill, I agree that it adds scrutiny, but we also have mechanisms where changes could be passed without Parliament's explicit approval, so I do think a strong layer of legitimacy would be added by requiring Parliament's approval.
William Perrin: Chair, if I may, in the matter to which you referred, clause 40 sets out a form of infinite ping-pong before proposals come to Parliament.
Clause 40 is explicitly drafted so that the Secretary of State can reject proposals received from Ofcom an infinite number of times until they get the proposal they want. It is a very unusual power, and that is before the Secretary of State then brings the SI to Parliament. Ofcom, in a terribly polite way, has raised that this causes problems, because decision-making must be based on a rational process to survive judicial review. If Ofcom's work is based on its enormous research and evidential capability, I, as a former civil servant who worked in DCMS, frankly cannot see how DCMS can surpass that on an infinite number of occasions in any manner that is other than irrational and, therefore, it weakens those kinds of—
Q6 Chair: Who would have thought that DCMS would act in an irrational fashion?
I get what you mean in that respect, but there is also the fact that ultimately the Secretary of State—whoever that Secretary of State may be, at whatever point in time—is elected, whereas whoever works for Ofcom is not elected. Therefore, that is surely a means of democratic scrutiny right at the first stage, if they have the power to reject what Ofcom actually wants to introduce, prior to having another layer of scrutiny, which is parliamentary scrutiny?
William Perrin: In that case, why did the British Government sign up in April this year to a Council of Europe declaration that said, “Media and communication governance should be independent and impartial to avoid undue influence on policymaking, discriminatory treatment and preferential treatment of powerful groups, including those with significant political or economic power”? There seems to be a contrast between that noble statement, which underpins most of western European media regulation, and the practice as set out in clause 40.
Q7 Chair: Do you think the Government are afraid that Ofcom could become too powerful? Frankly, it is already as powerful a regulator as, say, the FCA. It is a hugely powerful regulator already and with regulation in this space it is going to be exponentially more powerful.
William Perrin: This is a very similar debate to what happened in the 2002-03 Broadcasting Bill, which became the Broadcasting Act. I should declare an interest, as I was the civil servant who created Ofcom back in 2001-02. Nonetheless, I think Ofcom has held the confidence of this Committee, and of Parliament more broadly, in the decisions it has taken, even with its greatly expanded powers.
Every now and then there is a small slip up but it does have strong and effective governance. I think now that the issue of the chair has been resolved it has a strong board structure and it is well set up to carry out this task, and the Government have not brought forward any evidence that says that Ofcom will be too powerful.
Q8 Chair: Yes, I understand that. It is just that, since its introduction, it has become infinitely more powerful. For example, it now has oversight responsibility for the BBC, and you are now adding the largest companies on earth and their interaction with our society as another area of responsibility, by which means the chair and chief executive of Ofcom will be virtually the most powerful people in this country outside of elected officials.
William Perrin: No, I think Police and Crime Commissioners, police chiefs—police chiefs, in particular, have the power of arrest and Ofcom does not have the power to arrest people. They are far more powerful. That comes later.
Q9 Chair: Izzy, what do you think about this debate that there is about, first, freedom of speech and the way in which this legislation either harms it or potentially enables a more frank discussion of it and, also, can you say whether or not you think that the Secretary of State’s powers are too great?
Izzy Wick: Taking freedom of speech first, I think that so often we hear child safety being pitted against adult freedoms, like freedom of speech, and we forget that children also have a right to freedom of expression, to a plurality of information and to participate in the online world. However, I do have concerns that a Bill that focuses on content moderation and content removal will undermine children’s rights to express themselves, to access that plurality of information and to participate in online life. We think that this complex categorisation of harmful content for children—which is primary priority, priority and non-designated—will lead to confusion, potential over-moderation and put companies in a position where they are deciding what can and cannot be said.
What we need instead is a Bill that tackles the functionalities that amplify and spread harmful content: things like frictionless sharing, popularity metrics and the algorithmic promotion of sensationalist content. We are also concerned about the lack of clarity around certain definitions, as Ellen said, some of the exemptions and the different thresholds for democratically important content and journalistic content, which I am sure we will come on to. We do feel strongly that these have to be tightened up, or we risk them becoming a backdoor for things like misinformation.
On the powers of the Secretary of State, I think that for this regime to be trusted by the regulator, providers and users of those services, Ofcom does need to have full independence from the Executive and protection from political influence. As both Will and Edina have said, we would like to see the powers of the Secretary of State pared down in the Bill.
Q10 Chair: In what particular way would you like to see those powers changed? Is it, as William was referring to, clause 40?
Izzy Wick: I think with the codes of practice we would like to see Ofcom have more independence over what those should say. While parliamentary scrutiny is very important, we do not think the Secretary of State should have quite so much power to reject those things, so as Will was referring to—
Chair: The ping-pong, right.
Izzy Wick: Exactly, yes.
Chair: I think on journalistic aspects I will bring John Nicolson in.
Q11 John Nicolson: It is interesting that you are all talking about paring down the powers of the Secretary of State. I have sat on both this Committee and the Bill Committee. We have not had a witness before us who does not believe that the Secretary of State's powers should be reduced and, in the interests of openness, I have tabled an amendment to get rid of the public policy powers of the Secretary of State from the Bill, which I hope will attract the support of my colleagues cross-party.
Can I move on and ask about journalism, specifically the journalism exemptions? Can I start off with you, Ms Judson, on that. It seems to me that the definitions are very broad and very vague in this Bill.
Ellen Judson: Yes, I would agree. There is the media exemption and there are the journalistic exceptions, which are less strong than a full exemption. Our concern about the media exemption and the definition of recognised news publishers is that the thresholds that are set could easily be met by a wide variety of legitimate and illegitimate actors setting up a website with the express purpose of meeting those criteria and then—
Q12 John Nicolson: Tommy Robinson, for example, whose name often crops up in this context, calls himself a journalist rather than acknowledging that he is a right-wing extremist, violent, criminal thug. He could call himself a journalist, couldn’t he, and that would be the danger? He does call himself a journalist in fact.
Ellen Judson: That is one of my concerns with the journalism exceptions, for sure, because what platforms are required to do, in terms of how they specify how they treat journalistic content, is very vague. Part of that is likely to be left to their discretion, but if they are expected to, for instance, offer the special complaints procedure to people who believe that they are representing journalism, as you say, we end up with a rather odd system where special complaints procedures and special recourse against things like over-moderation—
Q13 John Nicolson: How do we address that? What would you like us to do?
Ellen Judson: I would like to take the journalistic exceptions out of the Bill, and similarly the media exceptions. If there are remaining issues about the freedom of expression protections not being strong enough to protect political freedom of expression, which I think there are—
John Nicolson: How do we address that?
Ellen Judson: One way would be to include rights protections within the online safety objectives. At the moment, the online safety objectives are very focused on protection from harm in a very narrow sense. If we brought an understanding of rights as also being a mechanism through which users are protected and kept safe from harm online into those objectives, that would help strengthen the case for platforms being held to account on what they are doing about rights as opposed to the rather vague duties at the moment.
Q14 John Nicolson: Mr Perrin, it is very difficult, isn’t it, to define a journalist? I am a journalist by profession but when people ask me how you become a journalist, what is the route to journalism or how indeed do you define a journalist, I find it difficult myself. How would you define a journalist?
William Perrin: Like you, Mr Nicolson, we have been around this loop many times in attempts to regulate the press in different ways, over the last decade or so. I remember a surreal meeting with DCMS at the height of Leveson when I had to explain to them what a blogger was, and I don’t want to go there again. I think really—
John Nicolson: It is good to know that the new chair of Ofcom is so in touch with contemporary social media, isn’t it?
William Perrin: I am sure he has very good advice, Mr Nicolson. In our very earliest work, where we first set out the statutory duty of care enforced by a regulator, at Carnegie in 2018, we were very clear that we felt that the Bill should not be a route to regulating the press by the backdoor. We have stuck to that throughout. I think definitions here should be industry-led, because it is important that industry is comfortable with the definition. That may be one of the reasons why it is so vague: the industry is not comfortable with some of the definitions of journalism for fear of what it brings on—we get back to Leveson again, which they do not want, at least the newspapers.
When I say industry-led, I don't just mean the big traditional journalistic outlets but also bringing into scope the newer outlets for journalism across the piece, though there will be difficulties at the boundaries. I think we just have to accept there will always be difficulties there. It cannot be hermetically sealed. However, I would add that, as I understand it—and the Department confirmed this to me—the journalism exemption or exception, or whatever it is, does not mean that the platform's terms and conditions do not apply, so perhaps if Mr Robinson was preaching hatred, or someone like that, that might fall foul of the platform's own terms and conditions.
Q15 John Nicolson: I see. Ms Wick, the danger of course is that certain people's forms of speech are privileged over everyday users' forms of speech.
Izzy Wick: Yes. We have seen cases of whitelisting where, as you say, the terms and conditions are not consistently applied to certain users. The flipside of that is what is called shadow banning, where platforms may deprioritise content from certain users. Therefore, I think this comes back to how providers are expected to uphold their terms and conditions—which should be consistently—but also to lifting the hood on some of these algorithmic processes and the decision-making that goes on within the platforms, to understand just how news feeds are populated, why users receive certain types of content and why children, in particular, are sent down rabbit holes.
I think that a really important part of tackling this problem is algorithmic oversight: making sure that Ofcom has the requisite powers and resources to scrutinise what is going on on these platforms—the processes that sit beneath the architecture and the design features—and to audit and understand the types of risks that they generate.
Q16 John Nicolson: Resources being key. Finally, from me, Dr Harbinja, are you concerned that the comments sections on newspaper websites are exempt from this Bill? That seems to be a huge loophole that could allow all sorts of deeply disturbing and inappropriate content to bypass the rules.
Dr Harbinja: I agree, and I have seen that an amendment has been tabled on that—probably by yourself—and I would support that amendment.
If I may add to what colleagues have said so far and be really practical about this, I am concerned about clauses 15 and 16, the duties regarding content of democratic importance and journalistic content. I am not sure if you have read the letter from the Human Rights Committee to the Secretary of State about human rights compliance, including the concern about the distinction between journalistic and democratic content.
If you asked me what I would do with this, I would merge these two clauses, because you cannot, under any existing legal standard in human rights courts or domestic courts, draw a clear boundary around democratic content, which is primarily political and, as we know, in free speech law you cannot define political content very restrictively. It often includes other types of speech, such as health-related speech, artistic speech and so on. Then you have the distinction in clause 16 with regard to journalistic content that is UK-linked, which protects journalists in the UK. For example, if someone is in Ireland and they write about Northern Ireland, they are a journalist who is not in the UK and they are not protected.
Therefore, I think that these two clauses should be merged and there should not be different protection for journalistic content and content of democratic importance that is shared by other users, and this is a broader category.
Q17 Chair: Just talking about content moderation on newspaper sites in particular, we all know that newspapers have increasingly scarce resources right now and are likely to continue in that scenario for quite some time. How is it possible for them to carry out very invasive content moderation effectively? Are we asking too much of them to do that? Should we not just accept the fact that sometimes there will be comments on there that are unpleasant or offensive? That will happen, and it is actually more valuable that we have newspapers themselves online and that their economics can work, even if some nasty comments get through the net?
Dr Harbinja: Given the type of speech and the rationale for regulation, I cannot see the argument for exempting the comments, because the consequence is similar in terms of harm or safety. I would agree with you that it is very burdensome to require not just newspapers but also smaller tech companies to moderate content extensively. I disagree with this presumption of extensive, overly cautious moderation and over-removal, and with the demand to remove everything that might be problematic and offensive, because we do want to protect offensive speech as well. According to Handyside v the United Kingdom, for example, speech that offends, shocks and disturbs can be protected, within certain limits.
Therefore, I think there is no rationale to treat this speech differently, but I agree that neither tech companies nor news agencies should be required to moderate content extensively or to use algorithms that will do that and censor free speech as a result.
Q18 Damian Green: I want to pick up first on a point that William made that strikes at the heart of one of the other in-principle debates happening around this Bill, about the concept of legal but harmful. A lot of people—including me—are particularly uneasy about giving a Secretary of State direct or indirect powers to say, “Okay, that is perfectly legal but I think it is harmful and, therefore, I am going to find ways of dealing with it”, for all sorts of obvious reasons. If I understood you correctly, you were making the point that, say, the voluntary regulation or self-regulation we have in the advertising industry is exactly what we have done for a century, so the concept of legal but harmful is not a new one in this Bill. Am I right in that?
William Perrin: Absolutely. If my history is right, in advertising that came about for good commercial reasons. The advertisers wanted their adverts to be seen as effective and, therefore, relatively truthful and so self-regulation there came about for strong commercial reasons and was then codified over the decades.
Ofcom is given enormous discretion to go away and write programme codes. It does not have to come back to Parliament or the Secretary of State at all, and those codes influence everything that is broadcast on TV and in a different way on radio. The BBFC has been doing similar exercises in film since about 1913 to stave off Government regulation. That being said, I understand why people will object to it but I always think the phrase "legal but harmful" sets more hares running than is absolutely necessary, given the precedent in other bigger, more powerful and longer-lasting media.
Then there is the question of who should start to define that scope and, as I have already said, we leave it to other independent regulators in these other sectors that are, combined, much bigger than the sectors we are talking about. That would be a route but that is not where the Government are going.
The Government have set out in quite some detail in the schedules—is it 5 to 7?—a wide range of illegal things that should be caught by the regime. They have chosen to bring those forward and put them on the face of the Bill. The Government have made a conscious choice not to bring forward such a schedule for legal but harmful, nor to produce a draft SI, nor for the Secretary of State to make a speech indicating what they think might be in it, to give everybody some assurance during the course of this debate. I do hope that they will reconsider that and give some signal as to what they intend to do.
We said in our evidence to the Bill Committee—which we provided to the clerk the other day—that we feel there should be a schedule 7A for priority content harmful to adults and 7B or C for the different types that are harmful to children as well. Then of course there is the question of what goes in that schedule.
Q19 Damian Green: Izzy, you were nodding there, presumably specifically for the children’s aspect that I understand your organisation is particularly concerned with. I suppose it would be relatively easy to draw up that list, a schedule to cover children particularly, but you then of course have the technical issue of how to stop them accessing it anyway. Is that practical?
Izzy Wick: I think there are two questions in there. The first is around legal but harmful in relation to children. Again, I would like to make the point that legal but harmful content in isolation may not be problematic. One person seeing a piece of content that says the Covid vaccine does not work isn’t necessarily particularly harmful. The issue is the spread and the scale of harmful content online, the systems that drive and exacerbate the harm and the impact of those pieces of content. Therefore, the focus has to be on the functionalities, the systems and the algorithms that sit underneath the operation of these platforms, to tackle the legal but harmful content.
Your point was about how to prevent children from encountering these types of content. What we are talking about here is age assurance, so services having a level of confidence in the age of their users to know who they need to give additional protections to. The revised Bill does include two references to age assurance but they are buried in clause 11, which is the safety duties to protect children, and then later on in part 5, so applying to pornography services.
What we do not have is a standalone duty that says, “Okay, if your service has age restrictions, either in compliance with regulations, such as gambling sites, or as specified in your own terms and conditions”—most social media sites have a minimum age of 13—“you need to have a level of confidence in the age of your users that will enable you to give them proportionate protections”. We want to see a separate, new clause on age assurance that says, “This is what is required”, and then a corresponding duty for Ofcom to produce statutory guidance on how those systems should be operated.
A crucial component of that is privacy, so making sure that age assurance is not used by providers for additional data harvesting, collection and sharing. We want to make sure that the data collected for the purposes of age assurance is used for that purpose only and is not shared with third parties.
We also want to make sure that it is effective. We do not want to end up in a place where pornography companies have a tick box saying, “Are you over 18? Yes? Go straight in.” We want to make sure that it is effective, that it is proportionate to the risks of individual services and that it is secure. What Ofcom needs to do is establish rules of the road for how those systems and technologies should be operated, to give providers some assurance that what they are introducing is not heavy-handed, is privacy compliant and is proportionate to risk.
Q20 Damian Green: Do you agree that the legal but harmful debate is slightly overblown?
Dr Harbinja: In the revised Bill, there have been some good, positive changes concerning legal but harmful content. I will just come back to what William said about the traditional media's regulation of harmful content, through advertising regulation et cetera. The paradigm here is different and I disagree with drawing this analogy. When we are talking about harmful content on social media platforms and user-to-user services, we are not talking about content that is served from someone above, as in the broadcasting industry or the advertising industry. We are talking about user-to-user content: speech, individuals, different types of groups, privileged and non-privileged groups. We are not talking about powerful industries that self-regulate harm; we are talking about companies or the Government, or Ofcom, regulating harm that is inflicted among users themselves. It is very difficult to define, and very problematic, what is harmful to one individual and what is harmful to a less easily offended individual. The paradigm is completely different, and if you talk about a list of harmful content, for example, that could be added to the Bill, it is impossible. Even a non-exhaustive list would be very difficult to draw up, and the Government have already included a harmful communications offence—which is a criminal offence in the Bill—and a false communications offence, et cetera. They think this is harmful content, so they have clearly criminalised it in the Bill.
What else is harmful? I cannot see what other content is harmful and is not included in these offences, which, in my view, again go too far and are problematic from the perspective of implementation by the companies: how they would detect harmful content, what defences users can rely on, what counts as a reasonable excuse and how you program that into an algorithm. But that is another story. For me, what then is the legal but harmful content, if it is not these offences and if it is not illegal already or going to be illegal?
Q21 Damian Green: Do you think we should be hard and fast on this, and say, “If it is harmful, you should make it illegal”?
Dr Harbinja: Yes.
Q22 Damian Green: That is interesting. One of the pieces of evidence we were given makes the point that a lot of the debate about content is quite binary—either you take it down or you do not—and that there are a lot of practical steps in the middle that Government and regulators should look at. Ellen, I have been neglecting you on this. What do you think about the idea of delaying payments on advertising to remove the commercial imperative to push what might be harmful content? Do you think there is a fruitful area to explore there?
Ellen Judson: Yes, absolutely. Things like demonetisation, demotion of content and not promoting purely along engagement lines to maximise keeping users on the platform and keeping users engaged, which we know has a tendency to promote polarising and harmful content, increasing friction between users, users being able to create an account and then instantly message the England football team, for instance, with abuse: those sorts of things are functionalities, as Izzy was saying, which could be amended, changed or delayed.
One of the issues around legal but harmful as it is at the minute is it is being interpreted in different ways by different people. My reading of how it will operate at the minute is, essentially, enforce your terms and conditions, whatever they may be. Your terms and conditions might be to leave the thing up, it might be to take it down, it might be to do nothing. Even at that minimum effect, that would be a positive step forward, if platforms enforced their terms and conditions.
The harms that they are seeking to tackle, things like legal forms of abuse, health misinformation, mass co-ordinated foreign interference campaigns or whatever it may be, those are things that are going to be very difficult to tackle through a terms and conditions approach because, to echo what Izzy said, the drivers of the harm there are much more upstream and related to the systems and the processes the platforms have in place, the way that they are making decisions about what to do with the content on their services but also what content their services are encouraging, incentivising or normalising.
For us, the way through the legal but harmful tension—between it not going far enough and it going too far—is trying to decouple it from thinking about categories of harmful content, which gets you into the difficulty of “Should it be illegal, should it not be illegal?” and of how a platform is meant to judge illegality at scale, particularly when intent is involved, and all those sorts of issues, and focusing much more on platforms being held accountable for what platforms are responsible for, which is not what their users post on them but the decisions that they make and the outcomes that they have, which are measurable and testable when there is access to data and effective audit mechanisms.
Q23 Damian Green: A lot will depend on the algorithms, as we discussed. Therefore, I get the impression you would all agree with the proposition that Ofcom needs to have access to the algorithms and the capacity to intervene. Does Ofcom have that capacity at the moment and how difficult would it be to employ enough people to have the sheer scale to cope with that? William, you invented Ofcom.
William Perrin: No, I do not think Ofcom does need access to the inner workings of the algorithm. In our work in 2018, we said it should be an outcomes-based regime and the company should bear the responsibility for making changes to bring those outcomes about. That means Ofcom does not necessarily need to understand the vast complexity of a machine-learning driven algorithm. Indeed, it is questionable whether the companies do once the algorithms have been operating for, say, five or 10 years. It is much more about what are the outcomes of what the company does with its algorithms in the manifestation of harm to people who are users and who are not users, and to society as a whole. Ofcom has very strong information-gathering powers which are closely related to the powers it already has under the Communications Act to gather information from operators in that regime, and they seem extremely strong to me. I am not concerned about them a great deal, and of course, that is where the yet to be activated criminal offence sits for a failure to comply with an information request.
In game theory or regulatory design terms, that is exactly the right place to put that offence, because if you are not playing with the regime and you are not giving Ofcom any information, then you are trying to thwart the whole thing. I would rather focus on outcomes and make the company responsible for making the changes it needs internally to produce the outcomes that Parliament and Ofcom have set for it.
Damian Green: In fact, you thought that even the companies might not understand the effect of their own algorithms, but that, I suppose, is the world we are now in.
Q24 Kevin Brennan: Dr Harbinja, if I could start with you, but if anyone else wants to respond to this, please come in: what do you think that the effect of this Bill as drafted will ultimately be on corporate behaviour, if any?
Dr Harbinja: If I was a fortune teller? It is difficult to predict what —
Q25 Kevin Brennan: It is intended to impact corporate behaviour, so what is your assessment of its ability to do that?
Dr Harbinja: I think it should be seen in conjunction with other attempts to regulate, such as the Digital Services Act in the European Union. We are talking primarily about the US companies in this context and the big platforms—the category 1 service providers that I know MPs and the Government are most concerned about because of the prominence of harmful content on their networks and services. It will be about how they prioritise compliance: to what extent they prioritise compliance with the Digital Services Act as against the Online Safety Act when it becomes law, and how they position this compliance priority.
We can expect one of two scenarios. The first, which is quite likely if the Bill is passed in this form, is overly cautious removal of content, just to be on the safe side and comply. The second scenario that I could also expect—if we disregard the senior management liability and the fines, because that is a completely different area to look at—is that the companies may choose to ignore compliance to an extent, and game compliance as well, hoping that the regulatory powers and capacities will not be strong enough, or, as I said, prioritise other forms of compliance over this Bill.
Kevin Brennan: That is very interesting. Does anybody else want to get out their crystal ball?
William Perrin: You have taken evidence from Frances Haugen, so let us look at the processes she described: teams of very well-meaning, highly skilled people—not very large teams, but nonetheless teams within Facebook—assessing and finding risks arising from the ways in which bits of the Meta properties impact on people, say on children; and those risk assessments, in quite a flat bureaucracy, going up a couple of notches and not being acted upon. You can see within that process how this regime could make that work a bit better, by giving more power to the elbow of the internal experts who are assessing the risk of harm and making senior management tiers take them a bit more seriously.
At the moment, we know from Frances's evidence, and also from others, that there is under-removal of harmful content. Let us not get hung up on over-removal when currently we are in an under-removal situation, and there will inevitably be some iteration to find a sweet spot within that. This is where Ofcom is bound by its duties under a number of different Acts; the Equality Act has many proportionality duties and so on. I think it is in quite a good position to try to find that balance, but there will be an iterative process to find it.
Q26 Kevin Brennan: What do you think of Dr Harbinja’s suggestion that they might simply ignore or game compliance?
William Perrin: Most companies game compliance with regulatory systems. You will know from your own postbag that it is a common complaint in the energy sector and elsewhere. There are two issues here. I have not looked in the last few weeks, but the last time I looked Alphabet had $120 billion cash at bank and Facebook had about $60 billion cash at bank. In microeconomic terms, you have to have an enormous fine to drive behaviour at the margins in a system where there is that much cash sitting around. This is where the business disruption powers, which are quite innovative, come in. Essentially, I describe them to people as bringing sanctions on companies in the way we are sanctioning Russia and others: disrupting their ability to work with banks and advertisers in the UK. That could be very effective if we ever get to that point. On the other hand, in my long experience of working on different regulatory systems, good companies usually try their best to comply, and there are some disputes at the margins.
I do not know how seriously one should take this but most of the large companies say they welcome regulation and they are not fighting a desperate battle to stop all of this happening. They have seen what has happened with the DSA in Europe and they seem to be willing to engage in this regime. I was pleased to see yesterday, on LinkedIn I think it was, that Ofcom has employed a senior Facebook executive to run its harm section so you will have someone there with deep inside knowledge of how the companies work to drive compliance.
Q27 Kevin Brennan: One of the things that we suggested was that compliance might be improved if every board had a compliance officer who was personally liable for failures by corporations in this area. Anyone feel free to come in, but what is your view on that proposal?
William Perrin: Again, I try to push back against tech exceptionalism. We are simply talking about regulating a large and harmful multinational industry in the way that parliaments in Western Europe have regulated such industries since the 1960s and 1970s. In many cases, in the most dangerous industries, there is a compliance officer on the board—in financial services, there are quite elaborate compliance processes—and this should be no different.
Kevin Brennan: Anybody else on that point?
Ellen Judson: One of the difficulties at the moment is understanding what compliance will consist in. I believe that when some of the platforms gave evidence to the Bill Committee, they made the point that so much depends on secondary legislation and on what comes up in the codes of practice that it is very difficult at this stage to set out expectations of precisely what changes platforms will be making and what will and will not be compliant.
In terms of ensuring compliance as the regime develops, we would focus on transparency and audit, so that we understand, once changes come in, what the requirements are, how platforms have responded and what effect it has had. A key part of that will be strengthening the requirements around independent access to platform data, on which at the moment Ofcom is only required to produce a report. There is not a clear pathway to systematic change through which independent researchers can engage with what is happening on platforms in a more comprehensive way and support Ofcom in assessing whether compliance is happening and what the effects of its recommendations are.
Q28 Kevin Brennan: Is the best way to future-proof technology to make those developing the technology responsible, ultimately, for its safety? Dr Harbinja?
Dr Harbinja: Yes, definitely; that is a way forward. There is a similar proposition that exists in the data protection regime with data protection by design and default. Here we would be talking about safety by design and default and the risk assessments that already exist perhaps spelt out more clearly.
Q29 Kevin Brennan: I see other witnesses nodding. To save time, can I take it that you agree with that, or is there anyone who disagrees?
Izzy Wick: If I could make one point, we would need to think about these issues as product safety issues like in other sectors, whether it is manufacturing cars or making toys. Ofcom should be given powers to issue criminal sanctions against named individuals. Without that individual liability, it is very hard to see how the larger tech companies whose enormous wealth and cash reserves would absorb even the heaviest of fines would be incentivised to comply with the regime.
Q30 Clive Efford: Following on from that very briefly, should the media platforms be required to publish notices of breaches on their websites, in a prominent position, when users are using them? Should users be able to see whether the provider they are using is regularly breaking the regulations and breaching the rules? Is that not an important part of online safety for individuals, and should those notices not be published in prominent positions?
William Perrin: Yes, I would strongly agree. The publication of highly prominent breach notices is a tool used in the regulation of financial services, particularly in Australia. It falls into the restructuring of clause 13, the adult harms risk assessment, which is about restoring informed choice. In some ways, you could say that the operation and complexity of platforms and their algorithms have removed informed choice from people, which allows them then to get into situations where they accidentally get harmed when they did not intend it. What you are speaking about is in that vein. How do we give people basic information to make choices to keep themselves safe; easy to access choices, not things buried on page 92 of the T&C?
Izzy Wick: Transparency is very important here, and there is certainly a role to play for certification schemes to show good practice and compliance. Not all users have a choice about the services they use, particularly children. There is a lot of pressure to be on certain platforms, and most teenagers would argue that they do not have a choice about whether or not to use social media so I would question the idea that you could simply choose to use another platform. These have become enormous products that we use and rely on in our everyday life, in the same way that we might use energy or the postal service. A lot of comparisons have been made to those kinds of public sector services, so we should not assume that users have the ability to simply switch services or choose to use others.
William Perrin: I hasten to say that I agree in the case of children.
Dr Harbinja: I would ask the counterquestion: what is highly prominent on these services? Is it Facebook or Meta’s corporate pages? Is it their blogs where they publish information about new services? Is it the interface that the user sees, which is the timeline? The same goes for Alphabet and Google. How prominent is it for your average user, which is what we would like to achieve here? That goes back to the question of media literacy and how we educate users and invest and empower. That is another area that maybe we can discuss, but I will not bring it up now.
Clive Efford: That is exactly where I am going.
Dr Harbinja: To empower users, it is insufficient for the Bill to give them control over what they see and from whom—whether they see content from verified users or not. What we need, first, is a complaints procedure, which does not exist in that user empowerment clause, and secondly, stronger media literacy, not just for Ofcom—I know the amendment has already been tabled regarding media literacy and Ofcom's functions—but also through education and the Department for Education.
This requires investment. It is not a one-off solution, it requires educating children and adults, and it is a major investment for future safety and for users to be able to see what is prominent, what matters and how to navigate between those difficult choices and platforms.
Q31 Clive Efford: Fundamentally, we have these huge international global networks that everyone can access now from all parts of the world and communicate with one another, communicate with people they do not know, access information from sources that are questionable and they do not know, all of that. What we are aiming to do here is create a legislative framework that makes people as safe as they can be when they use those services. What does this Bill do in particular to improve the safety of individuals, or more importantly, what is missing?
William Perrin: I should declare an interest. I am also a trustee of Good Things Foundation, the UK's leading digital literacy charity. I would add that it is very hard to reach marginal users who are all using WhatsApp, or younger users using Snap or whatever. It is difficult and expensive to reach those people to improve their media literacy, and what is missing from this regime is very substantial amounts of money to reach millions and millions and millions of people to help improve their digital literacy.
You could extend the polluter pays approach to this—the classic OECD method of dealing with pollution—and raise some sort of levy from companies to fund that, but that proposal is not in this regime.
Dr Harbinja: What the Bill does in terms of user empowerment and safety is a top-down approach—it is almost a nanny state approach—rather than looking at the user as an agent and empowering and educating that user to make the right choices. That goes back to what I have already said about media literacy.
Q32 Clive Efford: Authentication online: how much of user safety online is to do with the ability to authenticate yourself or those that you are interacting with?
Ellen Judson: To answer your initial question, what is missing in the Bill is recognition of the role of privacy and anonymity in keeping people safe online. The Bill at the moment very much focuses on platforms knowing exactly who users are and engaging in user profiling, behavioural monitoring and content moderation—knowing what is being posted and where—and sees that as the primary route to safety, with privacy a slightly uncomfortable add-on: you should also think about privacy too. The Bill could be strengthened by recentring it on how privacy and anonymity protect people from fraud, attacks, doxing and harassment, and on the role they play in protecting people who are otherwise vulnerable and for whom online spaces are a crucial place to communicate and access information without necessarily authenticating themselves.
William Perrin: A point of detail on that: the Government have not explained why they changed the language between the draft Bill and the Bill as introduced. The draft Bill referred to the right of privacy, which is taken to be rather broad. The actual Bill simply refers to privacy in respect of the Data Protection Act, which is drawn much more narrowly, and there has been no real explanation of that; we do not know why. It has reduced the scope of privacy protection. That is one of the few things we would all agree on: it has gone in the wrong direction.
Izzy Wick: If I could just build on that point, as Ellen said, privacy is fundamental to safety. You only have to look at the changes big services have made in compliance with the children’s privacy regulation in the UK, the age appropriate design code, to see how better data protection leads to better and safer experiences for children online.
On the point about what is missing: digital literacy is of course very important in helping users stay safe online, but I do not think children or their parents can be expected to mitigate risks that are baked into the design of systems; that needs to be tackled at that level.
I would also say that a lot of the education initiatives to upskill children on digital use are delivered by the large platforms themselves, often free of charge in schools, so, perhaps unsurprisingly, they focus very much on user behaviour—things like bullying and harmful user-generated content—and not on the risks that are baked into the design of their own platforms.
Q33 Steve Brine: The Children's Commissioner told the Bill Committee that tech companies could be doing so much more to assist parents with digital media literacy and to support them in how to keep their children safe. I completely agree with what you are saying, Izzy. I have been involved with Google Legends in my constituency and they are very good at what they do, but you are right; they do focus on user behaviour.
The technology of the social media platforms develops and changes at such a rate, does it not? As a parent of 11 and 14-year-olds, I am absolutely in the eye of the storm and I am completely lost. I am pretty tech-savvy. I have been an MP for 12 years and am across most platforms, but I find it bewildering. I do not know who is best to answer this, but let's start with you, Izzy. Is there a need for a single place to go to learn how to keep our children safe online? There are so many websites, and the difference between information and fact is stark.
There is so much information out there online as to which protections to click and which bits to sign up to. How can the Bill help with navigating that minefield?
Izzy Wick: You are not alone in feeling completely bewildered as a parent. We speak a lot to parents who feel that this is a David and Goliath fight. They are not equipped to deal with the onslaught of harmful experiences that their children are facing.
Steve Brine: Many parents feel that even if they did get up to speed with it at a point in time, they would be out of date within a week. We all lead busy lives and so many parents end up saying—and I suspect I am as guilty of this as anybody— “Too much information”. How can we help?
Izzy Wick: There are two things we can do. The first is to start thinking about this as a product safety issue. These services need to be safe by design so parents feel comforted, and children feel safe in the knowledge that the platforms they are using are safe. Secondly, there does need to be some provision in the Bill for a user advocacy model—even an ombudsman—that can support parents and children to represent their views, and also, crucially, to pursue individual complaints. This is a huge gap in the Bill. At the moment, the only way that an individual child or their representative can take a complaint is by going through the courts, which is inaccessible and costly. There is a super complaints function given to Ofcom, which is fantastic and something we are very pleased to see in there but that would require systemic evidence and multiple cases of violations. We want to see a route for individuals to be able to seek remediation.
This is something that exists already under the video-sharing platform regime and that we would lose under the Online Safety Bill. We think establishing an ombudsman and a user advocacy model within the Bill would help both children and parents to navigate this ecosystem.
Q34 Steve Brine: That feels quite highbrow, if you do not mind me saying so. What a lot of parents want is someone to come alongside us and just help. We are not interested in an ombudsman and putting together cases; we just need somebody to walk us through it. What practical measures could the Bill, or the regulations implemented thereafter, take to help people in places such as libraries? Is there a role, something in the real world, that could help?
Izzy Wick: Certainly there is work that Ofcom and the Department for Education could be doing to join up education initiatives on this. If we are talking about the Bill and what platforms need to do, we need to think about literacy by design: on-platform signposting, things such as labelling, help pages and positive nudges that can encourage safer behaviour. These risks ultimately need to be mitigated at the design stage, but I think there are downstream interventions that could support a positive online experience as well—literacy by design.
William Perrin: Mr Brine, if I may give you some hope, I also have small children and am struggling with various sets of parental controls. It is striking to me as a user that some companies get it egregiously right. If you look at Nintendo, the work of Lego, these are companies that have taken strategic, high-level decisions to be extremely family-friendly in the way their products roll out. The Nintendo app to control use of the Switch is extremely simple and straightforward to use, and there is a difference between companies that have chosen to be child-centred and responsible versus companies that have essentially adult products but are used by children accidentally in a harmful way and they are at the fag end of corporate decision-making.
If this Bill, with its very strong emphasis on protecting children from harm, can help put more resources and more senior-level thinking into “okay, this platform is used by kids, so we need to make it safer”, we can see what is reasonable and proportionate, in the language of the Bill, in what some of the leading companies are already doing. Maybe it will push them in that direction.
That is the structural part of the answer. The second part goes back to my role at the Good Things Foundation, where the argument is that essentially several hundred million pounds needs to go into media literacy programmes. Given the scale of resources in this sector, that is not overly ambitious, but the Government’s funding is measured in the low tens of millions of pounds and, given these services are used by 30 or 40 million people, that will not go very far.
Chair: We have much to cover, so I call Simon Jupp.
Q35 Simon Jupp: Good morning and thanks for coming along this morning. I want to focus on building on what Steve has just been saying about children. Izzy, earlier you mentioned the functionality within the Bill and the lack of clarity on some definitions. When we talk about this Bill, do you think it goes anywhere near far enough to protect children who face many dangers when they go online?
Izzy Wick: The risk assessment model the Bill takes is a really good place to start but, overall, I am not convinced it will make the average child user safer online. There are three main reasons for this. First, the scope of the Bill: as written, the Bill will apply to user-to-user and search services, which make up the majority of social media sites and the big platforms, but it leaves out a number of online games that are very popular among children and may not have user-to-user functionality. It also leaves out sites with provider-generated content only, including things like pro-anorexia, pro-eating-disorder blogs or self-harm or suicide blogs. This is a huge oversight. Children need protection wherever they are online, not just where the Government want them to be.
We would like to see the scope of the Bill extended so that any service likely to be accessed by a child is subject at a minimum to the child risk assessment and child safety duty. There is precedent for this in existing regulations. The age-appropriate design code applies to any service likely to be accessed by a child. This would create regulatory harmony between those two bits of regulation, and also help compliance for people who are in the scope of both parts of the law.
Secondly, I spoke about the focus on categories of harmful content rather than harmful systems. The risk assessments need redrafting to take a much more holistic look at the nature of risk and account not only for harmful content but for the other types of risk that children are exposed to. There is an existing taxonomy of harms, now well established in the academic community, that covers harmful content but also harmful contact, harmful conduct (so harmful behaviours) and commercial pressures, harmful contract risks: things like nudges to purchase loot boxes or to extend use. Most social media services are designed to keep users on the platform. The risk assessments need to move away from these three categories of content and take a much more holistic look at the full gamut of risks.
My third point is around age assurance. I have spoken about the need for a standalone duty, but I would like to stress that the additional provisions for children under this Bill, which are laudable and go further than those for adult users, cannot be delivered unless services know that they are dealing with a child. We need to beef up the age assurance requirements and establish standards, set by Ofcom, to assess whether age assurance systems meet the requisite levels of privacy, efficacy and security.
Q36 Simon Jupp: I am conscious of time, but William, I noticed you were nodding through most of that. Do you broadly agree with the sentiments made?
William Perrin: I am nervous about extending the scope to the whole of the internet because it is a huge task, but I agree with the sentiment. You cannot not agree with it, but it is a case of whether that can be done with the resources set up for this regime. I should add, Ofcom is funded to do everything up to the draft Bill, and new things that have been added since have not necessarily been funded yet.
I strongly agree on decoupling content risk from system risk. If an alert has been designed to ping someone because they have put the phone down for an hour, so the service is deliberately showering alerts at the phone, that may be while the child is asleep, but the phone is pinging away because it is in the commercial interests of the company’s shareholders to have the child re-engage and see more adverts. That is not a content risk; it is a system risk. There is a whole load of risks like that which we think are quite difficult, but in Ofcom’s clause 83 risk assessment, Ofcom is required to look at systems and content together, not at systems decoupled from content. I think there is an easy amendment in there somewhere to decouple the two, so I strongly agree on that point.
Q37 Simon Jupp: On the things we are talking about today, if an average parent was watching this and thinking that this is coming down the track, it is quite confusing for many parents to understand what it could mean for them and what they could do to help their children stay safe online. Do you think this Bill will aid parents, who have a multitude of things to think about, in making good parenting decisions when children do go online? I fear it is quite confusing to see the range of changes coming down the track.
William Perrin: This is the difficulty with the simplistic libertarian versus nanny state argument. There are some people who will be very grateful that the systems work better because a regulator is acting on their behalf across a very complex set of companies, particularly in the case of parenting. As I said to Mr Green, we should look for an outcomes-based approach, so it is about better outcomes for parents, rather than parents having to be actively engaged to seek those outcomes.
The system that is currently delivering bad outcomes for them should be tweaked to enable better, or even good, outcomes for parents. Although we are not arguing for going as far as public service broadcast programming, it is about getting out of the bad-outcomes place where we seem to be stuck.
Q38 Simon Jupp: Izzy, if I could return to you, there is lots of discussion around user advocacy at the moment to ensure parents and children have a voice when the regulator is assessing failings against the duty of care. It is quite a complex message to send to parents and internet users after not much protection up until this point.
Izzy Wick: I am not sure it is a complicated message; it sends a message that the Bill and this Government will put children’s rights and needs at the heart of the legislation, which is the stated policy intent of the Government. A user advocacy model makes complete sense within that context. It is important, given that this is a sector that moves at breakneck speed, to have someone outside the regulator, an advocacy body, that can keep on top of evolving risks and collate various bits of information, whether from academia, civil society organisations or children themselves, to understand exactly how children need to be represented, and that can support Ofcom in its enforcement activities as well. There is a real gap where there should be an individual complaints mechanism, and if a child cannot exercise their individual right of action, this Bill will have failed them.
Q39 Simon Jupp: So far, I have ignored Ellen and Edina. I am sorry about that. On the things we have discussed, regarding whether the Bill goes far enough to help children stay safe online, and user advocacy, is there anything you want to add?
Ellen Judson: The key thing from my side is transparency: what we mean by transparency and what we expect from platforms when we talk about transparency reporting, making terms and conditions clear, or the way platforms interact with their users to let them know what their rules and procedures are. We see platforms engaging in this voluntarily, and the transparency reports they produce are often very hard to follow, unclear about the dates they refer to, and focused on instances of harmful content in a way that does not tell you how safe the platform is. If there were 500,000 pieces, is that a good day or a bad day?
We need to increase the amount and the depth of transparency we expect from platforms, and also to set minimum standards for testing how those things are communicated to different audiences and how different audiences can access and understand them. “Clear and consistent” is the usual language in the Bill, but we also need Ofcom to have an eye on what that means for different audiences and how it lands for different groups.
Dr Harbinja: If you just read the Bill when it becomes the Act, you are not sending a particular message to parents or children, because they cannot understand the Bill or the Act. It is difficult even for me, as an academic and a lawyer who has worked in this area of digital regulation for 12 years now, to read 225 pages and understand what is meant by each clause. I am sure colleagues would agree, so how could parents or children understand it?
On the other hand, the purpose of the Bill is to regulate user-to-user services and search engines, so the main purpose is not to send that sort of message. I agree that user advocacy, not just for children and parents but also for adult users, and complaints procedures in terms of user empowerment, are essential to include, and it should not be limited to children and parents only.
I would add, with regard to safety, that there is always a difficult balance to strike here, not in terms of ideologies, whether we are libertarian or not, but between freedom and safety. Freedom not just for adults but also for children, because children have the right to freedom of expression and to free development. Do we want to over-protect and make children safe at any cost, and then see them not being resilient enough, as we see at the universities? I must say, children are increasingly not sufficiently resilient when they arrive at university, so how do we balance these different interests?
Q40 Simon Jupp: I come from a marketing and PR background and I cannot for the life of me figure out the messaging that can go behind this Bill. Even if it is very positive, explaining it to parents, young internet users and the general population will be extremely complex, will it not?
Dr Harbinja: Or impossible.
Q41 Simon Jupp: Exactly, that one. Is there anything else you want to add on that, William? Will Ofcom, your creation, be able to market this?
William Perrin: I should never have said that, should I? It was a mistake.
Simon Jupp: No. I will probably never forgive you for that.
William Perrin: My colleague, Professor Woods, is not here today. My other colleague, Maeve Walsh, and I rewrote the Bill twice. We wrote a simple Bill in 2019 to demonstrate this could be done in 60 clauses. We then rewrote the entire Bill for the scrutiny committee so that it flowed from beginning to end: it sets out here are our objectives, this is what we will do. There was none of this needing five hands to cross-reference things, which bedevils all of this. That was ignored and that was a shame. This is how the Government have chosen it to be, and I see the Secretary of State has taken all sorts of novel ways of explaining the Bill, which is to be encouraged. As a marketing professional, you can assess how effective they are, I suppose.
Simon Jupp: It is like a tug of war of ideology, is it not?
William Perrin: No, I am not sure I would characterise it that way.
Simon Jupp: I perhaps would. Thank you.
Q42 Dr Huq: We have had children, so I will do women. Another of your babies, Ofcom, again; you will never live that down. Figures last week, on 1 June 2022, confirmed what everyone has been saying for ages: that women suffer a disproportionate amount of online abuse, cyberstalking, cyber-flashing, all these things, the digital equivalent of up-skirting. Do you think, all four of you (and it is quite a girl power panel, as three quarters of you are women, which is good) that it is necessary to have specific measures purely for women and girls? Should there be a specific offence, misogyny and all that? You can go first as a token man, William.
William Perrin: Okay, if you insist. There are two or three parts to this. Ten days ago, Carnegie was part of a team of organisations that published a specific code of practice to sit underneath this Bill on combatting violence against women and girls. Professor Woods and Professor McGlynn heavily contributed to that with EVAWG and CSEA and a number of other excellent organisations. We felt that clearly there is not a specific treatment of the particular risks that women and girls are likely to suffer online in the Bill. We wrote a comprehensive code of practice on a system and process level that we published and are discussing with Ofcom later this week.
That is one way of addressing the issue, but there is a broader issue. We know the Bill is commendable because it says the most vulnerable users, children, have a higher innate risk of harm. Then it stops there and does not go on to other groups, larger than children, who also have a higher innate risk of harm: women, of course, being the largest, and people with intersectional identities and so on.
We think Ofcom’s clause 83 general risk assessment is at the moment a bit deficient, because it does not specifically require Ofcom to take account of people who are in groups, or who have characteristics, that give them a higher innate risk of harm, although, paradoxically, the companies’ risk assessments do. A small amendment there could make Ofcom produce a baseline assessment of the risks arising to groups with a higher innate risk due to membership of groups or protected characteristics. Ofcom is a body covered by the Equality Act as well, so it will have a strong eye to issues such as protected characteristics in its work.
Izzy Wick: I am glad this has been raised because references to women and girls in the Bill are conspicuous by their absence. We know that 43% of girls in the UK hold back their opinions on social media for fear of being criticised, and one in five have stopped using social media or significantly reduced their use of social media for fear of being harassed, so this is not an abstract problem.
We would like to see the risk assessment duties changed so that they reference risks to individuals and groups with certain characteristics, and we would like to see a specific reference to intersecting characteristics. As Will said, there is a pretty simple amendment to be made to clause 83, on Ofcom, to ensure it also considers those intersecting characteristics. Ellen mentioned earlier something that would be a good improvement and a fairly simple fix to the Bill: referencing existing human rights conventions, particularly the UN Convention on the Rights of the Child, which includes the right to non-discrimination.
Dr Huq: That should all carry over into the British Bill of Rights, which apparently we are getting now. Yippee.
William Perrin: On that, the Government are in the process of finally ratifying the Istanbul Convention, which is a great step. One could draw the definitions in the command paper tabled to ratify the Istanbul Convention on violence against women and girls into the Bill and attach them to a hook in clause 80-something, which provides for Ofcom making codes of practice, so that Ofcom should make a code of practice based on the Istanbul Convention definition.
Q43 Dr Huq: It feels like women and girls, after the shocking Sarah Everard case, when everyone said we must do something, have been a bit forgotten this year.
Ellen Judson: Yes, and I am wary of relying solely on the platforms’ safety duties to deal with illegal content to tackle it, partly because there is so much online abuse of and disinformation about women that would not cross an illegality threshold, and also because that will always be a downstream, post hoc measure. Something has been posted, something has occurred, and platforms are expected to respond to that, rather than tackling the ways that platforms themselves are driving these campaigns against women, which frequently target women through the use of the systems Izzy and I have spoken about already.
I think part one would be focusing on the systemic approach, looking at how risks to women are being compounded by platform design. The second part would be the protections for freedom of expression in the Bill, as we have discussed already: the democratic importance and journalistic exceptions in particular. I am especially worried about those being used to legitimise online abuse of women under the guise of “it is just political debate”. We know this happens in gendered misinformation campaigns that target women in public life, where it is dressed up as political criticism and political debate, but actually it is lies, doxing and misogynistic hatred being stirred up. I have a lot of worries about the exceptions giving a way out, with people saying the platform should not have done anything about that because it was political conversation.
Thirdly, I think protections for anonymity are incredibly important for women. The existing user verification approach in the Bill has been designed with the protection of women in mind, which is really important, but my worry is that if we move into a system where people are expected to verify their identity, to prove that they are legitimate and are who they say they are in order to engage in public conversation, that will exclude people who are particularly marginalised and particularly unsafe online, and that will disproportionately be women.
Dr Harbinja: I would agree with what has been said, and I would add that I am not sure I would explicitly include women as a gender in the Bill. I would support including intersectionality and belonging to different groups, because this is increasingly fluid and complex; we cannot just say we are protecting women while, in the future, other groups and other forms of identity are threatened in different ways.
I would add to what Ellen said about anonymity and privacy that I strongly support her argument and proposition here, and I urge you as MPs to consider the Bill of Rights and the data protection law reform in conjunction with this Bill, because they will impact privacy. The proposed data protection reforms threaten to erode data protection and privacy rights. The Bill of Rights, meanwhile, threatens to affect privacy in undesired ways on the one hand and, on the other, to promote freedom of expression far more strongly, which does not sit well with what this Bill is doing in threatening to affect free speech. There is some dissonance and a lack of communication between the different proposals coming from the same Government, in my view, in the analysis I have conducted.
Q44 Dr Huq: What you said reminded me of a couple of things. Are there some platforms that are better than others? We have figures suggesting that 90% of misogynistic material on Instagram is completely ignored by them. In a campaign I have done locally, Facebook is implicated. This is the intersection of the offline and the online. Abortion clinics at the moment often have a protest group outside. They call it a protest, but it is a shaming of women who are using the clinics. My local authority has moved the groups a hundred metres away from the clinic gates and we are trying to get that everywhere, but they are often livestreamed, so Facebook livestreams women going in and using these facilities. At the intersection of the online and offline, are there some platforms better than others? Before I was elected, there was a fake Twitter account of me, a Dr Huq “who lives in Ealing and lies about the NHS” was the description. Then, in 2015, when I was elected, I had it shut down, but as a civilian it was impossible to have it closed down. I wondered about the difference between platforms, and the online and offline.
William Perrin: We do not really know anything about how the platforms deal with complaints apart from their media statements, so the regime will hopefully improve that so we can get some accurate data about what is happening in that complaints stack. An important part of the regime is getting a complaints process that works, which is challenging on a huge scale. We see how difficult electricity companies with tens of millions of customers find it, and BT does and so on, but the companies have resources to do it.
We seem to tiptoe around it. Misogyny may not be illegal but, within bits of the broadcast regime and elsewhere, characteristics protected under the Equality Act feature; they are not prominent in this Bill. Part of the question is, and I know Mr Nicolson has tried to address this in one of his amendments on auto-completes, how does one drag this existing legal concept of protected characteristics into this regime? That is part of the issues you described. The Government have not done that, and we have not seen many proposals yet to bring it in. Should it sit in a list of priority content harmful to adults? I am not sure. Denigration of protected characteristics? Denigration is a word used in some hate speech law around the world, but we have not had that debate yet.
Q45 Dr Huq: We all know that Diane Abbott is the person who gets the most abuse. In the last election campaign, I think it was said that 50% of all Twitter abuse directed at women was with her in mind.
Izzy Wick: I was going to comment on the offline-online distinction. When we speak to children and young people, that is not a distinction they recognise. As you rightly identified, what takes place online has very real offline consequences. It is all just part of daily life for young people, who rely heavily on these platforms.
You mentioned livestreaming and I think it is a good example of a functionality that carries high risk, so we would expect that to be something that services have to consider as part of their risk assessment. We would like to see Ofcom setting out these functionalities in codes of practice. For children in particular, we want to see a child safety duties code of practice that talks about functionalities like livestreaming and the particular risks they might create for certain groups, whether it is women or children. That could be explored in more detail in codes of practice.
Ellen Judson: To follow up on the online-offline distinction and connections: you mentioned abortion clinics, and advertising is something we have not talked about this session. Anti-abortion advertising is a particular problem, targeting women with false information and scare tactics about abortion. The current scope of the fraudulent advertising provisions is fairly narrow and it should be much broader. Paid-for ads are something the platform profits from even more directly than user-generated content, so bringing them into scope would be another channel to tackle the disinformation and the gendered effects we see.
Another thing is that campaigns of abuse, harassment and disinformation target women online but also target the wider population, trying to make the case that women should not be in public life or that certain women should be prosecuted, for instance in the case of journalists like Maria Ressa, making an online argument to defend offline human rights abuses and so forth. I think those connections need to be taken into account in risk assessments.
William Perrin: I would like to get one important point on the record. One of the extremes to which abuse of women can be taken is inceldom, and it is not clear how well this regime engages with inceldom, because the counter-terrorist regime barely engages with it, as we saw after the Plymouth atrocity, where the police said no, this was not a terrorist incident, and a few days later said maybe it is, we are not really sure. The Prevent agenda catches inceldom, but the Government’s code of practice on counter-terrorism that is attached to this piece of legislation does not engage the Prevent regime, as far as I can make out. Given this is a particularly vile form of almost exclusively online hatred, I wanted to be sure I got on record that we are concerned it is not covered well enough, and we would greatly welcome a statement from the Government on how they think inceldom fits into this overall regime.
Q46 Dr Huq: A couple of last points; we have alluded to it, but phrased as a direct question: should the emphasis be, and again I am a mum of a teenage offspring, on excluding children from accessing problematic material or just on taking the stuff down? You said earlier, Izzy, that children should not be excluded just because of their age.
Izzy Wick: I think we should be very careful about excluding, preventing access for or blocking children. Children have a right to participation and a right to access a plurality of information online, and the online world can do really good things for children. Being offline is also not an option for them. When we talk about harm and mitigating harm, there are occasions when children need to be prevented from being in certain spaces, those that are specifically for adults, for over-18s, whether it is pornography sites or gambling services. But it is much more nuanced than that, and you need to think carefully about the balance to strike.
We talk a lot at 5Rights about creating barriers so children know they are transgressing. That barrier may not be completely foolproof, and they may be able to get past it, but developmentally it is important that they understand they are transgressing a barrier. So when we talk about age assurance, we need to think about proportionality, about what is appropriate in respect of risk, and also about inclusivity and not freezing children out of spaces they have a right to access.
Q47 Dr Huq: So more taking offensive content down than looking at the children?
Izzy Wick: Certainly, content moderation has a role to play, but it comes back to the design of systems and making them safe by design, putting appropriate safeguards in place, not just heavy-handed content removal and takedown.
Ellen Judson: I would like to see a regime that approaches different harms differently, depending on what the risk to the child is. Take, for instance, discussion of difficult mental health topics, self-harm or suicide: that might occur in a context in which the discussion is essential for the adults participating in it and is part of how they are seeking support. A risk assessment might find that children accessing that without particular support in place would be harmed, but it does not mean that content is just bad per se and should be taken down. It might mean that there need to be different approaches for adults and children.
We are concerned as well about over-moderation and over-blocking of children from content that is not harmful to children. The classic example was YouTube blocking children from accessing LGBT content because its automatic systems deemed it to be sexual, even though it was not; it was just LGBT identities. Because of an imperfect system, children were being restricted from accessing information they need to access, so we would want to see systems to address those issues as well.
Dr Huq: Case-by-case basis for everyone else?
Dr Harbinja: Yes, definitely, I agree. It is not either/or, exclusion versus taking content down. I agree with Ellen that legitimate forms of speech, including artistic speech, for example, might be considered harmful by an algorithm and could be taken down because it is a violent piece of music or is explicit in some form and harmful to children, even though it is a legitimate and protected form of speech for adults, and potentially some forms of that speech for children as well. It has to be decided case by case here because of the free speech implications.
Q48 Dr Huq: Do we think it should be possible to have an anonymous account?[1]
Izzy Wick: Yes.
Chair: That was a quick one, thank you. You delivered there, as you promised, a quick question. Thank you very much. That concludes this session. I want to thank Izzy Wick, William Perrin, Ellen Judson and Dr Edina Harbinja. Thank you very much.
[1] Correction by Committee staff: While Dr Huq said “… should it be impossible to have anonymous accounts?”, all the witnesses have contacted the Committee to indicate that they all thought she had said “possible” and their answers reflect that.