

Petitions Committee 

Oral evidence: Tackling Online Abuse, HC 766

Tuesday 16 November 2021

Ordered by the House of Commons to be published on 16 November 2021.


Members present: Catherine McKinnell (Chair); Tonia Antoniazzi; Martyn Day; Christina Rees.

Questions 41 - 76


I: Seyi Akiwowo, Founder and CEO, Glitch; Andy Burrows, Head of Child Safety Online Policy, NSPCC; Stephen Kinsella OBE, Founder, Clean up the Internet.

II: Ellen Judson, Senior Researcher, Demos; William Perrin OBE, Trustee, Carnegie Trust UK; Dr Bertie Vidgen, Research Fellow, The Alan Turing Institute.

Written evidence from witnesses:

Andy Burrows, Head of Child Safety Online Policy, NSPCC

William Perrin OBE, Trustee, Carnegie Trust UK


Examination of witnesses

Witnesses: Seyi Akiwowo, Andy Burrows and Stephen Kinsella OBE.

Q41            Chair: Thank you so much for coming here today to talk to us about tackling online abuse. I am sorry there are not more Members here today, but several members of the Committee are sitting in Public Bill Committees that are meeting at the same time as us.

Our Committee has received several very popular petitions on this topic in recent years. It is clearly a matter of concern to the public. We originally opened this inquiry last summer, but today's hearing is our second session on the issue since the Government published their draft Online Safety Bill earlier this year. At the previous session, we heard from groups representing communities that are particularly affected by online abuse. We also discussed measures that can be taken to tackle online abuse and how they would impact on free speech, so we are really looking forward to hearing from our witnesses today.

It is possible that there may be a vote during today’s meeting. If that happens, we will suspend the hearing and Members will have to go and vote. Do not be alarmed if the bell starts going and I suspend the hearing; that will be the reason. Before we launch into the many questions that we want to put to you today, will the witnesses introduce themselves, please?

Andy Burrows: I am Andy Burrows. I am the head of Child Safety Online Policy for NSPCC.

Seyi Akiwowo: I am Seyi Akiwowo. I am the founder and CEO of Glitch, a charity ending online abuse for women and girls.

Stephen Kinsella: I am Stephen Kinsella. I am a founder of Clean up the Internet, which campaigns to raise the level of discourse online.

Q42            Chair: Thank you very much, and thank you again for being here today. Just to say, we do have loads of questions we want to put to you and finite time, so if you can do your best to build on one another’s responses where somebody has already covered something, that will maximise the time we have you for today.

There is a question I want to put to all of you, which was why I put that caveat in first. Obviously you represent quite a diverse range of groups and interests regarding this issue. Can you explain briefly what you see as the consequences of the current lack of regulation in relation to online harms? Seyi, I will bring you in first.

Seyi Akiwowo: Thank you so much for inviting me. I also want to just publicly acknowledge and thank Katie Price and Bobby Norris for their petitions around online abuse, and particularly around the intersectionality of disability and homophobia. The current landscape of the problem that we are seeing when it comes to online abuse is that women are disproportionately impacted by it. UN research shows that women are 27 times more likely to be abused online than men, and Amnesty International’s research shows that black women are 84% more likely to be abused online. Girlguiding research that came out just a couple of weeks ago showed that 71% of girls aged seven to 21 are receiving online abuse and are now censoring themselves online. Those are the disproportionate numbers for abuse that we are talking about.

What do I mean by abuse? I mean doxxing, the sharing of non-consensual photography, harassment, cross-platform harassment and targeting on niche platforms: platforms that are meant to be about parenting, animals or animal rights, but are being used to dox people’s personal information to troll them. This is having a huge impact on democracy and on censoring public campaigners like Katie Price and Bobby Norris. It is also stifling freedom of expression. Often online abuse and freedom of expression are framed as a binary debate (at Glitch we do not believe that to be the case), but we also see a consistent failure of tech companies to design their platforms and new products with gender in mind. Having gender-neutral policies and gender-neutral content moderation does not help to produce the gendered outcomes that we need. Women do not get the same favourable outcomes that men receive.

We are now seeing this with the Online Safety Bill and what is being proposed. I am sure you are going to ask questions about this later, but I just wanted to give you a bit of context on the landscape. The algorithms on these platforms are amplifying and exacerbating gender-based abuse. The Facebook Files in The Wall Street Journal drew on internal Facebook documents, and we saw that Facebook is failing to tackle hate on its platforms effectively, and is actually amplifying and benefiting from outrage. You have got gender disinformation that is particularly affecting women in politics, such as Luciana Berger. I know Danny Stone gave evidence a few weeks ago about the gendered nature of antisemitism. You have seen this with Diane Abbott and Jess Phillips. You have seen this across the political landscape.

Really and truly, online abuse has been an issue that has not been addressed properly and this Online Safety Bill is creating more gaps. I am really looking forward to discussing some of the ways in which we can beef up the Bill so that it makes sure that women are properly safe online.

Q43            Chair: Thank you. I am going to come to both the other witnesses to ask the same question but, since you have led us into this anyway, what would be the one thing that you would like to see the Government do that you think would make a difference to tackling online abuse?

Seyi Akiwowo: I think it is great that we have an Online Safety Bill that is doing a lot for children and tackling terrorism. Children are mentioned 213 times in the Bill and terrorism is mentioned 55 times, but there is no mention of women or gender at all in the Bill. We cannot leave it to secondary legislation to look at the gendered nature of online abuse. The one thing I would really urge and encourage this Committee to do is to make a recommendation for women and gender-based violence to be mentioned in the Bill. Having the Bill as a tool to make tech companies take down illegal content quicker does not help women who are facing gender disinformation and are being targeted online, because gendered abuse is not seen as illegal and is not yet seen as a form of hate speech. We need the “legal but harmful” category to include an indicative list of the gendered forms of online abuse.

We also need to make sure that money is ring-fenced in the taxation of tech companies to go towards helplines, services and trauma services to support women who are being abused. You have only got certain helplines around revenge porn (which is an important but niche form of online abuse), and that provision is dwindling; it is having to reduce its service. We need more variety and specialism to keep up with the beast that is online abuse.

Chair: Thank you. Who would like to go next on the first question, but also the second, if you are happy to go into that as well? Andy?

Andy Burrows: Yes, absolutely. Thank you for the opportunity to give evidence.

Over many years, we have seen that the scale of abuse and the impacts on children and young people have continued to deteriorate. The common thread is that we have seen, time and again, that social networks really fail to engage with the problem. At best, we have seen a lack of investment, a lack of resource and a lack of consideration given to user safety and to protecting children and young people, as some of the most vulnerable users of these services. But, at worst, we have seen a business model that is designed to produce outrage.

As Seyi was saying, and as we have seen from Frances Haugen, large platforms have business models that are predicated on looking to push people from centre to edge topics and on promoting outrage. The very dynamics (the nuts and bolts of the systems and processes) are designed to deliver precisely that. The consequence is children and young people being subject to a range of deeply disturbing harms that can be life changing and life lasting in their impact. Examples of some of those are online bullying, the tide of deeply abusive content, and the very disturbing trend of sharing nude and semi-nude images with the active intention to cause harm and distress, which is an inherently gendered issue. We are seeing huge numbers of children saying that they are seeing inappropriate content and, at worst, large-scale abusive campaigns where platforms are sticking their fingers in their ears and failing to have any basic systemic processes to identify and then respond to harm, including as harm travels across platforms.

We know that this is a situation that has deteriorated during lockdown. One in three children told the ONS that they have seen bullying in the last year. Earlier this week, we released new figures from our ChildLine service showing a 25% increase in counselling sessions relating to online bullying during the pandemic. This is something that is disproportionately affecting vulnerable and minority groups. In particular, LGBTQ+ children and young people have seen shockingly high levels of abuse during lockdown. A recent survey from Stonewall found that 40% of LGBTQ+ children had been directly subject to abuse, and there is evidence suggesting that that is around 2.5 times what straight children and young people are witnessing.

As for the point about the Bill—

Chair: What one thing would you do?

Andy Burrows: One of the real concerns that we have is about the scope of the Bill as it relates to children and young people. There is a child use test built into clause 26 of the draft Bill, and this imposes a higher threshold for services to be considered likely to be accessed by a child than comparative legislation, such as that covering the ICO children’s code. A platform will only be subject to the child safety duties, and therefore required to take action to prevent children’s exposure to harmful and age-inappropriate content, if it either has a significant number of child users or is likely to appeal to a significant number of children. There is a real risk that, because that threshold is higher, we will not see harm being tackled. We will see harm simply being displaced.

Q44            Chair: Stephen, did you want to answer?

Stephen Kinsella: Yes, very much. Building on what Seyi and Andy had to say, they have obviously talked about the particular abuse that is received by women and children. Opinium did some polling for Compassion in Politics and that showed that a quarter of all social media users, right across the pitch, have received abuse. As Andy said, this has got worse during lockdown. Some research came out yesterday from an organisation called Ditch the Label, which said that since 2019, online abuse has gone up by about 28%. It is still working on this, but it also remarked that this is not just an online problem, because we can see a correlation between, for instance, racist abuse online and hate incidents out in the street. So I am afraid that this is something that bleeds across from the online experience into the real world, if we want to call it that.

In terms of what could be done in relation to the Bill: like everyone here, we welcome it. I think we also recognise that this is probably a once-in-a-generation opportunity to get this right, so we do need to get it right. Melanie Dawes, for instance, in evidence to the Joint Committee, remarked on what she saw as a clear loophole in the legislation at the moment: effectively, that we allow the platforms to conduct their risk assessments and then they are measured against how they perform in relation to their own risk assessments. Generally, we would like to see a serious beefing up of the powers, to give Ofcom more power to direct what the platforms should cover in their risk assessments, and then compel them to address it.

Specifically, what my campaign is focusing on at the moment is anonymity. I have some stats I can give you on problems caused by anonymous abuse, but we have a proposal that anonymity should have to be addressed as a specific risk factor. We are not talking about banning anonymous accounts at all, but anonymity should be identified as a risk factor and something that very much contributes to the volume of abuse. Platforms should have to address it specifically, come forward with plans for how they are going to mitigate it, and be held responsible if they do not.

Q45            Chair: You would say the challenge is not anonymity in itself; it is anonymous abuse.

Stephen Kinsella: It is anonymous abuse, and it is not just abuse; it is also disinformation. The problem when it is abuse is that, because you do not know who is abusing you, it is more difficult for you to assess the level of risk, the level of threat, that you face. When it is disinformation, the problem is more that someone online says, “Well, I am a nurse, and I can tell you that people are suffering reactions to the vaccine and the Government is concealing this”, but you have no way of knowing whether that person is a nurse or not. So when it comes to disinformation, our inability to know who we are dealing with makes it hard for us to assess how much weight we should give to what they are saying.

Our proposal is not that we should ban anonymous accounts; it is actually that we should go back to where we thought we were starting with this Bill, which is that we were going to take a design-led approach. We should ask, “What is it about the design of these platforms that makes anonymous abuse and disinformation so problematic?” And we would say, rather than try to prevent people being anonymous (I do not think we should; there are very many good reasons why people should want to be anonymous online), why shouldn’t all of us have the right to be verified if we want to be verified? It would then be clear to see. Perhaps Twitter’s tick is a good model. It should be clear to us which other accounts are verified, and then we could all be given the right to block interactions with all unverified accounts. Give control back to the individual user to curate their experience. In that way, those of us who felt vulnerable to anonymous abuse could screen it out by category, rather than having to block individual accounts as and when they abuse us.

Q46            Christina Rees: Thank you for coming along to give evidence today; it is very much appreciated. This question is to all of you to start off with. Do you think the Government's draft Online Safety Bill is ambitious enough on how it plans to tackle online abuse?

Seyi Akiwowo: I do not mind going first. I think it is ambitious in wanting to regulate tech companies. There are more countries in the world that are not regulating tech companies and having those conversations than are. I think it is ambitious in wanting to do that, and it is kind of starting from scratch: a brand new Google document literally needs to be created here, so I understand the ambition is there. But the Bill currently being proposed fails to be ambitious in supporting women, and it fails to hold tech companies accountable. As Stephen was saying, on a systemic level, it needs to make sure that the tech companies are not just designing new platforms and new ways to beat regulation. It is not doing enough to hold tech companies to account systemically.

The Bill as currently proposed, in my opinion, is also failing to defend democracy. There is no specific mention of the types of abuse that women, particularly in politics, face, or of gender disinformation, which we know can make or break a woman’s electability. This is already really hard, and we are already fighting an issue around representation. So, no, I do not think that the Bill is ambitious enough in supporting 50% of the population.

Let me just add to what Andy was saying about how much lockdown and the pandemic have exacerbated the issue. We did a report called “The Ripple Effect” to investigate what it meant when all of us made our kitchen tables and living rooms the new workplace. We actually saw that online abuse against women increased by 46%. When you then looked at black women and non-binary people, this increased to 50%. If we are going to have different waves of lockdowns, or if remote working is going to be the future, this Bill is not ambitious enough in really holding employers and other institutions to account around a duty of care.

To ping-pong back to Stephen’s point around a risk assessment, there is no clear indicative list for that risk assessment, including taking gender into account. An example is Twitter. I declare an interest, if you like, that I sit on TikTok’s and Twitter’s trust and safety councils. Twitter was releasing a new product around audios, trying to compete, I guess, with Clubhouse. Within 30 minutes of that product being released, women were being abused. Women were being sent pornographic audios, and deaf and disabled people were being completely isolated from the platform. This is something by design; these are systems that were not properly checked through a risk assessment and a duty of care that would take into account all equalities. I do not want this Bill to keep being reformed by amendment, trying to keep up with new platforms. As Andy and Stephen said, we need this to be about systemic tech accountability.

Christina Rees: Stephen, have you got anything to add to that?

Stephen Kinsella: I would echo pretty much everything that Seyi said. The Bill is certainly long enough, and it is far too complex. Before I started this, I used to be a commercial lawyer (I was an EU lawyer) and I worked on parliamentary drafting. I worked on the legislation that led to the privatisation of our electricity and gas industries. I am afraid this Bill is tortuous. I know you are hearing from Carnegie later today, but they have come forward with a very good proposal to simplify and restructure it. We should be promoting the duties: we should have a foundational duty, promote it right to the top, and then you should be able to follow it through and see what Ofcom’s powers and duties are. At the moment you have to jump back and forth across the Bill to make sense of it. If I struggle with it, that says something; we always say that ignorance of the law is no defence, so I think it is on us to try to draft laws that make sense.

There is a lot of work to be done in relation to the Bill. The principles are good. We genuinely welcome this. Seyi mentioned the harms to democracy, for instance. I agree; that is a big gap. There is a big gap in terms of how we approach harm and how we approach risk, and then how we direct the platforms to deal with them. I think a lot more structure could be built into this.

Andy Burrows: I would agree with everything that Seyi and Stephen have just said. To add two points to that: right now the Bill does not require companies to discharge their safety duties, or to risk assess, on the basis of cross-platform risks and how abuse can spread with real virality and velocity across platforms. To give a very real-world example of that, let me talk about a very vulnerable 16-year-old girl called Lily and her mum, Hannah. Lily has diagnosed ADHD and autism. In the past, she had been sexually assaulted. In January this year, videos were posted of that assault, also falsely claiming that she had committed abuse against a young boy. In the course of days, this material spread like wildfire. Within days there were 600,000 views of a hashtag of her name on TikTok. The content spread across multiple sites, including Snapchat and YouTube. There was a petition that 30,000 people signed suggesting that she should be prosecuted. Hannah, the mother, was receiving hundreds of abusive messages an hour. There were vigilantes who were trying to track down the family.

That speaks to the real systemic failure of platforms to identify harmful content that is spreading at scale. But I think it also speaks to the lack of rapid response arrangements, whereby one platform should identify harm like that to vulnerable groups and there should then be a systemic mechanism to report it. We have seen platforms get their act together on this when it is something like the Christchurch attack, when there are business or reputational drivers that necessitate it. But when we are talking about vulnerable children, and about families whose lives can be torn apart by harmful content, we just do not see the impetus to action. We need to see the legislation really bake in cross-platform harms through the risk assessment process.

One other thing that I would flag is the absence of user advocacy arrangements in this legislation. As the draft Bill stands, children and other groups at risk of abuse will receive less systemic advocacy protection than, say, passengers on public transport or customers of post offices. In line with the polluter pays principle that applies across other regulated sectors, we should see levy-funded user advocacy arrangements, because we need to make sure that this is a fair fight. The regulated firms will significantly scale up their policy and legal teams to try to influence the regulator and its worldview, to invest in research and to try to skew the evidence base. This needs to be a fair fight, and we need user advocacy to be able to speak on behalf of users who are suffering abuse.

Seyi Akiwowo: May I just add to that? Andy has made an important point, because this is something that was lost from the very first iteration of the Bill: being able to bring class action. There was a consultation on whether we wanted class action to be a lever as part of the regulatory body (it was not Ofcom at the time), and that has been massively lost. As Andy said, we are already trying to compete with tech companies that are multibillion-pound companies. You have the founder of Facebook trying to buy Hawaii, and we are all here struggling to just buy office space for our small organisations. We cannot compete.

There is a digital services tax that is meant to generate £400 million a year; 10% of that could easily be ring-fenced towards efforts to end online abuse. That could go towards education or law enforcement. I know you have been hearing a lot about how stretched law enforcement is; a lot of it could go to civil society groups that are providing helplines and user redress. We do not have that. We have the ability to deal with food standards and alcohol, and we can complain at the local supermarket if something has gone wrong. We cannot do that with Facebook; we cannot do that with Twitter.

Facebook has many millions of users on its platform; combined, they are twice the size of the population of China and three times the population of Europe. Yet you cannot call an emergency service line. We have seen, again from the Facebook Files, how difficult Facebook has made reporting abuse on its platforms over the last year. It cannot be right that, only after Donald Trump lost his election, it is now putting in the measures it could have put in to stop hate speech spreading on its platform. This is not good enough. For us mere mortals who are trying to understand the amendments and legal text, and trying to support vulnerable people, where do we go?

If I can talk about my personal experience, I tend to call myself a recovering politician. Back in 2014, I stood for local government. I was one of the youngest black women ever to be elected in local government, and a speech that I made at the European Parliament went viral. I thought, “This is amazing. This is an opportunity to encourage more black women to get into politics.” And then, one day, somebody posted it on a neo-Nazi forum and I was sent death threats and rape threats. I had to do a complete audit of my platform to check that my address was not public. This was not even a year after Jo Cox had been murdered. It was not that I was terrified for myself; I was upset for my mother, who was mortified that her child was being abused. I had to go on TV to get Twitter to respond to me.

It should not take these personal stories. It should not take people who are trying to make the world a better place, who are trying to campaign for society, online and offline, and who put their head above the parapet, for tech companies to take action. So I completely agree that we need redress for users to be able to hold tech companies to account.

Q47            Christina Rees: Thank you, Seyi. It is a real shame that we female politicians seem to get attacked. I am not going to share my personal story, but thank you for sharing yours—much appreciated.

Andy, can I take you back to your written evidence for a minute? You highlighted platforms’ use of algorithms that promote potentially harmful content to others, including young people, as a priority issue for future regulation. Are you satisfied that the framework set out in the Bill will allow for appropriate action to be taken on this particular issue?

Andy Burrows: It is an important question, because the use of algorithms (how content is recommended and amplified to children and young people) is a crucial part of what then appears on their timelines. What is important is that Ofcom has the investigatory powers and resources that it needs to be able to lift up the bonnet on these companies and to understand how the algorithms are working, how they promote content and what steps are being taken to ensure that harmful content is no longer being recommended to children. Some of what has been most concerning for me about what Frances Haugen has disclosed about Facebook’s practices is the extent to which the algorithms have been actively amplifying harmful and hateful content, and where we have seen business decisions being taken not to adjust the algorithms in the face of very real-world harm. We have seen that, for example, around hate speech. We have seen allegations that that applied in terms of Covid-19 disinformation in the early stage of the pandemic. So we need to see the regulator have the powers and the expertise to get on top of this issue.

One of the other things that I would also like to see the Bill do (this indirectly, but I think importantly, addresses design decisions such as algorithms) is to explore how we can see better personal accountability for people in these companies who are making the product decisions that eventually inform what a child or young person sees. One of the frustrations for us about this legislation is that it has not learned the lessons of what works well in other regulated markets. In financial services, for example, if you exercise a significant influence function, you are subject to named personal accountability. That is a crucial way of ensuring that, if you are someone in a company who is taking a decision about an algorithm, you know that there are personal stakes and personal jeopardy, not just the potential for your corporate entity to face a fine at some point after harm has already occurred. I think that is a really crucial way in which we can address the culture through which poor design choices, such as algorithms, manifest themselves and cause harm to children.

Q48            Christina Rees: Stephen, as a lawyer you would probably support that.

Stephen Kinsella: Yes, I definitely would support that. One of the problems I often find in this debate is that I sometimes think we underestimate our powers. For instance, I often hear people say, “Well, these companies are huge. They are global. They are outside our jurisdiction. We cannot do much about them.” The reality is that the UK is such an enormous digital market. We are not a market that any major platform is going to want to ignore, and we definitely can make laws that will have teeth and will bite here. The companies will not want to have a different business model for the UK. We are actually in the vanguard of potential serious regulation in this field; we are ahead of pretty much everybody else. So what we do here, I think, could very much set the benchmark for how other jurisdictions, including the EU with their DSA, approach it. So, yes, I absolutely agree with what Andy was saying.

Q49            Chair: Obviously the focus of the petitions that you very kindly paid tribute to, Seyi, brought by Bobby Norris and Katie Price, was the anonymity of users of internet platforms. But what you are talking about, Andy in particular, is removing some of the anonymity of the platform creators and lifting that veil as well, which is an interesting take on the anonymity debate.

Seyi, I was also just going to say thank you very much for your personal testimony, because I think it is the testimony of the lead petitioners, Bobby Norris and Katie Price, that has really touched people’s hearts and minds in wanting to try to do something about this, which is why people have signed the petitions in their numbers. I think it is personal testimonies that really do help to give volume to those voices, so thank you for that.

Stephen Kinsella: But on the numbers, of course, we have to acknowledge that without social media, these petitions would not have had the success they did. So it is not all bad.

Seyi Akiwowo: Chair, may I just say something about the anonymity of websites? I do not want to name this website, but WIRED magazine has done a series of investigations; there will be another one, if not later this week then early next week. They expose a particular platform where you can upload an image: we could take a photo of us today and upload it, and it would nudify us all. We have got no way of knowing who created this platform. We have got no way of holding them accountable. So I definitely think there is a point there about the anonymity conversation being broader than just individuals, extending also to the platforms themselves. Leading on from Stephen’s point, how do we, as a market, incentivise the type of companies that we want, with safety by design?

At the moment we have this race to the bottom: who can create innovative tech platforms really fast, breaking things on the way? There is this race to the bottom of doing the very bare minimum. You have got Clubhouse, which has the minimum of safety policies, yet it has generated millions of pounds in the pandemic. It is the same with Houseparty, the same with Twitter and the same with new products on Facebook. We have the minimum being done right now. This Bill is an opportunity for a step change: to set a high standard to which we hold platforms, and to change the dial to say, “A race to safety.” You have seen that if you make safety, as a product or an ambition, the heart of your company, it works. That is why people are looking for safe spaces at work and to tackle harassment, so why would it not apply online? You can see that with platforms like Bumble, where they have put women’s safety at the heart of what they do. That is why it is so successful.

In terms of the Bill, we can really incentivise a business model to say, “If you put safety there, parents are going to want their children to be on that platform.” There are then going to be more TikTok creators, and more opportunities to engage in democracy and see women in Parliament, and therefore inspire the next generation to want to stand. But at the moment we do not get to see any of that; we just see outrage and abuse.

Chair: Martyn, did you want to ask some questions?

Q50            Martyn Day: I think the panel have actually largely answered the bulk of what I was about to ask already, which is probably a good sign because we are all obviously on the same wavelength. Where I was going to go (I will just come in with a follow-up to that) was the duty in the draft Bill for social media companies to deal with illegal harms and other harms, whether that was prescriptive enough and what else there should be. But I think you have all pretty much covered that, so what I am going to ask is: what powers should Ofcom have to intervene or impose sanctions to make sure that, if the duty of care is not interpreted properly, we get a result? Would you like to start with that one, Stephen?

Stephen Kinsella: Yes. I would obviously rather we did not have to go down the road of sanctions. I would much rather that Ofcom were given the powers to be very explicit up front as to what platforms are required to do, and that they then conducted their risk assessments and consulted before tweaking and launching new products, with more of a consultation about what the implications are. Otherwise, we will always be playing catch-up. The point was made earlier about police and prosecutions. I think we know that the police are already overwhelmed. I do not think we want a whole raft of further offences, quite aside from the difficulties of detection and then prosecution.

I focus more on anonymity, and that is what my campaign is looking at. But I would commend the work that Carnegie has done on this in trying to come up with a good hierarchy of duties and also to simplify it. I think it had a concern about whether we really need multiple types of harm to be identified, and then for it to apply to some platforms and not others. Could we not simplify it and have an overriding foundational duty?

Seyi Akiwowo: I completely agree. I think if the focus is on sanctions, it is not really achieving systemic change. We really want to be looking at prevention, not cure. We have been sitting here for almost 35 minutes now. Every 30 seconds on Twitter, a woman is abused online. I cannot even do the maths to work out how many women have been abused since we have been talking about this. We cannot wait for sanctions, and then we know that we do not properly have redress. We need to make sure that the duty of care is explicit and, as Carnegie will present later, it needs to be simple.

For Glitch, we believe it also needs to be gendered. We need to make sure that the duty of care takes into account the gendered abuse that women face. We also think there are learnings from the EU Digital Services Act. This December, the EU executive body will be providing gender-specific legislation to add to the Digital Services Act, and I think there are some learnings there for us to include in that duty of care. I think the risk assessment needs to be about actively reducing abuse, automated decision making around video content, content moderation, and making sure that there is a duty of care around how a young woman (vulnerable, self-isolating or about to embark on her political career) feels safe online, and that the algorithms are not turning on her or exacerbating the risks women already face by being online.

I think the duty of care to empower Ofcom also needs to have clear instructions and guidance to properly lay out a framework in which Ofcom can regulate. My worry is that, if it is not properly stipulated and it is not simplified, Ofcom could always be in danger of acting ultra vires, because it is not really clear what it should and should not be doing. So I think the duty of care is a really important framework that Carnegie has set out, with particular thanks to William Perrin and Lorna Woods on that. I would say to this Committee, in honour of Katie Price, Bobby Norris and also, I am sure, your constituents who are telling you how terrible it is to be online right now, that the duty of care needs to take into account women, too.

Andy Burrows: Just to build on that point about a foundational duty, I think the importance is that this creates a very clear, overarching requirement on companies to identify and then mitigate reasonably foreseeable harm. The danger in such a dense and complex Bill is that we increasingly start to head the other way and this becomes a kind of prescriptive checklist. We have a whole bundle of codes and guidance that everyone, frankly, is struggling to work through. The benefit of that clear, overarching duty, much as we see in health and safety legislation, where there is a clear, overarching objective and it is then for employers to determine context-specific ways of complying with the legislation, is that it really focuses minds, and it will force and require the companies to roll up their sleeves and do the hard work to consider the risks on their site and what constitutes an effective response.

On implementing that, for me, it really comes back to the investigatory powers, the information disclosure powers and then the enforcement powers. As Seyi says, if enforcement powers are being used, that means harm has taken place and we are seeing the systems and processes not working as they should. But the deterrent value here is really considerable, so we welcome the fact that there are strong financial sanctions, but we also have to recognise that we are dealing with some of the largest companies in the world. If you consider the cash in hand that a company like Facebook has, even a 10% fine, or a proportion thereof, is something that they can game, delay for years and subject to legal challenge. So I think we have to ask the question: how effective is the range of sanction measures being proposed in terms of actually hard-wiring the idea of a duty of care and viewing this as a kind of compliance piece at every stage of the business?

I think that is where we need to see more. Again, for the NSPCC, that brings us back to the importance of named responsibility. How great would it be if, through the Online Safety Bill, each of the companies had to have a named person who was responsible for harm against children and harm against women? There would then be direct personal accountability for the actions that then translate into harm that is being caused to children, women and families up and down this country every single day.

Q51            Tonia Antoniazzi: Again, a lot of what I am going to ask has been covered. This question is really for Seyi. You have called for the disproportionate abuse faced by women and girls to be specifically recognised in the Online Safety Bill. It is quite worrying that the word “woman” is not there; it is appalling. But what is this going to look like in practice? You have spoken about a duty of care, and Andy was talking about compliance. Is there anything else? What would it look like? What will it be?

Seyi Akiwowo: What it would look like would be involving civil society from the beginning in some of the decision making around the platform. There are trust and safety councils, but civil society groups (some people might disagree with this) are not properly remunerated for being on those councils, so how can they compete with the policy minds? How can they invest in their organisations to sense-check that what they are being fed is actually true and not just PR spin? Civil society groups are not properly armoured, if you like, in the battles with tech companies when the door is slightly ajar to kind of see what is going on. I think what it would look like is civil society being a part of tech companies’ trust and safety councils and oversight boards. Again, we have seen with the Facebook Files that oversight boards can be lied to, but that is at least what it would look like.

I think it would also look like Ofcom working with civil society groups very closely. I know that Ofcom is hiring, and I think it is definitely building its capacity, but we cannot expect it to be ahead of the curve all the time. There are civil society groups like Glitch, the NSPCC and EVAW, which are providing on-the-ground support to women. Glitch provides online safety training to women who want to be online. We support them with their digital safety, digital security and digital self-care. That has given us a real ear to the ground on new forms of online abuse, and we can therefore be ahead of the curve and able to improve our advocacy. This is something that I would love to start seeing with Ofcom.

Putting my recovering-politician hat back on, and going back to Andy's point about a case review, I sadly remember when we would do case reviews when there was a homicide, domestic abuse or child safeguarding that had gone wrong. There would be a whole council case review to look at what the failings were. We do not have that with tech companies. Every day that somebody is being abused online, our tolerance level increases once again, and we now move from looking at abuse to violence. When we start looking at violence, it is too late.

The third thing is tech companies reporting on online gender-based violence. At the moment, many of them volunteer to do an annual report but, again, a lot of that looks like PR spin, so you have to read behind the stats. I think it would be really interesting to have a clear breakdown of how many accounts had to be taken down in X period of time that were antisemitic, anti-black, homophobic or ableist, or that had an impact on democracy, and then let us assess them on the same indicators in six months. At the moment the indicators and metrics change and, as Andy said, civil society groups are working on a shoestring budget and cannot keep up.

Q52            Tonia Antoniazzi: We heard suggestions in our last session about imposing limits on the functionality of unverified accounts, and that could mean that vulnerable users who are unable to verify their online identity, or choose not to, could lose their ability to contact MPs or other high-profile figures. Can that risk be managed, or is it just an inevitable trade-off? It really is an issue, because we were hearing in the last session that anonymity for some people is key to them being able to be in contact.

Seyi Akiwowo: I will hand the mic to Stephen on this for the work that he has been doing on anonymity. What I will say is that a lot of women need anonymity and pseudonyms to participate online. With gaming, for example, the Gamergate campaign that took place from 2014 onwards saw an exodus of women who were not able to be themselves online because they were just too good at FIFA and The Sims, you name it, all the games that are out there. So sometimes these are a protection, literally life and death for some people.

I think what we should also be seeing in the risk assessments, to make sure there is a duty of care at the heart of this, is that we are investing in safety tools. At the moment we only have blocking, filtering and muting. I think that could be expanded so much more to give users a whole range of tools to have proper agency on the platforms, so that they can opt in to say, “Okay, I am about to speak in front of the Petitions Committee. This might spark abuse. I’m going to be able to turn up my security settings for the next couple of days so that I don’t have to see that.” We do not see enough being invested in safety tools or reporting mechanisms so that people can opt in and out when they want to when it comes to anonymity.

Stephen Kinsella: I know in one of the previous sessions some of your witnesses talked about the idea of stable accounts, did they not? It was the notion that over time an account would acquire a certain status and would then have more permissions (I think that is what you were alluding to). Our proposal does not go that way; it goes completely the other way. We say that an unverified account has the same status as any other account. It does not lose permissions or the ability to do things. All we say is that each one of us should be able to choose for ourselves not to hear, not to interact with and not to receive replies from unverified accounts, and that would also have a wider benefit. It would not just mean that I would not see it. Let us say that Marcus Rashford could communicate with everyone out there who likes to follow him, but if he decided to activate this permission, only those who were verified would be allowed to reply to him, and therefore he and all those who follow him would not see those replies. That would greatly diminish the ability for people to disseminate abuse.

The MP question is one I have heard come up a few times, and it is a good one. I am not sure how many MPs would encourage their constituents to bring matters to them via Twitter. When I email an MP, for instance, I always get an automated reply saying, “First of all, I can only deal with you if you are a constituent, so can you provide something to verify that you have an address in my constituency?”

Q53            Tonia Antoniazzi: I think the example is that if there is somebody in danger, the only means that they have is to get hold of you on your Twitter. I think there were certain issues that were spoken about. It is not the norm that I would talk to my constituents via Twitter direct messaging, for example. We would say, “Please go to my email account.” However, there are cases where people need to.

Stephen Kinsella: Yes, of course. That would have been you as the recipient of that message, so it would have been your choice to decide that you did not want to hear from unverified accounts. You might say, “Well, I take a view as an MP.” I have obviously spoken to a number of MPs about this. Margaret Hodge receives a great deal of abuse, but she says she wants to know about it. She does not even want her staff to screen it from her. Obviously many others use staff or people to filter, but they say, “I wouldn’t stop any messages coming through to me, but I would have a screening mechanism for it.” That would obviously be a choice on an MP-by-MP basis.

When we come to whistleblowers, again, I am not sure that they would use Twitter very much as a mechanism. If you wanted to communicate with The Guardian, for instance, and you wanted to send material that you think it should follow up on, there will be a secure drop box. I think we always have to be very sensitive, of course, to the risk of unintended consequences. What we are talking about today is, after all, a lot of the unintended consequences of these business models that really ran out of control and where that “move fast and break things” ethos dominated everything else. I acknowledge that.

We have tried to think about what the real downsides would be of simply saying to each one of us, “You have the right if and when you choose.” For instance, it could be that you might have been in the headlines for something and you think, “Well, for a couple of weeks, I think I will just dial this down. I will not take all the unsolicited material from people who are completely unverified, and then once that has passed, maybe I’ll dial it up a little bit again and see what happens.” But it would give you the choice as the potential recipient. The thing with abuse is that it has two elements. If the abuser is just shouting into the void and nobody is hearing them, they are probably still technically committing an offence, but they are causing far less harm.

Again, I think our proposal would have a great benefit in reducing the number of harms that the police or other authorities might have to follow up on. I have discussed this with the Victims’ Commissioner. I have discussed it off the record with prosecuting authorities as well. They can all see the benefit in terms of their workload, and perhaps even in terms of your workload.

Seyi Akiwowo: Just on the point of harm, which is why we have been looking at a public health approach to addressing online abuse, I think there is scope in the education arm of the regulatory powers that Ofcom will have to really look into this because, as Stephen said, it is not just harm to the person who is facing the abuse. It is their friends and family, and those who see it; it has this ripple effect. We know from youth violence that a public health approach, treating it as a public health issue, really helped to address that ripple effect of when one person has been stabbed or abused, and the impact it has on the community and the school. I think a similar approach when it comes to risk assessment is harm prevention and harm reduction, so that not everyone is seeing it.

If we look at the characteristics of a troll (I am fascinated by people who become trolls and very obsessive online), what they really want is to be seen. A particular tactic of abuse is called the ratio. For example, Chair Catherine, I might tweet you later and say, “It has been really lovely to engage with you. Thank you so much for the invitation,” and then we may have a friendly dialogue and post a selfie. A ratio would be where trolls maliciously try to generate more negative replies than likes and retweets (the positive engagements). That means they want to be seen.

Glitch did a campaign with BT Sport earlier this year on drawing the