
 

Joint Committee on the Draft Online Safety Bill

Corrected oral evidence: Draft Online Safety Bill

Thursday 9 September 2021

9.45 am

 

Watch the meeting: https://parliamentlive.tv/event/index/0979ae08-757b-4d2c-b94e-af284e169dee

Members present: Damian Collins MP (The Chair); Debbie Abrahams MP; Lord Black of Brentwood; Lord Clement-Jones; Lord Gilbert of Panteg; Darren Jones MP; Baroness Kidron; Lord Knight of Weymouth; John Nicolson MP; Dean Russell MP; Lord Stevenson of Balmacara; Suzanne Webb MP.

Evidence Session No. 1              Heard in Public              Questions 1 - 51

 

Witnesses

I: Mr Imran Ahmed, CEO and Founder, Center for Countering Digital Hate.

II: Sanjay Bhandari, Chair, Kick It Out; Edleen John, Director of International Relations, Corporate Affairs and Co-Partner for Equality, Diversity and Inclusion, Football Association; Rio Ferdinand.

III: Danny Stone MBE, Chief Executive, Antisemitism Policy Trust; Nancy Kelley, Chief Executive, Stonewall.

 

USE OF THE TRANSCRIPT

  1. This is an uncorrected transcript of evidence taken in public and webcast on www.parliamentlive.tv.
  2. Any public use of, or reference to, the contents should make clear that neither Members nor witnesses have had the opportunity to correct the record. If in doubt as to the propriety of using the transcript, please contact the Clerk of the Committee.
  3. Members and witnesses are asked to send corrections to the Clerk of the Committee within 14 days of receipt.

 



Examination of witness

Mr Imran Ahmed.

Q1                 The Chair: Welcome to this first evidence session of the Joint Committee on the Draft Online Safety Bill and welcome to our first witness, Imran Ahmed. Before the members start the questioning, we need to complete some declarations of interest. I should declare that I serve in a voluntary capacity on the board of the Center for Countering Digital Hate. The Members of the House of Lords in particular need to make more detailed declarations at the start of the session. I will start by inviting Baroness Kidron to do so first.

Baroness Kidron: I am the chair of the 5Rights Foundation, a commissioner at the UN Broadband Commission for Sustainable Development, a member of the Council on Extended Intelligence that deals with AI, a member of the advisory council of the Institute for Ethics in AI at Oxford University, a member of the steering group of the Born in Bradford’s Digital Makers programme, a board member of the Data Protection Foundation, and deputy chair of the APPG for Digital Regulation and Responsibility.

Lord Gilbert of Panteg: I chair the House of Lords Communications and Digital Select Committee. I am an electoral commissioner. I am a member of a number of all-party parliamentary groups, but particularly relevant to this inquiry is the All-Party Parliamentary Group for Global LGBT+ Rights.

Lord Knight of Weymouth: I am a director of Suklaa Ltd, which is an educational consultancy that sometimes provides advice on media literacy, particularly in schools. I am a non-executive director of GoBubble Ltd, which is social media for children. I do not think any of my other interests are relevant beyond that.

Lord Black of Brentwood: I am deputy chairman of the Telegraph Media Group, director of the Advertising Standards Board of Finance, president of the Institute of Promotional Marketing, director of the Regulatory Funding Company, vice-president of News Media Europe, chairman of the Commonwealth Press Union Media Trust, a member of the board of the World Association of News Publishers and vice-chairman of the APPG for Media Freedom and Global LGBT+ Rights.

Lord Clement-Jones: I am chair of the board of Ombudsman Services, the telecoms ombudsman, chair of the APPG on AI, vice-chair of the digital regulation APPG and a consultant to DLA Piper.

Lord Stevenson of Balmacara: I am a member of the House of Lords Communications and Digital Committee, but otherwise have no relevant interests to declare.

The Chair: I should declare that I am a director of Infotagion and a host of the podcasts of the same name on technology policy. I think those complete our declarations, unless there are any Members of the House of Commons who wish to make a declaration.

Dean Russell MP: I provided some consultancy for the DMA, which looks at GDPR primarily. I am also the chair of the digital ID APPG, which also looks at anonymity, and of the digital health APPG. I refer Members to my register of interests for other declarations.

Debbie Abrahams MP: I am co-chair of the APPG for Compassionate Politics.

Q2                The Chair: That completes the declarations and we can now start the questions. Welcome to our witness, Imran Ahmed. Your organisation has done a great deal of research looking at the sources of disinformation and hate speech on social media sites. In the last 18-month period there has been a considerable focus on the role of anti-vaccine conspiracy theories and the influence they have on things like vaccine hesitancy. Your organisation did quite a detailed report identifying the major sources of anti-vaccine conspiracies, highlighting the dirty dozen of top distributors of this disinformation. Could you tell us a little about the response you had to that report from the technology companies themselves and from people working in public policy within government?

Imran Ahmed: Good morning. Thank you to the committee for inviting CCDH to give evidence today. It is really great to hear of the range of experience that is being brought to this question. I am glad that Her Majesty’s Government have decided to take the politics out of this and have the pre-legislative scrutiny, because this is a really important issue.

With the indulgence of the committee, I have a short initial statement to lay out the perspective that we bring to it and to emphasise the strength of feeling from our organisation, which has spent the past five years now—two publicly—researching the issues at hand. In that time I have found one central truth.

Social media companies are incapable of regulating themselves. Whenever they are given the chance, they put profit before people. In the past year they have faced three key tests and they have failed all three. They have failed to take action on racists abusing black and minority sportspeople, which is why we will have evidence later from Rio Ferdinand among others. They have failed to take action on US election misinformation, which led to the deadly assault on the US Capitol just down the road from where I live in Washington DC. Finally, they are still failing to act on anti-vaccine lies. Right now there are people gurgling for breath in ICUs in the United Kingdom and around the world who are telling their doctors, “I saw this on Facebook. I believed it to be true. I thought the vaccine would harm me”. Those people will not survive the lies and misinformation that are being fed to them.

We have studied and know through experience the companies’ playbook that will be in play over the coming weeks and months of this legislation to avoid responsibility through accountability, and the contempt they have shown civil society, the victims, democratically elected politicians, even the President of the United States, as I will go on to show from our report The Disinformation Dozen. This Bill is our chance to create a system that incentivises social media companies to finally protect their users and that holds them accountable.

With regard to our report The Disinformation Dozen, we have been looking at anti-vax misinformation for over a year now; our first report was called The Anti-Vaxx Industry. Our first private report was to Her Majesty’s Government and the US Government in around May last year. We said that Covid was coming and that there was going to be a rise in xenophobia, because there is a correlation between disgust sensitivity, which would be induced during a pandemic, and xenophobia. There are psychological and neurological reasons for that: they are both collocated in the insular cortex of the brain. That is what Robert Sapolsky, the neuroendocrinologist, told me.

There are complex reasons as to why we are going to see this rise, and the one group that genuinely knows how to weaponise misinformation about health, has built scale, has an understanding of the communications and the technology, and has also built sustainable economies based on misinformation, is the anti-vax industry.

As it happened, we had already been studying this for the APPG on vaccines, so we have a baseline analysis of the quantum, the size of the anti-vaccine industry and their follower base on social media. We then put out The Anti-Vaxx Industry and The Anti-Vaxx Playbook, which was based on my team managing to attend a secret conference of the world’s leading anti-vaxxers. We recorded them for three days explaining how they were going to weaponise the vaccine against vaccinations as a whole.

The Disinformation Dozen has got us a lot of attention. President Biden cited it in evidence when he said that Facebook were killers, and he was right by the way. The Disinformation Dozen is based on a concentration analysis. Very simply put, there is no real difference between online worlds and offline worlds. Influence is not democratically allocated. Not everyone’s voice is the same as everyone else’s. We did a simple old-fashioned analysis, taken from fast-moving consumer goods, or FMCG, of the concentration of influence and whose content is driving the most shares online. We found a nice statistic: 12 people produced two-thirds of the influence, of the shares. There is a bit of showmanship, a bit of theatricality, in picking 12 and two-thirds, but they are memorable. I think nearly everyone has got that statistic right, apart from President Biden, who has used several different shares in his testimony.

The Chair: On that point, it would seem from what you are saying that finding sources of disinformation on something like anti-vaccine is not difficult to do. It is reasonably predictable where it will be and it is quite concentrated in the hands of a relatively small number of networks.

Imran Ahmed: Precisely. These 12 people have become really good at using different platforms for different reasons. Not all the platforms are the same, so there are spaces used to radicalise people. Facebook, for example, is the 800-pound gorilla in the radicalisation market. That is just as true with violent extremism as it is with anti-vax. Why? Because you can bring together large communities of people and feed them information over time. Frequency bias, which is a psychological trait, is such that if we see something frequently we think it is normal, we think it is true. It is just really simple.

Through using Facebook groups you can colour the lens through which people see the world. If you drip, drip misinformation regularly into people’s feeds, into groups—remember that last year Facebook decided to treat groups as friends; they downgraded news and they upgraded friends, but they made your groups your friends, too—you get a higher prevalence of information from within Facebook groups in your feed. That means that the lens through which you see the world, you see your community, is now infiltrated by bad actors who are spreading misinformation.

The Chair: I want to bring in other members of the committee who wish to come in with questions, but, first, what you are saying, it seems to me, is that it is not just that disinformation like anti-vaccine conspiracies exists on social media, and it is not just that it is easy to find and predict where it will be, but that the companies not only do not act to remove it but have designed the system in such a way that it supports the dissemination of that conspiracy theory.

Imran Ahmed: I think the system they have built is perfect for the dissemination of misinformation in that it recolours the lens through which people see the world and leads them down radicalisation pathways, but the bad actors would have done it anyway. In one part it is not just about you having this ideal system for them. It is also that there are these bad actors who are experts at weaponising those spaces, and the companies tolerate their presence.

Really, the question is: why do they tolerate their presence? We have answers to that. Twitter is used for discourse shaping; it is used to target journalists and politicians and tell them, through abuse, what the acceptable bounds of discourse are, and that has an effect over time. All of us here at some point will have been abused online, and you know that it gets to you eventually. Then there are evidence points: YouTube, Facebook and Instagram are used as evidence points. You put up your evidence there and it is fed into other radicalisation environments that are used to contextually reshape the way you see the world. All these platforms come together like Lego, and eventually what you have is a palace that is ideal for bad actors to use.

Q3                Darren Jones MP: Imran, I am interested in the identification of the 12 individuals. Often the tech companies will say, “There are millions of pieces of content. We have warehouses of content moderators and AI systems. It is very difficult to find everything”, but you have managed to hone it down to 12 individuals, which makes it sound quite easy just to get rid of them. Presumably they have multiple identities. Is it more than 12? If it is down to 12, why are the companies not dealing with them at the root source?

Imran Ahmed: Facebook—Monika Bickert, who is their vice-president of content policy—has responded to The Disinformation Dozen now, five months after our report came out and about a month after President Biden had accused them. I am trying to find the statistics that we have on the number of accounts. There are those 12 individuals, some of them are couples, and behind them is a network of 501(c)(3)s, charities, LLCs. They all own companies that have millions of dollars of revenue. These are not just 12 random members of the public. These are sophisticated bad actors. They have 97 accounts between them.

Monika Bickert said that CCDH’s research is nonsense, because there are lots and lots of people who are spreading misinformation. That is true, but these 12 people are the super-spreaders; they are nodes in a network. Online spaces are different: they are not flat spaces, they are hierarchical in who the key influencers are. I have 5,500 followers. If I tweet something, 5,500 people might see it. If they tweet something, a couple of million people might see it, because they have far more followers than I do.

Of those 97 accounts, only 47 as of right now have been removed. Even though they know the 97 accounts, they have failed to take down over half of them. There are currently still 7.9 million followers between the 12 of them. Some 6.3 million have been taken down. That is not bad. I am glad that 6.3 million followers and 47 of their accounts have been taken down. I am very proud that my organisation has driven that work.

However, the job is not done, clearly. In Monika Bickert’s response, she said, “We have taken action against all 12 of them, even though CCDH’s research is nonsense”. They are trying to have their cake and eat it too, but at the same time they have not taken comprehensive action. The question is: why not, when it is so easy? We have done the hard work of content moderation for them.

Darren Jones MP: To me this just sounds like an organised crime network. What in the Bill will stop that happening?

Imran Ahmed: There is a question here, and there are three things that we have identified as being particularly problematic.

The first is the Bill’s failure to address hate and misinformation directly.

The second is the lack of independence of the audits—they are marking their own homework essentially, being able to provide that themselves—and that is a real problem. I would like to come to that, because this question of how the companies have managed criticism to date—and there has been criticism to date—is vital to this discussion.

The third is criminal liability. Think if the death toll from any terrorist group was in the hundreds of thousands. Anti-vaccination misinformation has taken lives—there are people who have died as a result of the misinformation they have been fed online. If you speak to NHS workers, they will tell you that the people they see coming into ICUs now are unvaccinated, and many of them are unvaccinated because they thought there was a reason not to take the vaccine because of misinformation they were consuming online, and they did not have the resilience to challenge and not accept it. You would have thought that criminal liability would be a no-brainer here if the companies continually fail to act.

I think what is critical about CCDH is that, through our experience in negotiating with those companies and working, for example, with Her Majesty’s Government and DCMS’s countering disinformation task force, with the US Government and with other civil society bodies, we can show you that the organisations have failed to act in an honest way. Also, we have done the independent auditing work to show how their platforms are being abused. If you send them reports of misinformation using their own reporting tools, I think it is only in about two in 10 instances that they take action to take down the misinformation that we identify for them. That is a pretty bad strike rate, two in 10. There is clear evidence that these companies are not doing what they need to be doing.

Q4                Suzanne Webb MP: You said very clearly that in your view Facebook are killers, and you talked about the online harm. We are aware that this leads to offline harm, which is what we just talked about. This is of great concern. You talk about the misinformation, which is not just about the vaccine. There is misinformation that goes out to very vulnerable young adults in particular, and often we hear about the consequences of that. Do we have the evidence of a direct correlation between online harm that can lead to the offline harm?

Imran Ahmed: At a really simple level, yes. There are bits of research to connect this all together. Some of it is going to sound ridiculous. Research was done by Heidi Larson’s group at the London School of Hygiene and Tropical Medicine in the Vaccine Confidence Project, which showed that misinformation misinforms people. You would have thought that was self-evident, but we now have the proof that every time someone is exposed to a bit of misinformation it depresses their likelihood to vaccinate by 7.8%. Clearly if people are misinformed, it is rational for them to come to the conclusion, if they are not resilient to that misinformation, if they trust the material they see online, not to vaccinate. We have that evidence.

In August 2020, with YouGov we produced correlation analysis, multiple regression analysis, of data. We asked people if they were likely to vaccinate and then asked them a bunch of psychographic questions, such as: how much media do you consume, and what are your attitudes towards scientists, towards the Government, towards various different figures? Across all communities, including African American and Latinx in the US, and Muslim, Jewish and so on in the UK, it turns out that the best correlator, the thing that drives vaccine hesitancy, is not trust in the Government, which is pretty low everywhere, or trust in scientists, which is pretty high everywhere, but trust in social media, whether or not someone answers the question, “I generally trust what I see on social media”.

The best thing you could do in this pandemic to stop people from believing the material they see on social media is put up a massive banner on the top of every social media site saying, “This site is for entertainment purposes only. You do not know who has put on the information that you see here. It could be someone who wants to harm you”. I did suggest this to Google once and I think laughingly they said that was a good idea, but they never came back to me on it, funnily enough.

Suzanne Webb MP: Following on from that, you mentioned that criminal liability is the route to go down. If we are looking at the impact of this information being on there and the potential, as you say, that it is a killer, I guess we will be looking at how we can put those two together and correlate. To be able to pursue that ourselves, if you are going to pursue criminal liability as something that should be included, we would need to have the evidence that there is a direct link.

Imran Ahmed: There is connecting the two things, connecting the misinformation to offline harm, and there are myriad ways in which we can do that. There are soft ways and hard ways, and there are things where that is not going to be clear. I know that Rio Ferdinand is giving evidence later today. I grew up in Old Trafford, so I am a huge Manchester United fan. Marcus Rashford is another footballer who has been abused recently. When it comes to racism against footballers, the point that I have made to their representatives and to others is that the abuse of Marcus Rashford matters not because he is a wealthy footballer, but because if they can call Marcus Rashford the N-word, imagine what they would call me or my mum or anyone else from a minority, a woman, a gay person, anyone else. This is a sense of, “These places are not for you. These are our places”, and it is literally the counter-enlightenment in play.

This room here today comprises voices that would not have been permitted to sit at these tables 200 years ago. People have fought very hard to bring these voices, including mine, into these rooms, and they are now being excised from an area that is vital to public discourse and democracy. Let us not underplay how important they are. These are not entertainment sites. They have become, for better or for worse, critical components in how we communicate information to each other. We create social mores in our society, as I learned when I worked for the Labour Party and saw the infiltration of anti-Semitism into the Labour Party, and that digital spaces were being used as the space in which social mores and values were evolving.

Q5                Debbie Abrahams MP: Thank you for coming here today. You mentioned in your introduction the rise of misinformation and how that was used in the US presidential election. I noticed in one of your blogs earlier this year that you reported that Gateway Pundit had earned over $1 million in Google ad revenue in that election, but that particular website between November and January claimed that the US election had been stolen from Donald Trump, resulting in 50 million visits and nearly a billion clicks from that. Of course they also promoted the meeting and the rallies on 6 January.

What do you think is contained in the scope of the Bill on potential harms? Is there a responsibility not just in terms of direct individual harm but in terms of potential harms? How does the Bill address not just individual harm but societal harm?

Imran Ahmed: Another problem with the Bill is that it does not deal with the advertising market. Google has two parts to its advertising business. The first part is a display network and websites. Google has about a third of all digital ad revenue, but 95% of the sites that you ever see that have ads on them have ads placed there by Google. Google has chosen breadth over expense. They have an enormous display network and they gobble up sites and put them in there, including Gateway Pundit, which was spreading election misinformation, “The Big Lie”, Covid disinformation, and all sorts of malignant and anti-black narratives, really horrific racist nonsense.

We found that, thanks to Google ads monetisation—Google takes advertising revenue from everywhere; I think we found Democratic Party adverts placed on Gateway Pundit, a viciously far right site, which just goes to show that, even for the advertiser, it is a terrible waste of money, and most of them are utterly frustrated when they find out about it—Google has become a primary funder of malignant, violent, racist, anti-Semitic, and Covid disinformation content on the internet.

I can give you an example of a site I first heard about through the Home Office’s counterterrorism office and which we targeted last year. It was covered in adverts for household brands. We contacted each of those brands and got all their adverts taken off, and the site shut down after the ads disappeared. It just goes to show you that the advertising revenue from Google is vital to the proselytisation of hatred in our society. Counterterrorism told us that they were concerned about it. Gateway Pundit raised millions of dollars of revenue from Google ads: $1.1 million in just seven months.

I am very proud to tell you that after our campaign Stop Funding Misinformation, that story was featured in Forbes magazine in the USA. I think it was last weekend when Google took them off their display network after a year of campaigning by CCDH, and they are now demonetised, which means that they will not be able to spread their nonsense any further or at the scale and speed at which they were able to before. That is another thing that I am very proud of my team for.

Q6                Baroness Kidron: Imran, I have two questions. One is about responsibility and one is about intentionality. I was really interested when the Chair asked you whether the system was set up to spread this bad stuff, and you were quite careful and backtracked, and said, “No, it’s a perfect environment”. Given that you are our first witness, should we be looking at the intention and unintentional consequences of this system? Is that one of the things we need to do?

Imran Ahmed: I am always really cautious when it comes to ascribing the reasons for harm. The most frequent questions I am asked about The Disinformation Dozen are: are they doing this on purpose? Are they doing this for the money? I say, “I can’t tell you what resides in the hearts of people, but I can tell you that it’s curious how much money they make from spreading disinformation”.

I would say the same about the executives of these companies. However, they know; they have been told many, many times, not just by us, by civil society and by Governments, but also internally by their own staff. Sheera Frenkel’s marvellous book on Facebook shows you the extent to which internally there are people pushing back saying, “Please, we need to do something about these harms”. I can tell you right now that I speak to people inside these companies and they are frustrated at the fact that their executives know about this and are doing little about it.

The DCMS counter-disinformation task force set up on Covid was four charities or non-profit NGOs, a few academics, some civil servants, Ministers and the companies, and we were giving them all the data. We were telling them what was going on. There was one point at which I think Facebook’s representative thanked us all for the work that we had done to identify the problems on their platform. I am poky, and I poked back and said, “We have hundreds of thousands of dollars of revenue between the four organisations. You have billions. Why on earth aren’t you doing this work for yourselves?” In one respect, they have not bothered to do the work, because if they did they would have moral culpability, but also, when they are told about it, they do not do anything about it either.

Baroness Kidron: I am going to press you a little bit. We can talk about intentional versus unintentional, and they are very keen on freedom of expression, but you have just said to the committee that some people are being frozen out. That is an unintentional consequence of their freedom of expression desire. Is the job of the Online Safety Bill to really take a societal look at that and say, “In the UK, this is what we think good looks like”? There are many and complicated balances that we have to look at, and I am interested in the unintentional ones.

Imran Ahmed: I feel like the answer is in the question to some extent. Yes of course we need to take a broad look at the harms created by these platforms, both intentional and unintentional, but we also need to look at rights. I would argue that the “freedom of speech is absolute” thing is a defensive play by the companies—a sort of first-year PPE, I call it. Of course these platforms are not all about freedom of speech; these are not public squares.

Their algorithms mean that they pick winners and losers in the content game. The spreaders of hate and misinformation are able to bully their opponents off that space to narrow the space in which their opponents can operate. The prevalence of hate speech and misinformation damages free speech in other ways. You do not have free speech if you are a black footballer and 100 racist people jump down your throat every time you post. In fact, the barrier to entry for you to use this vital tool for promoting your brand and for transacting business is taken away from you.

More fundamentally, I have always felt it remarkable that the companies have been able to get away with not being responsible. The first time I heard anyone accuse the companies of being responsible for the harms they create was when President Biden called them killers. I have been arguing this with executives there for quite some time. At what point is there culpability if you know that harm is being created by your product? What other industry gets away with this?

Q7                Baroness Kidron: Can I finish with this, then, about responsibility, because you have taken me right there? In your evidence you say, “Disappointingly, platforms have increasingly passed the burden of dealing with trolling on to individual users through features to filter abuse”. Would you be disappointed in a Bill that gave them the responsibility to create user tools, and do you think that the sorts of response the Bill should have to creating safety need to be upstream? Is it with them or is it with the user?

Imran Ahmed: It is not just about minorities. I have been advising scientists who have been working on the vaccine and who have been getting unbelievable amounts of abuse—that they are trying to put microchips in people, or nonsense like that—and threats to their families. I am not meant to talk about the threats that CCDH gets, but I can tell you the anti-vaxxers give better death threats than anyone else: they are very good fun, completely bananas. There are so many organisations that are being bullied off the use of those platforms, and that is a really important component of it.

I would also make this point about the free speech debate and the first amendment question in the US. We track when the companies are in litigation and being sued by members of the disinformation dozen or by hate actors, for example. Publicly, the hate actors say, “If we lose our platform, we’ll go somewhere else”. Publicly, Facebook says, “Well, they kind of have a first amendment right to say what they want to say”.

In the litigation, the hate actors say, “It has destroyed our business”. One of them said that it had demolished his ability to spread misinformation. Another one said that it had destroyed his business and the economics behind it, because, of course, in the courts they have to prove tort; they have to lay out what the harm is. The companies say, “They don’t have a first amendment right. This is our platform. We have the first amendment right. No court can tell us what to put on our platform, and this guy is an idiot, so sling one”.

Q8                Lord Clement-Jones: I have a couple of questions. That is very powerful testimony on misinformation and disinformation, Imran. You have talked mainly about Facebook and Google. I wondered whether in a sense the kind of behaviour that you have been describing, or the failure to act, is right across the board. Is it common right across the board on the platforms, or are there any social media platforms or search platforms that have best practice? Is there anything that one can draw upon for the purposes of a future code?

The second question is whether, in misinformation and disinformation spreading, we should be making a distinction between the larger platforms and the smaller ones, as the Bill does.

Imran Ahmed: There are rational reasons for splitting out the larger platforms from the smaller platforms. Ironically, the ones that like to talk most about the impact on smaller platforms are the larger platforms, which are trying to cover up their central role in the spreading of disinformation and hate throughout our societies.

I think we will struggle with this Bill to find the right balance between the prescriptions that need to be put into place for large companies and for small companies. As with The Disinformation Dozen, you should focus as much as possible on where the greatest quantum of harm is caused. The greatest quantum of harm is caused by Twitter, which is vital in the shaping of discourse, and by Google, which owns YouTube, which is one of the platforms that is most resistant to taking action to enforce its own rules.

Keep in mind that one of the cute things that the team at CCDH does is always measure the content that we are looking at by companies’ own rules. We look at the enforcement of their own rules. One of the things I would urge you to look at is that, at the moment, transparency about companies’ decision-making and how they enforce their rules is severely lacking.

In one respect, there is a very easy solution that would work in both the UK and the US and some of the other jurisdictions that we work on. If there was transparency in how they enforce their rules, if there was transparency in algorithms and how they decide what wins and what loses on their platform, and if there was transparency in the economics of the advertising market and other aspects of how they generate their immense wealth, you could get most of the way to a solution that would make it impossible for them to function as they are right now, with the arrogant insouciance, the lack of transparency, and their ability to turn around to Governments and go, “That’s not the way it works. You don’t know the way it works. Only we know the way it works”. Of course, that is part of their playbook.

There are various aspects to this Bill, and one of the hardest things about it, as the Carnegie Trust has said very clearly, is that it is a really complicated Bill. I do not envy you having to get your heads around every clause and subclause and the power within it. The question is not necessarily whether there will be sufficient powers—I think we have identified some areas in which there are insufficient powers, such as the independence of audits, hate and misinformation being clear in the Bill, and criminal liability—but whether the regulator will be able to wield those powers with sufficient confidence and effectiveness to give us the results socially that we want.

Lord Clement-Jones: It sounds as though algorithmic inspection is an important part of what you are suggesting when it comes to transparency?

Imran Ahmed: We put out a report a few months ago called Malgorithm, which looked at the way the algorithm works. When Facebook bought Instagram, it said that it would not Facebook-ise Instagram. That was its promise to the founders. The founders of Instagram then left, because Facebook of course then Facebook-ised Instagram. In August or September 2020, it decided to change the algorithm that ran on the front page of Instagram. It used to be that if you got to the end of your feed it would say, “Well done, you’ve caught up”. Now it has started recommending you posts that you should read.

We thought this was a great opportunity to find out how it recommends things. We set up a series of different accounts, each of which followed only 10 accounts, each an archetype. One followed health authorities, one followed wellness accounts, one followed anti-vaxxers, one followed anti-Semites, one followed QAnon.

We saw what recommendations came out. In short, if you followed wellness it would feed you anti-vax. If you followed anti-vax it would give you QAnon and anti-Semitism. If you followed QAnon, it would give you anti-vax and Covid disinformation. We found that the algorithm, whether by design or by the experience of trillions of clicks, had realised that conspiracist information is addictive and keeps people on platform. If you can deepen and broaden people’s conspiracism and their extremisms, you can keep them on platform for ages, because where do you go for misinformation and conspiracy? You go to their platforms. They create a captive audience. Why do they do that? Because eyeballs equal ad revenues equal more money for them.

In reality, I think the algorithms will end up being very simple—just, “What do I have to do to keep people on this platform for as long as possible to make as much money as possible?” Once that is genuinely known, once people genuinely understand the simplicity of those algorithms, the failure to act on malignant content, and the impact on our societies, I do not think the platform would survive anyway in the condition it is at the moment.

Q9                Lord Clement-Jones: I have a quick final question. Following up what Debbie had to say on societal harms—that if you go for that, it does not have to be just an individual harm—how on earth do you define a societal harm for the purposes of the Bill?

Imran Ahmed: Well, luckily that is not my job. I look forward to seeing your solution on it. I think you will have to take a very broad view of societal harm. You have before you later on today my friend and colleague Danny Stone from the Antisemitism Policy Trust. There is a price that is incredibly clear and a harm that is incredibly clear that comes with misinformation and hate.

The Chair: From your responses, though, you seem to be saying that if someone starts engaging with a certain type of conspiracy theory, they will see others on different topics?

Imran Ahmed: They will be fed others. I was told by the Civil Service that the one report by CCDH that Facebook really pushed back on was Malgorithm, not The Disinformation Dozen, not any of the others. That is the one it really pushed back on. I was delighted when a civil servant said that he had turned around to Facebook and said, “Perhaps you are right: perhaps CCDH is wrong about how your algorithm works. But there is evidence that there is harm, because it is clear. It has screenshots. Perhaps that is a good chance for us to have an independent inspection of how your algorithm works”. I thought that was a very clever and right answer.

The Chair: You can see why it would push back on that. Rather than blaming someone who is posting bad content, you are blaming its system as being the problem.

Q10            John Nicolson MP: Why do you think people choose to believe disinformation rather than their doctor?

Imran Ahmed: There are lots of different reasons. First, it is access. As with everything else, in the middle of a pandemic we have been physically dislocated from GP surgeries. Where has the information that we have seen been coming in from? We have been stuck at home. We have been stuck behind computers; 2020 has shown the scale of disinformation on those platforms but also the power of social media, and I never want to discount that.

I moved to DC in July 2020. Social media was the main way in which I expressed vulnerability, gained succour, felt love for my friends, saw things that were important to my life. In among that, I know that there were vast amounts of misinformation flowing. It is the prevalence of it. It is the centrality of these platforms as a means of communication and the fact that they are inherently and irredeemably laced with misinformation. One of the primary information environments that we exist in is laced with misinformation. That is one reason.

The other reason is that there are psychological correlators for why people believe misinformation rather than good information or information from health authorities. Do not forget that the UK has done pretty damn well when it comes to vaccination compared to the US. Why is that? In part, it is because we have an institution in the UK that commands more trust than the US health system. We have the NHS. This is about epistemic authorities. It is about who has the right to tell you what the truth is, or not. In the UK, the NHS is trusted and loved by most people, so it has done pretty well. What drives conspiracist thinking? It is epistemic anxiety: a lack of knowledge not just about what the truth is but how to find it.

John Nicolson MP: In your opinion, are there groups of people who are particularly susceptible to the disinformation? For those of us who have grown up getting jags—“jags” is the Scottish use—in our arms as babies, we have believed the National Health Service, we have believed in inoculations, we believe that we will be kept safe by inoculations. Why would we suddenly turn around and believe some random thing on Facebook rather than a health service we have trusted our whole lives, taking your point that it is less prevalent here than elsewhere?

Imran Ahmed: We are talking, though, about 60 million people following spaces that are controlled by anti-vaxxers on social media. That number has not changed over the pandemic, by the way. For all the action they claim to have taken, there are still 60 million people following these spaces. It went up by about 10 million at the start of the pandemic. We have seen that it is not a symmetrical question; it is not about people either believing in vaccines or not believing in vaccines. They are hesitant. Children are important to me, something that I would like in my future. If someone said to me that this jab might cause harm to a child, I would be horrified. I would stop for a second and think, “Well, crumbs, I don’t really want to do that”. It is about making people pause: vaccine hesitancy.

So many of the people in those hospitals gasping for breath right now who are afflicted with Covid hesitated because they were waiting to see whether the vaccine was dangerous. That is what they were really doing. It is vaccine hesitancy, not being anti-vax. It is an asymmetric battle. We are actually the change agents. We are telling people, “Get off your bums, go to the doctor, go and get your jab”. That is the change state, right? They are just saying, “Don’t. Just sit there, keep watching Netflix”.

John Nicolson MP: What is the motivation of the anti-vaxxers?

Imran Ahmed: Anti-vaxxers are like any type of groomer or recruiter. You can look at, say, the way that AQ used to recruit, or Hezbollah, or the way in which child sexual exploitation works. This is about making people not trust the authorities that they normally trust.

Anti-vaxxers only have three actual messages. Forget all the nonsense about memes. Look at the themes: Covid is not dangerous; vaccines are dangerous or unsafe, or there is something funny about them; so the people pushing those two things, the doctors, cannot be trusted, because they are compromised. There are only three messages in there. It is the simplicity of what they have to communicate.

We have failed on multiple levels. We have done things like engage with individual memes, actually giving oxygen to the idea that microchips are in the vaccines, or we have given oxygen to nonsense about hydroxychloroquine and now Ivermectin. I keep being asked by US media people to do stuff on Ivermectin. I absolutely refuse to do it. I do not want to talk about it. That is not what this is about. This is about telling people that you do not need to go and get the vaccine; you could just take some horse tranquiliser.

John Nicolson MP: What do you think is the long-term battle plan of the platforms to keep going? I was quite interested, because I also sit on the DCMS Committee, and we had a boss of TikTok in front of us. Their argument was that they take down Covid disinformation, and we said, “You clearly don’t”. My office staff managed to find a very glamorous young woman who was pushing Covid disinformation. She had more than 100,000 hits for Covid disinformation, saying that you were injecting yourself with babies.

What was interesting was that the TikTok boss who appeared before us did not offer a free speech defence; they did not say, “She’s entitled to put out the message if that’s what she wants. It’s all about free speech”. Their argument was, “This is absolutely horrific and we do everything possible to take this stuff down”. That is clearly not the case. Are they operating different defences simultaneously, both the free speech defence and the “We are completely incompetent” defence?

Imran Ahmed: Free speech is an earlier part of their playbook. They have a playbook. There are four Ds. Initially, it is deny. Initially, they said that anti-vax stuff was not causing a big problem, and that it was not against their rules anyway. Do not forget: at the start of the pandemic, anti-vax misinformation was not against Facebook’s rules. It was only at Christmastime when there was a huge amount of attention as a result of multiple organisations, including CCDH with The Anti-Vaxx Playbook, showing how the platforms were being abused by anti-vaxxers, that they changed their policy on that. So initially, you deny.

Secondly, you deflect. You say, “Lots of people feel this way. This is a societal problem. This is not about what is happening on our platform”. Even though there are no water coolers anymore, the last year has been a perfect experiment: if information was ordered by Mark Zuckerberg, what would the world look like? It turns out that it is domestic terrorism at the Capitol and people choking to death in ICUs.

The third phase is delay. Once they have accepted responsibility and have said, “We’ll do something about it”, they say, “Absolutely, we’re going to do something really big about this, but we can’t do it all at once. How on earth can we do that?” They say, “We’ll deal with it”, but they always delay and delay and delay. The reason why they detest us, which I know is true, is because we expose how severe that delay is and the impact of that delay, their failure to act even when they accept that they must act.

Finally, they throw dollars at it. So it is deny, deflect, delay. Then it is dollars. The dollars are a big issue. In 2020, Amazon and Facebook spent twice as much as Exxon and Philip Morris on lobbying. Big tech spent $120 million on lobbying in just 2020, in the election year. Ninety-four per cent of members of Congress with jurisdiction over privacy and anti-trust have received money from a big tech corporate PAC. In the UK, I can tell you that I go to meetings in which I see civil society bodies that come out against a CCDH position. I always go back and check who their funders are, and I always find big tech among their funders.

The tentacles of big tech are everywhere. A question you should be asking everyone who submits evidence to you is: do you take money from big tech? I go to dinners, to lunches, to debates every day in which I am dealing with organisations that are essentially bought, that are part and parcel of the big tech lobbying machine, and that is what you are up against.

It is a truly courageous task that you are taking on, for that reason, because you are coming up against the most powerful and impressive lobbying machine assembled in history. There is a reason why Facebook keeps putting full-page adverts in the New York Times, saying, “We are ready for Section 230 reform of the Communications Decency Act”, which gives them no liability for the content on their platforms. It is because they know that this is coming. They are ready for it, they have some of the finest political minds in the world working on it, and Nick Clegg too.

Q11            Dean Russell MP: I want to touch on a couple of points. Right at the start you gave three examples of big issues, anti-vax being one of them, and the awful incident at the Capitol in Washington. How are these connected? Are you finding that the same organisations are spreading misinformation in different ways to the same groups of people and perhaps trying to widen their audience?

Imran Ahmed: The serious part of the job, where we talk to counterterrorism in the US and the UK, is convergence and hybridisation of these forces. The problem with social media is that there is zero marginal cost for each additional communication to each additional person. The actual fundamental technical change to how communications happen and the economics of it are enormous. It is like the nuclear bomb. It is unlimited amounts of energy from zero. If you can communicate to theoretically 4.5 billion people for zero cost, you are able to market to new groups.

What we have seen over the pandemic is the cross-marketing of different types of extremism to each other. That leads to convergence. On 6 January, we had a stage which had anti-vaxxers; the Oath Keepers, the far-right white supremacist group; George Papadopoulos; and Roger Stone. There was a whole array of bad guys on that one platform. They had converged in physical space after initially converging and marketing to each other, realising that there was an opportunity to sell the same idea that you cannot trust the Government, that they are out to get you, that there is a great big conspiracy.

What we are seeing now is hybridisation. Once you form these converged new organisations, you see mega conspiracies being used to hybridise them. QAnon brought together suburban mums with white supremacists, with Trump supporters, with people who were just worried about children—with a whole array. The new one is the Great Reset, which is replacing QAnon—QAnon is basically dead—replacing the paedophile weirdness and Donald Trump being the world’s greatest paedophile hunter with the idea that the vaccines are being used to kill people. That is the Great Reset conspiracy theory, which is now the mobilising conspiracy theory behind the eruptions of violence that we are seeing in real life today.

Dean Russell MP: You mentioned earlier the huge amounts of money that are funding these conspiracy creators, as it were. Where is that money coming from? Earlier, you said bad actors, and you mentioned charities and business. Interestingly, you did not say Governments. Is there a bigger issue here to do with security globally?

Imran Ahmed: We do not do government work. We are not a quasi, semi or in any way espionage agency. We do not take money from Governments either, to be very clear. Nor do we take it from big tech. We are not equipped to identify, and we are not capable of identifying, infiltration by Governments. Part of the reason for that is that the people who have access to IP-level data, which you would need to understand and look at masking, for example, are the companies themselves, and they have teams internally to do that. Part of the reason is because it is just not our wheelhouse. There are organisations like the ISD, the Institute for Strategic Dialogue, run by Sasha Havlicek, which is an incredible organisation that is very good at that. It works with Governments, the DHS and others to do that sort of work.

I have always said that it is a no-brainer. If you want to kill British or American people, the best thing you could do now, if you are a Russian tech person, is to like the posts of bad actors from within our society. It is injecting heat into an existing fissure in our society, trying to cause an earthquake.

Dean Russell MP: On the pool of people, you mentioned earlier that 60 million has been the standard number of people who follow the anti-vax conspiracy theories. I used to do a lot of work in AI and psychology AI before becoming an MP. The use of language and the way things are phrased can have a powerful impact on what people believe and their actions. Are all those 60 million people the kind of people who watched “The X-Files” TV show and thought it was a documentary, or is the reality that they are swapping and changing, that the language that is being used and spread by these platforms is convincing people who previously perhaps would not have been convinced about conspiracies and about misinformation?

Imran Ahmed: Yes, and they use really sophisticated tools to bring them in. Propaganda works. There is a reason why we study propaganda. The anti-vaxxers talked, for example, about creating answering spaces, waiting for people to express anxiety and then jumping on them with a bit of misinformation, which bridges them in. The Disinformation Dozen is worth looking at for so many reasons, but one of them is that like an industry they each have a different brand and market. One of them will target mums who are into yoga and wellness in California. One will target Trump voters. One will target another segment. They have both the bridging identity and the bridging arguments to draw people into extremism.

Q12            Dean Russell MP: Finally, related to that, are you finding in your Malgorithm report and so on that those algorithms are seemingly targeting more vulnerable people in society?

Imran Ahmed: If there are 60 million people across the US and the UK who are following spaces, this is not about vulnerable or not vulnerable. This is not about smart or stupid. It is not about race. Smart people are just as vulnerable to conspiracy theories for a variety of reasons, not least because, once they have worked out what they think the truth is, they are more confident that they know the truth and that they are using the evidence in front of them. Being clever is not resilience enough; you need to do some work to remove the content itself. The platforms are feeding that content in.

When I am in rooms full of smart people, I always ask, “What vegetable helps your eyesight?” Everyone always says, “Carrots”. I remind them that that was Ministry of Information propaganda during the Second World War to hide the fact that we had plane-mounted radar, so we told people that we had been growing carrots and feeding them to our pilots so that they could see at night. I do this, and then my favourite thing to do is to follow up with a question: “Now that you know that that is just propaganda, will you still tell your kids to eat carrots?” and everyone says, “Yes”.

The Chair: Let us hope the kids are not listening, Imran.

Q13            Lord Knight of Weymouth: In your evidence, you talk about the Bill failing to address the harms caused by online advertising, and you go on to talk about your report and campaign on that. What would you like the Bill to do in respect of online advertising? Why would we not have the Advertising Standards Authority regulate that?

Imran Ahmed: I am not sure that advertising is within the scope of the Bill at all. I know that there are other things going on in government and I have spoken to civil servants about that. They assure me that things will happen on advertising. There is one simple answer. I talked about transparency earlier. I like freedom of speech. I am a leftie; I believe that everyone should have the right to say what they think. I think that the platforms should be more transparent, not less transparent. There should be more speech when it comes to algorithms, to enforcement, and to the economics of those organisations.

If every advertiser knew where their adverts were appearing and had to publish it on a page on their website, for example, my team would not have to identify to them that their ads were appearing on malignant content sites. Google would have to tell them where their ads are really appearing, which it is very slow at doing.

Ads are placed through networks of intermediaries. Let us say that we had a law in the UK whereby every brand—in the same way that I have to publish my registered address on my website and who the company is for—had to have a page on their website where you could see where their ads appear. It would clean up the problem immediately, because no one would want to advertise on a malignant content site. It would give civil society the tools to do the right job. That is a very quick and smart solution. You could call it the Stop Funding Misinformation amendment. That would quickly fix this Bill and end in one fell swoop the way Google Ads funds malignant content that causes so much harm to our society.

Lord Knight of Weymouth: And on the ASA and how that might relate to Ofcom? You mentioned that in your evidence.

Imran Ahmed: Do I think that the agencies, as they are equipped right now, could regulate that? Clearly not, because we have not been doing a great job at identifying the way in which malignant content is being used to date. This has not been on their radar. I get more interest from companies themselves; I am speaking at the Ad Week conference in a few months to talk about exactly this, asking companies to take responsibility for where their ads appear and the content of those ads. The problem is that if you have bought a package of half a million eyeballs, you just want the eyeballs.

Lord Knight of Weymouth: Those ads are algorithmically placed themselves.

Imran Ahmed: Absolutely. Programmatic advertising, they call it. But it means an abrogation of their responsibility to ensure that they are not funding bad content. It means that every time you buy something from the supermarket, you are inadvertently, by accident, thanks to Google, funding election misinformation that leads to riots, and Covid misinformation that leads to death.

Q14            Lord Black of Brentwood: I wholly agree with you that the single best thing we could do is introduce transparency into the ad supply market. It would be very useful if you could let the committee have any ideas about how the Bill could be amended, especially if it is a simple amendment.

I just wanted to ask whether, on the subject of regulators, you think that Ofcom is the right regulator. If so, are the powers that are given to it under the Bill sufficient?

Imran Ahmed: There are things that are missing from Ofcom’s powers and things that have been passed on to Ofcom, such as having to decide whether to introduce or to recommend criminal sanctions. The fundamental thing to me is whether they have the power to commission independent audits that can go into the companies and check whether what they are giving us is true.

There is also the question of the missing statistics. One of the questions we ask all the time is: how much of the hate that is identified do you take down? They always give us the non-sequitur statistic: “90% of the information we take down is detected by algorithms before it even hits the site”. That is not what I asked. That is not the question. There are a number of things that we have been incapable of getting out of them. So there is the question of Ofcom’s ability to commission that independent data.

In part, there is the question of whether they will be able confidently to knit together civil society, the regulatory powers of government and criminal sanctions to hold these companies accountable. My fear is that they will be treated with the same contempt with which the FSA was treated by investment banks in the late 1990s and early noughties, which is that they will never have the scale of powers required; nor will they be confident enough to wield them to protect society from the harms being created on these platforms.

Lord Black of Brentwood: That point about independent audit is absolutely crucial. Do you think that the Bill, as it stands, does that?

Imran Ahmed: No, it does not. It needs independent auditing powers and the ability to go in and get other bodies, not just self-reporting. You cannot ask Facebook to mark their own homework. That is why we are where we are. Self-regulation is over. It has to be over. The funny thing is that I keep being told that this sort of regulation cannot happen anywhere in the world. I keep being told that it cannot happen in the US. But I have a solid bet that it is one of those things where everyone is telling you in advance why it cannot happen and then once it does happen they will be telling you why it was inevitable that it was going to happen.

Lord Black of Brentwood: That is one important recommendation we can make.

Imran Ahmed: Absolutely.

Q15            Lord Gilbert of Panteg: Give us the Bill’s marks out of 10 for dealing with any of the issues you have described.

Imran Ahmed: I love a reductive statistic, which is why we wrote The Disinformation Dozen, but I do not want to mark Her Majesty’s Government. It is the most ambitious legislation anywhere in the world. It is a phenomenal display of confidence and strength and, for better or for worse, in an independent Britain we are able to put together our own regulation when it comes to social media, and the whole of the world is looking to the UK right now. The whole of the world is looking to this committee. I know there are colleagues in the US right now who are holding a 5 am watch party of this session to see what is happening. What is happening here is important. The bad guys are watching, too.

I cannot change a Nazi, so my job is to disrupt their work. My job with the companies is to make them take responsibility, because they are not Nazis and they are not anti-vaxxers; they are rational people. As with any industry, think of the industry as amoral and gaseous: it will expand to fit whatever framework you put around it.

We just have to make sure that we are starting to put in some sort of regulatory framework to give them the boundaries of what we consider to be acceptable. It is our fault for having failed. The US literally legislated, in Section 230 of the Communications Decency Act, to create no liability whatsoever for social media companies for any of the content posted on their sites. That was, in retrospect, an Act of enormous naivety, because if you remove the framework, you get the companies racing to the bottom. What we need to do is give them the framework so they know that these are our expectations for society. It is the decent thing to do.

Q16            Lord Gilbert of Panteg: We are sending a very powerful signal, so can we look at how the provisions will work, because you said that it was a very complicated Bill? It is a very complex Bill, and a lot of work needs to be done on the drafting to make it simpler. But it does three basic things. One is that it puts a series of duties of care on platforms to protect children from harmful content—we have not discussed that today, and that is more relevant for other witnesses—and to remove illegal content. Quite a lot of the stuff you have been talking about today is legal content. Not all of it is, but quite a lot of it is. It is also about protecting adults from content that is legal but harmful, and it does that by putting a duty on the companies to look at how their systems work, to identify potential problematic areas and to put in place a policy to deal with them. All that responsibility is on the companies. It is overseen by Ofcom, but the responsibility is on the companies.

It seems to me that nothing much changes, that the companies will decide what is harmful. Maybe there is some change in terms of illegal content. Maybe we will be able to ensure that companies take down illegal content much faster and that Ofcom will be on that case. But it seems to me that there is no change at all in relation to harmful content. It will be defined by companies. Their policy for dealing with it could be not to deal with it, and as long as they are open and transparent about it, nothing has changed.

Imran Ahmed: You are absolutely nailing the problem with the powers and the framework. You are asking the companies to mark their own homework, but you are also, in one respect, asking them to set their own rules and set the test itself. That is deeply problematic. As it happens, when it comes to setting the test, the rules for the main part are incredibly fair and stringent. They ban disinformation, they ban lies about vaccines, they ban the hatred, but they just do not enforce them. It is an enforcement problem.

CCDH has tended to focus on the failure to act and not on policy development. Colleagues like Danny Stone have been incredibly powerful in forcing them to evolve their policies, working with them from the inside and getting them to see dehumanising behaviour as another form of hatred, for example.

I am frustrated by this, because I know that those in civil society are the ones who have driven them to the position they are in now, but we need to hand the baton off to Government, to legislators, because we cannot keep absorbing the cost. That is why civil society organisations are taking money from Facebook and Google, because this work is expensive. I have 14 members of staff. How do you think we pay for this? It is hard work, and it is not glamorous or well-paid work either.

We also have to take the threats. I alluded to death threats and things like that, but right now CCDH is coping, for example, with the fact that we have been sued by one of The Disinformation Dozen, who has accused us not of defamation but of something quite extraordinary: breaching their right to a reputation as part of their private life under EU human rights legislation. It is, of course, costing us thousands of pounds to defend that. We have to defend it, because it is serious. We have felt abandoned by government. It has convened things but has not given itself the powers nor taken the responsibility to do what has to be done.

With the indulgence of the committee, I could submit as written evidence some examples of the sorts of threats that civil society has had to endure, including the legal threat from the member of The Disinformation Dozen, so that you can see what is happening as a result of the failure of government to act to date.

Q17            Lord Gilbert of Panteg: The Bill is also full of quite vague definitions. Content that is harmful to adults is not defined. It is not clear who should define it. You said that you did not want to get involved in defining societal harm, and I am sure that many other people would not. The Bill leaves most of this to platforms. Who should make these definitions? Who should create the framework? Should it be platforms, Ofcom, the Secretary of State, or Parliament?

Imran Ahmed: I would like to see it on the face of the Bill. These things should be put on the face of the Bill. Your definitions of hate and misinformation, the fact that they are covered by this Bill, and the definition of the harms that are being addressed should be on the face of it. There obviously needs to be flexibility to add to them. The companies will escape from the system as it exists now, with its lack of clarity. The less clarity there is, the harder it is for those companies to do the right thing, and the more wriggle room there is for them to escape from it.

They will always be probing the edge of the legislation, and they have the best lawyers, the best public affairs, the best public relations. If you are in DC and you are trying to find a law firm to represent you, which we had to do, finding one that does not have a conflict of interest because they also act for one of the big tech companies is near impossible. We have ended up with one of only two or three firms that can represent you if you take on big tech.

Q18            Darren Jones MP: Just a very quick supplementary following the discussion earlier on the exclusions for paid-for advertisements. I am interested in the distinction, which we discussed earlier, between an individual posting something and organised criminals posting things. Presumably organised criminals will just pay for their advertisements in order to get through a loophole in the Bill here. If you agree with that, from your study do you know how much of the content you reviewed was already paid-for advertisements?

Imran Ahmed: Some anti-vaxxers were able to advertise on the platforms until the middle of last year. A lot of the election misinformation was paid ads. We have a report coming out next week on paid ads that we have seen on other platforms that do something really, really horrific. It is one of the most challenging pieces of research that we have ever put out, about content that they accepted money to run as paid ads. They currently have some degree of transparency with their ad library, but as usual it is unwieldy. They make the way in which you interface with their transparency mechanisms as complex as possible.

There is a tiny, technical, nerdy issue here. When you are defining what that transparency looks like and determining their APIs, that is, the way in which you interface with their platforms at a technical level, what matters is the ability to draw the information out of their systems and the flexibility you have to combine data into compound variables, as well as just the individual variables that you are looking at.

Darren Jones MP: So the exclusion of paid advertisements is a problem.

Imran Ahmed: Of course. It is bananas.

The Chair: Imran Ahmed, thank you very much for your evidence.

Imran Ahmed: Thank you so much.

 

Examination of witnesses

Sanjay Bhandari, Edleen John and Rio Ferdinand.

Q19            The Chair: Good morning, and thank you to members of the second panel for joining us in this evidence session. My apologies for running slightly over on the first one.

Obviously the purpose of this committee is to look at the Government’s proposed legislation on online safety and, in particular, for this panel, to discuss the prevalence of abusive behaviour towards sports stars, footballers in particular. But, of course, our focus is defined by the Bill, which is looking at online abuse. We would not, at the start of this, pretend that racist abuse exists only in online environments; the scenes that we see consistently within football stadia are a particular cause of concern, and they are about real, physical environments rather than online ones.

Rio Ferdinand, over 100 non-white men have represented the England national men’s football team, yet not even that status can protect people from the personal abuse that they can receive, particularly on social media. Given your career, how disappointing is it to you that this scourge of racist abuse directed towards footballers, which once upon a time we may have thought of as a feature of life in the 1970s and 1980s, still affects players today?

Rio Ferdinand: Thanks for having me here, first and foremost. It baffles me. It is disheartening. In the 1970s and 1980s, it was a lot more common. We went through a period where maybe it was under the carpet or behind the scenes a little bit more, but now the data is telling us that it is here and it is back. We saw it again in the aftermath of the Euros when the three black players missed penalties, and what happened then.

It is obviously disappointing, but at the same time, when those three black players for England missed those penalties, the first thing I thought was, “Let’s see what happens on social media”. I expected what happened to happen. That is the disappointing thing: when that is your mindset immediately after a black player, who is representing the country and doing great things, does not make a mistake but misses a penalty and has to sit there and go through the abuse in the coming days via social media. It is totally disheartening. That brings us to why we are here, talking on behalf of the football community. The players should not have to go through this, and not only the players but the people around them: their families, their friends, the various different types of supporters from all over the world, their children. They are the important people in this as well, the wider part of society and community who have to sit there and listen and see this.

AI is there for so many other different aspects on social media platforms. I have a YouTube channel, and we have copyright issues if we put up a particular video that we have not had copyright approval for. That works. Yet we cannot find it here for certain emojis, certain words or the terminology that is used on social media platforms. That is baffling. The technology is there.

The Chair: That is an extremely good point. All football fans will know that if, on a Saturday afternoon, you try to find on Twitter a goal that your team has scored, that footage will be taken down within minutes.

Rio Ferdinand: Exactly.

The Chair: So if the technology exists to remove pirated content, why can it not be used to remove abusive content? How disappointing is it for you that, on the night after the England-Italy game, it was not just that individuals were posting comments that were not being removed, that accounts were posting content and those accounts were not being closed down, but that the recommendation tools of those platforms were highlighting and drawing attention to racial abuse as trending topics?

Rio Ferdinand: Exactly, and the perpetrators are allowed to stand behind a curtain at the moment. They are allowed to post. These keyboard warriors are allowed to say and spout all this abuse from behind a curtain. They are anonymous, and the fact that you can be anonymous online is an absolute problem for everybody in society.

This is a good example that we were speaking about before. You can go to a game, and if you threw a banana on a pitch there would be repercussions, but online you can type in and post a banana to a black player or a black person with racist connotations behind it and be fine; you are not going to get punished. There are no repercussions. How is that right? It cannot be.

Q20            The Chair: Rio, you may have comments on this as well, but, Sanjay and Edleen, it seems that complaints have been raised consistently over the last two or three years over major incidents directed towards a series of players. The incident with Paul Pogba took place probably two years ago, yet we do not seem to get an adequate response from the companies. They do not seem to be good at removing the accounts, removing the posts or anticipating likely incidents. What has your experience been of dealing with the social media companies in raising these concerns with them?

Edleen John: Just to reiterate what Rio said, we are delighted to be here today giving evidence on behalf of football. It is important to flag from a football perspective that we have been engaging with social media organisations for years now, and what we consistently receive are platitudes. We get promises of things that will be addressed. We get told that of course racism, discrimination of any kind, is a priority. What we are seeing is that online abuse is a golden goose for social media organisations. They are able to amplify messages. They are able to make sure that the reach is broad, far and wide, and they are not tackling the problem that we are seeing across the entire football landscape.

As Rio mentioned, this is not just players and their families but coaches, administrators, referees: everybody involved in the game. It is not just football. What we are seeing from social media companies is significant resistance, a desire to just focus on a business model and make money. As a result, they are not putting in place the protections that are so desperately needed in the online space.

Sanjay Bhandari: Thank you, Chair, for the invitation to speak. I am grateful to have the opportunity to help to drive some change here, because that is what I want to focus on: the solutions. This revolutionary Bill, and we will be the first globally to do this, as Imran said in his previous session, is a real opportunity. We must seize this opportunity.

On the types of harm, Rio talks well about the challenges he has seen as a player, as does Edleen about some of the other challenges, but this is not just about elite players. This goes all the way down to grass roots, journalists, coaches and fans. It is everyone. It is the people you do not see, not just the elite players. We also have to remember that racism does not travel alone, and what we see are the four horsemen of the Apocalypse of hate. We see hate based on race, gender, sexual orientation, disability and religion.

What is the response of social media? I have been having these conversations with them for two years. Others have been in the conversations for three and four years. My experience is that we will have conversations in London and it will be London that says, “Oh, that’s interesting. Maybe”. Then California says, “No”. I am not sure whether I am stuck in Groundhog Day or Dante’s Inferno, but either way it is a deeply unpleasant experience.

Q21            The Chair: That is why the committee is here: to try to find a way out of that hell.

Finally, from me, Rio Ferdinand, are you concerned that the prevalence of this speech, this dehumanising language on social media, is effectively giving licence to racists to speak out, to seek out other people of similar opinions and to act in a concerted way to deliberately target and dehumanise sports stars like footballers?

Rio Ferdinand: I agree 100% with that. It is normalising racist behaviour. It is normalising racist language. Put it in the context of a young person who supports and admires a certain player, at whatever level that is, whatever football club they support, and he is looking through that feed and seeing racist language. Maybe that player is now not his favourite player because he missed a penalty, and he is seeing the racist language that is now being directed at that person who he is no longer in favour of. It normalises that type of language. That young person has seen it and then goes to his network of friends, either on social media platforms or face to face, and there are no repercussions for the person who put that online: “It’s fine, it’s normal, so I’ll do that at school and I’ll say it at school. It’s okay”.

When there are no repercussions, nothing is going to be done to put that person in their place, put that person in the spotlight, take them from behind that curtain and expose them for who they are, spouting this ridiculous, ignorant language. Then people will find it and think it is normal. I do not know what other industries there are in the world where it is okay to spout such language, get away with it and be able to hide behind a curtain, as people do online.

Q22            Dean Russell MP: If I may address you in the first instance, Mr Ferdinand, we do not often hear the real human impact of this, and I wondered if you would mind expanding on the impact this awful abuse has had on you and other players, but also on families, because I often find that in our position, in public positions, it is my family and friends who are most offended and hurt by what people might be saying about me as a politician. Would you mind expanding on that and telling some of those stories, please?

Rio Ferdinand: When you sit at home on one of these devices that we all have and that are in our hands probably 80% or 90% of the day, and you see that there is negative discrimination on there and that language is there and prominent for you to see, your self-esteem, your mental health, is at risk. It sometimes all depends on where you are in your life. Some people are at a low ebb anyway, and seeing that just has further impact. Everybody receives this type of discriminatory language very differently, but it does hurt.

Again, it is important to stress that it is not just about that person. It is the wider network of that person, what it does to their friends and family. I have seen members of my family disintegrate at times with situations like this when it happens. I have seen other sports stars’ family members take it worse than the actual person who is receiving this type of language through social media.

It is hard sometimes for people to understand. Some people say, “Yeah, but it’s only coming on the phone. Just ignore it, turn it off”, and that is what a lot of the social media platforms say when you speak to them. It is down to the victim to report it or to put the blocks in place by turning certain things off on your device so that you do not see it. That is not stopping the problem, is it? It is an easy cop-out for the social media platforms when they put forward ideas like that as the way to quash this situation.

Dean Russell MP: Just on that point, you mentioned your YouTube channel and copyright, and the profiteering that happens from advertising and so on. Do you think that Facebook, Twitter, these channels, are effectively profiting from prejudice as it currently stands?

Rio Ferdinand: I think they do. Imran spoke very eloquently about that. Any type of conversation is profitable and good for these companies. I am a part of that ecosystem in terms of producing content that you want to be shared and liked, but, if it is wrong, things need to be in place to make sure that it is dealt with properly, and that the repercussions are in place.

Just to go back to my point about experiences, there is another big impact when these social media messages come through online and the hate online comes through to you as an individual. A lot of us have children, and I have to sit there and have breakfast with my kids and explain to them what the monkey emoji means in that context, what a banana means: “Dad, why is there a banana under your post? What is that about?” I am having to do that in today’s day and age, when there is AI and there are resources available for these companies to be able to deal with these situations, so that I, as a parent, do not have to go down this road and explain that. You would like to think that these people would put those things in place.

Q23            Dean Russell MP: Finally, if I may, I have been very fortunate in that I am the Watford MP, so I have got to know Luther Blissett very well, the incredible legend for England and Watford. He has told me stories of what it was like in the 1980s, the awful chanting in crowds, how black players especially broke through those barriers, and how we had got to a seemingly much better place over the past few years. Do you think we are going backwards now with what is happening on social media? Is that starting to be seen more in crowds, fan bases, and so on?

Rio Ferdinand: The data is telling us that. I get access, luckily, to various pieces of data, and that is what it is telling us. The police force is telling us that. It is just a knock-on effect from, “It’s okay online, so it must be okay in the stadiums”. Until you sort those situations out online, it will keep on being reflected in the stadiums. I am now part of a case. I was racially abused at a game recently, so I know from personal experience that it is becoming more normal.

Dean Russell MP: Thank you for sharing your stories.

Q24            Lord Clement-Jones: Thank you for coming today. Rio, you were talking earlier about the ability of abusers to hide behind a curtain. Edleen, in the FA evidence, and we have spoken about this previously, you go into some detail about the identity verification aspect. It is not so much that you are in favour of compulsory verification but that you believe in limiting the reach of a user who does not verify. Is that right? Is that the approach that should be taken in the Bill?

Edleen John: It is fair to say that when we think about verification, social media organisations would have us believe that it is a binary option and an on-off switch whereby people have to provide all information or no information. We believe that there are multiple layers and multiple mechanisms that can be used in combination to help tackle this issue. ID verification is one element. Default settings could be another. The limiting of reach could be another.

We think it has to be layering, because when we look at the volume of abuse that is received across the world of football, we see that a lot of the abuse is coming from burner accounts whereby people set up an account, send abusive messages, delete an account and are able to re-register another account within moments.

If there was some limitation on what could be done with a brand new account, say a cooling-off period, or if there were mechanisms by which there were certain default settings, we think that that would act as a level of deterrent to individuals who are currently setting up burner accounts and abusing people in this way.

It is not a binary situation. We recognise that there are some instances and some individuals for whom a level of anonymity is important, but we do not think that we can start with social media’s current stance whereby they feign complete ignorance about the anonymity issues, which Rio has talked about. We think that the Bill has to address this challenge.

Lord Clement-Jones: Rio, would you go for that more nuanced approach rather than insisting on everyone revealing their identity when they post?

Rio Ferdinand: I think that is right. There are some people who, for different reasons, for safety and so on, would not want everything exposed immediately from the get-go. There are layers to it, and the entry point is definitely a starting point.

Sanjay Bhandari: Yes. Ultimately, the system at the moment, and this is really about systems and processes, is too frictionless. It is too easy for someone to just turn up and abuse someone. We have to remember that online abuse is not someone standing at Speakers’ Corner in Hyde Park and shouting abuse into the ether. This is 150 people in a Twitter pitchfork mob turning up in your living room and spitting abuse in your eyes while your family are next door, unable to do anything about it. That is the problem that we are dealing with, and that is the problem that we need to address.

We need to add more friction into the system, with all the mechanisms that we have talked about. The answer in terms of the legislative framework is to give Ofcom the power to introduce codes of practice on the reach of anonymous accounts so that they can be managed on an ongoing basis. It is also a dynamic environment. We are not legislating for the world as it is now; we also have to legislate for the world as it is going to be. We cannot anticipate all those changes, so the best thing to do is to give Ofcom the power to do that.

Q25            Lord Stevenson of Balmacara: I am sorry for appearing remotely, but thank you very much for the evidence so far. You said that you hoped we would be looking for solutions, and obviously one of those will in some form be a regulatory authority that we would be able to put some muscle behind in terms of the aspirations that we have for change, much of which we have discussed today. If we are going to avoid Groundhog Day, what do you think is the way forward for Ofcom? Do you think it has the authority, the capacity and the focus that you have been identifying as being lacking at the moment?

Sanjay Bhandari: Maybe I will take this question in the first instance, on the basis that I had a 30-year career in law and compliance, so it might be in my strata.

Lord Stevenson of Balmacara: You can make it a job application, if you want.

Sanjay Bhandari: My CV is available. I will go back to the start. This is an amazing opportunity that we have, here in the UK, to be a global leader. We will be the first to go, but, of course, the corollary of being the first to go is that there is no rule book, no precedent that we can just cut and paste and implement.

It is sensible to give those powers to an existing regulator, and, of the existing regulators, Ofcom is by far the most sensible one, because it is used to regulating similar companies, the telecommunications companies. Social media are different to telecoms, but they are probably the nearest analogue.

Taking all Ofcom’s experience of regulating the telecoms industry is probably a sensible place to start. It then comes down to pace, power and energy, in football terms. Get on with it quickly and give them the right powers, not just the powers to enforce but the powers to supervise. My experience in other heavily regulated industries where I have spent my career is that it is the supervisory powers that tend to be the most effective. It is the ability to call for robust data transparency to deal with the volatile data that we have at the moment. You need that robust data to enable you to evolve interventions over time. It is exactly right that, in the same way in which the Financial Conduct Authority has the power to call for information, Ofcom should also have robust transparency reporting requirements, along with the other powers that we have talked about in our paper.

Then, with energy, it is about giving them the resources. That means investing in them and maybe hiring people from the social media companies, having poachers turn gamekeeper. We have a track record, have we not, in all those heavily regulated industries such as banking, life sciences, utilities, telecoms, energy? Once upon a time, they were all in their regulatory infancy, and we are in our regulatory infancy. This is the time to invest the resources.

Lord Stevenson of Balmacara: All you have said is very heartening, and you seemed to have picked up the issues that will be required, but underpinning all this is this duty of care approach, which of course is different. It does not set the parameters exactly; it encourages a dialogue and a debate. So the regulator is not just there to police; it is also there to assess and to provide standards that will be sustainable, and, as you pointed out, that may need to change as we go forward. Do you think that is the right approach?

Sanjay Bhandari: The framework is right, with the escalating duties of care. I am aware, from the competition law and the privacy areas, that the scaling of fines relevant to turnover is a good enforcement mechanism.

It comes back to the point about us being in a dynamic environment, so a regulator needs the power to be able to call for the data to enable them to evolve. The regulator is not just about enforcement; an effective regulator is also about supervising and avoiding the problems happening in the first place, not just closing the stable door after the horse has bolted. At the moment, online abuse is like being constantly punched in the face, and it is no comfort to me to say that we are going to arrest the person who punched me in the face. I would rather you prevented someone from punching me in the face in the first place.

Lord Stevenson of Balmacara: There is an educational element as well.

Sanjay Bhandari: Yes.

Rio Ferdinand: I think that was a key point. The last word that was used there was education. Everyone is looking for solutions, which is fine and great and we do need that. Punishments have to be set out, but education has to be a key part of any type of reform or any type of punishment, because without that I think we would just be back to square one again. People will just accept the punishments, get right back online again and go again.

It is about educating the next generation, but also people of our ages in this room, so that they can understand that there are new languages, there are new ways of speaking, there are new ways of communicating now via social media, so you have to be careful and you have to be very aware. You have to be made aware, and education is definitely a way forward with that.

The Chair: As you said earlier, if a person has been educated through their experience, and their experience is that racist abuse exists and spreads without any kind of control, it gives the impression that this is acceptable, even when it is clearly not.

Q26            Lord Knight of Weymouth: I would like to pick up directly on that. The Bill has one clause on the duties on Ofcom with regard to media literacy. Do you think that is sufficient for education? The Football Association’s evidence talked about the work that it does on education. The Bill is silent on anything to do with schools and the Department for Education. Should the Bill do more on education?

Edleen John: The reality is that we recognise that to tackle this problem it is going to need a multi-pronged approach. As it relates specifically to this Bill, we recognise that social media organisations are not eradicating the hate that we see, because there are individuals who are uneducated sitting behind a computer, and that is having a significant impact. Let us be clear: the messages are being amplified on those platforms, and that is having a significant impact and a significant reach. This is not an individual-level problem. When we look at the scale of this abuse that we are seeing on social media platforms, it is millions of posts each and every day.

Yes, education is absolutely one part of it, particularly as we look at the younger generation and the children growing up who are now engaging and using social media platforms more, but I think we have to be honest in saying that the responsibility at the moment as it relates to this Bill has to lie with how we can hold social media organisations to account so that they are not amplifying these messages when they do take place, and that we have a systems and process solution so that we are not putting a band aid over a bullet wound.

At the moment, that is what we are doing. We have community standards whereby social media organisations themselves define what can and cannot be in place and what people can and cannot see. We have seen that that is not enforced to the level that we need to protect all users on online platforms.

Yes, I recognise that education is part of tackling societal discrimination, but let us not use that as the out or the excuse for what we are seeing on social media platforms at the moment.

Q27            Suzanne Webb MP: I may have inadvertently photobombed you coming on to the estate earlier, so many apologies. It was before you arrived, but I have a feeling that I may be in that video. It is a nice link, so all’s well that ends well.

I go back to the duty of care and Ofcom. You have said that there is no rulebook and no precedent. You also talked about the band aid over a bullet wound. I am conscious that, whatever we do, we have to make it right. It has been said that this is a revolutionary Bill. Those were your words. In the previous evidence, there was reference to this being an ambitious Bill. As I say, we are very keen to get this right. Do you think the duty of care by the platforms goes far enough? Does the Bill identify that? Is it going to fix the problem, or are we going to start talking about legislation further down the road? We need to keep focusing on this.

You also talked about whether Ofcom has the right powers and the supervision powers. My concern, which I think we touched on earlier, is that the platforms will be marking their own homework, effectively. That is of great concern to me. As politicians, we are also on the receiving end of a huge amount of abuse, unnecessary abuse, and we empathise and sympathise very much about the impact. All of us in this room are very keen to get it right. Is the duty of care on platforms going to fix the problem?

Edleen John: As we look at the Bill as it stands at the moment, we recognise that a number of elements are addressed, but we in football believe that of course it can be enhanced and strengthened to make sure that it addresses the problem that we are discussing here. This is new, as Sanjay said, but that does not take away from the fact that there is some precedent in legislation at the moment that we think could be applied to this Bill.

To use one example, we already have protection for groups identified under existing legislation, the Equality Act, as being at risk. We think that should be mirrored in this Bill, as an example, so that the discriminatory abuse that Rio and others have talked about can be mitigated, and we do not see it on online platforms because social media organisations are held to account.

We also think, as Sanjay said, that Ofcom should be given powers on content that is harmful in the broadest sense, not just content that is illegal but legal content that is harmful. We absolutely recognise how difficult it is to define and clarify what legal but harmful means. Let us be clear: we think that Parliament is absolutely the best place to make that definition versus within social media organisation boardrooms, because, from what we have seen from terms and conditions and community standards thus far, it does not go far enough and it does not address the issues at hand.

There are a number of elements that we think can be further enhanced as they relate to the current Bill. On the legal but harmful piece, one of the discussions we were having this morning is that there is some precedent. I will hand over to Sanjay, because I think it is important for him to give that stance and give some context here.

Sanjay Bhandari: Sometimes people think that the legal part feels like a big grey area, and ask how you legislate for that. Actually, we have some jurisprudence from elsewhere. There is a civil law cause of action in conspiracy, and conspiracy has two limbs: lawful means conspiracy and unlawful means conspiracy. You can conspire by lawful means and be held civilly responsible for that. That goes back to the 1940s and was clarified in the Lonrho v Fayed litigation in the late 1980s and early 1990s, and there has been a rich history of that economic tort.

There are two key defining characteristics. Was harm experienced in this case? Yes, tick, harm was experienced. Was it intended? Was it aimed? If you send a monkey emoji to a footballer, that is pretty clearly intended to cause harm.

We have precedents, we have jurisprudence. We just need to look at that jurisprudence from elsewhere and bring that under harmful content, because I think it is achievable.

Q28            Darren Jones MP: I am interested in this discussion about the balance in the Bill between content moderation and consequences for individuals. We have talked about both today. The Bill does not really deal with anonymity or being able to find the person who tweeted something and provide a consequence for that action, but it does deal with content moderation. I was interested when the Government said in July that they were going to extend football banning orders for abuse on social media, for example, but I do not really understand how they can do that if you cannot identify the person who tweeted the racist abuse in the first place.

I am interested in whether each of you thinks that this Bill is doing the right thing in trying to deal with content moderation first and that we should focus on getting that right, and then maybe try to deal with anonymity and individual consequences second, or whether you think it should all be included and we therefore need to make some quite fundamental reform to the draft Bill as it stands.

Rio Ferdinand: If I am being honest, you are harming yourself doing one without the other. You are holding back on one area and trying to sort out another. The anonymity is a key part of it; you need to understand and see who people are. I think that when people are made to be visible, they may think twice, and their life could change if they are now out in the open and it exposes who they are and what they have been spouting. I think that is a key part of it, but vetting the content and the social media companies understanding that go hand in hand.

The point before was whether the social media companies have the intention, the desire, to deal with discriminatory online hate effectively, efficiently and quickly. Their actions have proved that it is not at the top of their list of things to do. I think that government legislation will be a key part in making sure that we get it right, and that the amendments are made and done right.

Sanjay Bhandari: I think we should be careful not to think in a unitary way, that there is only one problem and one solution. Part of the challenge here is that it is multifactorial and the problems are interconnected. We probably have to deal with all those issues, but I do not think that requires enormous, substantial changes to the Bill. It is about how you delegate the right power and authority to Ofcom to ensure that it can evolve regulation in its rules and codes of practice to meet a dynamic industry.

That is exactly what regulators do in other industries, and this is the most dynamic industry. Twitter or Facebook could be out of business in five years’ time because a competitor has come up and stolen its lunch. That is the way the industry works, and how are you going to deal with that? When the world’s greatest ice hockey player, Wayne Gretzky, was asked, “Why are you the greatest hockey player?”, he said, “Because I skate to where the puck is going to be”. We must have that mentality here and think about where the world is moving to, not just deal with the snapshot of the problem as it is today.

Edleen John: I could not agree more. A key element, as Sanjay said, is to give Ofcom the powers to create those codes of practice. Part of that will be, or should be, on the reach of anonymous accounts, because, as Rio has said, the impact of the anonymity for a lot of people across the football space is significant.

There is no singular intervention that is the panacea. It will be a combination of interventions that are going to lead us to the place where there is a solution that addresses the problem. Of course, we are here because we want to get to that place where we have a solution that protects all the users from a social media perspective and where our football players do not feel as though they are being bullied from an online platform, where they do not feel as though they are being told to sit in their front room and listen to people screaming in their eyeballs and are being told, “Just cover your ears”. We want to get to a place where there is a tangible solution in place that addresses the issue, and in order for that to happen there are some amendments that we would welcome seeing in the Bill as it stands at the moment.

The Chair: You are saying that the Bill needs to address the question of the accountability of account holders to the platform, so that they can be identified by the platform if they behave in an abusive way, and that the Bill needs to be prescriptive about what should be within scope, certainly in the area of content that is harmful but not illegal.

Edleen John: Absolutely. I think the challenge that we see, if we think about social media organisations and some of the reports that they put out, is that they tell us that they are able to identify who is behind the abuse on their platforms, but then, as Sanjay has said, when you try to dig deeper and ask them further questions about that data, it quickly becomes clear that they do not have the level of data that they are portraying.

So tackling transparency in reporting as part of the Bill is also key for us. We have seen that post the Euros. I was saying to both Rio and Sanjay this morning that I asked social media organisations some questions the morning after the final, and weeks on I am still waiting for responses to those questions. If they have that data, it should be really easy.

The Chair: If you have not already done so, I think we would be quite interested to see what those questions are, and we might ask them ourselves.

Edleen John: Yes, I am more than happy to share.

Q29            Baroness Kidron: Edleen, we have made this incredible leap in the women’s game, and that has not really come up yet. How are women experiencing abuse and, in particular, how does that affect young girls who perhaps do not have the tradition of thinking that they might be footballers? What is that gap?

Edleen John: I think it is important to flag that this abuse is being received right from the top-flight game, England players, down to the grass roots, so including young women and players in our impairment-specific pathways. If we think specifically about young women and the challenges that lots of people face growing up, such as questioning identity, self-esteem and all the normal teenage angst, that is of course amplified when you are a player on social media who is then being abused for a characteristic that is just part of who you are.

To use an anecdote, several players in the game have recently experienced discriminatory racist and misogynistic abuse and have reported that abuse. The abuse has been so significant that at times they have been blocked by social media companies from reporting the volume of abuse that they are receiving. We have players in the professional game who are being halted because they have reported abuse too many times. What we have at the moment is a system whereby social media organisations protect the offenders more than they protect victims. That, for us, is a significant problem.

Baroness Kidron: So, for absolute clarity, what is happening is that the victim is going to the social media company, which is saying, “You’re coming to us too many times, and that’s the problem, not the abuse you are experiencing”?

Edleen John: We have an anecdotal example of a specific player with that exact experience. They were blocked from reporting the abuse that they had consistently received, and it was other individuals who had to get involved externally to help them report it.

Q30            Baroness Kidron: I have another question, and it builds from what my colleague said. After the game, football went into No. 10 and promises were made. What were the promises, and where is the gap? What has this Bill promised to deliver for football?

Edleen John: This Bill has promised that the legislation that will be put into place will address this issue. That is why we think that the enhancements to it that we have asked for are critical if we want to address the issue. The Bill as it stands at the moment addresses some of the issues, but it does not go all the way in tackling the problem that we are seeing across the football landscape.

Sanjay Bhandari: Within a couple of days of the Euros final, the Prime Minister promised to stamp out online racism. We think that this is a good framework, but I think that the additional changes that we are suggesting give us a better chance of meeting the Prime Minister’s promise.

The Chair: I appreciate that you submitted written evidence to the inquiry. If there are specific amendments to the Bill that you think should be made, we would certainly be very interested in seeing those.

Q31            John Nicolson MP: Rio, it was very depressing when you told us that it is like being back in the 1980s and it is all just back to the way it was then. I cannot imagine what kind of person would want to throw a banana on to a football pitch or send monkey emojis. Who are these people, in your experience?

Rio Ferdinand: They are from different backgrounds, from different walks of life. In my experience, it has sometimes been a young schoolboy, 13 to 14 years old, sometimes younger. It could be an estate agent. It could be a banker. They are from all different types of life. That is the crazy thing; you cannot pinpoint one type of person from a certain background and say that it is the stereotypical racist who is putting this abuse online, or at stadiums. It is a varied demographic.

John Nicolson MP: Are there lessons from the 1980s to be learned in how we tackle it and make things better, or is the world just so different because of social media that that is where we have to focus?

Rio Ferdinand: I think the landscape is very different now with social media. We have never been in this area before. This time is very different. Again, you are allowed to be anonymous now, and that is the big difference. The fact that you can be anonymous online gives you a certain amount of power; it enables you to puff your chest out and say what you feel, especially compounded by the fact that there are no repercussions now. I think all that drives numbers up, and I keep going back to the same point: the police and others are telling us that the data shows it is going up and becoming more prominent.

John Nicolson MP: It is having a knock-on effect on schools and elsewhere where we hoped and believed that the experience was that things were getting better and language was improving, and you say that it is all sliding backwards.

Rio Ferdinand: Yes, it is sliding backwards. It got better. I am not saying that racism or discrimination in all forms had gone away, but it seemed to have reached a point where you were not hearing and seeing it as much, but again—

John Nicolson MP: Kids feel that they have licence because of what they see online.

Rio Ferdinand: Yes, they are empowered.

Sanjay Bhandari: Can I pick up on that point about who is doing it? It is an important segue into the bit about data transparency. This is exactly the problem. What happens is that we see an incident, we ask who is doing it, we have an absence of data, so we each fill that vacuum with our own prejudices and anecdotes. So we do not have data, we have “anecdata”. This is why we need a regulator that has the power to call for data to understand root causes and who is doing this, where they are, why they are doing this, what their age profiles are, and which interventions are going to work.

You could introduce a football banning order for online abusers, but that is only going to impact people in this country who go to matches or want to go to matches. I do not know if that is going to deal with 5% of the problem, 10% of the problem, 50% of the problem. That is the challenge we are dealing with, and the only way you are going to get there is if you have a regulator that has the power to call for the data from the organisations that have it, which is social media. We have been banging our head against a brick wall for that data and we are not getting it, so it needs to be mandated.

Edleen John: Let us be clear, to add to what Sanjay said, that as football we have tried mechanisms to look at how we can overlay and get some information and get data using our own monitoring and being proactive in approach. What social media organisations are telling us is happening and what we are finding does not correlate. It is a significant problem as it relates to the data, because social media organisations might have you believe that football banning orders are going to solve all the problems because all the online abusers are indeed football fans who go to matches every weekend.

That is not what the data that we found from a football perspective tells us, so it is critical that that data is correct and that social media organisations cannot mark their own homework or spin the statistics to tell the story that they would like to tell. Even law enforcement and the police in the discussions that we have been having with them say that they face a similar challenge in getting the relevant information out of social media organisations. Delay tactics are put in place. There is a deferring and a moving away from the specific asks, and we need that to be addressed. At the moment, it is not being addressed.

Q32            John Nicolson MP: We have heard that, and your message is very clear, Ms John. We have Stonewall coming in after you. In a previous committee session, I spoke to the then chief executive of the English Football Association, and he told me that he would not advise any footballer to come out as gay. His comments were very controversial. I thought he received a bit of a rough ride, and I do not think he intended to convey the message that some thought. I think he was saying that it was just not safe for footballers to come out. He said that he did not feel that the English Football Association could guarantee the safety of footballers if they came out. Nobody has come out since.

Rio Ferdinand: If I could answer that, I am shooting something on homophobia in football and I have just met a player who was coming out. He was advised by a lawyer not to come out and speak. Initially I thought he needed to come out, speak the truth and be proud of who he is.

John Nicolson MP: Can you say who it is?

Rio Ferdinand: No, not now. It is not for me to say that.

John Nicolson MP: Sorry, I thought you said he was coming out?

Rio Ferdinand: Yes, I have spoken to him.

John Nicolson MP: He wants to come out?

Rio Ferdinand: Let me finish, for one second. What I am trying to get at is that now I understand why the lawyer advised him not to come out. It is because every individual is different and you cannot use a blanket approach. Every individual is at a different stage of their life in understanding themselves and their sexuality. He advised him based on his experience with that individual. He did not think that he was strong enough mentally and had the right pieces in place to be able to withstand the media attention, the spotlight, all the different emotions that are going to come out and the pressures to deal with that situation at that moment in time. Initially, I was quite taken aback by the advice, but after it was explained by someone who has been through that process, I understood it.

John Nicolson MP: Yes, because of course everybody is different and people should not be pressurised into coming out when they are not ready to.

To go back to the FA, what the chief executive was saying was that the FA could not provide a duty of care to footballers who wanted to come out. We all know that there are footballers who are out with their families, their friends, their teammates and other folk in the club who know that they are gay, and they have to hide because they do not feel safe. That is an extraordinary position to be in in the 21st century. Would it still be the FA’s position that they do not feel that they can protect a footballer who wants to come out?

Edleen John: It is important to say that the FA are doing everything that we can to make sure that we are creating a culture of inclusivity for all individuals, irrespective of background, sexual orientation, race, religion or anything else. We are proactively striving to make sure that all players, all participants, across our game feel as though they are welcome, are respected and feel a sense of belonging. We are working with the various football leagues, the various football clubs, to make sure that we have that engagement, we have education sessions, we talk about campaigns, and make sure that we are clear that our message is that football is absolutely for all.

As part of any employer’s duty there is a duty of care, so we continue to work with clubs and leagues to make sure that any employee who works for them, so a player who works for a club, is supported and has that duty of care, that well-being support, and anything else that they might need.

John Nicolson MP: I am sorry, but it is self-evidently not working, because there is not a single out player. There is no other sport that I can think of where there are no out players. We all know that there are a lot of gay people in the world, including several members of this committee, like me. Is it not extraordinary that that is still the case? I am not blaming the FA for it necessarily, but it must say something about the type of bullying that gay people are scared about as footballers, it must say something about the reaction they are expecting to get on social media, that they do not feel that they can live their real lives and be open in sport.

My goodness, if, as a wealthy footballer, you do not feel that you can come out, what message does that send to a kid living in a housing scheme who is being bullied every day?

Rio Ferdinand: If you look at it from a different angle, you might come to a different way of thinking. In other sports (rugby players, divers most recently) the spotlight, the attention and the pressure may not be as strong. It is still a huge announcement when you are releasing that information to the public, and I am not saying that football is a bigger and better sport, but I am saying that the number of eyeballs and attention and press pages that they are going to get, and the responsibility on that person to come out, is so much bigger. Just because you are wealthier, have more followers on social media and are in a more prominent sport such as football does not mean that you are somebody who can deal with all that attention. It is very much about the individual and not about the sport. It is about being capable of coming out and being able to withstand that media attention.

Lord Knight of Weymouth: Is that why it is different in the women’s game, where there are plenty of out footballers, but not in the men’s game?

Rio Ferdinand: I would say so. It is more commonplace, and again, like you say, exactly to that point, the media attention on a female player coming out as gay and being open is far different from the men’s game, especially if it was one of the elite players at the elite football clubs. It is very different. Hopefully we will get to the point where everybody is as confident as they can be, and they know and see that the support mechanisms are behind the scenes and that the FA are trying to get them in place.

John Nicolson MP: In closing, before I hand back to the Chair, I would have to say, Ms John, that I do not believe it is because all the gay footballers in football are fragile or emotionally unprepared or any of the other reasons. A lot of them will have been through a journey and are adult men, probably many of them in happy relationships with strong support structures. There is something unique about football at the moment and the level of abuse that they fear they will get, because they see what has happened to black players. I suspect they are terrified about it, and that is a very sad place for us to be.

Edleen John: We must acknowledge that for every individual in any circumstance it is up to them to decide how and when they come out. I think the responsibility for us as football, as the authorities and the organisations, is to make sure that we create an environment where people feel supported, feel that they can come out, and feel safe. That, of course, is a priority for all our organisations across football and is a key strategic priority and objective.

As you say, a lot of individuals receive abuse in the online space at the moment when they come out, which is why we are here talking about this legislation today. It is the responsibility of all of us in this room to make sure that that is also a safe space, so that footballers can come out if they want to.

Q33            The Chair: If I may, quite a few members have follow-up questions they want to ask, and Lord Gilbert has been very patiently waiting to come in for his questions as well. John Nicolson and I were involved in an inquiry a few years ago looking at homophobia in sport. I remember then that several people said that sports stars who play at elite level in a big stadium in front of crowds learn to zone out to the white noise of what people are saying in the stadium. If you cannot do that, you cannot cope with the job that you have been asked to do.

Rio, from a player’s perspective, do you agree with that, and therefore in the social media age does social media make that worse? The abuse, rather than being a voice in a crowd that you learn to ignore, is something that is directed to you in the palm of your hand.

Rio Ferdinand: It is a great point. As a player, my experience, and I am not talking about homophobic abuse or racist abuse but any type of abuse, is that at a stadium that gets switched off after 90 minutes. I go home, it is finished, and I am out of the way now in my own house, chilling with my family, and it is fine and you do not hear it. It does have an after-effect, but it is not visible and live for me to see.

Now, with phones and different types of tablets and access to media, it is there 24/7 and you cannot hide and get away from it. That is a fact, and it is very different in that sense. It is hard to deal with, but for players (male, female, different sexualities, different races) it is a difficult place to be. Players from yesteryear, as I said to you, could deal with the situation there and then and harden themselves to it. Now, it is very difficult. Now, the bigger, wider problem, like I spoke about before, is that your wider network of friends and family and work friends take on that form of discrimination as well.

Q34            Lord Gilbert of Panteg: I want clarity about one of the things you are very specifically asking for in this Bill. I think the question is probably for Sanjay. Some of the behaviour you have been describing is illegal and ought to be dealt with under the duty of care requiring platforms to take down illegal content. I do not know about you, but I am reasonably optimistic that that is substantial progress. Some content is not illegal, and you said that you would like a much clearer definition of harmful content on the face of the Bill that incorporates the kind of material and the kind of behaviour that you are describing. It seems to me that there are two ways of doing that: either describe that on the face of the Bill as content that is harmful but still legal and therefore that gets captured by the duty of care, or just stop pussyfooting around and make it illegal. Which of those approaches do you favour?

Sanjay Bhandari: Neither. I am suggesting something in the middle, because the approach that you have suggested is static. It does not deal with the dynamism of evolving language and the evolving language of social media. If we just froze in time what is happening there and the abuse we see now, we will be back asking for more primary legislation every time people change their behaviour online. That will not work.

What I think we need to do, and we will happily come back with some more detailed suggestions, is to give Ofcom the power to regulate harmful but on its face legal content. I will give you an example. I think our guiding light in this should be the abuse that the three players received after the UEFA Euro 2020 final, because the unified public condemnation of that tells us that what the public are demanding is that each and every piece of hate that was spat out that night needs to be taken off the platforms.

We should be measuring the effectiveness of this Bill by looking at each and every piece of content, going, “Is this caught? Is this caught? Is this caught?” My fear at the moment is that it would not be, and that the way to do that is to give a regulator the power to reflect contemporary social mores and contemporary social practices as to the kind of offensive behaviour that goes on, and to require that social media companies have policies that address those.

Lord Gilbert of Panteg: Are you slightly uncomfortable with giving that societal judgment to a regulator rather than to Parliament? Is it not Parliament’s job to make those societal calls?

Sanjay Bhandari: It is a balancing act, and of course this would be delegated authority from Parliament and we might need to put some checks and balances in.

Lord Gilbert of Panteg: Have you thought about those checks and balances? It seems to me that they would be important.

Sanjay Bhandari: Again, I am very happy to come back and think in a bit more detail about what those checks and balances might be. Ultimately, we must also balance that against dealing with an evolving problem. We cannot legislate through the rear-view mirror; we must legislate through the windscreen. We have to see the problems that are coming up in front of us, not just going, “Well, that’s the problem from last year. We’ve solved that”.

So what? People have moved on, and in my experience from other areas (I practised in the fraud arena, but not as a fraudster) we were experts on the fraud before last, because by the time we had got on to it they were on to something else. This is the challenge, particularly when we deal with young people, who communicate in different ways. Okay, it might be a monkey or a banana emoji now, but if you just say, “Well, we’ll make that illegal in this context”, they will move on to something else. They will find some other way. That is the challenge. If you freeze it in time, you do not have the dynamism to be able to build that.

There is no perfect answer, and we have to find a balance between having appropriate parliamentary oversight and giving appropriate delegated authority to a regulator to deal with evolving problems. We do that in other areas: we do that in banking, we do that in life sciences, we do that in utilities. All I am asking is that you give the regulator the same powers that those other regulators have.

The Chair: On that point, a lot of what you are describing there sounds like the effective enforcement of the Equality Act online.

Sanjay Bhandari: Absolutely. Parliament has already decided that there are vulnerable groups that are worthy of protection. We just need to extend that protection into this legislation.

Edleen John: We also need to recognise the context in some of those circumstances. To the point you just made about the monkey emoji, I understand that if I am referring to my child I might call them a cheeky monkey, and that that is a term of endearment and very different from a monkey emoji being aimed at a black footballer with whom I have no relationship, no connectivity, after they have just missed a penalty in a football match.

Sanjay Bhandari: These are contextual analytics businesses. Guess what? This is their business; they can do that.

Q35            Baroness Kidron: Sanjay, I want to find a little missing piece between what the Chair has just said about the Equality Act and your very clear ask for the regulator to have these powers. Do we also need to put a duty on the regulator to investigate and undertake to uphold the Equality Act? It is one thing having powers; it is another thing using those powers. Maybe it is either another safety objective of the Bill or it is somewhere where we are instructing the regulator. I would like your view on that little gap that I see developing.

Sanjay Bhandari: I think it would make sense to have the regulator do that.

Suzanne Webb MP: Following up exactly what you said, Chair, the simple fact is that none of us was born racist, sexist, a misogynist. What I see these platforms doing is creating a conditioning of people to believe and think that way. We have worked so hard with the Equality Act and so forth. If the platforms are watching, I would say that it is up to them to do something about it, not create future generations of misogynist, sexist, racist people. We have worked so hard to try to eliminate that as much as we can. That was my point.

Q36            The Chair: Finally, Edleen, you have an international relations function at the Football Association. Are these discussions which the FA has with your counterparts at FIFA and UEFA, and might there be a concerted effort by the wider football family to put pressure on the tech companies to do more in this space?

Edleen John: It is fair to say that, even if we look at the recent social media boycott earlier this year, we engaged with our counterparts in the international space. FIFA and UEFA supported our social media boycott and, indeed, some individual country FAs also supported us. We care about this topic across the entire landscape of football, not just English football, but we absolutely recognise that different countries are in different spaces and are facing different challenges. The ongoing dialogue is critical. A lot of the international community is looking to us and this opportunity that we have to put in place the first piece of legislation that will be relevant, which can become a blueprint for other organisations and indeed for other countries as well.

The Chair: International governing bodies sanction countries for the behaviour of their fans. We have seen that in countries like Hungary. Are these discussions being had at an executive level at UEFA and FIFA, as far as you are aware?

Edleen John: We absolutely have conversations right to the top of the organisations on discrimination, online abuse, sanctions and what we can do as a collective of football to make it clear that we do not want this in our game and that our game is for all.

The Chair: Thank you very much. That concludes our questions. Thank you for your evidence.

 

Examination of witnesses

Danny Stone and Nancy Kelley.

Q37            The Chair: I would like to welcome our third panel, Nancy Kelley and Danny Stone. Thank you for joining us today. Thank you for your patience. I appreciate that we are running slightly over schedule.

I want to start with one of the questions that we finished off on with the last panel about the idea of the effective enforcement of the Equality Act online. As we discussed, Parliament has already designated what equalities should look like and the characteristics that should be protected. Why are we unable to effectively enforce equalities legislation in the online space and in content moderation on social media platforms?

Danny Stone: At the moment, there is no one there to do it, to be honest, other than the police, and the police are already overstretched. We have put in our evidence to you that there are questions about providers of services and the rules on harassment, for example, that relate to social media companies and to people with protected characteristics.

Baroness Kidron’s suggestion was excellent. I would love to see a specific duty in respect of the Equality Act in the Bill. I do not know why it has not been well enforced. At one point there was a discussion about whether these platforms are publishers or not, and people in this room have argued that they are, but it is so vast now and there is such a large amount of this material online that it needs a different way of addressing it. Certainly a systems focus, as the Bill proposes and tries to reach, is the way to go.

Also, there should be a general duty of care that sits above the duties that are outlined at present to address reasonably foreseeable harms. That could be drawn from tort law. It could be drawn from the Health and Safety at Work etc. Act. I would like to see that, too.

The Chair: By a systems approach, do you mean the systems of the social media companies that rank and promote content rather than just the content moderation function?

Danny Stone: Precisely, and their wider thinking. For example, the other day on Talksport a caller made an antisemitic comment in respect of the owner of Tottenham Hotspur Football Club. Had that comment been broadcast on the radio, Ofcom would have censured it and the station would have had to keep a log of that discussion. It was broadcast live on YouTube. It was not caught. It was then shared. What systems and processes does YouTube have in place to address that? There is a gap where our regulation is working in one space but not in another.

Nancy Kelley: If I could add and amplify some of the points Danny has made and maybe pick up some of the points that were made in the previous session. The idea of creating a greater linkage between protected characteristics in the Equality Act and in the Online Safety Bill is good. Ofcom is already covered by the public sector equality duties in its own operations, but making it clear that Equality Act protected characteristics should be included in the priority content list, for instance, feels quite important. Homophobic or transphobic abuse, for instance, would be seen as a priority form of content for the purposes of the regulator.

To give an obvious but none the less important answer, the Equality Act simply was not designed to deal with the highly distributed, international, incredibly high-volume scale of abusive content that you are talking about when you are talking about anti-Semitic abuse online, racist abuse online, homophobic and transphobic abuse online. It was designed to address contexts of individual workplaces and individual contracts of service.

The Chair: When you go to the terms of service for the social media companies, I looked at Facebook’s in particular and it says, “Facebook does not allow hate speech”. That might be a surprise to people who use Facebook. When you see how they define hate speech, they define it largely using language very similar to the protected characteristics set out in the equalities legislation. But the issue here seems to be that they do not enforce it and there is no one there to make them do it.

Nancy Kelley: That is exactly right. A key question here is why it does not yet happen. We know that the voluntary content moderation systems simply do not work for our community.

There are a number of reasons for that. Only around half of LGBT people will report the abuse we experience online because of a lack of confidence that it will be addressed. When we do report it, it often does not meet these opaque community standards. I am sure the committee will all have seen, in relation to a number of protected characteristics, situations where content that we would think absolutely must breach the terms of service is judged as not breaching terms of service, community standards or those kinds of standards.

Having affirmative duties of care is so important. We at Stonewall absolutely welcome the creation of an affirmative duty of care. It is also so important that we take a systemwide approach to the way these platforms function rather than seeing this as isolated individual acts of abuse that are then scaled to millions of posts a day.

Danny Stone: On terms and conditions, at present the Bill, in respect of harmful content, says that the companies must have terms and conditions that “deal with” the problem. If there is a fly in the room, I can kill it, I can let it out or I can decide to let it fly around. That is “dealt with”. That wording does not work. It needs to be changed.

Similarly, the risk assessments need to meet a minimum standard. Otherwise, the system will be gamed.

Q38            Lord Black of Brentwood: I would like to probe a little the difference between online and offline abuse. Of course, we all know that, tragically, anti-Semitism and the accompanying violence, hate and bullying have deep roots. Similarly, LGBT+ people have long suffered from discrimination, bullying, verbal abuse and goodness knows what else.

We probably know the answer, but it would be good to hear from you about the tangible, visceral difference between the offline abuse that these communities—our communities—have suffered over many years and the online abuse, which is of course novel.

Nancy Kelley: I guess I would describe the range of ways in which online harms impact LGBTQ+ people as spanning both the purely online and the visceral day-to-day lived impacts. If we think about the abuse our communities suffer, it is direct insults, pile-ons and forced outing and doxing that identifies people in their day-to-day lives. As well as the immense mental health harms that are caused by experiencing abuse —Rio talked passionately about the experience he has had as a black player, for instance—it is not uncommon for LGBTQ people to experience online abuse that means that they need to move house or are made homeless, because they have been outed to a family that is not supportive. It can mean that they lose their job or feel they need to move jobs. It can mean they come under threat in their real lives as well.

These enormous emotional impacts can lead us to withdraw from our social life—and from our online life, obviously—and can cause deep feelings of isolation and worthlessness, but there are also very much real-world tangible impacts that relate to where we are able to live and exist safely.

Danny Stone: Antisemitism online and offline are similarly connected. In extremis, we have seen antisemitic murders in Pittsburgh. We have seen manifestos of terrorists that draw on antisemitism. Some of that has been discovered online, some has been promoted online, some is circulated online.

There are a range of impacts. I do not post pictures of my children online often, because I work in an organisation where I am alive to the fact that, because of what I do and because of the discussion of antisemitism and the persecution of Jewish people, there is a chance that someone will try to hurt my children, so I do not post pictures of them online. That is an individual impact.

You have not talked much today about alternative platforms like BitChute, Gab and 4chan. These platforms are full of antisemitic hate. There was a video on BitChute about the Antisemitism Policy Trust, my organisation. That has impacts on my board and what they consider about their own safety and what that means. There are a range of factors.

Also, on Jews in public life, Luciana Berger was in this House and faced an onslaught of antisemitic abuse. What hope do the young women looking at that abuse have of being in public life? Do they really want to face that level of abuse?

There are all these impacts. There are many different impacts. The online and the offline interrelate more and more at the moment. My view, for what it is worth, is that these alternative platforms have to be captured in the category 1 designation, and that risk ought to be a determinant factor in where a platform is placed.

Nancy Kelley: We would completely agree with that last point. It is clear that a number of smaller platforms have a significant role to play in not only platforming hate against LGBTQ+ people but in encouraging it and encouraging the dissemination of misinformation about our communities that would not currently be captured, in exactly the same way as for Jewish communities.

The Chair: Thank you. That is a useful suggestion.

Q39            Lord Black of Brentwood: I have one follow-up. You will have heard from some of the previous evidence about the difficulty of gathering data and getting data out of the platforms. It is always promised but never comes.

Have you tried to get data out of the platforms about the scale of anti-Semitic abuse or homophobic and transphobic abuse? Are you able to gather and capture some of that data?

Nancy Kelley: We work quite closely with the platforms. We do not have any better data than is available publicly.

In terms of LGBTQ abuse online, there is a real absence of high-quality studies, but the studies that do exist give us a sense of the scale and the impact. Internationally, in more progressive countries, for instance, we know that around three-quarters of LGBTQ people will have personally experienced online hate directed towards them, and all of us will have witnessed it. Some studies are available there.

I anticipate that we will pick this up later, but anonymity is a very different issue for LGBTQ people than it is perhaps for the broader population, so I will not speak to it in this question.

It is also probably worth saying that the data that does exist—there is, for instance, quite good data from the Fundamental Rights Agency in the EU—shows that the prevalence of online hate against LGBTQ people is rising quite rapidly. They looked at the difference between 2015 and 2020, in a five-year cycle, and saw almost a 10% increase in LGBTQ people reporting instances of online abuse and hate directed against them.

Danny Stone: There is certainly some data out there. The EU, again, has a “blind shopper” model operating, which looks at illegal hate speech. It found that on YouTube in 2019, for example, 80% of the illegal material had been taken down. That gives you a feel for the journey to go.

I have tried for about 10 years to get one of the largest social media companies to give me the PowerPoint used to train its moderators. They keep on promising it and I never get it. Who are they talking to, who is informing these moderator guidelines and these PowerPoints? Imran Ahmed was talking about having a real independent review of what is going on. How do we quality-assess the moderation?

In terms of other data, we have plenty. We released a report with the Woolf Institute and CST earlier this week about Instagram. We know that there is antisemitic supply rather than demand. We know there are 170,000 antisemitic Google searches each year, and 10% of those have violent language associated with them.

Lord Black of Brentwood: Would you back an amendment that gave the regulator power to go into the platforms, audit them and demand data?

Nancy Kelley: We definitely would, subject to concerns about user anonymity, in order to understand the scale, the pattern and the network. A lot of the abuse itself is not from real humans. It is from bot farms and those sorts of things. It would be enormously helpful to have a better understanding of the pattern of abuse globally and the way it transmits and the way abusive stories transmogrify as they travel around the internet. We do think that the regulator should be able to require that data.

Q40            Lord Knight of Weymouth: My first question is on the use of encrypted messaging services like WhatsApp, Telegram and so on. How much are they a problem in the dissemination of hate and harm? Is there a relationship then between people using open platforms to capture audiences and to engage them with relatively benign content and then transfer them over into private groups on those platforms? Is there anything that we can do through a Bill like this to delve into that?

Nancy Kelley: Building on what I just said about there being little systematic evidence about abusers of LGBTQ people online and how they work and how they network, I am not aware of anything that would give us definitive information about the encrypted messaging services that we would be confident talking about. There is a huge need for more research and more data to be available in this area.

Danny Stone: There is definitely a need for more research. Certainly there is a question about the extent to which all these services are entirely private. You will probably have read about WhatsApp and the fact that messages can be reported and then reviewed. I believe that a Private Member’s Bill at one point looked at whether the owners of private groups on Facebook over a certain size ought to have some kind of responsibility. You might wish to review that. But certainly, yes, encrypted messaging should be a concern that is addressed in the Bill.

Q41            Lord Knight of Weymouth: Thank you. Danny, your evidence talked about countervailing duties and about how protecting democratically important and journalistic content was important but potentially a loophole. How could we close that sensibly in the Bill?

Danny Stone: It needs further definition, in short, and it needs responsible boundaries. I write about wrestling and other things. I could call myself a journalist, and all of a sudden, I have additional protections under the Bill for my content. The example we gave was of a candidate opposing another candidate from the Women’s Equality Party and using misogynistic abuse, and then saying that it was democratically important given the platform they were on. Okay, yes, the Equality Act may come to bear but—hold on—is that democratically important or is that abuse? Further definition, trying to explore what that means and putting some of the meat on the bones of that would be helpful.

Nancy Kelley: We would absolutely agree that both those exemptions need further definition. To give an example in support of that, one thing that Stonewall was founded to do was to create LGBTQ-inclusive schools. It is incredibly important that people are able to express a wide range of views about, for instance, inclusive relationships and sex education and, indeed, we welcome and support that. An almost inevitable consequence of that getting talked about publicly and in online platforms is that, very quickly, individuals and organisations become subject to very harmful, traditionally homophobic tropes, ie we get accused of being a danger to children or enablers of paedophilia.

We should be able to have a free and open debate about inclusive relationships and sex education in schools without exposing LGBTQ people and LGBTQ organisations to that sort of straightforwardly abusive content. So it is really important that, when we are thinking about those democratic policy debates, we do not create a loophole so wide that anything can be said in that context.

Q42            Dean Russell MP: As a broad question—I will go into a few details in a moment—is there a bit of chicken and egg with social media? From the descriptions we have had today from previous panellists, it sounds like social media is effectively creating more hate, more racism, more prejudice, more homophobia, all of these things. Is that because society is moving in that direction anyway, or has social media caused that and therefore this Bill will help to reduce it?

Danny Stone: It facilitates the spread. We know that there is an increase generally in hate. The Community Security Trust, which monitors, collects and reports on the incidents of antisemitism, has seen a steady upwards trajectory in antisemitism. There are particular events that spur that antisemitism on.

Another piece of research done by the Institute for Jewish Policy Research found that while maybe 2.4% to 4% of the population might be considered antisemites, up to 30% of the population consider at least one antisemitic statement to be true. So up to 30% of people might believe something antisemitic.

If you then layer that on to social media, look at how that spreads. I could put my antisemitic idea on to social media, it will stay there, other people will see it and then it links. The report that we did on Instagram talks about chaotic trolling: people use various hashtags as gateways into conspiracy and into antisemitism.

So it is a bit of both. We know that things are worse, but social media facilitates the spread of that hatred.

Nancy Kelley: It is probably quite important, as regards LGBTQ people, to think differently about general public attitudes and the direction they are taking, which is overwhelmingly positive. If we think about the general population, attitudes to lesbian and gay people are some of the fastest changing positive social attitudes we have had over the last 50 years. We should not confuse what happens in the online space with too much of a fear that actually public opinion on LGBTQ people is negative and is moving in the wrong direction. We actually do not have any evidence of that; in fact, to the contrary.

That points, though, to this farming of abuse; you have a minority of the community who are enabled through the way these platforms function to engage in this kind of abuse. Platforms reward proliferating abuse. The click-through is king.

Also, to pick up points that Danny and the people in the previous session alluded to, it is important to understand that in this online abuse space, there are close interconnections between the abuse of one group and the abuse of other groups. We should not think of this as necessarily somebody motivated to go online because they are racist. There are many people online engaged in wholesale trolling of multiple marginalised communities. Indeed, having prejudiced views about one marginalised community and expressing them online can lead, through the kinds of processes Danny has been pointing to, to people becoming essentially radicalised into having prejudiced views against a number of our communities.

We should feel confident about where the general public are at and worried about the systematic online abuse.

Q43            Dean Russell MP: I have been involved with digital for many years, although probably nowhere near the experience around the table, but I remember when social media started. It was trying to encourage people to go online. Now in our society it is one of the main methods of communication with friends and so on.

Was the reflection of racism, homophobia, prejudice and all these things on social media 10 years ago different to what it is now? Back then, a few people would be saying things that did reflect what was going on in society, whereas it is now being used as a recruitment driver to encourage more people to be homophobic, racist, prejudiced and hate filled. Is there a risk that if we do not stem this with this Bill now, in five or 10 years’ time we will see this much more in the reality of society and not just online?

Nancy Kelley: Maybe yes, in two ways. We should worry, as Danny has pointed to, about online hate spilling over into real-world hate crimes. I pointed to the fact that the prevalence of online hate towards LGBTQ people is heading up from a high base. We know that reported hate crime in the UK is also heading up rapidly. Much of that is likely to be improved reporting and improved recording, but when we are suffering a wave of serious violent attacks, primarily against gay men, in our cities, and indeed the homophobic murder of a gay man a few weeks ago in Tower Hamlets near where I live, we should not be sanguine at the extreme ends of this—the normalisation of hateful attitudes or hateful attitudes that spill over into real life.

There is also an important connection to be made here with radicalisation generally. A good report from King’s College’s International Centre for the Study of Radicalisation that came out this year pointed, during the first 100 days of the Biden Administration, to the far right using transphobia, which is a prevalent form of abuse against our community, as a recruiting tool for a range of beliefs including entrenched anti-Semitism and complex racism.

Dean Russell MP: On that point, are certain types of content being used at the moment as a gateway drug to hate and hashtags on things that people might generally not think of as prejudiced but that are one of the 30 comments that are then getting people down the Alice in Wonderland rabbit hole and going deeper into more conspiracy theories?

Nancy Kelley: Yes, for a minority of social media users. There is good research in this area because of all the data on radicalisation and far-right studies. For a minority of users, there is a rabbit hole effect. Prejudice against one group—it could be anti-Semitism, it could be homophobia, it could be anti-Black racism, as we were hearing about earlier—brings people into contact with a wide range of deliberately radicalising content against other groups. The previous speaker called it the Four Horsemen of the Apocalypse; that is how it is. These prevalent forms of abuse online then become gateways to other beliefs.

Danny Stone: If it is a recruitment tool, it is working. Five years ago, online incidents made up about 18% of the antisemitic incidents that were reported. They are now up to about 40%.

We did some work with Media Matters for America, a US NGO, which investigated the alternative platform 4chan and looked at the rising nature of antisemitism there. It rose from hundreds of thousands of posts in 2015 to something like 1.7 million in 2017. Intersecting abuse, combining misogyny and antisemitism, increased by about 180%. When it comes to the Bill, it is important that intersectionality is considered for people with multiple, distinct and overlapping identities.

That shows you that it is being used as a gateway. We know from the QAnon movement that people start in gaming and can be drawn from those comments boards into other online spaces. There was a trailer on YouTube for the video about the Antisemitism Policy Trust, which tried to get people off YouTube on to BitChute and from BitChute, presumably, on to other platforms, too. There are those gateways.

Q44            Dean Russell MP: Briefly, building on that, we heard evidence today, especially in the first session, about this interconnection and this cross-pollinating of hatred in different groups, across different channels and so on, and often from a small group of the same people with multiple anonymous identities.

Where do you see the anonymity as part of this Bill? Is there enough to tackle that? Also, do you see it as one of the challenges here that you might have one person with 20 identities spreading hate to millions of people?

Danny Stone: This issue is not addressed properly in the Bill. The White Paper certainly said that anonymity would be addressed, but there is nothing at the moment. People in this House—Siobhan Baillie, Margaret Hodge—have called for action including verification systems so that one can engage only with verified accounts.

We would go further and make it the platforms’ problem. They should be liable when they cannot provide the details of an individual. Where there is a burden of proof, sufficient evidence and a limited revelation of that data, you should be able to find out who is responsible, as you can in financial services with the know-your-customer principle. That said, I know there must be important protections for victims of domestic abuse, whistleblowers, those who would like to come out but are frightened to give their details or those who have parents who may be concerned about that for whatever reason. Anonymity is precious for various reasons. Perhaps middleware or trusted partners who can be middle-people with that data might be a solution.

Nancy Kelley: I want to say a couple of things, but mostly I will talk about anonymity, because I suspect it is one of the areas where Stonewall’s perspective differs, particularly from an international protection perspective.

It is possible to regulate a lot of the behaviours you are describing without knowing the individual identity of the account. Accounts that are behaving in a particular way can be identified without knowing who the account owner is. It is important to separate whether we can regulate and whether we can identify and remove accounts that behave in abusive ways, whether through networking or through the use of bots from weakening anonymity. The answer is that we can. Those platforms can identify those accounts without knowing who owns them and they can delete them. They do not, but they can.

In terms of anonymity, while acknowledging and understanding the reasons why many groups will have a deep desire to have some degree of lifting of that veil, I would like to emphasise to the committee our deep concerns, particularly for LGBTQ people around the world, about personally identifiable information. I know people have suggested things like names being visible. Even in progressive countries that are accepting, we know that will expose LGBTQ people to harm. Our community in the UK is already harmed by outing and doxing. Making it easier to identify an LGBTQ end user, even in liberal, accepting environments, increases danger to our community.

If we look at that in the global context, we know from research that Article 19 has done that almost 90% of LGBTQ users in Egypt, Lebanon and Iran said they are incredibly frightened of mentioning even their name in any kind of private messaging online. We know that over 50% of the men charged in Egypt in recent years with homosexual “offences”—because it is indeed illegal to be gay there, as it still is in 71 countries around the world—were the subject of online stings. It is incredibly important to understand the potential impact of these global companies introducing identity verification that is visible in any way to end users.

Middleware options and know-your-customer types of approaches do not expose personal data to end users, but we think they would have a considerable chilling effect on people’s participation, particularly in regressive countries. In those countries there will be concern about data security and about who is working in those platforms. In countries where the Government have the right to request personal data from companies about LGBTQ people—there are not many, but there are some—LGBTQ people will be extremely reluctant to participate online. There is a risk of a chilling effect. I would entreat the committee to be thoughtful about those impacts. I recognise that I am not coming with a solution but just presenting you with a difficult problem. We would be happy to work those through, including with our international partners.

Q45            Darren Jones MP: On this issue of capacity for enforcement, we are looking primarily at rules but, as we said right at the beginning, there are already rules in place that are not being enforced because, as you said, Danny, the police are stretched.

Do you have a view based on the amount of content or victims you would deal with about what capacity you would want to see the Government fund Ofcom to have? Is it a big regulator? Has anyone taken a view on what headcount there might be or what specialisms it might have, or do you think it should just define the rules and then let the Government and Ofcom figure that out? The fear, I suppose, is that we get some good rules but then do not have the capacity to enforce them because we are stretched with so much stuff to deal with.

Danny Stone: That is a good question. First, as I understand it, under the current rules Ofcom can still co-designate, and it should. The BBFC is a reputable organisation. I like it. It understands antisemitism. Similarly, the extremism commission has people who can do some of the heavy lifting. I do not have specific numbers on how many people it would take but, based on the way Ofcom has approached regulating television and radio in respect of antisemitism, I have confidence that it would be able to do the same here.

Again, if we look at systems issues rather than content, it will go back to those systems. If a general duty is introduced, it will have to go back to the systems. Super-complaints could be brought by organisations, hopefully, like ours or the CST or Stonewall, and we would say, “Hold on. There is a systems issue here. Deal with that”, and give the regulator that overview on a wider scale. Then you would see the impact downstream, I suppose.

Nancy Kelley: I am not a specialist, so I will confine myself to a fairly narrow comment. It is much more about capability than it is about headcount. It strikes me, even from the conversations we have at Stonewall with tech providers, that you are quickly in quite complicated conversations about machine learning, NLP approaches and these sorts of things, which I can just about hold on to because I have done a little bit of data science, but most people cannot. Indeed, I can hold on to it only as far as what I have just said.

It is key that Ofcom has the technical understanding to effectively regulate an industry that innovates at an extraordinary pace and that is highly technical. It will be faced by providers who will be saying, “This is simply not possible”, with lots of complicated documents proving how it is not possible. Being able to interrogate that answer and being able to regulate effectively requires Ofcom to understand at quite a deep technical level what is possible in those platforms. I would be more concerned about capability than capacity, if that makes sense.

Q46            John Nicolson MP: This is a question for Nancy. How seriously do the platforms take it when you approach them with examples of hate?

Nancy Kelley: We work closely with a number of the platforms that would indeed be regulated by this Bill, and we have extremely productive and positive relationships with them.

John Nicolson MP: Like whom?

Nancy Kelley: I will name just one. We work closely with TikTok, for instance. When we speak to them at a generalised level, the reception is extremely warm. The same will be true of platforms like Twitter and Facebook. Previous witnesses have said the same thing to you. When you talk about discriminatory abuse to online platforms, they respond warmly, positively, affirmatively and openly at a generalised level.

John Nicolson MP: They talk the talk, or they Tik the Tok?

Nancy Kelley: Yes, they Tik the Tok. In my view, it is genuinely meant, so it is not disingenuous. There is a genuine concern about the hate that all these platforms enable. We do not see that feeding down into that self-regulatory space. We do not see huge strides forward in the quality of content moderation, which is needed. There is a disjunction between that warm response and what is done at the day-to-day level and experienced, particularly, by day-to-day users. We can talk about users with high profiles because they are in the public eye, but enormous numbers of ordinary LGBTQ people get abused online and are trying to report that abuse. We do not see those users being able to access effective protection and redress.

John Nicolson MP: As a gay man, I was called a “greasy bender” on Twitter. I would argue that that is unpleasant. I think most people would accept that that crosses a line. Twitter wrote back and said it does not breach any of their community standards. I copied and pasted their community standards and what they say about abusive hate targeted at LGBT people, and they wrote back saying, “Thank you for your communication. It does not breach our community standards”. I provided the voice, because that is what I assume it would have been.

Nancy Kelley: I love that you have a content moderator voice in your head, John. That was great.

John Nicolson MP: This went backwards and forwards three or four times. I even tweeted the absurdity of it and copied them in. Absolutely nothing happened. They know that I am a Member of Parliament and they know that I sit on this committee, although this predated that. If they are prepared to behave with such utter disdain towards somebody who has a wee bit of influence, can you imagine the disdain they treat the average LGBT person with?

Nancy Kelley: We do not have to imagine it. We can see it. You will often see on online platforms attempts by wider community members to protect somebody, so not only will that person have reported the abuse but other people will have reported it, and they will all get the same message back.

John Nicolson MP: Danny, you wanted to come in?

Danny Stone: Yes, on both points, actually. We have a case with a Member of the House of Lords who, every time they tweet on Jewish-related issues, gets a fake picture tweeted back at them relating to their supposedly illegal sexual conduct. We have taken it up with Twitter and every time they have said, “It has not broken our terms”, and that is it. It is completely within their decision-making process. They would suggest that a public figure is different and so somehow it is acceptable that this fake abuse is tweeted. I disagree.

In terms of the companies, similarly, we have good, constructive discussions with TikTok, Facebook and Twitter. I have had less constructive or certainly difficult conversations with Microsoft. The Bill at present exempts search from category 1. The search companies are having a laugh at the Bill’s expense if they are not included in category 1. Google was directing people to the search “Are Jews evil?” Microsoft Bing was directing people to “Jews are”, and then a rude word about Jews. Currently, if you were to search the word “goyim”—originally a Yiddish word for non-Jews, which is being used in a pejorative sense now—on Microsoft Bing, you will get an antisemitic website and a suspended Twitter account as the top search results. Alexa and Siri are completely outside the bounds of responsibility of this Bill. It would be a travesty if they are left out of category 1. They should absolutely be in there.

John Nicolson MP: Yes, I agree. They do not take it seriously. They are waiting until somebody forces them, because they will not self-regulate.

You mentioned, Nancy, the big rise in anti-LGBT abuse. What percentage of that is transphobic abuse? I have noticed, as a non-trans person, this explosion in transphobia. On a personal basis, I got a lot of transphobic abuse directed at me after simply posting a link to a wee film about a constituent who is trans that she had made with the BBC. After that, there was this tsunami of vileness.

Nancy Kelley: There is not good enough data to give you a number. I wish that the research was there, so I could give you a number.

Undoubtedly, the prevalence of online transphobia, directed particularly but not exclusively towards trans women or featuring trans women, is extremely high and is rising extremely quickly. It is directed not only at trans people but at anybody who supports them. I have to lock everything down, otherwise my social media accounts are constantly a complete mess for the obvious reason of the job that I do.

If a company or a public figure makes even the most anodyne, positive comment about trans people—we saw Nigella experience this recently, and the Women’s Institute: mainstream figures and organisations saying straightforward things, such as “This is a nice woman who is part of the Women’s Institute, who happens to be a trans woman”—you will see days and days of transphobic dogpiling on that account, whoever it is.

If it were not for the fact that it is such a distressing experience, I would invite you all to experiment with it. I would not advise you to do it, because it is horrible to experience that kind of a dogpile, but it is automatic and instantaneous and we see it happening all around the world. It then becomes, as we have said, a gateway into other things. If you follow some of those threads of transphobic abuse, you are very quickly into people also saying homophobic things. You are very quickly into people also saying racist or anti-Semitic things. You do not have to follow the thread very far for it to become a complete mess.

Q47            John Nicolson MP: You have been targeted very specifically as an organisation by a very sinister new group called the LGB Alliance, which appears to exist just to stir up transphobic hatred. It claims to be a charity. Bizarrely, it has been given charitable status, although it appears to do no charitable work of any kind. It just raises money, puts out big adverts, and targets gay people with transphobic abuse. Most of it is conducted online.

To what extent do you think it is succeeding, because it targets you and I notice that you are getting pushed out of all sorts of organisations at the moment, including Ofcom, as a result of LGB Alliance pressure? The chief executive of Ofcom switched. At one point she gave evidence to the other Select Committee, saying that she thought that the LGB Alliance should not be allowed a voice to express hostile anti-trans views. Then she flipped and said that she did think they should be allowed a voice after a meeting that she had with them. What role is the LGB Alliance playing?

Nancy Kelley: It will not surprise you, John, but I am going to give you a bit of a politician’s answer, if that is not a rude way of framing it.

I am not the chief executive of the Charity Commission, so I am not going to make any kind of comment on who and who is not covered by the regulator. What I think is helpful for the purposes of this discussion is to think about the way the global anti-gender movement, which is a movement that attacks not only LGBT people but women’s reproductive rights globally, is associated with a process called NGO-isation and is strongly associated with the creation of online groups. That is something that you can see well beyond the bounds of the UK.

Thinking about the way in which organisations that exist primarily online are capable of becoming nexuses for targeting individuals and organisations that are primarily digital is an important part of this picture. Of course, it is also important to protect people’s rights to free association, free speech, those sorts of things, which is why you are going to get a very careful answer from me.

To those of you concerned about Stonewall, we are doing fantastically well. The diversity champions programme is growing. So please do not be too worried about us.

John Nicolson MP: A politician’s answer indeed.

Q48            Baroness Kidron: Danny, I want to go back to something you said earlier. You said, “In the full government response there was a duty to care—"

Danny Stone: A general duty.

Baroness Kidron: —a general duty to care about reasonably foreseeable harms. We have already touched on whether the safety objectives are good/excellent/need some work, but what is your analysis about what is missed by having specific duties? If you were to go back to the full government response and the “general duty to care”, do you really need the other ones that sit underneath it currently?

Danny Stone: Arguably, potentially, not. My view is that you end up more in a content space than a systems space if you start going down that line, and that is a space that I think is convenient for some people to be in, because you get into discussion of free speech when I think one does not need to.

It potentially ties Ofcom into the way in which it can respond and makes the Bill more complicated. You could make the Bill easier to read, give Ofcom greater freedom, and essentially have the Bill make better sense.

Baroness Kidron: Can you extrapolate? This is a most important point, because if we need the general duty, why does having the specific duties push us into the content space? I might add a little question there, because you will notice that the Bill in the same journey became a harmful content Bill, not a harmful content and activity Bill, so there are two moves in that direction. I am interested to hear what you say about that.

Danny Stone: You begin to have to attempt to define what harmful content is. Ofcom does already do that to an extent when it makes determinations—I cannot remember the exact wording. Certainly the Communications Act provides the authority to Ofcom to determine what societal standards are, but you get into defining what is legal but harmful and what is illegal versus what is legal, so you strip all of that out.

Take Wiley, the grime rapper on Twitter. He goes on an antisemitic rant over 48 hours or so with a tweet every 87 seconds. It is seen, conservatively, by 46 million people, something like that. The issue there is not the content. We can discuss whether it was legal but harmful, or illegal, or one of the tweets may have been illegal. Hold on. The company has a duty to address reasonably foreseeable harms. It is a reasonably foreseeable harm that an account that has a lot of followers and is repeatedly violating community standards might need some kind of graded response.

Given what we know about social media today and the codes that will be out there, I think that is a reasonably foreseeable harm. There will be other things to look to, but this turns into a systems question. That is not about whether the content was legal but harmful and whether the terms and conditions dealt with it. It is not about whether one piece was illegal and whether they dealt with that one piece of content. It is about whether they had a system, more broadly, to address accounts with large numbers of followers who break their terms or go on antisemitic rants. Does that make sense?

Baroness Kidron: It does make sense. It connects to something else that you said about minimum standards. If you are saying that we need a systems approach and a duty of care, and we have these objectives—we know what we are working towards—can you rely on platforms’ community rules and terms, or do you then have to have minimum standards? Because, on the map that we are drawing, I could say that anything goes and that it is fine in my terms to be antisemitic on my site.

Danny Stone: You can compare YouTube and BitChute. BitChute has very minimal terms. I do think there should be minimum standards. The Carnegie Trust, which is fabulous, has produced a code of practice on hate crime, which we helped with. There are ways in which the regulator could publish codes of practice that could inform those terms of service or help with those minimum standards. That is not a very difficult task. We would help with it, and I would rather do it once or twice with Ofcom than have to go to the platforms every time there is a new one and help them with their terms and conditions. It would make the draw on our organisation a lot less, and I am sure the same would be true for Stonewall and others.

Baroness Kidron: Do you believe that it would be chilling to have minimum standards and a duty of care? Does that chill free speech?

Danny Stone: We have just contributed to a report that HOPE not hate produced on legal but harmful content and freedom of expression. Imran was saying earlier that at the moment there is no freedom of expression for some groups. Groups are marginalised and taken out of this conversation. In other areas, we do regulate legal but harmful content. The BBFC, which I mentioned before, refused to classify a film called “Hate Crime”, which was about a gang beating up a Jewish family for 90 minutes or so. That content can still be found, but it cannot legally be bought. Its reach was limited.

Similarly, this material will still exist online. There will be websites where you can find it, and people will still be able to be horrible, but we are saying, no, there is a minimum standard that we expect within the law, one that Parliament sets, not Mark Zuckerberg waking up one day and deciding that Holocaust denial is not all right on his platform.

Q49            Baroness Kidron: Finally, are you trying to guide the committee towards the mechanisms of spread rather than the mechanisms of takedown?

Danny Stone: Yes. I think both are important, but I would focus on the systems, because the systems have the widest application and effect in respect of the content downstream.

Q50            Suzanne Webb MP: A quick question. Do you think the online platforms understand the scale of the work they will need to do once this Bill comes in, and what they will need to do thereafter? They will have to get themselves prepared and then carry on doing the work.

Nancy Kelley: They know the scale of the problem, and they probably know it better than anybody sitting in this room. Whether they know the scale of the work required for them to address the problem, I am not sure.

Danny Stone: I think the truth is that they are having a bit of an internal debate. Some within the companies probably understand and know what is coming. There are so many parts. I have not even talked about supply chains. For example, could the Bill explicitly reference supply chains, whether it be the provision of GIFs or other companies that work to supply the different products that make up a Facebook or a Twitter? Should that be referenced? Should we have reference to the Bribery Act, or the UN principles, or OECD principles, and what have you? I think the platforms, in some cases, are going to get a bit of a shock, and I think that is brilliant. I do think—and I hope I have not given any other impression—that it is a great piece of legislation in terms of trying to set a new standard for the world. That is to be welcomed.

I do not know if this is our final opportunity to say something, but if it is, I want to say that it has just been the Jewish New Year, Rosh Hashanah, and the traditional greeting is “Shanah tovah um’tukah”, which means “May it be a good and sweet year”. I hope it is a good and sweet year for your deliberations. I have brought you all some honey cake, a traditional Rosh Hashanah food, individually wrapped, so you can enjoy that at the end.

Q51            The Chair: We greatly appreciate that. If I could put one question to Nancy Kelley, I want to go back to the quite important issue of anonymity. I take what you say about not breaching anonymity for end users, but the question is about what sort of data a platform might hold. We have to think about the legislation in terms of the UK-only context.

Do you think it is possible to have a system whereby a court order could be obtained to ask for information about an account that a law enforcement agency believes has been used to commit offences, so that it can establish who operates the account and charge them with an offence? Do you think it is possible to create a system where that could happen while still protecting the anonymity of the account involved?

Nancy Kelley: It is clearly possible, but it seems that it is probably quite complicated, so I think the devil is in the detail and in how it applies here. Our concern is that these are not one-nation companies but global companies. When we impose general duties of care, which we entirely support, one of the easy wins for them is a form of identity verification that they would simply apply across a platform, which then affects people well beyond users in this country. But, yes, it is clearly feasible to construct something that mitigates the risk in this domestic context.

The Chair: Thank you very much. That concludes our evidence session.