Written evidence submitted by the Center for Countering Digital Hate (OSB0009)

 

A handful of companies run by a small elite dominate the Internet economy. They own the platforms and technology on which 4.5 billion people share information, form and maintain relationships and transact business. The communities that form on these platforms, and the behaviours, beliefs and values that emerge from them, increasingly touch every aspect of offline society too.

 

These companies are not, despite their vital role in public discourse, in the business of “free speech”. They are motivated by money and make that money by:

  1. Selling users’ personal data, and insights derived from that data;
  2. Selling advertising space alongside content that users produce, which they purport to hold to stated standards;
  3. Providing infrastructure - web hosting, monetisation, and customer relationship management - to other organisations seeking to access digital audiences.

 

To provide the rich datasets and eyeballs they need to sell to advertisers, they have built a highly effective business model: most users are allowed to use their services for free. The technology is cheap and useful to the user, with zero marginal cost for each additional view of each additional piece of content. There is the added thrill that if content goes “viral”, a potentially unlimited number of people might see it, propagating at lightspeed along intersecting networks of influence that span the globe and connect us all.

 

However, while the cost to users may be zero, our societies have paid the price for the way these companies create wealth for themselves, avoid responsibility for how their services are used and keep their costs low. On these services, bad actors are able to intermingle seamlessly with the public. While most individual users generate no immediate harm, a small but highly impactful minority are very good at spreading hate and misinformation at a rate far exceeding anything society has known before. For example, researchers found misinformation on Facebook got six times more clicks than factual news during the 2020 US election.[1]

 

The reason is that the companies decide which content wins and loses in the battle for attention on their platforms, allowing them to shape awareness and knowledge for billions of people. Their algorithms are designed to drive engagement, eyeballs and therefore ad revenue. Engagement is maximised by (1) strong emotion, (2) rabbit holes that lead into a warren of conspiracy, (3) misinformation that draws engagement from detractors and supporters alike, and (4) ouroborotic algorithmic reinforcement of prior beliefs.

 

Culpability for harm caused on their platforms becomes clearer still when we realise:

  - There is evidence to suggest that social media platforms prefer and therefore advantage bad actors.

  - Without credible, comprehensive enforcement of rules on hateful speech, bad actors can bully those they oppose - whether it be black people, Jews, Muslims, women, LGBT people, journalists or scientists - into withdrawing from digital discourse.

  - If highly engaging “clickbait” misinformation that attracts both support and opprobrium is given an algorithmic boost, then journalism is disadvantaged against cheaply-produced misinformation.

 

It is now broadly accepted that the frequency with which this content is served to people can normalise hate, misinformation, conspiracy, extremism, and trolling, at the expense of Enlightenment values of tolerance, reason, and fact. This is what we at CCDH have called the Counter-Enlightenment: an array of bad actors who operate with impunity to create harm, converging and hybridising audiences and ideologies so fast that the organs of state whose job it is to analyse, manage and contain harms are unable to keep pace.

 

Big Tech is not the first industry in history to refuse, at first, to accept responsibility or accountability for the costs its companies collectively impose on others or on society. Big Oil and Big Tobacco, for example, also sought to deny, deflect and delay the moment when they had to take responsibility for climate change and lung disease respectively.

 

Many industries have also refused to take responsibility for the way their products are used. Indeed, the unscientific claim Big Tech repeatedly makes that “the best way to defeat bad information is with good information”, rather than active (and admittedly costly) intervention, echoes the US gun lobby’s claim that the best way to defeat a bad guy with a gun is a good guy with a gun.

 

***

 

CCDH was set up to disrupt the work of malignant actors online and to accelerate the moment when Big Tech has to take responsibility for the fact that, at the same time as it connects families, loved ones, friends and new communities of interest, it has also powered modern extremism, science denial, hatred, bullying, terrorism and child sexual exploitation.

 

The Covid pandemic, in particular, laid bare the real-world harms caused by misinformation, and the systematic failure of social media platforms to protect their users from those harms. Multiple studies, including our own polling, have shown that those who are most reliant on social media for information are more vaccine hesitant.[2]

 

In the last year we have repeatedly demonstrated that platforms’ systems for preventing the spread of harmful misinformation are not fit for purpose. The same is true of hatred, which platforms systematically fail to recognise, let alone act upon, despite the harm it causes to individual users and to wider society.

 

The pandemic also showed the limitations of governments’ existing powers. HMG’s sputtering attempts to encourage platform self-regulation, and even President Biden’s charge that Facebook is “killing people” by hosting the “Disinformation Dozen” - the people CCDH identified as being behind nearly two-thirds of anti-vaccine content - were met with a well-worn playbook of denial, deflection, delay and dollars shovelled towards potential critics in an attempt to discourage criticism.

 

***

 

It is therefore right that the government’s Online Safety Bill aims to address these harms at a systematic level, by placing duties on platforms to keep their users safe. However, we are concerned that the Bill will fail to address some of the most pressing online harms in society today unless it makes hate and misinformation a clear priority, backed by independent audits of platform systems and criminal liability for executives for the worst systematic failures.

 

The Bill must address inadequate reporting systems for hate and misinformation

 

Platforms often claim that they remove harmful content when it is reported to them by users, but this is simply untrue. Our regular audits of platform action on reports of harmful hate and misinformation show that they fail to act on around 8 in 10 posts that violate their standards.

 

Nowhere has this been more harmful than in their failure to act on reports of clear Covid and vaccine misinformation, as evidenced by three audits of platform action on reports of this misinformation from ordinary users, performed by CCDH in the last year.[3]

 

Our second audit of 912 user reports on Facebook, Instagram, Twitter and YouTube showed that platforms failed to act on 95 percent of reports of Covid and vaccine misinformation.[4] Platforms failed to improve significantly even as social media’s role in damaging vaccine confidence became clear. Our most recent audit, conducted in March in collaboration with the Canadian Broadcasting Corporation, showed that the same platforms were still failing to act on 87.5 percent of user reports.[5]

 

These failures are primarily a result of platforms’ moderation systems rather than any weakness in their standards. Platforms’ performance in acting on user reports has improved only marginally, despite their adopting stronger standards on vaccine misinformation in particular.

 

Platforms are no better at addressing user reports of harmful racist hatred. Our recent “Failure to Protect” report showed that Facebook, Instagram, TikTok, Twitter and YouTube fail to act on 84 percent of user reports of clear antisemitic content.[6] Facebook performed worst of all, failing to act on 89.1 percent of reports, despite adopting stronger policies on antisemitic conspiracies and Holocaust denial in the last year.[7]

 

These systemic failings on hateful content became particularly clear after a surge in racist abuse directed at the England footballers Marcus Rashford, Bukayo Saka and Jadon Sancho following the Euro 2020 final. Our analysis of abuse directed at players on Instagram in the aftermath of the match showed that 94 percent of accounts reported for racially abusing players had not been removed after 48 hours.[8]

 

This is not just a case of platforms being slow to act, but of failing to act at all. Our second audit, conducted six weeks later, showed that 75 percent of the accounts that had racially abused England players had still not been removed, including accounts that had posted other racist and white supremacist content to their own profiles.

 

Platforms fail to remove material even when it has clear links to extremism. CCDH identified one YouTube channel promoting violently misogynist “incel” ideology that had over four million views and counted the Plymouth murderer Jake Davison amongst its subscribers. YouTube failed to remove the channel after it was flagged by journalists at The Times.[9]

 

They have also failed to act against the most prolific superspreaders of vaccine misinformation. Our report, The Disinformation Dozen, showed that just twelve leading anti-vaxxers are responsible for up to 65 percent of anti-vaccine content. To date, these twelve anti-vaxxers still have 50 accounts across Facebook, Instagram, Twitter and YouTube that reach 7.9 million followers.[10]

 

The Bill must address platforms’ systematic failure to deal with trolling

 

Platforms have also failed to adequately address the problem of trolling, a tactic that is still used by spreaders of hate and misinformation to project their messages into public discourse while pushing their opponents out of online spaces. Our research has demonstrated that anti-vaxxers are the latest group to employ these tactics to silence doctors, scientists and journalists.[11]

 

In the absence of action from platforms to protect their users from trolling and the serious psychological harm it causes, we published the report Don’t Feed the Trolls, which aimed to explain how and why trolling takes place, and to set out practical steps to take if you are targeted.[12]

Victims of trolling should not be expected to deal with this problem themselves. Disappointingly, platforms have increasingly passed the burden of dealing with trolling on to individual users through features to filter abuse and restrict who can comment on posts.

 

Platforms can and should play a much larger role in addressing this problem. Drawing on the experience of victims of trolling, we recommended that platforms change the design of their services to put user safety first, for example by moderating ‘trending’ subjects to ensure that they do not amplify trolling, and by introducing systems to increase moderation around incidents of organised abuse.[13] Unfortunately, platforms have still not implemented such features.

 

It is welcome that Part 2, Chapter 6 of the draft Bill defines “content harmful to adults” with reference to harmful content that platforms know to be targeted at a “particular adult”. However, the Bill would be improved if it made clear on its face that these clauses are intended to address trolling and abuse, and if it ensured that platform performance on trolling was subject to regular independent audits.

 

The Bill must address the role of algorithms in amplifying harmful hate and misinformation

 

We welcome the introduction of duties in the Bill requiring platforms to assess the risks posed by the algorithms they use to promote content to users, and to ensure that they are safe. However, at present it is not clear that the Bill would allow OFCOM to regulate the harmful hate and misinformation spread by social media algorithms. This needs urgent clarification and is important enough to warrant inclusion on the face of the Bill, so as to avoid any room for evasion by social media companies in future.

 

Last August, as countries around the world entered the second wave of the Covid pandemic, Instagram decided to start placing algorithmically recommended posts at the bottom of users’ feeds.[14] This vast extension of its recommendations algorithm, previously limited to the app’s “Explore” page, was intended to boost ad revenue by increasing the time users spend on Instagram.

 

However, Instagram had not taken even the most basic steps to prevent this feature from promoting harmful hate and misinformation. For our report Malgorithm, we established a series of Instagram profiles following the accounts of wellness influencers. Those profiles received recommendations for disinformation from leading online anti-vaxxers who had already been repeatedly flagged to Instagram in advance of the roll-out of this new feature.[15]

 

Instagram’s algorithm was also shown to be driving a dangerous “convergence” of different types of harmful hate and misinformation. Followers of anti-vaxx accounts were recommended antisemitic content, QAnon conspiracies and election misinformation. In return, followers of far-right accounts were recommended Covid and vaccine misinformation.[16]

 

Instagram’s algorithm still recommends posts containing Covid and vaccine misinformation, but this has not stopped the platform from extending this feature even further by placing algorithmically recommended posts between other posts in users’ news feeds.[17]

 

This problem is not unique to Instagram’s algorithms. In June 2020, we reported that when users “liked” an anti-vaccine Facebook Page, the platform would algorithmically recommend that they follow other pages belonging to leading anti-vaxxers.[18] This problem persisted for at least another year, in which time Facebook may have recommended users follow well-known anti-vaccine pages millions of times.[19]

 

The fact that platforms were algorithmically amplifying anti-vaxxers and the disinformation they spread helps explain their rapid and unchecked growth during the pandemic. Tracking of 147 accounts for which CCDH has been able to obtain historical figures shows that they had grown by 10.1 million followers since the end of 2019. Anti-vaccine accounts on Instagram grew fastest, adding 4.3 million followers.[20]

 

The Bill must not ignore the harms caused by online advertising

 

It is concerning that the Bill at present excludes harms caused by online advertising and by the online promotion of unsafe products. During the pandemic, platforms have allowed paid advertisements for harmful Covid and vaccine misinformation. Facebook routinely accepted money to advertise anti-vaccine messages to its users until announcing it would end the practice in October 2020. Even then, our research showed that Facebook continued to broadcast anti-vaccine adverts worth at least $10,000 to its users.[21]

 

As well as promoting harmful ads, platforms allow ads to target users who are most susceptible to harmful misinformation. As late as June last year, Facebook was allowing advertisers to target users who “like” anti-vaccine pages or have interests related to vaccine misinformation.[22] In the same period, Twitter allowed advertisers to target followers of leading anti-vaxxers and keywords such as “antivaxx”.[23]

 

Excluding advertising systems from the Bill will also prevent OFCOM from examining the role that ads play in funding websites promoting harmful hate and misinformation. The CCDH campaign Stop Funding Misinformation has repeatedly shown that Google’s “brand safety” systems for adverts are not fit for purpose. Google ads for popular brands regularly appear on websites that promote clear racist hatred and harmful misinformation about Covid and vaccines, funding those websites with Google ad revenues in the process.[24] If the government’s intention is to leave the regulation of online advertising to the Advertising Standards Authority (ASA), then it should allow OFCOM to “co-designate” powers to the ASA so that it can more effectively investigate harms caused by online advertising.

 

The government’s decision to exclude harms caused by “the safety or quality of goods” promoted by social media content will prevent the Bill from addressing some of the most harmful content related to Covid and vaccines. Our report, Pandemic Profiteers, shows that there is a powerful industry of dangerous false cures underpinning the spread of vaccine disinformation online.[25] In some cases, anti-vaxxers have breached platform standards by failing to declare that they stand to profit from the disinformation they spread online.[26]

 

Leading anti-vaxxers have risked the lives of millions of their followers by suggesting that inhaling hydrogen peroxide can treat Covid, or that the drug ivermectin can function as an effective alternative to vaccination. At present, OFCOM would be powerless to examine the harms caused by content promoting these dangerous false cures.

 

Recommendations

 

Address hate and misinformation explicitly

 

At present there is no democratic oversight of the government’s interventions on harmful hate and misinformation. Much of this oversight is carried out in private meetings and correspondence between civil servants and social media platforms, making the process of managing harms such as vaccine misinformation opaque and unaccountable.

 

This Bill is our opportunity to introduce effective, democratically accountable regulation that addresses the pressing harms posed by online hate and misinformation. But in order to do that, the Bill must explicitly address these problems as part of its regulation of “content harmful to adults”. Ideally the Bill would be amended to include explicit references to the harms caused by hate and misinformation. In any case, the Secretary of State for Digital, Culture, Media and Sport should publish the list of “priority content” that OFCOM will be instructed to regulate, in addition to the categories already outlined in the Bill, to allow politicians, the public and civil society organisations to understand how regulation might work in practice.

 

OFCOM’s expected oversight of “content that is harmful to adults” must then be strengthened to ensure that regulation is effective. At present Clause 11 of the Bill demands only that the largest platforms show they have “dealt with” priority content that is harmful to adults. This is much weaker than the duties that apply to other harmful content in the Bill, which demand that platforms not only “deal with” harmful content but take steps to “mitigate and effectively manage” the harms they cause. We know from experience that platforms frequently claim to have ‘dealt with’ a problem when we can produce evidence that they have not - regulation needs to change this complacent mindset by holding the platforms to account for what they claim to have done. Given our experience, it would be foolish to trust without verification.

 

The government should also review its decision to regulate “content harmful to adults” only on the largest “Category 1” platforms. While the largest platforms have the greatest potential for harm, our research has identified significant harms caused by smaller platforms. For example, extremist anti-vaxxers have used the Telegram app to coordinate attempts to storm buildings and clash with police.[27]

 

With these issues addressed, OFCOM will be in a position to transform the government’s current process for intervening on harmful hate and misinformation into one that is transparent and open to democratic accountability.

 

Commit OFCOM to performing independent audits of platform safety

 

We welcome the Bill’s provision for OFCOM to commission independent “reports by skilled persons” to identify a platform’s failures to fulfil its duties. However, if OFCOM is to effectively regulate social media companies, this must be strengthened to become a firm commitment to regular independent audits of platform safety.

 

It has been reported that Facebook in particular is seeking to manipulate or restrict the publicly available information about harmful content circulating on its platforms.[28] Notably, none of its recent “transparency reports” have answered the simple and pressing question of how many users have seen vaccine misinformation on Facebook.

 

In order to effectively regulate platforms, OFCOM must not allow them to mark their own homework. Instead, it should engage independent researchers and organisations to conduct audits of their performance on managing harmful content. Our own research has shown that audits of platforms’ performance in acting on reports of harmful content, and of the harmful content recommended by their algorithms, can give a truer, more transparent picture of the systemic failures of platforms to keep their users safe.

 

Furthermore, it is concerning that Clause 49 of the Bill currently limits the range of transparency information that OFCOM may demand from platforms. Given the fast-changing nature of online harms, OFCOM must have the power to demand new categories of transparency information from platforms as the need arises, without having to go back to Parliament or the Secretary of State.

 

Move forwards with criminal liability for tech executives immediately

 

While the Bill contains criminal offences for tech executives who fail to comply with, or provide false information in response to, OFCOM’s requests for information, we understand that this section of the Bill will not be brought forward until at least two years into the new regulatory regime, at which point OFCOM will report back to the Secretary of State on its necessity.

 

This would be a grave mistake. Tech executives have repeatedly shown contempt for elected officials and regulators. Summits on online football racism over the past few years have yielded no substantial improvement in Big Tech’s independently-assessed enforcement record. Even six weeks after we reported 105 accounts for racially abusing England footballers, Instagram had still failed to act on 75 percent of them.[29] In that time, one of the accounts we identified went on to racially abuse the F1 driver Lewis Hamilton too.[30]

 

Most recently, when Jan Schakowsky, Chair of the US House Energy and Commerce Committee’s Consumer Protection and Commerce Subcommittee, requested information on how the platform is handling Covid vaccine misinformation, Facebook responded with a single-paragraph reply stating that “we have nothing to share… outside of what Mark [Zuckerberg] has said publicly.”[31]

 

Big Tech resorted to its playbook of “deny, deflect, delay” even when President Biden questioned them on the “Disinformation Dozen” who we showed are responsible for up to 65 percent of deadly vaccine disinformation.[32] Facebook’s immediate response was to publish a blog accusing the President of “finger pointing”.[33]

 

The reality is that the Disinformation Dozen are still reaching 7.9 million followers on mainstream social media platforms, including 6.3 million followers on Facebook and Instagram. While we are proud that our campaigning has disrupted the activities of vaccine disinformation superspreaders, causing them to lose access to 6.3 million followers, the first instinct of platforms has been to deny that there is a problem.

 

Figure 1: Enforcement progress against Disinformation Dozen accounts on Facebook, Instagram, Twitter and YouTube

| Name | Accounts | Accounts removed | Current followers | Follower drop |
| --- | --- | --- | --- | --- |
| Joseph Mercola | 9 | 0 | 3,946,808 | 0 |
| Ty & Charlene Bollinger | 15 | 8 | 1,420,559 | -359,221 |
| Robert F. Kennedy Jr. | 7 | 1 | 1,145,060 | -796,731 |
| Christiane Northrup | 4 | 1 | 740,808 | -174,600 |
| Kelly Brogan | 4 | 1 | 184,809 | -130,382 |
| Sherri Tenpenny | 14 | 6 | 101,019 | -620,931 |
| Rizza Islam | 8 | 6 | 95,845 | -975,153 |
| Erin Elizabeth | 13 | 11 | 87,498 | -1,148,901 |
| Rashid Buttar | 5 | 3 | 86,967 | -1,176,999 |
| Sayer Ji | 8 | 6 | 63,483 | -690,584 |
| Kevin Jenkins | 5 | 1 | 14,692 | 0 |
| Ben Tapper | 5 | 3 | 9,099 | -199,271 |
| Grand Total | 97 | 47 | 7,896,647 | -6,272,774 |

 

The behaviour of tech companies and their executives has shown that criminal liability for intentionally obstructing or misleading OFCOM is needed in order to secure their cooperation. YouGov polling commissioned by CCDH shows that this policy would have broad public support, with 66 percent of UK respondents agreeing that social media bosses “should face criminal charges” if their platforms are found to leave up material designed to spread misinformation on vaccines.[34]

 

Allow OFCOM to examine harms caused by advertising and unsafe products

 

Given the nature of online harms, particularly the pressing harms caused by Covid disinformation, it would also be beneficial to amend the Bill so that OFCOM is able to regulate the harms caused by advertising and the promotion of unsafe products. Clause 46(8)(b)(ii) appears to exclude regulation concerning product safety. This would, for instance, make it harder to tackle disinformation about ivermectin, the horse dewormer currently being touted as a false Covid cure.

 

Make the Bill more easily accessible to the public

 

The public are deeply concerned by online hate and misinformation, and strongly support tough penalties for platforms that systematically fail to address these problems.[35] However, we agree with other experts such as the Carnegie Trust that the Bill is currently extremely complex and difficult for the public to understand.

 

The public must be able to see that the Bill will address pressing problems such as digital racist hatred, trolling and vaccine disinformation. This could be accomplished by addressing these points on the face of the Bill.

 

 

September 2021


[1] “Misinformation on Facebook got six times more clicks than factual news during the 2020 election, study says”, Washington Post, 6 September 2021, https://www.washingtonpost.com/technology/2021/09/03/facebook-misinformation-nyu-study/

[2] “The Anti-Vaxx Industry”, Center for Countering Digital Hate, 7 July 2020, page 6, https://www.counterhate.com/anti-vaxx-industry

Lazer, David, Jon Green, Katherine Ognyanova, Matthew Baum, Jennifer Lin, James Druckman, Roy H. Perlis, et al. 2021. “The COVID States Project #57: Social Media News Consumption and COVID-19 Vaccination Rates.” OSF Preprints. 27 July 2021. doi:10.31219/osf.io/uvqbs

[3] “Will to Act”, Center for Countering Digital Hate, 4 June 2020, https://www.counterhate.com/willtoact

“Failure to Act”, Center for Countering Digital Hate, 3 September 2020, https://www.counterhate.com/failure-to-act

“Marketplace flagged over 800 social media posts with COVID-19 misinformation. Only a fraction were removed”, CBC, 30 March 2021, https://www.cbc.ca/news/marketplace/marketplace-social-media-posts-1.5968539

[4] “Failure to Act”, Center for Countering Digital Hate, 3 September 2020, https://www.counterhate.com/failure-to-act

[5] “Marketplace flagged over 800 social media posts with COVID-19 misinformation. Only a fraction were removed”, CBC, 30 March 2021, https://www.cbc.ca/news/marketplace/marketplace-social-media-posts-1.5968539

[6] “Failure to Protect”, Center for Countering Digital Hate, 30 July 2021, https://www.counterhate.com/failuretoprotect

[7] “Failure to Protect”, Center for Countering Digital Hate, 30 July 2021, pages 8 and 23, https://www.counterhate.com/failuretoprotect

[8] “Instagram fails to take down more than 94% of racist abuse accounts targeting England players after Euros”, iNews, 15 July 2021, https://inews.co.uk/news/technology/instagram-racist-abuse-posts-england-players-after-euros-1102896

[9] “YouTube channel ‘spreads incel hate’”, The Times, 24 August 2021, https://www.thetimes.co.uk/article/youtube-channel-spreads-incel-hate-lhk3p350m

[10] “The Disinformation Dozen”, Center for Countering Digital Hate, 24 March 2021, https://www.counterhate.com/disinformationdozen

[11] “The Anti-Vaxx Playbook”, Center for Countering Digital Hate, 22 December 2020, page 38, https://www.counterhate.com/playbook

[12] “Don’t Feed the Trolls”, Center for Countering Digital Hate, 16 September 2019, https://www.counterhate.com/dont-feed-the-trolls

[13] “Don’t Feed the Trolls”, Center for Countering Digital Hate, 16 September 2019, page 11, https://www.counterhate.com/dont-feed-the-trolls

[14] “Instagram wants you to keep scrolling even longer”, CNN, 19 August 2020, https://edition.cnn.com/2020/08/19/tech/instagram-suggested-posts/index.html

[15] “Malgorithm”, Center for Countering Digital Hate, 9 March 2021, https://www.counterhate.com/malgorithm

[16] “Malgorithm”, Center for Countering Digital Hate, 9 March 2021, page 10, https://www.counterhate.com/malgorithm

[17] The Verge, 19 August 2021, https://www.theverge.com/2020/8/19/21373809/instagram-suggested-posts-update-end-feed

[18] “The Anti-Vaxx Industry”, Center for Countering Digital Hate, 7 July 2020, page 5, https://www.counterhate.com/anti-vaxx-industry

[19] Avaaz, 21 July 2021, https://secure.avaaz.org/campaign/en/fb_algorithm_antivaxx/

[20] “The Anti-Vaxx Playbook”, Center for Countering Digital Hate, 22 December 2020, page 9, https://www.counterhate.com/playbook

[21] “Facebook promised to ban anti-vaxx ads. One day later it’s still broadcasting them.”, Center for Countering Digital Hate, 14 October 2020, https://www.counterhate.com/post/facebook-promised-to-ban-anti-vaxx-ads-one-day-later-it-s-still-broadcasting-them

[22] “The Anti-Vaxx Industry”, Center for Countering Digital Hate, 7 July 2020, page 32, https://www.counterhate.com/anti-vaxx-industry

[23] “The Anti-Vaxx Industry”, Center for Countering Digital Hate, 7 July 2020, page 34, https://www.counterhate.com/anti-vaxx-industry

[24] Stop Funding Misinformation, retrieved 3 September 2021, https://www.stopfundingmisinformation.com/the-briefing

[25] “Pandemic Profiteers”, Center for Countering Digital Hate, 1 June 2021, https://www.counterhate.com/pandemicprofiteers

[26] “Pandemic Profiteers”, Center for Countering Digital Hate, 1 June 2021, page 14, https://www.counterhate.com/pandemicprofiteers

[27] “Antivaxers clash with police at (wrong) BBC studio”, The Times, 9 August 2021, https://www.thetimes.co.uk/article/antivaxers-clash-with-police-at-bbc-white-city-studio-tswnkpxf0

“Anti-vaccine protesters try to storm London offices of medical regulator”, The Guardian, 3 September 2021, https://www.theguardian.com/world/2021/sep/03/anti-vaccine-protesters-try-to-storm-london-offices-of-medical-regulator

[28] “Inside Facebook’s Data Wars”, New York Times, 14 July 2021, https://www.nytimes.com/2021/07/14/technology/facebook-data.html

[29] “Euro 2020 racism: File on 4 confronts Saka troll”, BBC News, 7 September 2021, https://www.bbc.co.uk/news/uk-58466849

[30] “Instagram owners Facebook have not taken any action against 31 accounts that sent racist abuse including monkey emojis to Lewis Hamilton after his British Grand Prix win”, Daily Mail, 19 July 2021, https://www.dailymail.co.uk/sport/sportsnews/article-9803259/Instagram-owners-Facebook-no-action-against-31-accounts-sent-abuse-Lewis-Hamilton.html

[31] “Facebook refuses inquiry by Reps. Eshoo and Schakowsky on COVID-19 misinformation”, Tech Policy Press, 26 August 2021, https://techpolicy.press/facebook-refuses-inquiry-by-reps-eshoo-and-schakowsky-on-covid-19-misinformation/

[32] “‘They’re killing people’: Biden aims blistering attack at tech companies over vaccine falsehoods”, Washington Post, 16 July 2021, https://www.washingtonpost.com/politics/biden-vaccine-social-media/2021/07/16/fbc434bc-e666-11eb-8aa5-5662858b696e_story.html

[33] “Moving Past the Finger Pointing”, Facebook, 17 July 2021, https://about.fb.com/news/2021/07/support-for-covid-19-vaccines-is-high-on-facebook-and-growing/

[34] “The public back sanctions on tech giants that spread anti-vaxx misinformation”, Center for Countering Digital Hate, 14 July 2020, https://www.counterhate.com/post/the-public-back-sanctions-on-tech-giants-that-spread-anti-vaxx-misinformation

[35] “The public back sanctions on tech giants that spread anti-vaxx misinformation”, Center for Countering Digital Hate, 14 July 2020, https://www.counterhate.com/post/the-public-back-sanctions-on-tech-giants-that-spread-anti-vaxx-misinformation