Written evidence submitted by The Henry Jackson Society (OSB0028)





About the Author


Isabel Sawkins is a Research Fellow at the Henry Jackson Society. She holds a BA in Modern Languages from Durham University and an MA in Political Sociology of Russia and Eastern Europe from UCL. She is currently completing a PhD on Holocaust memory in the Russian Federation at the University of Exeter, funded by the South West and Wales Doctoral Training Partnership (part of the Arts and Humanities Research Council).

The Henry Jackson Society has taken an interest in the development of the Online Safety Bill because we believe that social media platforms, under current regulations, harm our democracy.





  1. The news that the Government would be proposing legislation on Online Harms was welcomed by campaigners, charities and those who have faced hate and worse online. The Online Harms White Paper in 2019 proposed a new regulatory framework, built on existing initiatives, the core of which would be a duty of care ascribed to the platforms. However, it has not solved the problem: the framework was caught between those who felt it did not do enough to protect users and those who felt it limited online freedoms.


  1. In December 2020 the response to the White Paper was published. It confirmed that Ofcom would be the regulator of these platforms and that duties of care would be introduced in an Online Safety Bill. These duties of care include protecting users from illegal material, protecting children from harm, and protecting individual adults from harm. It was also made clear that platforms would have a responsibility to ensure key elements of our society were protected: the rights to free speech and privacy; content of democratic importance; and journalistic output.


  1. This bill was introduced in the Queen’s Speech in May 2021, expanding on the response to the White Paper to include company fines and the power to block sites that did not abide by the rules. In an attempt to prioritise the right to privacy, for which there are meaningful reasons, the bill does not include a requirement to end anonymity online. Yet without the cloak of anonymity, much of the hateful and harmful content that targets users would vanish. This briefing will detail how such content could be curbed on our major social media platforms, and why this should be implemented.




  1. Anonymous bots and trolls are prolific on social media. They will often target anyone, but a 2019 House of Commons report highlighted that they specifically prey on women and minority groups.[1] Anonymity provides these individuals with a cloak of protection; indeed, research conducted by Arthur Santana at the University of Houston showed that almost twice as many anonymous commenters were uncivil online as those who were identifiable (53 percent and 29 percent respectively).[2] Disinhibited by their online personas and dissociated from their real identities, anonymous social media users feel that there is less accountability for their words.[3] As life moves increasingly online, we need to ensure that the same regulation and social cohesion apply to the online sphere as those we have built for interpersonal interactions over millennia.


  1. This problem has become increasingly prevalent during the Covid-19 pandemic. People have been spending more time on their computers, relying on social media as their main platform for communication. An unfortunate side effect of this trend is the increase in online abuse which accompanies it. The impacts are staggering: social media abuse, for example, has played a role in the suicides of celebrities, including Love Island star Sophie Gradon in 2018.


  1. Anonymity has also had devastating repercussions in the political sphere. Propaganda bots, often anonymous, are spewing hatred and disinformation online to such an extent that several MPs have been forced to step back from politics.[4] One such example is former MP Nicky Morgan, who chose not to run in the 2019 General Election, partly in response to the online hatred she had received.[5] This online abuse is thus preventing people from embarking on public-facing careers, out of fear of being hounded by online trolls and bots. Democracy is being eroded by these anonymous online figures.


  1. This briefing report presents potential solutions to this problem, focusing specifically on the question of anonymity. It aligns with polling conducted by YouGov showing that 45% of the British population believe that photo ID should be required before setting up a social media account.[6] It highlights the dangers of anonymity, specifically in spreading hatred and threatening democracy. It then explores the purpose of online verification, how this might look in practice, and how we can ensure that social media companies enforce these rules. It focuses on how the UK can lead the global crackdown on anonymous hate speech, sitting at the front of a multilateral framework and starting meaningful conversations with global partners who are similarly perturbed by this concerning trend.


“Legal but harmful”


  1. Introducing a requirement for online verification will significantly diminish the number of comments that fall into the categories of “illegal” and “legal but harmful”. “Legal but harmful” behaviour includes, but is not limited to, “disinformation, cyber-bullying and trolling.”[7] As it currently stands, the Online Safety Bill does not offer concrete guidance on policing “legal but harmful” material, as noted in a report by the Antisemitism Policy Trust. Rather, the responsibility is left to social media platforms to monitor such behaviour themselves.[8]


  1. Like the Antisemitism Policy Trust, we call for guidance to be issued on what constitutes “legal but harmful” content on social media platforms. It is not in our best interests to leave the definition of harmful content to the discretion of the platforms. This guidance should then be applied across all social media platforms, to ensure that no platform is deemed a safer haven for this content.[9] Targeting illegal content alone is not enough; we need to ensure that all harmful online behaviour is punished.


Purpose of online verification:


  1. Thus far, social media companies have not wanted to investigate anonymous trolls, for fear of reducing their user numbers, and the police do not have the resources or manpower to carry out this activity by themselves. Police cases are also hindered by social media platforms withholding information and refusing to hand it over to the authorities. Implementing mandatory online verification would help to lessen this burden.


  1. Online verification will ensure that people are held accountable for their words online. As it stands, anonymity allows them to escape responsibility for causing harm to people across the world. We hypothesise that if people know that their identity could be revealed if they spew abuse online, they will be less likely to spread such hatred. They will finally be held answerable.


  1. Online verification will also allow social media companies to limit the activities of bots and trolls by blocking multiple accounts originating from the same email address/name. It will prevent instances in which one blocked troll will merely set up another account using similar login details, facing no consequences. Social media companies can thus stem the flow of hatred.
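A hypothetical sketch of how such duplicate-account blocking could work in practice (our own illustration, not a description of any platform's actual system): login details are normalised before comparison, so that trivially varied email addresses map to the same underlying identity and a blocked troll cannot simply re-register with a superficially different address.

```python
# Hypothetical sketch: normalise email addresses so that trivial
# variations (case, dots, plus-tag aliases) map to one canonical form,
# making it harder for a blocked account holder to re-register.
# Provider-specific rules (e.g. Gmail ignoring dots) vary; the ones
# below are illustrative examples only.

def canonical_email(email: str) -> str:
    """Return a normalised form of an email address for de-duplication."""
    local, _, domain = email.strip().lower().partition("@")
    local = local.split("+", 1)[0]        # drop plus-tag aliases like user+spam@
    if domain in ("gmail.com", "googlemail.com"):
        local = local.replace(".", "")    # Gmail treats dots as insignificant
        domain = "gmail.com"              # both domains deliver to the same inbox
    return f"{local}@{domain}"

def is_duplicate(new_email: str, blocked_emails: set[str]) -> bool:
    """Check whether a new sign-up matches a previously blocked account."""
    blocked = {canonical_email(e) for e in blocked_emails}
    return canonical_email(new_email) in blocked
```

Under this sketch, a sign-up with `Troll+new@gmail.com` would be flagged as a duplicate of a blocked `tro.ll@googlemail.com`, since both normalise to the same canonical address.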


  1. However, some have raised concerns about online verification. The first is that it would impinge upon freedom of speech. Yet we contend that people can be free to share their thoughts but must be held responsible for those words if they purposely cause egregious harm to others. Another of the main concerns is that, for some people, publicly disclosing their identity might put them at risk. This could include, but is not limited to, members of the LGBTQ+ community, domestic abuse victims, dissidents, and refugees. Indeed, there are many advantages to anonymity: it “protects people exposing repression, corruption and hate, and allows stigmatised and abused communities to find safety and support when revealing their real-world identity could expose them to harm.”[10] In these cases, it is important to preserve anonymity so that these individuals can express themselves freely. This is one of the arguments put forward by the Open Rights Group, which is campaigning to preserve anonymity.[11]


  1. Taking these concerns into consideration, we argue that an individual’s identity can remain unknown both to the social media company itself and to those the individual engages with on the platform; all identifiable information would be held by a middleman (government or private), accessible only when a report has been filed and deemed worthy of investigation. This would allow at-risk individuals to continue to feel that social media is a safe space in which to share their opinions, whilst also implementing a duty of care to protect those unjustly targeted by anonymous trolls.
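The escrow arrangement described above could be modelled as follows. This is an illustrative sketch only (the class and method names are our own invention): the platform stores nothing but an opaque token per user, while the middleman holds the token-to-identity mapping and releases an identity only once a report against that account has been approved for investigation.

```python
# Illustrative model of the proposed middleman: the platform never sees
# real identities, only opaque tokens; the escrow holder reveals an
# identity only for reports an investigator has approved.
import secrets
from typing import Optional

class IdentityEscrow:
    """Hypothetical third party holding the token-to-identity mapping."""

    def __init__(self) -> None:
        self._identities: dict[str, str] = {}   # token -> real identity
        self._approved_reports: set[str] = set()

    def register(self, real_identity: str) -> str:
        """Verify a user, then hand the platform an opaque token."""
        token = secrets.token_hex(16)
        self._identities[token] = real_identity
        return token

    def approve_report(self, token: str) -> None:
        """Record that a report against this account merits investigation."""
        self._approved_reports.add(token)

    def reveal(self, token: str) -> Optional[str]:
        """Disclose the identity only if a report has been approved."""
        if token in self._approved_reports:
            return self._identities.get(token)
        return None
```

Until `approve_report` is called, `reveal` returns `None`: neither the platform nor other users can learn who is behind the token, preserving anonymity for at-risk users while keeping abusers answerable.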


How might this look?


  1. There are several companies which have already implemented online verification in their own businesses. These models serve as a key inspiration for how online verification might work on social platforms. Ultimately, the number one priority must be to hold this information securely, disclosed to the police only in instances in which the social media platform’s T&Cs have been breached. The system must also be fully GDPR compliant.


  1. For several social media websites, such as Twitter and Facebook, a user needs an email address to set up an account. But what else could social media companies ask users to submit in order to verify their identity? An address, which could then be cross-referenced against government databases? Or perhaps a photo of them holding an ID card? Critics have expressed concern about the latter suggestion, arguing that 3.5 million people in the UK do not currently have access to a valid photo ID.[12] We argue that it should be up to the social media platforms themselves to determine the exact form of identity verification they require, provided it fits within the legal framework set by the government.


  1. We suggest that verification could be offered through either the Gov.uk verify website or Post Office easyID. Gov.uk would offer social media companies the opportunity to:


    1. “Check the details users provide against records held by organisations like credit agencies.
    2. Choose from different levels of identity assurance (confidence that users are who they say they are) depending on whether a risk assessment suggests your service faces a high or low risk of identity fraud.”[13]


    1. As a government-affiliated system, Gov.uk assures users that data would be “protected to a high standard” and that it would “continually monitor” for identity fraud.[14]


  1. If social media companies would rather use a non-government-affiliated system, an alternative might be the Post Office easyID service. In collaboration with Yoti, the Post Office provides an identity service for businesses which is GDPR compliant and could include:


    1. “Accurate data extraction: We accept thousands of ID documents from over 200 countries and territories. We accurately extract ID data using OCR technology and NFC chip reading capabilities.
    2. Document authenticity check: Our hybrid approach uses both automatic and manual processes to check the security features of each unique ID document to make sure it is genuine.
    3. Liveness detection: Our anti-spoofing liveness test gives us confidence that we have captured an image of a real person in front of a camera, not a spoof through an automated bot, mask or photo.
    4. Biometric face match: We compare the scan of the user’s face captured during liveness to their ID document photo to confirm that the ID document belongs to the user.”[15]
    5. Additional verification options include proof of address, third-party data checks, and AML watchlist screening.[16]
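The checks listed above could be chained into a simple pass/fail pipeline. The sketch below is our own illustration of how a platform might sequence such checks, not Yoti's actual API; each step is a stand-in callable where a real system would invoke OCR, document-forensics, liveness and face-match services.

```python
# Illustrative verification pipeline chaining the checks listed above.
# The check functions are stand-ins; real systems would call external
# identity-verification services at each stage.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Submission:
    """The evidence a user submits: an ID document scan and a live selfie."""
    id_document: bytes
    selfie: bytes

Check = tuple[str, Callable[[Submission], bool]]

def verify(submission: Submission, checks: list[Check]) -> tuple[bool, list[str]]:
    """Run each named check in order, failing fast and logging the outcome."""
    log: list[str] = []
    for name, check in checks:
        if not check(submission):
            log.append(f"FAILED: {name}")
            return False, log
        log.append(name)
    return True, log
```

With stub checks, a failure at the liveness stage would halt the pipeline before the face match is attempted, so a spoofed submission never reaches the final comparison.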


  1. There are several organisations who have used online verification on their platforms. It has provided them with credibility and allowed them to ensure the identity of their users and build trust. These include, but are not limited to:



  1. PayPal


  1. Airbnb


  1. NatWest


  1. Tinder



Policy Recommendations:


  1. The current government line of merely suggesting that social media companies limit anonymity on their platforms is not good enough. Social media companies will be reluctant to institute change unless it is compulsory. We propose several options for enforcing limits on anonymity:


  1. Fine the social media companies for unidentifiable abuse on their platforms, so that they become financially responsible for the hatred spewed on their sites. However, for companies such as Twitter and Facebook, a fine is likely to be a drop in the ocean and unlikely to instigate substantial change. As such, these fines should be extended to the Directors of the social media companies. Adding a personal element to the fines would make the severity of the harms their users experience more genuinely felt.


  1. Another way to target the social media companies’ revenue might be to block advertising on their websites. Advertising revenue is driven by the number of users on a platform; by introducing more stringent verification, the removal of bots and of users banned under the new regulations would affect this income stream. If the government had the ability to temporarily ban social media platforms from selling advertising space unless they comply with the duties of care and protection of rights, the companies would have more incentive to do so.


  1. Ensure that Directors of social media companies are held criminally liable for any abuse or breach of codes of conduct from unidentifiable accounts on their platforms. Combining criminal and financial responsibility for this hatred will encourage social media companies to enforce changes to their anonymity rules.


  1. The UK is not alone in fighting this growing trend of anonymous online abuse. Other nations are establishing their own forms of this proposed legislation. Given the cross-border nature of online platforms, working with other concerned nations is in both our and their best interests. As Global Britain becomes the driver of our foreign policy, we should ensure that our values are being protected at home and abroad. Here is a sphere in which the UK can lead the world.


    1. The UK is not developing this legislation in isolation. Online harms are a problem felt around the world, especially in developed nations where more of day-to-day life is carried out online. In South Korea, online abuse has become a subject of crucial debate after citizens died by suicide in response to it.[17] Ireland is also considering the balance between the duty of care and the right to online anonymity.[18]


    1. Even beyond the specific question of anonymity, there are broader global concerns about online safety. Questions of cybersecurity are being considered in the United States, albeit not addressing anonymity in particular.[19] The US is also concerned with the security of its elections as disinformation becomes increasingly prevalent.[20] Germany’s new surveillance laws, meanwhile, prioritise the prosecution of illegal and harmful content (including disinformation) over the user’s right to privacy.[21]


    1. How can the UK bring together other nations to ensure a safer online society? We propose that initial conversations be established through transnational frameworks that already exist, such as the G7 or G20. These conversations should then lead to a more focussed forum, equivalent to the COP26 global project. Should the D10 project ever be convened, it would be the ideal vehicle for setting international standards and codes of conduct, especially in opposition to those powers which take advantage of loose online regulation to destabilise our democracies.


    1. In establishing a modus operandi that extends beyond British waters, we are not saying that we expect every country to be involved; nor are we implying that policies will be implemented in the same way in every country. Rather, we argue that the purpose of this multilateral framework would be to start a conversation, with the UK at the helm. Social media companies are more likely to act, to impose restrictions on anonymity on their platforms, if there are more voices in this conversation.



September 2021




[1] David Babbs. ‘New year, new internet? Why it’s time to rethink anonymity on social media.’ Inforrm. https://inforrm.org/2020/01/31/new-year-new-internet-why-its-time-to-rethink-anonymity-on-social-media-david-babbs/; ‘Communications, social media and other media.’ UK Parliament.


[2] Zachary Renshaw. ‘The Anonymity Of Social Media.’ Odyssey. https://www.theodysseyonline.com/the-anonymity-of-social-media

[3] ‘Anonymity and Social Media.’ Applied Social Psychology. Penn State. https://sites.psu.edu/aspsy/2015/10/26/anonymity-and-social-media/

[4] Stephen Kinsella. ‘Some reflections on Freedom of Expression, regulation of social media platforms, and anonymity.’ Clean up the internet. https://www.cleanuptheinternet.org.uk/post/some-reflections-on-freedom-of-expression-regulation-of-social-media-platforms-and-anonymity

[5] Frances Perraudin and Simon Murphy. ‘Alarm over number of female MPs stepping down after abuse.’ The Guardian. https://www.theguardian.com/politics/2019/oct/31/alarm-over-number-female-mps-stepping-down-after-abuse

[6] ‘Online Harms Consultation Gives UK Opportunity to Curb Online Anonymous Abuse.’ Digital Tories. https://digitaltories.uk/online-harms-consultation-gives-uk-opportunity-to-curb-online-anonymous-abuse/

[7] ‘Online Harms.’ Child Rights International Network. https://home.crin.org/issues/digital-rights/online-harms

[8] Antisemitism Policy Trust. ‘Special Briefing: The Draft Online Safety Bill.’ https://antisemitism.org.uk/wp-content/uploads/2021/07/Draft-Online-Safety-Bill-7.pdf

[9] Ibid.

[10] ‘Social Media Futures: Anonymity, Abuse and Identity Online.’ Institute for Global Change. https://institute.global/policy/social-media-futures-anonymity-abuse-and-identity-online

[11] Heather Burns. ‘#SaveAnonymity: Together We Can Defend Anonymity.’ Open Rights Group. https://www.openrightsgroup.org/blog/saveanonymity-together-we-can-defend-anonymity/ 

[12] ‘Make verified ID a requirement for opening a social media account.’ Petitions. UK Government and Parliament. https://petition.parliament.uk/petitions/575833

[13] Gov.uk Verify. https://www.verify.service.gov.uk/

[14] Ibid.

[15] ‘Identity verification.’ Yoti. https://www.yoti.com/business/identity-verification/

[16] Ibid.

[17] John Duerden. ‘K-pop suicides reignite online abuse debate.’ Nikkei Asia. https://asia.nikkei.com/Life-Arts/Life/K-pop-suicides-reignite-online-abuse-debate

[18] ‘EU court asked to rule on right to online anonymity.’ Pinsent Masons. https://www.pinsentmasons.com/out-law/news/eu-court-right-to-online-anonymity

[19] Dan Lohrmann. ‘Biden Sets Cyber Standards for Critical Infrastructure.’ Government Technology. https://www.govtech.com/blogs/lohrmann-on-cybersecurity/biden-sets-cyber-standards-for-critical-infrastructure

[20] ‘H. Rept. 116-246 - STOPPING HARMFUL INTERFERENCE IN ELECTIONS FOR A LASTING DEMOCRACY ACT.’ https://www.congress.gov/congressional-report/116th-congress/house-report/246


[21] David Fischer. ‘Germany’s New Surveillance Laws Raise Privacy Concerns.’ Human Rights Watch. https://www.hrw.org/news/2021/06/24/germanys-new-surveillance-laws-raise-privacy-concerns