
DCMS Sub Committee on Online Harms and Disinformation

Call for Evidence – Online Safety and Online Harms


Written Evidence submitted by the Community Security Trust (CST) – August 2021


This submission is from Community Security Trust (CST), a charity that advises and supports the UK Jewish community in matters of security, antisemitism, extremism and terrorism.


CST monitors antisemitism and antisemitic incidents in the UK, publishing twice-yearly antisemitic incident reports. In addition, CST researches antisemitism and regularly publishes reports on its various facets, including online antisemitism. Most CST publications are available to view on the CST website.


CST broadly welcomes the Online Safety Bill and the regulation of online spaces by Ofcom as a necessary and long overdue intervention. Although the mainstream social media platforms have incrementally improved their community standards in recent years and worked to make their online spaces less welcoming to antisemitic and extremist users, as evidenced by CST and partner organisations, progress is still slow and enforcement can be inconsistent. In addition, the growth of newer and smaller social media platforms that consider themselves free speech alternatives to the more established platforms has created largely ungoverned spaces where antisemites, extremists and terrorists coalesce, forming communities in which antisemitism and violent extremist incitement and rhetoric are rife and easy to be drawn into.



1. How has the shifting focus between ‘online harms’ and ‘online safety’ influenced the development of the new regime and draft Bill?




2. Is it necessary to have an explicit definition and process for determining harm to children and adults in the Online Safety Bill, and what should it be?


Extremist and hate content online is a significant and growing concern, and needs to be prioritised in the Online Safety Bill alongside terrorism and child sexual exploitation and abuse. Hate content online is a clear gateway to violence and terrorism offline. Antisemitism is often a strong theme running through all ideological extremisms, including extremism that does not fit a single ideology (Mixed, Unclear, Unstable), and should be considered an indicator of extremism that can and does lead to terrorism or to the incitement and promotion of terrorist acts. Several perpetrators of deadly antisemitic attacks in recent years were active in online forums where antisemitism and antisemitic content were rife, including Robert Bowers, the shooter who killed 11 Jewish worshippers at the Tree of Life Synagogue in Pittsburgh, USA, in October 2018.


Conspiracy theories that are rife in many online spaces, including the QAnon and Covid hoax conspiracies and more explicitly antisemitic conspiracies such as the claim that Jewish people control global affairs, are either rooted in classic antisemitism or are only one step away from drawing proponents and adherents into antisemitic spaces and discourse.


These conspiracy theories can result in offline harms: Bowers was a believer in the ‘Great Replacement’ conspiracy theory that suggests Jews are responsible for mass immigration into Western societies, and other far right terrorists have been proponents of this and similar conspiracy theories.



4. What are the key omissions to the draft Bill, such as a general safety duty or powers to deal with urgent security threats, and (how) could they be practically included without compromising rights such as freedom of expression?


The key omission from the draft Bill is the issue of online anonymity, which has not been addressed at all. As with all online hate crime, much of the online antisemitism recorded and researched by CST is perpetrated by anonymous users, who are likely to feel emboldened to act with impunity from behind online avatars.


It is likely that simple changes enforced by law would have a significant impact on how anonymous perpetrators of online harm behave, whilst continuing to protect the right to anonymity for those operating within the scope of the law.


CST understands the real and legitimate need for many users to maintain anonymous online profiles in order to protect themselves and their identities. However, we advocate that in the same way that the law maintains the right to be anonymous offline until laws are broken, if a user breaks the law with their online behaviour, they should be identified in a responsible manner.


One mechanism could be a magistrates' court order issued to social media platforms and/or internet service providers, requiring them to reveal the identity of an anonymous law-breaking account, in a responsible and confidential manner, to an investigating police force. This would increase the chance of convicting users for online illegality, and the fear of sanctions would likely dampen online illegality and abuse. It would also require social media platforms and the like to institute processes whereby they responsibly hold, or have access to, the identities of their users, irrespective of whether those users choose to use anonymous profiles.



5. Are there any contested inclusions, tensions or contradictions in the draft Bill that need to be more carefully considered before the final Bill is put to Parliament?


One tension in the draft Bill is how it defines the tier into which social media platforms will be categorised, and therefore which platforms will have a duty to address legal but harmful content. If companies and platforms are categorised purely on 'size' and 'functionality', there is a risk that smaller platforms that have become havens for antisemitic users, and for antisemitic content that may not be illegal but is certainly harmful (e.g. Holocaust denial and antisemitic conspiracy theories), will allow such content to flourish. Though far from perfect, most of the larger and pre-eminent social media platforms have slowly taken steps to address legal but harmful content and have tightened their community standards to try to limit it. It is the smaller platforms that arguably present the greater risk of the most extreme kinds of antisemitic harm and illegality, and as potential gateways to radicalisation and extremism; due consideration should therefore be given to elevating those platforms into the 'Tier 1' category based on the risk they pose.