Written evidence submitted by the Antisemitism Policy Trust (OSB0005)

 

This evidence is compiled and authored by the Antisemitism Policy Trust, a charitable organisation working to educate decision makers about antisemitism. The collaborative submission also includes expert contributions from other bodies that are representative of the Jewish community, work closely with it, or work against antisemitism: the Community Security Trust, the Jewish Leadership Council, the Holocaust Educational Trust and the Union of Jewish Students. 

 

1. Antisemitism Online


Antisemitism is widespread and pervasive online. It ranges from dedicated websites containing racist material about Jewish people to remarks made on comment boards run by professional associations. The Community Security Trust (CST), which holds one of the most accurate datasets on antisemitism in the world, if not the most accurate, reports that approximately 40% of antisemitic incidents in the UK over the past several years have occurred online. The CST only records incidents where the victim or witness is based in the UK and where the incident has been proactively reported to it, and repeat attacks on an individual are counted as a single incident. If the CST were to proactively search for antisemitic incidents and discourse online, it would be a never-ending process.[1]

 

Facebook, Twitter, Instagram, YouTube and other large platforms can facilitate the spread of antisemitic material, which often remains on their services even when it violates their own terms and conditions.[2] However, antisemitism is also widespread across smaller or so-called ‘alternative’ platforms, including and specifically those that play host to individuals expressing extreme ideologies, for example on the far right or in the so-called manosphere. In these spaces, antisemitic rhetoric is used to incite violent hatred of Jews and to radicalise people against them. Antisemitism on these platforms has inspired physical attacks and terrorism against Jews across the globe, and these platforms have been used to signal upcoming violent and deadly antisemitic attacks.[3] Online platforms have also been used to plan co-ordinated online campaigns against Jewish people, including those in public life.[4]

 

The numbers are stark. Data presented at an Antisemitism Policy Trust conference by the NGO Media Matters showed that there were 630,000 antisemitic posts on the anonymous alt-right platform 4chan in 2015, rising to 1.7 million posts by 2017.[5] Media Matters also found a 180% increase in posts containing both antisemitism and misogyny between the same two years.[6] In 2019, CST and the Antisemitism Policy Trust reported that 170,000 Google searches with antisemitic content are made per year in the UK, with some 10% of these containing violent language or intent.[7]

 

Research by the CST and the Institute for Jewish Policy Research proposed an ‘elastic’ theory of antisemitism, in which antisemites are less common than antisemitism, but according to which anti-Jewish ideas circulate among approximately 30% of the population.[8] Social media facilitates this spread and increases it exponentially. The Government is right to legislate to address online harms. It has set out its intention to combat antisemitism in the digital sphere[9] and we set out below some concerns that the Bill, as envisaged, does not do enough to achieve that aim.

 

2. Duties Of Care

We strongly support the proposals to establish a ‘duty of care’ set out in the draft Bill. The Government’s initial documents (the Green and White Papers) proposed that such a duty would encompass regulated bodies taking “reasonable steps to keep their users safe and tackle illegal and harmful activity on their services” and referenced “reasonable and proportionate action” to tackle harms on their services. The duties as they stand are more specific, and do not reference risks of harm that are “reasonably foreseeable”. This, in our view, is an error. An all-encompassing duty, applied differentially according to size or other key factors, would likely provide more protection and ensure a systems-facing approach, rather than the content-focused perspective which the current drafting risks.

 

There are various examples of how a systems-led approach might work in practice. Under the current arrangements, a racist rant by a musician on Twitter or the broadcast of a racist comment on a YouTube live-stream might lead to complaints to the regulator, but the regulator would not necessarily investigate (given there is no mechanism for individual complaints). The major requirement facing the platforms would be to ensure their Terms and Conditions ‘deal with’ antisemitism. At present, both companies, YouTube and Twitter, would argue that their Terms and Conditions do just that, but as has been proven repeatedly,[10][11] enforcement is patchy and lacking. By contrast, a duty to address reasonably foreseeable harms would ensure that racism on live streams and abuse promoted by social media accounts with large followings were considered at an earlier stage – a systems-based approach. If the proposed duties are not encompassed under a wider duty to address reasonably foreseeable harms, then the proposed categorisation of companies (into categories 1 and 2) should be amended.

 

2.1 Duties of Care: Search

The draft Bill omits Search Services from category 1 duties. This means that, irrespective of size, a search company will not be bound by the duty to address legal but harmful content, nor, for example, by the duty to protect content of democratic importance. The only duties Search Services will have relate to limiting or removing illegal content from their platforms.

 

Research by the Antisemitism Policy Trust and the Community Security Trust has demonstrated, on more than one occasion, that Google’s services have led to harm.[12][13] For example, before campaigners forced a change, Google prompted users towards the search ‘Are Jews…. evil?’ (where ‘evil’ was autosuggested by Google), and its ‘SafeSearch’ feature cannot remove antisemitic images. Microsoft Bing, meanwhile, sent users to the search term ‘Jews are b******s’ before the company was alerted to the problem. Amazon’s Alexa service directed people to both antisemitic and anti-Muslim results before it was embarrassed into changing its systems.[14]

 

Google might say that its search returns are not provided, or ‘owned’, by the company but rather by its algorithms. The company might also argue that preventing someone from making a particular search inhibits freedom. Our position is that if a company like Google is not included in category 1 then the arrangement is not fit for purpose. Google and Microsoft should not be absolved of responsibility for addressing legal harms, nor should voice recognition services like those operated by Apple or Amazon. These are private platforms, not free speech entities; they direct and encourage searches, and their systems can be just as harmful as those operated by companies hosting user-generated content.

 

2.2 Duties of Care: Categorisation

 

The current proposal for the categorisation of companies is that their placement be determined according to size and functionality. This could leave many alternative platforms out of category 1, even if they host large volumes of harmful material. Platforms including Bitchute, Gab and 4chan house extreme racist, misogynistic, homophobic and other extremist content that radicalises and incites harm. The Community Security Trust has outlined in detail some of the most shocking and violent materials on these sites[15] and, whilst illegal material has been present, much of that content is legal but harmful (and would be addressed in other environments, such as a football ground, a cinema or on TV/radio). That lawful material can and has transferred to more mainstream platforms and has influenced real-world events. The Antisemitism Policy Trust briefing on the connection between online and offline harms details how antisemitic terrorism, like the deadly attack on a synagogue in Pittsburgh, and deadly Islamophobic attacks, like the Christchurch mosque attacks, were carried out by men who were, at least in part, radicalised online, who signalled their intent to attack online, and who in some cases sought to livestream their attacks.[16] We believe it is crucial that risk be a factor in the classification process determining which companies are placed in category 1; otherwise the Bill itself risks failing to protect adults from substantial amounts of material that causes physical and psychological harm. The relevant schedule (4) needs amending, for example, to reference the risk register developed and maintained by Ofcom under Clauses 61 and 62.

 

Even in its current form, we do not have confidence that the duty placed upon category 1 platforms will be fully effective. There are no minimum standards set out for the Terms and Conditions that are expected to have “dealt with” harm to adults. In the past, these have proven to be hugely inconsistent across platforms. Terms and Conditions for addressing harmful content should be required to meet a minimum standard, and the wording of the Bill should be amended to recognise this, including by defining what ‘dealt with’ means. Furthermore, risk assessments of harmful content performed by companies in scope should be required to be ‘reasonable’, to prevent gaming of the system.

 

There is an explicit exemption from all duties for content present on the website of a recognised news publisher. This is deeply problematic. The Antisemitism Policy Trust has worked with Government and others for many years to highlight the abuse on newspaper website comment forums. For example, as secretariat to the APPG, the Trust worked with the Department for Communities and Local Government (now MHCLG) on a guide delivered by the Society of Editors in 2014,[17] which was inspired by discussion of this form of harm. The exemption in Clause 18 relating to newspaper comments boards should be removed or, at the very least, amended to ensure publications have measures in place to address harm on relevant boards. 


2.3 Duties of Care: Countervailing Duties

 

We agree that there is an imperative to protect democratically important and journalistic content. However, the way in which these duties are set out in the draft Bill means that extremists, who actively undermine the democratic process by disseminating hateful and racist material, disinformation and other harmful content, will be protected under the law.

 

For example, a racist activist standing for election might be able to demand that their harmful material be re-platformed once removed, claiming bias or discrimination on the part of the platform. What would happen in the case of someone spreading misogyny in an electoral race against a candidate from the Women’s Equality Party? In other words, if democratically important speech causes harm, what guidance will be offered beyond leaving the matter to the platforms to decide? Furthermore, without a duty to promote such content, how will platforms ensure spaces are not closed to those often left out of democratic debate? Whilst perhaps well intentioned, the current drafting of this duty is not workable in practice and should be reconsidered.

 

The protective duty in relation to journalistic content presents similar and additional concerns. Journalistic content is poorly defined and might be read, as the anti-racism NGO Hope Not Hate has suggested, as content “generated for the purposes of journalism”.[18] On that reading, citizen journalists’ content achieves the same protections (through Ofcom) and enhanced routes to appeal (through platforms) as content from any major national publication. There are examples of far-right activists self-identifying as journalists, and of news companies like InfoWars which spread hateful and dangerous conspiracy theories. There is little assurance on the face of the Bill that content produced by such individuals and companies would not be offered special protections, or otherwise benefit from these duties as currently drafted. We are also not convinced by the explanations offered to the Trust and others by officials that platforms will have to perform a balancing act in respect of harms and content. 

 

 

3. Anonymity

 

The issue of online anonymity is entirely absent from the draft Bill, to its detriment. Anonymity has many benefits: for example, it protects users who are members of the LGBTQ+ community or victims of domestic violence, and makes it safer for whistle-blowers and political dissidents in oppressive regimes to speak out. However, it also has a significant negative impact on online discourse. The ability to be anonymous contributes to trolling, bullying and hate speech, which have a harmful psychological impact on users. This has created a threatening and toxic environment in which many users, including and specifically women and ethnic and religious minorities, suffer constraints on their freedom of speech as a result. Research by the Antisemitism Policy Trust has found that anonymity has a widespread impact on online abuse towards Jewish individuals and on the wider Jewish community.9 

 

Based on our findings, we recommended a “Know Your Client/Customer” approach, similar to that used in the financial sector. In our view this would guard the privacy and identity of users who wish to remain anonymous, but restrict the right to anonymity for users who choose to target others through illegal activity. Our premise is that users will be less inclined to use hate speech and other abusive language, images or videos if their identity is known to an online platform and if they are in danger of waiving their right to anonymity should their behaviour violate the platform’s terms and conditions, or the law. To that end, we recommend that platforms include clear rules on anonymous accounts and enforce these with vigour. We also recommend setting the bar for removing anonymity high, requiring a burden of proof before a magistrate can order disclosure of identity to investigating police forces. We understand there are concerns about data security. Technological solutions, including middleware, might be considered, as might efforts in other jurisdictions to address these issues. One example is the Pan-Canadian Trust Framework (PCTF) developed by the Digital ID & Authentication Council of Canada (DIACC) and the Pan-Canadian Identity Management Sub-Committee (IMSC) of the Joint Councils of Canada. 

 

We are aware of, and supportive of, other approaches, including proposals for verification systems which would give users the option to engage only with verified accounts on platforms hosting user-to-user services, but we would like to see action that goes further than this. 

 

It is important to recognise that privacy and complete anonymity rarely exist online. Technology companies gather an enormous volume of personal information about the users of their services and sell it for profit. As such, placing formal restrictions on anonymity would make only a marginal difference to users’ actual privacy, but could have a major impact in creating a safer and more positive online environment. Using a digital identity may be another safe way for individuals to maintain anonymity unless they commit an offence. The technology for creating digital identities already exists, and the Cabinet Office, in collaboration with the Department for Digital, Culture, Media and Sport, is already looking into creating a framework for digital identity governance that will safeguard privacy and prevent identity theft. This framework might also apply to the prevention of, and action to tackle, anonymous online abuse. 

 

4. Penalties


Senior management liability is included in the draft Bill, but only as a reserved and fairly limited power relating to specific information retrieval. Fines are important, but major companies will write them off as the cost of doing business. Senior management liability, as exists in financial services, can be a powerful and effective tool that will encourage companies to comply with the law or face the consequences. It is therefore our position that Ofcom should be granted, in extremis, full enforcement powers with associated criminal sanctions under Chapter 6 of the draft Bill in relation to senior management liability for breaches of the duties of care. 
 

Clause 96 details provisions for the publication of decisions by Ofcom, but there is no provision to mandate that a service publish a breach notice. The Antisemitism Policy Trust believes that, just as newspapers are directed by IPSO or others to publish adjudications, services should have details of their breaches of a duty of care available to view on the platform. 

 

We are supportive of the Government’s intention to appoint Ofcom as the independent regulator to enforce a new framework for addressing online harms. However, we would like to see the Government consider how existing legislation might be applied as part of any future regulatory regime. For example, the Equality Act defines harassment, its effect and its application to (amongst other things) providers of services. It would be useful to understand whether the regulator will use existing legal powers to address hate-based harassment in particular contexts, considering those within the scope of its powers as ‘providers of services’.

 

September 2021

 


[1] https://cst.org.uk/research/cst-publications

[2] https://252f2edd-1c8b-49f5-9bb2-cb57bb47e4ba.filesusr.com/ugd/f4d9b9_cac47c87633247869bda54fb35399668.pdf

[3] https://extremism.gwu.edu/sites/g/files/zaxdzs2191/f/Antisemitism%20as%20an%20Underlying%20Precursor%20to%20Violent%20Extremism%20in%20American%20Far-Right%20and%20Islamist%20Contexts%20Pdf.pdf

[4] https://antisemitism.org.uk/wp-content/uploads/2020/08/Online-Harms-Offline-Harms-August-2020-V4.pdf p.9.

[5] https://antisemitism.org.uk/wp-content/uploads/2020/06/Web-Misogyny-2020.pdf p.7.

[6] Ibid.

[7] https://antisemitism.org.uk/wp-content/uploads/2020/06/APT-Google-Report-2019.1547210385.pdf p.5.

[8] https://cst.org.uk/public/data/file/7/4/JPR.2017.Antisemitism%20in%20contemporary%20Great%20Britain.pdf

[9] https://hansard.parliament.uk/Commons/2020-12-15/debates/1B8FD703-21A5-4E85-B888-FFCC5705D456/OnlineHarmsConsultation#contribution-F9803662-8EB6-48CE-AA8D-EB6007BA3994

[10] https://www.thejc.com/news/uk/shame-of-youtube-jew-hate-1.517411

[11] https://www.theguardian.com/music/2020/jul/25/wileys-management-firm-drops-grime-artist-over-antisemitic-tweets

[12] https://antisemitism.org.uk/wp-content/uploads/2020/06/APT-Google-Report-2019.1547210385.pdf

[13] https://antisemitism.org.uk/wp-content/uploads/2021/05/Unsafe-Search-Report.pdf

[14] https://www.dailymail.co.uk/news/article-8991925/Amazon-says-investigate-claims-Alexa-gives-racist-answers-questions-Jews.html

[15] https://cst.org.uk/news/blog/2020/06/11/hate-fuel-the-hidden-online-world-fuelling-far-right-terror

[16] https://antisemitism.org.uk/wp-content/uploads/2020/08/Online-Harms-Offline-Harms-August-2020-V4.pdf

[17] https://www.societyofeditors.org/wp-content/uploads/2018/10/SOE-Moderation-Guide.pdf

[18] https://www.hopenothate.org.uk/2021/06/07/hope-not-hates-response-to-the-draft-online-safety-bill/