Written evidence submitted by Sara Khan, Former Lead Commissioner at Commission for Countering Extremism, and Sir Mark Rowley, Former Assistant Commissioner (2014-2018) at Metropolitan Police Service (OSB0034)

We are writing in response to your call for evidence, having collaborated on the Government’s Commission for Countering Extremism report on the inadequacies of the current state of UK law with respect to extremism. Our report, Operating with Impunity, was published in February 2021, and we would encourage the committee to consider its full analysis and conclusions.

However, we thought it helpful to draw out some key points in this letter. We would both seek to make ourselves available for personal testimony if that would be helpful.

1.       We define hateful extremism as: Activity or materials directed at an out-group who are perceived as a threat to an in-group motivated by or intending to advance a political, religious, or racial supremacist ideology: a) To create a climate conducive to hate crime, terrorism, or other violence; or b) Attempt to erode or destroy the fundamental rights and freedoms of our democratic society as protected under Article 17 of Schedule 1 to the Human Rights Act 1998 (‘HRA’).

 

In 2020, the then Lead Commissioner of the Commission for Countering Extremism, Sara Khan, appointed Sir Mark Rowley to lead the Commission’s legal review into hateful extremism. The review sought to identify whether there are gaps in existing legislation or inconsistencies in enforcing the law in relation to hateful extremism; and to make practical recommendations that are compatible with existing legal and human rights obligations.

 

2.       In February 2021, the Commission published its findings in the report, Operating with Impunity.

The report evidenced how hateful extremists are able to operate lawfully, both online and offline, due to a lack of legislation designed to capture the specific activity of hateful extremism. We evidence the ghastliness and volume of hateful extremist materials and behaviour which are currently lawful in Britain, including online. As a result, we concluded that this activity is creating a climate conducive to hate crime, terrorism or other violence, or is eroding and even destroying the fundamental rights and freedoms of our democratic society as protected under Article 17 of Schedule 1 to the Human Rights Act 1998.

 

3.       We encourage the Committee to read our full report, but in particular the Executive Summary and Chapter 6: Legal but Harmful: Online Extremism and the Proposed Online Harms Bill. The references to the evidence we have provided in this submission can be found in full in our report.

 

4.       You will see in our report the detail of the legal gaps that are of grave concern to us. For example, in certain circumstances as we describe, it is legal to ‘intentionally stir up racial or religious hatred’ and, furthermore, it is legal to ‘intentionally glorify terrorism’. These cracks in the law have always been there, but the ability of the online world to magnify the effect of such behaviours is, as we evidence, having a grave effect on communities.

 

5.       Chapter 6 evidences the threat and harm of online hateful extremism which is currently legal, in contrast to illegal terrorist content; the inconsistent and often inadequate approach taken by social media companies to remove such content, largely due to the lawfulness of such extremism; and our concerns about how the Government’s Online Harms Bill does not, and will not in its current form, provide the clarity or guidance required in tackling legal but harmful hateful extremism. As a result we will continue to see worrying levels of extremist activity online causing real harm offline despite the existence of the Bill.

 

6.       One of our three key recommendations to Government is to elevate hateful extremism to be a priority threat alongside terrorism and online child exploitation, and to implement the most robust proposals in the Online Harms White Paper. We also made recommendations in relation to closing existing gaps in criminal law that (as explained in detail in the report) mean that on occasion intentionally stirring up racial or religious hatred can be legal, as indeed can glorifying terrorism. An online harms bill in itself will not be able to deal with fundamental flaws in the criminal law, and its efficacy will thus be undermined.

 

7.       In the absence of a legal hateful extremism framework, our ability to counter online hateful extremism is severely reduced. The online world has connected and magnified extremist threats through the dissemination of extremist content and extremist conspiracy theories, and through online recruitment. On mainstream platforms, extremist content is often subtly disguised, utilising memes or drawings. On fringe sites, hateful extremist content can be explicit and graphic, and can advocate extreme antisemitic, anti-Muslim, Islamist, or other supremacist ideologies. Research suggests that online extremism can often have real world, offline harms.

 

8.       Over the last few years, the Government has made considerable progress in working to remove illegal terrorist content online, including through the Counter-Terrorism Internet Referral Unit (CTIRU), set up by Counter Terrorism Policing in 2010 to refer illegal terrorist content to technology companies for removal. However, the legal but harmful hateful extremist material we outline in our report is not being adequately captured, precisely because such content is lawful.

 

9.       The scale of lawful online extremism is eye-watering. Internal government research has shown, as one example, that on average during April and July 2020, 6,000 to 8,000 items of antisemitic content were uploaded every day to just one forum board, on just one platform. Another example of legal but harmful hateful extremist content is a video, promoted during the COVID-19 pandemic, which spread false and dangerous antisemitic conspiracy theories linked to COVID-19 and had been viewed over 5.9 million times by June 2020. Other research has found hundreds of thousands of Far-Right posts around COVID-19, and millions of engagements with known disinformation sites. The report Failure to Protect, published in August 2021 by the Center for Countering Digital Hate (CCDH), highlighted that online platforms failed to act on 89% of antisemitic conspiracies, and that just 5% of posts blaming Jewish people for the COVID-19 pandemic were addressed. As our report shows, however, there are no laws outlawing antisemitic and other hateful extremist conspiracy theories, which further compounds the problem.

 

10.   Social media platforms have often been ineffective in removing hateful extremist content. In November 2020, The Guardian reported that research from CCDH had uncovered how extremist merchandise had been sold on Facebook and Instagram to help fund Neo-Nazi Ukrainian groups. The Guardian reported that after “being contacted by The Observer, Facebook began taking down the neo-Nazi [sic] material”. A similar incident was picked up by The Sun, following CCDH research in Autumn 2020 regarding Daesh propaganda which featured beheadings and glorified the 9/11 terror attacks, reaching a combined 7,700 followers on Instagram. Facebook, Instagram’s parent company, removed the accounts only after The Sun had approached it for comment, despite Instagram having insisted the content did not breach its guidelines. This suggests a recurring, inconsistent, and ineffective approach taken by some social media platforms in regulating extremist content.

 

11.   The Government’s proposals for a strong regulatory regime have the potential to become a robust legal framework which would minimise the many legal and illegal harms that are occurring in society. However, at present there is no clear mechanism setting out how these powers would be applied to hateful extremism. The 2019 White Paper noted that, while extremist content and activity falls within its scope, extremism is a harm with a less clear definition (when compared with the definitions of terrorism or revenge pornography, for example).

 

12.   The full Government response (FGR) sets out that the regulatory framework will establish different requirements on companies in scope, with regard to categories of content and activity on their services. The categories will cover that which is illegal; that which is harmful to children; and that which is legal when accessed by adults but which may be harmful to them. However, our review suggests that certain ‘legal but harmful’ categories of hateful extremist material will still be able to exist. The White Paper offered no clarity on what the Government defines as extremism, or on the extremist content it believes should fall within that definition. No further clarity has been provided in the FGR, nor did the FGR to the Online Harms White Paper consultation engage with the concept of hateful extremism. The continuing lack of clarity about what the Government understands ‘extremism’ to be will prove difficult for any regulator to oversee, and will not provide the clarity that many social media companies are seeking.

 

13.   The Government may not define ‘extremism’ at all, instead leaving it to companies to decide, to set their own terms and conditions, and to enforce them. Such an approach will not produce consistency in how social media companies treat the same hateful extremist content. This is particularly pertinent, as much of this content is shared between different platforms. As highlighted above, social media companies often do not enforce their own terms and conditions with regard to hateful and violent extremist content. It is hard not to be sceptical about what these proposals could achieve with regard to hateful extremism, in the short or long term, or to believe that they would make any substantial difference to the growing and frightening threat of hateful extremism online.

 

14.   If a legal framework for hateful extremism is developed as we suggest, it could be incorporated into the Online Harms Bill and provide urgently needed clarity for both social media companies and the potential future regulator, Ofcom. In the absence of such a framework, we do not believe that the threat of online hateful extremism will be sufficiently minimised. We believe the Online Harms Bill needs to go much further in addressing online extremism, and that it will not in itself offer a sufficiently robust response to the prevalent and appalling hateful extremist activities and material online.

 

15.   We are also concerned about the framework set out in the FGR to the Online Harms White Paper, which will establish differentiated expectations on companies. We believe that a small number of high-risk, high-reach companies, which could include larger platforms such as Facebook or Google, could be required to address content which is legal but harmful to adults, as well as illegal material. However, the Government has not set out the criteria a company must meet to be classified as ‘high reach’. Companies in Category 2 will only need to address relevant illegal content and activity (although Category 2 companies will also have to address legal but harmful material where it affects children, if the service is deemed likely to be accessed by children). There is a risk that the most potent and divisive content and extremist ideologies may sit within Category 2 platforms, in which case that content may not need to be addressed. As we have laid out, it is the smaller platforms which propagate and host some of the most dangerous extremist content in Britain. While smaller platforms could still be considered a ‘Category 1 service’, for example on the basis of risk, we are concerned about the threshold and criteria used to determine which category a service should sit within.

 

16.   We are concerned that the new framework will not go far enough in tackling the spread of extremist conspiracy theories and disinformation, which have only increased in scale and reach in the last few years. As noted in our review, we are only concerned with those conspiracy theories which can reasonably be described as harmful and extremist. We commend the creation of the Government’s cross-Whitehall Counter Disinformation Unit, and their plans to include disinformation and misinformation that could cause significant harm to an individual within the scope of the framework duty of care, potentially as a priority harm. However, according to the FGR, the online safety framework will expect companies to take action against disinformation which is classed as illegal. Where disinformation content is illegal, it is almost always for other reasons – e.g. it falls under hate crime or incitement to violence and would become a matter for law enforcement, not the regulator.

The framework would not make disinformation or extremist conspiracy theories illegal in themselves, including the extremist antisemitic conspiracy theories highlighted by CCDH and in our own report. Instead, it will introduce requirements on platforms for how they should take action against certain kinds of disinformation. We are left no wiser about what these methods for tackling disinformation are, or whether they will be effective in removing extremist content.

 

17.   It is important to recognise that the harm of disinformation can be experienced by entire demographics and can impact local community tensions, even undermining our democratic institutions. Tackling such content is made harder because the majority of it will be considered legal but harmful. We suggest that the Government also needs to provide clarity on extremist conspiracy theories; the Government should devise a classification system for such conspiracy theories, based on the level of harm and potential risk of harm. In the absence of such a classification system, the regulator and social media companies are unlikely to know what should or should not be removed, especially if such conspiracy theories or disinformation are considered legal.

 

18.   A carefully devised, standardised classification system for extremist content, based on the scale of harm to individuals, to public order, and to our democracy, could (among other things) be included in a “code of practice” for countering hateful extremism. This classification system could also include extremist conspiracy theories and disinformation. In effect, a classification system could become a guide and a reference point for the regulator and social media platforms, providing transparency, clarity, conformity and consistency.

 

19.   Therefore, although we support and credit the Government’s desire for a safer world online, we believe they need to go much further in tackling extremist content online. The framework only mentions extremism in passing, and places too much trust in service providers to tackle this growing issue. The White Paper also serves as a reminder of the need for a more robust legal footing for countering hateful extremism, noting that: “The regulatory approach will impose more specific and stringent requirements for those harms which are clearly illegal, than for those harms which may be legal but harmful, depending on the context”.

 

20.   As we have already outlined, we believe our laws have failed to evolve in response to the threat of hateful extremism. It is our belief that a great deal of the online hateful extremist activity we have evidenced in our report should be designated illegal but is currently classified as lawful. If we want to counter hateful extremism, this needs to change urgently. As the Online Harms Bill is primarily about regulating on the basis of existing legal frameworks, introducing a legal framework to tackle hateful extremism would help provide greater clarity, not only to users but also to social media companies and regulators.

21.   To close, we feel that to omit hateful extremism from the Online Harms Bill would be a missed opportunity to reduce the incidence of the very real harms and threats that are all too prevalent within our society. We would encourage the committee to take full advantage of the extensive research we undertook, and to act on the clear message of our analysis and findings.

 

22.   We hope you have found this letter and its summary points of assistance, and we are both available to give further testimony in person if the committee would find this helpful.

 

Yours sincerely,

 

 

 

Sara Khan, Former Lead Commissioner at the Commission for Countering Extremism, 2018-2021

Sir Mark Rowley, Former Assistant Commissioner for Specialist Operations of the Metropolitan Police Service, 2014-2018

 

 

13 September 2021


Annex 1: Diagrammatic representation of existing legislation and the gaps in criminal law which permit hateful extremist activity in Britain.