Written evidence submitted by Legal to Say, Legal to Type (OSB0049)

 

 

About Legal to Say, Legal to Type

 

‘Legal to Say. Legal to Type.’ is a coalition of civil society organisations and industry groups formed in response to the publication of the Draft Online Safety Bill. The campaign is calling on the government to amend the Bill and ensure the internet is kept “free, open and secure”.

 

Summary of key points

 

●        The bill will create two tiers of free speech online: free speech for journalists and politicians, and censorship for ordinary citizens.

●        The bill’s introduction of the “Duty of Care” model is overly simplistic. The lack of an explicit definition of the term “harmful” will effectively outsource decisions about what can and cannot be said online from the UK parliament and courts to Silicon Valley tech companies. The threat of large fines will create a commercial incentive to over-censor.

●        The bill will make it harder for the police to hold online abusers properly accountable, as it forces platforms to delete valuable evidence before the victims of targeted harassment or threats can report it to the relevant authorities.

●        Law enforcement and regulatory agencies must be equipped with the necessary resources and skills to effectively police the expanding online space.

●        This bill sets a dangerous international precedent and could lead to a more authoritarian clampdown on the use of legal speech online.

●        The government’s intention to hand regulatory oversight to Ofcom - already the adjudicator for other forms of communication - would be an unprecedented consolidation of power.

●        There must be consistency across society: if it is legal offline, it must be legal online.

 

 

 

Q. Will the proposed legislation effectively deliver the policy aim of making the UK the safest place to be online?

 

No. This bill will create a more dangerous online space by failing to hold abusers to account while simultaneously over-censoring the general public.

 

Under this proposed bill, platforms will prioritise removing material as quickly as possible, and it is not clear how long each platform archives deleted content. Social media content can play a vital role in identifying and prosecuting criminals, and the automatic removal of content means abusers cannot be held to account. Deleting content before law enforcement can use it in their investigations will make it harder to prosecute criminals[1]. Likewise, removing this content immediately may make it harder for victims to identify threats to their safety. The Online Safety Bill offers protection to trolls and abusers by deleting their content rather than prosecuting them.

 

Racism, homophobia, and transphobia have no place in society and they are already illegal. The problem is not a lack of legislation, it is a lack of adequate resources to tackle these issues. To make the internet safer, resources and funding must be directed towards law enforcement and education - to tackle the root causes of this behaviour. By failing to do so, this bill will actually make it harder for law enforcement to properly hold online abusers to account.

 

Q. Will the proposed legislation help to deliver the policy aim of using digital technologies and services to support the UK’s economic growth? Will it support a more inclusive, competitive, and innovative future digital economy?

 

No. This bill will create significant barriers to entry for new British startups hoping to enter the digital economy. Startups will be forced to commit significant time and resources to installing content monitoring and filtering systems, or face large fines which could put them out of business. This will disincentivise competition and further monopolise the digital sector, concentrating power in Silicon Valley.

 

Furthermore, smaller platforms that are based abroad may choose to cut off UK users from their services entirely rather than take on the regulatory burden of the bill. This will lead to a less inclusive and innovative digital economy in the UK.

 

 

 

 

Q. Is the “duty of care” approach in the draft Bill effective?

 

The bill's introduction of the “Duty of Care” model is overly simplistic. The lack of an explicit definition of the term “harmful” will effectively outsource decision making on what can and can’t be said online from the UK parliament and courts to Silicon Valley tech companies. The threat of large fines will incentivise firms to define harm extremely broadly and lead to over-censorship.

 

This will be exacerbated by the use of content moderation algorithms, which disproportionately censor marginalised voices. In 2018, Tumblr implemented a blanket ban on all adult content, and its system repeatedly misclassified innocuous LGBTQ+ content as inappropriate and removed it. There have also been examples of platforms exhibiting racial biases: an image-detection algorithm was found to be cropping out Black faces in favour of white faces[2]. A 2019 study showed that leading AI models for processing hate speech were one-and-a-half times more likely to flag tweets as offensive or hateful when they were written by African Americans, and 2.2 times more likely to flag tweets written in African American English[3].

 

If something is legal offline, it should be legal online. If the government believes that particular content should be criminalised online, it should address this through parliament and the courts, not big tech. The duty of care principle must be removed or amended to apply to illegal content only. Where there are gaps in the current law, parliament should step in to fill them - and include specific protections for ordinary citizens, especially marginalised communities whose interactions with online media are threatened by this bill. This must be done in accordance with existing UK equality legislation.

 

 

Q. Does the Bill deliver the intention to focus on systems and processes rather than content, and is this an effective approach for moderating content? What role do you see for e.g. safety by design, algorithmic recommendations, minimum standards, default settings?

 

A focus on systems and processes - while the correct approach - is held back by poor definitions and confusion around what should be removed. In order for these systems to operate properly, there must be a clear definition of harm.

 

Because they are built on discriminatory datasets, algorithms can create a more hostile online environment: for example, the way in which content is moderated means that it is twice as likely to be deleted if it is in Arabic or Urdu than if it is in English. Sites like Tumblr have opted for a full-scale AI-driven ban on all “not suitable” content. This has removed entirely innocuous content and censored a platform that has been central to the LGBTQ+ community for self-expression and development[4].

 

Altering content moderation algorithms cannot be done retrospectively. A 2016 research paper from India highlighted that the technical, societal, and ethical considerations of data-driven AI decision making must be addressed before such systems are deployed. Failing to do so means the foundation of these systems is “inherently flawed”[5].

 

Allowing AI-driven decision making systems to operate without clearly defining their scope is reckless, counterproductive and will lead to mass over-censoring of online content.

 

Q. How does the draft Bill differ to online safety legislation in other countries (e.g. Australia, Canada, Germany, Ireland, and the EU Digital Services Act) and what lessons can be learnt?

 

Looking at similar legislation from around the world highlights the risk this bill poses. In June 2020 the French Constitutional Council found large parts of France’s online hate speech legislation unconstitutional[6]. It breached the legality test for “impermissible vagueness”: users must know with certainty whether they will be legally obliged to delete content they are about to post. This bill threatens to do the same.

 

The fines under this bill are disproportionately large. Failure to remove “legal yet harmful” content will result in fines up to 10% of annual turnover. In Facebook’s case, this would be £8.8 billion. Compare this to Germany’s Network Enforcement Act, which limits fines to £20 million. The incentive in the UK will be to over-censor. Silicon Valley will control what we say and do online.

 

The UK must realise its power in the global community and be cognisant of other countries looking to it for guidance on these matters. Recently, the Pakistan government published its Citizens Protection (Against Online Harm) Rules[7], which have appropriated the term “online harm” and lifted the UK’s definition of “extremism” from its “Prevent” strategy. This definition is overly broad and has been criticised as “inherently flawed” by the UN Special Rapporteurs on the rights to freedom of peaceful assembly and of association and on counter-terrorism[8].

 

 

Q. Does the proposed legislation represent a threat to freedom of expression, or are the protections for freedom of expression provided in the draft Bill sufficient?

 

This bill will be catastrophic for free speech. The threat of large fines provides a financial incentive to over-censor. We have already seen this in action when it comes to ad revenue - Facebook has repeatedly banned adverts with LGBTQ+ themes[9].

 

It is clear that even those writing this bill do not believe the protections for freedom of speech are sufficient - if they were, they would not feel the need to create a special exemption for politicians.

 

According to Article 10 of the European Convention on Human Rights[10] and the UK’s own Human Rights Act 1998[11], restrictions on freedom of expression must be prescribed by law and necessary in a democratic society for a legitimate aim. By restricting speech that is legal, this bill fails the ‘prescribed by law’ test.

 

It will also censor the views of marginalised groups for whom online spaces are particularly important. This includes the LGBTQ+ community, whose innocuous content has been removed from platforms in the past[12] due to it being perceived as inappropriate or harmful. This is an infringement of basic human rights and must be amended. These groups have come to rely on social media platforms for self-expression and development[14]. 90% of LGBTQ+ young people say they can be themselves online, and 96% say the internet has helped them understand more about their sexual orientation and/or gender identity[13].

 

 

Q. The draft Bill specifically places a duty on providers to protect democratic content, and content of journalistic importance. What is your view of these measures and their likely effectiveness?

 

The Online Safety Bill will create two tiers of free speech: free speech for journalists and politicians, and censorship for ordinary British citizens. The exemptions in the bill are both unfair and unworkable.

 

The “journalistic content” exemption will censor ordinary citizens while allowing journalists to speak freely online. This is another outsourcing of our rights to Silicon Valley, which - given the lack of a clear definition of “journalism” - will decide what is journalism and what is not. This sets a dangerous precedent and will censor countless citizen journalists and online news sources. Citizen journalism is particularly important in conflict situations: prosecutors have secured convictions against individuals linked to war crimes in Iraq and Syria in at least 10 cases that involved videos and photos shared over social media.

The government must address this gap and make clear how non-traditional news sources will be handled under this bill.

 

“Democratic content”, or “content promoting or opposing a political party”, is a weak definition that could provide a platform for extremist content and ideology. Far-right or extremist public figures like Tommy Robinson have been banned from social media in the past for breaching hate-speech rules, yet given that he has previously stood in elections, it is unclear whether this bill would exempt him on political grounds. Failure to properly define who is covered by this exemption will mean that people like Robinson, or former politicians and activists, could be given special privileges online. This further divides society by creating one rule for politicians, public figures, think-tanks, and public bodies, and another rule for everyone else.

 

 

Q. Are the definitions in the draft Bill suitable for service providers to accurately identify and reduce the presence of legal but harmful content, whilst preserving the presence of legitimate content?

 

No. By failing to clearly define what constitutes harm this bill will damage the online space and remove completely legitimate and beneficial content.

 

Due to the lack of a clear definition of what is deemed “harmful”, service providers will over-regulate and over-censor to avoid being financially penalised, which will both facilitate criminal activity and remove legitimate content of historical significance. We will be outsourcing our decision making to Silicon Valley. Simply put, if the Government believes that a type of content is sufficiently harmful, it should be criminalised.

 

Failure to properly define what is legal has led to the removal of content that is of significant historical and legal importance. In 2016 Facebook was forced to back down in the face of public controversy after removing the famous ‘napalm girl’ image from the Vietnam War[15] due to an alleged breach of its content policies.

 

The media have long played a role in exposing atrocities and war crimes across the globe by documenting instances of human rights violations and sharing them on online platforms. This content - from journalists and civilians alike - is often used in formal investigations to help convict the perpetrators[16]. Yet under this bill, social media companies would remove this content before it can go towards exposing these crimes and holding the abusers to account. This bill will enable the continuation of war crimes and human rights abuses by disposing of evidence before it can come to light.

 

 

 

Q. What role do algorithms currently play in influencing the presence of certain types of content online and how it is disseminated? What role might they play in reducing the presence of illegal and/or harmful content?

 

The algorithms currently used by social media platforms are designed to drive engagement, meaning the most shocking content often rises to the top of users’ feeds. These algorithms reward controversy, drive polarisation, and destroy nuance. Under the proposed bill, the use of algorithms will be expanded, creating a more problematic online environment.

 

Algorithms can perpetuate online racism. In September 2020, an image-detection algorithm used by Twitter was found to be cropping out Black faces in favour of white ones. In a 2019 study[17], researchers found that leading AI models for processing hate speech were one-and-a-half times more likely to flag tweets as offensive or hateful when they were written by African Americans, and 2.2 times more likely to flag tweets written in African American English. The conscious and unconscious biases of Silicon Valley coders will determine how people in the UK are able to express themselves online.

 

Algorithms cannot detect context or nuance. For survivors of sexual abuse or other forms of trauma, it can often be helpful to share their experiences with other survivors. By necessity, descriptions of these experiences will be explicit and may contain strong or offensive language. The Bill could prevent the formation of support networks online by allowing algorithms to remove this content. This will reinforce existing stigmas that silence people and stop them coming forward to talk about their experiences.

 

Algorithms contribute to discrimination and marginalisation. Sites like Tumblr have opted for a full-scale blanket ban on all “not suitable” content, the removal of which is powered by AI-driven content recognition. The effect of this has been to eradicate a platform that has been central to the LGBTQ+ community for self-expression and development[18].

 

By quickly removing illegal content, algorithms will make it harder for the police to identify and prosecute criminals. Social media content has become central to identifying and prosecuting criminals, and by allowing algorithms to remove it, abusers cannot be held to account[19].

 

We do not need to outsource our decision making to Silicon Valley. Algorithms should not dictate what we can and cannot say online.

 

 

Q. How will Ofcom interact with the police in relation to illegal content, and do the police have the necessary resources (including knowledge and skills) for enforcement online?

 

We do not need a new bureaucratic or regulatory framework. Racism, homophobia, and transphobia have no place in society, but they are already illegal. Legislation governing online activity already exists[20] and should be expanded to address the rapidly developing online space.

 

This existing legislation must be accompanied by significant resources, training, and toolkits for law enforcement to police it and monitor criminal activity online. A 2016 Middlesex University research paper found that only 1 in 6 police officers who had investigated online grooming or indecent images of children had been given any “specific” training[21]. In addition, after the system for flagging online hate crime was found to be underused by the police, the Home Office stopped including these figures in its annual report altogether[22].

 

We do not need any bureaucratic or regulatory overreach. We must give traditional law enforcement the resources and training they need to tackle egregious online behaviour.

 

20 September 2021

 

 

 

 

 

 

 


 


[1] “Video Unavailable”: Social Media Platforms Remove Evidence of War Crimes (Human Rights Watch, 2020)

[2] Twitter apologises for 'racist' image-cropping algorithm (The Guardian, 2020)

[3] The Risk of Racial Bias in Hate Speech Detection (Sap et al., 2019)

 

[4] Why Tumblr’s ban on adult content is bad for LGBTQ youth. The Conversation, 2018

[5] Artificial intelligence policy in India: a framework for engaging the limits of data-driven decision-making (Murda, 2016)

[6] French constitutional court blocks large portion of online hate speech law (Euronews, 2020)

[7] Government of Pakistan, Ministry of Information Technology and Telecommunication (Information Technology and Telecommunications Division): Citizens Protection (Against Online Harm) Rules

[8] Pakistan: Online Harms Rules violate freedom of expression (ARTICLE 19, 2020)

[9] Facebook blocked many gay-themed ads as part of its new advertising policy, angering LGBT groups (The Washington Post, 2018)

 

[10] Article 10 European Convention on Human Rights

[11] Human Rights Act 1998 (HM Gov, 1998)

[12] Tumblr's adult content ban dismays some users: 'It was a safe space' (The Guardian, 2019)

[13] Stonewall. (2017) Stonewall School Report 2017

[14] The mental health and experiences of discrimination of LGBTQ+ people during the COVID-19 pandemic: Initial findings from the Queerantine Study (Dylan Kneale and Laia Bécares)

[15] Facebook backs down from ‘napalm girl’ censorship and reinstates photo (The Guardian, 2016)

[16] “Video Unavailable”: Social Media Platforms Remove Evidence of War Crimes (Human Rights Watch, 2020)

[17] The Risk of Racial Bias in Hate Speech Detection (Sap et al, 2019)

[18] Why Tumblr’s ban on adult content is bad for LGBTQ youth. The Conversation, 2018

[19] “Video Unavailable”: Social Media Platforms Remove Evidence of War Crimes (Human Rights Watch, 2020)

[20] Defamation Act 2013; Public Order Act 1986 (including the section 5 offence); Protection from Harassment Act 1997; Terrorism Act 2000, section 57; and the law of defamation

[21] Enhancing Police and Industry Practice (Middlesex University, 2016).

[22] UK stopped analyzing online hate data after police ‘underused’ system. Politico, 2021.