Hannah Leeming—written evidence (FEO0048)

 

House of Lords Communications and Digital Committee inquiry into Freedom of Expression Online

 

Introduction

 

I am a third-year student at the University of East Anglia studying History and Politics BA. Modules such as Social and Political Theory and Parliamentary Studies have given me solid knowledge of this topic. As an avid user of online platforms, I have first-hand experience of how social media and internet intermediaries deal with freedom of speech among their users. I am submitting evidence because I can offer the knowledge and perspective of a young person online, an angle that may not be seen as often in parliamentary evidence. I will answer three of the questions set out in the call for evidence, as subtitled below.

 

Summary

 

        Online user-generated content is protected by Article 10 of the Human Rights Act 1998. In practice, the law is not adequately enforced: it covers only interference from public authorities, whereas decisions to remove online content are made by private companies. This is not necessarily unfavourable, but these ‘terms of service’ are not scrutinised as closely as British law is. They should be subject to public scrutiny.

 

        Lawful but harmful content has a negative effect on people, and for this reason it should be regulated. However, regulation could also restrict freedom of speech. So, instead of turning to further legislation, the government should work with online platforms to regulate content rather than impose restrictions.

 

        Online platforms need further direction in protecting freedom of expression on the internet. Placing platforms under further legislation is unlikely to be beneficial, given the ever-changing nature of the internet. The government should be wary of letting self-regulation continue as it is. It must be involved in promoting free speech online, finding a balance without inadvertently enabling harm.

 

Is online user-generated content covered adequately by existing law and, if so, is the law adequately enforced?

 

  1. Several laws cover online user-generated content. The Communications Act 2003 and the Malicious Communications Act 1988 protect the British public from harm. Under the former, sending electronic communications which are “grossly offensive” or “indecent, obscene or [of a] menacing character” is prosecutable.[1] Under the latter, harmful electronic communications such as threats and falsities are a criminal offence.[2] These laws adequately protect the public from online hatred, as they demonstrate that being threatening online will not be tolerated. Evidently, they are being enforced: in 2017 police were arresting nine people a day for online trolling, a steep increase on previous years.[3]

 

  2. Freedom of expression online is also protected by law. In the UK, user-generated content is safeguarded under Article 10 of the Human Rights Act 1998 (HRA). It ensures the right to hold opinions and to receive and impart information and ideas without interference from public authority.[4] Disagreeable or offensive speech is not criminal; anyone may express opinions online within reason. The right is subject to the interests of national security, public safety and the prevention of crime, to name a few examples.[5] Arguably, it is one of the most important fundamental freedoms stabilising democratic society.[6]

 

  3. In principle, online user-generated content should be adequately protected by this law. Online content was not in mind when the Act was created; nevertheless, its tenets transfer to expression in the cybersphere. In practice, however, the law is not adequately enforced. This has been seen online in recent years, including when Twitter suspended developer Victoria Fierce for swearing at the US Vice President, Mike Pence.[7]

 

  4. The problem with the HRA is that it protects speech against interference from ‘public authority’ only. Freedom of expression online is threatened not by the government but by the private companies that own content-sharing platforms. When registering with an internet intermediary, the user agrees to its ‘terms of service’: the rules that must be followed to remain on the platform, and the criteria against which accounts and speech are judged when the rules may have been broken. Platforms must abide by UK law for their websites to run (or risk access to them being blocked). However, this can lead to a lack of transparency as to why some content is removed while other content of a similar vein is not.
  5. Private decision-making behind closed doors does not necessarily mean that freedom of expression is abandoned.[8] It is clear that platforms want to cooperate with British and international freedom of speech laws through their terms of service. Yet when the government creates legislation on free speech and hate speech, it is subject to scrutiny by democratic chambers and civil servants, unlike online terms of service, which private companies decide for themselves.[9] This is concerning, as it means private platforms are constructing their own judgments on enforcing British law, which is not their prerogative.

 

  6. The ways in which social media companies exercise ‘private sovereignty’ in judging user-generated content on their platforms should be subject to public scrutiny, to avoid the emergence of wrongful conduct.[10] Consequently, the government should facilitate cooperation with the social media giants: MPs should work alongside these companies to build a harmonious relationship, and terms of service should be altered and agreed upon by both parties. Together they could foster an environment in which free speech is protected.

 

Should ‘lawful but harmful’ online content also be regulated?

 

  7. Lawful but harmful online content, such as cyberbullying, the spread of misinformation and depictions of self-harm, undoubtedly has a negative effect on people. Many have called for content regulation, wishing for access to harmful material to be restricted.[11] This is understandable: parents do not want their children to engage with harmful content, and its effects on the most vulnerable in society can be deadly.

 

  8. This type of content has real-life consequences, and it should therefore be regulated. Research has found that children who are victims of cyberbullying are more likely to display suicidal tendencies than those who are not.[12] Although technically legal, cyberbullying should be regulated; victims of these attacks suffer harm that goes beyond the damage done by the words themselves.[13] And although cyberbullying usually takes place in private chat rooms, more needs to be done to educate the population that online harms have real-life consequences.
  9. Similarly, Moonshot, an organisation that specialises in monitoring extreme online content, found that between 21 February and 17 April 2020 the use of hashtags mentioning coronavirus and encouraging or inciting violence against China and Chinese people increased by 300%.[14] Racism is a completely unacceptable form of speech, and platforms evidently need to work harder to eradicate all forms of this harmful content from their websites. It is detrimental to the psychological well-being of those on the receiving end, and at a time of acute mental health crisis this content needs to be regulated.

 

  10. It is difficult to regulate lawful but harmful content because the internet works around and beyond political boundaries.[15] No single rule can eradicate all harmful content. Nevertheless, with online abuse becoming increasingly common in today’s society, more needs to be done to tackle this behaviour.[16]

 

  11. However, regulating increasing amounts of online content could further restrict freedom of expression. Private decision-making would be amplified, possibly leading to flawed decisions and over-regulation of supposedly illicit content.[17] This would heighten the tension between the government and social media platforms.

 

  12. This is why, instead of introducing uniform regulations across all internet intermediaries, the government should cooperate with social media companies to create specialised procedures that regulate ‘lawful but harmful’ content without limiting freedom of expression.

 

Should online platforms be under a legal duty to protect freedom of expression?

 

  13. Online platforms should be given further direction by the government in protecting freedom of expression. As the examples above show, there have been clear abuses of free speech online in recent years. Social media can be shocking and distasteful, but it is an essential medium for freedom of expression - and there is no freedom at all if it is limited to expressing the views of the majority.[18] There needs to be communication between the government and social media companies about how freedom of expression can be effectively maintained online.
  14. Putting internet intermediaries under further legislation is unlikely to be helpful in the long run. Laws are blunt tools, usually too broad to be useful or too specific to be applicable in the ever-changing world of the internet.[19] The cybersphere is constantly shifting, and laws made today may not be applicable tomorrow. Introducing another piece of legislation to protect freedom of expression risks helping those who abuse free speech: as well as protecting legitimate users from having their content restricted, it could shield those who mean harm.

 

  15. Yet some guidance is needed. Parliament should be cautious of allowing self-regulation to continue as the only adjudicator; it must be part of deciding whether the behaviour platforms deem acceptable is also consistent with free speech.[20] MPs need to be involved in ensuring that the terms of service of each social media site operating in the UK are in line with our laws on freedom of expression, not just with what private moderators deem right or wrong. There need to be people behind the scenes safeguarding free speech, from the social media platforms as well as the government.

 

Conclusion

 

Overall, finding the balance between ensuring freedom of expression and protecting people from harm is a difficult task. On one hand, nobody deserves to be a victim of abuse on the internet; there are already too many casualties of illegal and legal hatred online. On the other hand, freedom of expression is an integral right in Britain. It must be protected at all costs, and sometimes this comes at the expense of offending people. Leaving online platforms alone to judge this by their privately created terms of service cannot continue. The government must have a say in how online content is judged to be unacceptable, and must be able to work with social media companies to foster a harmonious relationship and rules that both can agree on.

 

Recommendations

 

The recommendations I advise the Committee to take on board are as follows:

 

        Groups should be set up comprising MPs and employees of large social media companies such as Facebook, Twitter and Instagram. In these groups, discussions on the regulation of online harms and the protection of freedom of expression should take place. This would help the government understand how and why social media companies make certain decisions, and resolutions agreed by both parties could be fostered around the new challenges that social media sites face each day.

 

        The Government should work with each social media platform to create terms of service agreed upon by both. This would mean more transparency between the two parties, while also showing strength towards people who break the rules - the law and the platform are on the same side.

 

        The government should aim to introduce more education in schools about what is illegal online, as well as about ‘lawful but harmful’ content, to deter children and young adults from these behaviours. If adolescents who are beginning to use the internet are not warned about online harms, it will become increasingly difficult for social media companies to monitor ‘lawful but harmful’ content.

 

 

15 January 2021


 


[1]              Communications Act 2003, c. 21 (UK), s. 127(1)(a).

[2]              Malicious Communications Act 1988, c. 27 (UK), s. 1(1)(a) and (b).

[3]              Charlie Parker, “Police Arresting Nine People a Day in Fight Against Web Trolls,” The Times, 2017, https://www.thetimes.co.uk/article/police-arresting-nine-people-a-day-in-fight-against-web-trolls-b8nkpgp2d

[4]              Human Rights Act 1998, c. 42 (UK), Sch. 1, Article 10(1).

[5]              Human Rights Act 1998, c. 42 (UK), Sch. 1, Article 10(2).

[6]              Estelle Chambers et al., “How can the law in England and Wales be reformed in order to regulate offensive online communications with respect to the right of freedom of expression?,” The Student Journal of Professional Practice and Academic Research 1, 1 (2019): 105.

[7]              Casey Newton and Russell Brandom, “Twitter Is Locking Accounts That Swear at Famous People,” The Verge, February 24, 2017, https://www.theverge.com/2017/2/24/14719828/twitter-account-lock-ban-swearing-abuse-moderation.

[8]              Luca Belli, Pedro Francisco, and Nicolo Zingales, “Law of the Land or Law of the Platform? Beware of the Privatisation of Regulation and Police,” in Platform Regulations: How Platforms Are Regulated and How They Regulate Us (Rio de Janeiro: FGV Direito Rio, 2017), 46.

[9]              Alexander Brown, “What is so Special about Online (as Compared to Offline) Hate Speech?,” Ethnicities 18, 3 (2017): 315, https://doi.org/10.1177/1468796817709846.

[10]              Belli, Francisco and Zingales, “Law of the Land,” 46.

[11]              R. F. Jørgensen, “Internet and Freedom of Expression,” European Master Degree in Human Rights and Democratization, 2000–2001, Raoul Wallenberg Institute (2001): 2.

[12]              Ann John et al., “Self-Harm, Suicidal Behaviours, and Cyberbullying in Children and Young People: Systematic Review,” J Med Internet Res 20, 4 (2018): 11.

[13]              Chara Bakalis, “Rethinking Cyberhate Laws,” Information & Communications Technology Law 27, 1 (2018): 103.

[14]              “COVID-19: Conspiracy Theories, Hate Speech and Incitements to Violence on Twitter,” Moonshot CVE, May 7, 2020, http://moonshotcve.com/covid-19-conspiracy-theories-hate-speech-twitter/.

[15]              Kitsuron Sangsuvan, “Balancing Freedom of Speech on the Internet under International Law,” North Carolina Journal of International Law and Commercial Regulation 39, 3 (2014): 736.

[16]              Laura Bliss, “The crown prosecution guidelines and grossly offensive comments: an analysis,” Journal of Media Law 9, 2 (2017): 187.

[17]              Jennifer M. Urban, Joe Karaganis, and Brianna Schofield, Notice and Takedown in Everyday Practice (California: Berkeley Law School, University of California, 2017), 42.

[18]              Ammar Oozeer, “Internet and Social Networks: Freedom of Expression in the Digital Age,” Commonwealth Law Bulletin 40, 2 (2014): 351–352.

[19]              Mark Bunting, “From Editorial Obligation to Procedural Accountability: policy approaches to online content in the era of information intermediaries,” Journal of Cyber Policy 3, 2 (2018): 178.

[20]              Chara Bakalis, “Rethinking Cyberhate Laws,” Information & Communications Technology Law 27, 1 (2018): 110.