International Observatory of Human Rights—written evidence (FEO0046)

 

House of Lords Communications and Digital Committee inquiry into Freedom of Expression Online

 

Summary

 

1.              This submission by the International Observatory of Human Rights (IOHR) recognises the need for further protection of freedom of expression online.

 

2.              This submission examines the legal duties and obligations of online platforms, what could be considered ‘lawful but harmful’ content, the potential for international collaboration, the role of digital citizenship, and the differences between online and offline freedom of expression.

 

3.              It then considers transparency and the role that algorithms play, and concludes with recommendations for improving regulation within existing legal frameworks.

 

Is online user-generated content covered adequately by existing law and, if so, is the law adequately enforced? Should ‘lawful but harmful’ online content also be regulated?
 

  1. This is a complicated question, and the answer is multifaceted. One must consider the events of the last year and how the lines between individuals’ offline and online lives have become blurred. In many ways, and increasingly, there is little distinction between the two, and as such we must ensure that we do not overly regulate and restrict people's rights online in ways that we would not seek to offline.

 

  2. In this way, the existing British legal framework of illegal activity - which has been formed through democratic process - adequately and legitimately provides the starting parameters of what should and should not be permitted online. Beyond that, rights such as freedom of expression tend to thrive when left as unregulated as possible.

  3. The Law Commission is currently reviewing which kinds of internet offences people may be prosecuted over. This will hopefully lead to greater clarity and a more coherent implementation of the rules.[1] In its consultation, the Law Commission seemed to suggest criminalising actions which are “grossly offensive”; this is overly broad and would have a chilling effect on freedom of expression.

  4. Equally, the international human rights framework provides a good basis for how states should approach freedom of speech online. While different national instruments or platform community guidelines might vary, they should all be rooted in the UN Guiding Principles on Business and Human Rights. International law, such as Article 19 of the ICCPR[2], also provides clear parameters for the restriction of freedom of expression as a limited right.

  5. These international laws have also been imported into domestic legislation through the Human Rights Act 1998.

  6. However, as recognised in the Online Harms White Paper, the utility of the internet is hindered by actions that are “lawful but harmful”[3]. For example, people could use their freedom of expression to bully someone. It is also noteworthy that, offline, there is precedent for taking action against harms without criminalising them (e.g., promoting sensible gambling or a recommended weekly alcohol consumption).

  7. Similar to the labelling of TV content to protect viewers from harm, a standardised framework across platforms should be established to ensure continuity in the approach to legal but harmful content. Should similar labelling be explored as a remedy to legal but harmful content, to avoid the arbitrary removal of legitimate freedom of expression?

  8. The major problem is that harms of this nature can vary greatly, and it is not always possible to neatly distinguish between what is and is not acceptable. Particularly concerning is the language around the concept of legal but harmful in the Online Harms White Paper. Current proposals create a distinction between what speech is permitted offline and what is permitted online.

  9. With illegal harms, current legislation is largely clear and consistent (for example, the Global Internet Forum to Counter Terrorism largely binds the UK government on what can be considered illegal terrorist content online[4]). Neatly identifying and reacting to less clear, legal harms, such as disinformation, is more complicated. The White Paper tries to sidestep this issue by avoiding the identification and definition of these harms.

  10. This ambiguity will lead to platforms becoming overly cautious for fear of reprisal. In turn, this is likely to result in platforms expanding interventionist actions that disrupt freedom of speech and lead to arbitrary removals of its legitimate exercise.

  11. As a result, one must ask whether further legislation is the best way to address ‘lawful but harmful’ content online. A better approach might be ensuring that platforms’ terms of service are brought up to international human rights standards and that actions relating to the protection of freedom of speech, such as content removal, have appropriate safeguards (e.g., increased transparency and complaint reporting).

  12. The root causes of these harms should also be addressed by increasing competition, promoting digital citizenship and growing digital literacy throughout society.

 

Should online platforms be under a legal duty to protect freedom of expression?
 

  1. Currently, platforms are only liable for content once they have been made aware of it. This is in line with the e-Commerce Directive and our current laws[5].
     
  2. Given the growing prominence of these platforms, it is right that liability laws are reviewed.
     
  3. That being said, there are dangers for freedom of expression in making platforms liable for any and all illegal or harmful content on their services. In this scenario, platforms would be required to monitor all communications, and decisions about what is and is not illegal might be made by the platforms rather than by the courts under our domestic laws.
     
  4. Increasing liability often leads to over-censorship, as platforms inadvertently crack down on lawful content to ensure they are shielded from legal challenges. It also raises fears around the privacy of users, which affects users' confidence in exercising their freedom of expression.

 

  5. The mooted appointment of Ofcom as regulator of harmful content, as outlined in the Online Harms White Paper, may not be the best model for regulating online platforms. The Online Harms Foundation has questioned its effectiveness in limiting bad actors, who it states are more likely to be found on smaller platforms than on the big tech platforms covered by the current version of the regulatory proposal.

  6. However, current UK regulation - or self-regulation - requires voluntary compliance and has resulted in terms of service falling short of international human rights standards where the protection of freedom of speech is concerned.

  7. One way liability laws can be improved is around transparency. For example, platforms should be more transparent on the issue of algorithms and AI. Algorithms and artificial intelligence are widely used for content removal and for prioritising search results, both of which directly relate to freedom of expression and as such should be transparent and open to scrutiny, with redress available where they are not working.

  8. The Online Harms Bill seeks to “help shape an internet that is open and vibrant but also protects its users from harm.”[6] However, without a wider published strategy on freedom of expression (particularly around disinformation and misinformation) the government risks treating too many areas of government responsibility as the prerogative of internet companies.

  9. It is neither helpful nor advantageous to pretend that the steps suggested in the White Paper are simply consequences of a general duty of care. Freedom of expression, as well as the regulation of harmful content, is not best served through rigid codes of practice.

  10. Without greater clarity on a number of key issues - the definition of harms, the scope of ‘legal but harmful’, how much discretion Ofcom (or another regulator) would have, and accountability and due process - the Online Harms Bill is likely to have a chilling effect on freedom of expression.

  11. While liability laws do present opportunities to address the issue of online harms, the Government must also provide clear guidance on the correct procedures for removing illegal content online, consistent with due process safeguards and the protection of freedom of expression in international law.
     

Are there examples of successful public policy on freedom of expression online in other countries from which the UK could learn? What scope is there for further international collaboration?
 

  1. Some states perform comparatively better than others (e.g., Iceland consistently tops Freedom House's Freedom on the Net rankings[7]), but no clear success story has emerged that can be matched to individual country environments.
     
  2. States have typically pursued one of three models: self-regulation (or regulation by contract), interventionist regulation, and co-regulation.[8]
     
  3. Similar to the current UK policy, the United States opts for regulation by contract: companies have conditional immunity from liability as long as they uphold their own terms of service. In the US this is largely covered by Section 230 of the Communications Decency Act[9].
     
  4. As mentioned above, regulation by contract relies on voluntary compliance by platforms, as well as on users of those platforms understanding the rights protected in the codes of conduct. These models are easily manipulated to circumvent the rule of law, and it is often hard to hold platforms to account.[10]
     
  5. On the flip side is interventionist regulation. Such frameworks are characterised by states accumulating disproportionate censorship powers, which may manifest through content-blocking, fines and even prison sentences.[11]
     
  6. In interventionist models, the laws that underpin regulators' powers are often overly broad. This problem is accentuated in countries where the regulator lacks independence. Interventionist policies make the state the ultimate arbiter of what speech is and is not acceptable, which poses a number of problems.
     
  7. The third method, a co-regulatory framework such as that implemented in the EU’s new Digital Services Act, differs from interventionist regulation but ultimately suffers from the same shortcomings. These frameworks involve tighter regulation that is encouraged or supported by the state; while the state is not the ultimate arbiter, they tend to give state institutions too much scope over the regulation of online content.
     
  8. As mentioned previously, domestic and international laws and the UN Guiding Principles present a good starting point on which platforms should base their terms of service. This could pave the way for a new framework in which it becomes the role of the state to address cases where it deems platforms to have fallen short of these standards - effectively acting as a grievance procedure. The NGO Article 19 has proposed a similar framework, and in practice “believe[s] that the creation of a new cause of action could be derived either from traditional tort law principles or, as noted above, the application of constitutional theory to the enforcement of contracts between private parties”.[12]
     
  9. In relation to digital regulation, most commentators feel that previous efforts at international cooperation have led to the establishment of overly broad censorship powers. These efforts have in turn had a chilling effect on freedom of expression, often making states the ultimate arbiters of what can and cannot be expressed freely online.
     
  10. Increased international cooperation also offers opportunities for the advancement of freedom of expression, primarily by increasing global access to the internet - which in turn offers the chance to improve digital citizenship - and by boosting competition.

 

  11. Coordinated action presents the best opportunity to bridge existing gaps in internet access. At the turn of the century, efforts such as the G8 ‘Digital Opportunity Task Force’ and its ‘Genoa Plan of Action’ were integral in extending internet access to individuals in Least Developed Countries.

  12. Accompanying digital literacy programmes ensured that newly connected individuals were taught good digital citizenship skills, increasing utility for all internet users and expanding the right to freedom of expression in the process.

  13. Monitoring the progress of the EU Digital Services Act, with a view to closer liaison and cooperation in the future, may assist in the development of best practice and standardised frameworks.

  14. More recent initiatives such as the International Partnership on Information and Democracy, a non-binding agreement between 38 countries to “promote and implement democratic principles in the global information and communication space”, also play a valuable role in advising on best practice in different national scenarios[13].

  15. The UK is also a member of the ‘Freedom Online Coalition’, a partnership of 31 governments working closely together to coordinate their diplomatic efforts and engage with civil society and the private sector to support internet freedom - freedom of expression, association, assembly, and privacy online - worldwide.[14]

  16. There is also more scope for international cooperation to drive competition. At the moment, a small number of platforms hold all the power. This may make freedom of expression susceptible to infringement if malicious states manage to pressure these gatekeepers into changing their terms of service.

  17. As in most markets, increased competition raises consumer welfare standards. When coupled with greater digital citizenship, consumers can shop around between terms of service, and platforms would be incentivised to improve the protections afforded to their users.

 

How should good digital citizenship be promoted? How can education help?
 

  1. Digital citizenship can be promoted in an education setting by creating the conditions for students and children to develop and take ownership of responsibility and safety. Teaching this in schools can help young people understand how to leave a responsible digital footprint in the world.
     
  2. Education can be successful in promoting digital citizenship, and several initiatives are already in place. However, the majority of these are integrated into the Department for Education’s existing statutory curriculum on relationships, sex, and health education, which is followed in PSHE and Citizenship within schools.[15]
     
  3. Given the current climate in which learning is mostly online, there is a strong case for creating digital citizenship as a standalone subject within the curriculum, to support children in navigating more safely the online world in which they increasingly spend their time.[16]
     
  4. Organisations are working on developing teaching materials that can be used as a means of creating greater understanding of what digital citizenship is and how it can be promoted amongst young people.[17]
     
  5. The challenge, however, remains in developing platforms to a level at which students feel they are genuinely interacting with each other. This has become even more pertinent in the COVID and post-COVID eras, in which online teaching has become the rule rather than the exception.

 

Is freedom of expression under threat online? If so, how does this impact individuals differently, and why? Are there differences between exercising the freedom of expression online versus offline?
 

  1. As mentioned at the beginning of this submission, this past year has shown us how our online lives and offline lives are increasingly converging. With that in mind, one must ensure that regulation online does not overly infringe on the right to freedom of expression in ways we would not seek offline.
     
  2. That being said, some online harms are distinguishable and distinct from threats to freedom of expression offline, and as such the remedies might differ too.
     
  3. When freedom of expression is under threat, one of the first indicators in today's social media world is self-censorship. Legitimate subjects become taboo for open discussion if they breach the socio-political correctness of the Zeitgeist. This is not to suggest that hate speech or harmful views are acceptable; however, freedom of expression can be stifled for moderate voices where open debate, or a different opinion, is subdued for fear of social sanction from a louder minority.
     
  4. In June 2020, JK Rowling posted a tweet to her 14.2 million Twitter followers, commenting on what she perceived as a misappropriation of language on the issue of sex. She said:
     

‘People who menstruate.’ I’m sure there used to be a word for those people. Someone help me out. Wumben? Wimpund? Woomud?
 

  5. We use this example not to take a position on what JK Rowling said, but to look at the response to her voicing her opinion. While recognising that this debate is particularly charged, voicing an opinion on either side of it does not justify a backlash of such magnitude that it might lead to self-censorship in the future.

  6. As the committee will know, many Members of Parliament are consistently threatened, harassed, and targeted online. The cost of free speech should never be death threats and violence.

  7. The instantaneous speed and reach of the online echo chamber has removed the ability to revise a comment, give context or add details. Retrospectively, there is little to protect against words escalating into intimidation or mob mentality, which can lead to threats and bullying online. Few people would scream abusive comments at someone in person, yet in an online setting this has become acceptable behaviour.

  8. These examples are far from isolated. Online platforms typically remove nuance from public debate, amplify the voices of the most extreme positions, create echo chambers, and ultimately leave many individuals less likely to exercise their right to freedom of expression for fear of reprisal in the form of mob justice. Improved digital literacy and a focus on digital citizenship from a young age could address many of these root causes.

  9. Anecdotally, Jon Ronson’s book “So You’ve Been Publicly Shamed” looks at how social media users have increasingly become the judge, jury, and executioner of the internet - often to devastating effect. The right to a fair trial is a pillar of a healthy democracy, and it is not the prerogative of social media users to decide what is and is not wrong.

  10. The introduction of digital literacy learning and online etiquette must be reinforced with a framework of social sanctions to ensure adherence. ‘Time offline’ for abusive or threatening language has already been used in the US; perhaps it is time to bring that approach to a local level to ensure there is a safe space for discussion.

 

How could the transparency of algorithms used to censor or promote content, and the training and accountability of their creators, be improved? Should regulators play a role?

 

  1. The Trust Project in the United States created a set of Trust Indicators for news sites around the world to build into the algorithms used to promote news. Its stated mission is:
     

“To amplify journalism’s commitment to transparency, accuracy, inclusion and fairness so that the public can make informed news choices.”[18]

  2. The framework of these indicators, and the methodology used to create, train, and monitor them, is applicable to a wider content pool. They work because the volume of sign-ups by content providers, news aggregators and developers has created a standard of compliance and a joint need to innovate under the same framework.

  3. The UK would benefit from a dedicated central independent body to bring together relevant parties to form an advisory board to guide the development, implementation, and training of such algorithms.

 

Recommendations
 

1.          Digital literacy training needs to be continuous and easily accessible throughout society, as it is essential for addressing the root causes of online harms.

 

2.          Digital citizenship therefore needs to be promoted from a young age and added as a standalone subject in the UK national curriculum.
 

3.          The regulation of online content should be led by the existing British legal framework of illegal activity and the protections afforded through the Human Rights Act 1998. This is particularly important in the current context as our online and offline lives increasingly converge.
 

4.          Online platforms’ terms of service should be rooted in the UN Guiding Principles on Business and Human Rights.
 

5.          Article 19 of the ICCPR already provides clear guidance on what falls outside the legitimate exercise of freedom of speech; this should be used as a guideline for content removal and redress. States should then act as a grievance procedure where standards fall short.
 

6.          The fact that international law explicitly states what can and cannot be exercised through the right to freedom of expression makes the “legal but harmful” classification problematic. If the government does include the classification, it must do more to clearly define these types of harms to ensure it does not lead to censorship of the legitimate exercise of freedom of speech.
 

7.          Government regulation should focus on ensuring greater transparency, reporting and accountability in the use of algorithms and artificial intelligence.
 

8.          International cooperation should focus on increasing transparency, competition, and the expansion of digital citizenship.
 

9.          To establish a distinct independent regulator within Ofcom to advise on, regulate and oversee the creation of the framework underpinning the use, deployment, and training of these algorithms. Ultimately the regulator must ensure that their use delivers the stated purpose and does not contribute to other undisclosed activities. This structure has already been used where critical industries have been readied for market, such as Ofwat’s role in the retail water industry.

 

10.     For the regulator to articulate the penalty structure for any non-compliance or for failures identified in spot testing undertaken by the regulator, as in Ofcom’s regulation policy today[19].

 

11.     To establish a dedicated central independent industry body to work with the regulator, bringing together relevant parties to form a digital advisory board to guide the development, implementation, and training of such algorithms. This should follow the same format as the Digital TV Group[20], ensuring that the code of development is transparent and technically, legally, and ethically compliant, is peer-policed and monitored, and is overseen by the regulatory body.
 

12.     To establish a joint government and industry funding pot to support the training, ongoing development, and employment of regulator personnel, ensuring that they have the technical capability to police the framework and the financial incentive to attract the highest level of competence, maintaining parity with industry standards.

 

 

15 January 2021

 


 


[1]              Law Commission. Reform of the Communication Offences. Access: https://www.lawcom.gov.uk/project/reform-of-the-communications-offences/

[2]              International Covenant on Civil and Political Rights (1966). Access: https://www.ohchr.org/en/professionalinterest/pages/ccpr.aspx

[3]              UK GOV. (2019) Online Harms White Paper. Access: https://www.gov.uk/government/consultations/online-harms-white-paper

[4]              UK GOV. (2020). Interim code of practice on terrorist content and activity online. Access: https://www.gov.uk/government/publications/online-harms-interim-codes-of-practice/interim-code-of-practice-on-terrorist-content-and-activity-online-accessible-version

[5]              European Parliament. e-Commerce Directive. Access: https://ec.europa.eu/digital-single-market/en/e-commerce-directive

[6]              UK GOV. (2019) Online Harms White Paper. Access: https://www.gov.uk/government/consultations/online-harms-white-paper

[7]              Freedom House. (2020). Freedom on the Net 2020. Access: https://freedomhouse.org/country/iceland/freedom-net/2020

[8]              Haw Ang, P. (2008). International Regulation of Internet Content: Possibilities and Limits. Access: https://www.dhi.ac.uk/san/waysofbeing/data/governance-crone-ang-2008.pdf

[9]              Brannon, V. (2019). Liability for Content Hosts: An Overview of the Communication Decency Act’s Section 230. Access: https://fas.org/sgp/crs/misc/LSB10306.pdf

[10]              Article 19. (2018). Side-stepping rights: Regulating speech by contract. Access: https://www.article19.org/wp-content/uploads/2018/06/Regulating-speech-by-contract-WEB.pdf

[11]              Article 19. (2013). Freedom of expression and ICTs: Overview of international standards. Access: https://www.article19.org/wp-content/uploads/2018/02/FoE-and-ICTs.pdf

[12]              Article 19. (2018). Side-stepping rights: Regulating speech by contract. Access: https://www.article19.org/wp-content/uploads/2018/06/Regulating-speech-by-contract-WEB.pdf

[13]              Forum on Information & Democracy. Access: https://informationdemocracy.org/

[14]              Freedom Online Coalition. Access: https://freedomonlinecoalition.com/

[15]              Online Harms White Paper: Full government response to the consultation, 15 December 2020 https://www.gov.uk/government/consultations/online-harms-white-paper/outcome/online-harms-white-paper-full-government-response#part-5-what-part-will-technology-education-and-awareness-play-in-the-solution

[16]              Gianfranco Polizzi, 2020. Digital literacy and the national curriculum for England: Learning from how the experts engage with and evaluate online content. Computers & Education, Volume 152, July 2020. https://www.sciencedirect.com/science/article/pii/S0360131520300592

[17]              See for example: https://facingtoday.facinghistory.org/digital-citizenship-and-facing-history

[18]              The Trust Project. Access: https://thetrustproject.org/#indicators

[19]              Ofcom. Penalty Guidelines. Access: https://www.ofcom.org.uk/about-ofcom/policies-and-guidelines/penalty-guidelines

[20]              Digital TV Group. Access: https://dtg.org.uk/