Index on Censorship – written evidence (DAD0032)

 

1. Index on Censorship, a registered UK charity and company, campaigns for freedom of expression worldwide. We publish work by censored writers and artists, promote debate and monitor threats to free speech, and have more than 45 years’ experience in this field. Defending freedom of expression online and combating online censorship are priority areas for Index’s work. Index works by informing, influencing, debating and supporting.

 

2. (Question 1) How has digital technology changed the way that democracy works in the UK and has this been a net positive or negative effect?

Digital technology has had a profound impact, both negative – as demonstrated by the Cambridge Analytica case – and positive, such as allowing politicians to communicate directly with the public and the public to communicate directly with politicians (please also see our response to question 8 below). In its submission to the Online Harms White Paper consultation, Index emphasised the importance of basing any future regulatory action on clear evidence, including evidence of harmful impacts.

 

3. (Question 2) How have the design of algorithms used by social media platforms shaped democratic debate? To what extent should there be greater accountability for the design of these algorithms?

Governments should have no role in the design of algorithms. However, increased transparency about the role that algorithms play in decision-making and in driving content to internet users is extremely important. This includes considering the impacts on human rights, such as freedom of expression, privacy and the right to access information. This has been recognised by, for example, the Council of Europe, which has initiated a process to develop recommendations on the human rights impacts of algorithmic systems.[1] The Electronic Frontier Foundation has highlighted the importance of empowering users as part of the solution.[2]

 

4. (Question 3) What role should every stage of education play in helping to create a healthy, active, digitally literate democracy?

Digital media literacy should include a component that helps individuals to understand and to exercise their civil and human rights in full. Often the focus is on technical knowledge rather than on rights-based education. It is also important to recognise that efforts need to go beyond school-age children and young people. For example, research on Facebook and fake news indicates that people aged over 65 are more likely to share fake news than younger age groups.[3]

 

5. (Question 6) To what extent does increasing use of encrypted messaging and private groups present a challenge to the democratic process?

Disinformation can be spread through private groups, including those using encrypted communications. However, rather than posing a challenge to the democratic process, encryption and private groups can play an important role in strengthening democracy. In countries with repressive regimes they can be essential tools for mobilising democratic opposition. A free media is an essential part of a functioning democracy, and journalists who must protect their sources often rely on encrypted communications.

 

6. (Question 7) What are the positive or negative effects of anonymity on online democratic discourse?

Index on Censorship strongly defends the right to anonymity online. Anonymity can be extremely important for human rights defenders working under repressive regimes, for members of minority groups and for journalists. Contributors to Index on Censorship’s magazine are often individuals for whom personal safety is a major concern, and who therefore do not wish their names to be revealed. Some might choose to use a pseudonym.[4]

 

7. The United Nations Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, David Kaye, has questioned the value of real-name policies, highlighting the importance of anonymity for vulnerable users. He has suggested that narrowly crafted rules on impersonation – limiting the ability of users to portray another person in a confusing or deceptive manner – might be a better approach.[5] Index believes this should be explored further as an alternative to blanket real-name policies.

 

8. (Question 8) To what extent does social media negatively shape public debate, either through encouraging polarisation or through abuse deterring individuals from engaging in public life?

As Index on Censorship has pointed out in a submission to the Joint Committee on Human Rights, social media has increased the ease with which the general public can communicate with members of parliament. This undoubtedly has positive outcomes: it brings parliamentary accountability closer to individual constituents and allows MPs to communicate directly with the public, without dilution by the mainstream media, and vice versa. Lively debates occur on social media with and between MPs and, overall, this has increased the volume of democratic participation. The flip side of this ease of direct communication, however, is that it is now more common for MPs to receive abuse. In some cases this has meant a litany of racist or misogynistic abuse, and direct threats of violence.

 

9. The precise scale of the problem is difficult to quantify because social media companies do not release sufficient data to allow for comprehensive, evidence-based research in this area. Quantification is further complicated by the fact that studies often group a wide range of behaviours and speech under the term “abuse”, failing to separate, for example, threats of violence from other forms of discourse that might “offend, shock or disturb”, or indeed from what might be considered strong criticism.

 

10. Social media can also shape public debate in a positive way (for example, the #MeToo campaign). While a negative tone of debate and personal online attacks can deter individuals from engaging in the process, so can the confusing and inconsistently applied community standards and terms and conditions of online platforms.

 

11. As Index on Censorship also pointed out in its submission to the Joint Committee on Human Rights, extensive powers are already available to protect MPs from harassment, both offline and online. The rise of social media has created a significant new problem of online abuse, which may sometimes spill over into offline abuse or even violence. However, Index on Censorship believes that the root of the problem lies in a dramatic social change that has occurred over a relatively short and politically tumultuous period, rather than in a deficiency in the criminal law. Ultimately, the focus should be on ensuring that the police and the Crown Prosecution Service are sufficiently resourced to use the powers they already have, rather than on giving them new powers they do not need. Free speech is the lifeblood of democracy and should therefore be jealously guarded.

 

12. (Question 9) To what extent do you think that there are those who are using social media to attempt to undermine trust in the democratic process and in democratic institutions; and what might be the best ways to combat this and strengthen faith in democracy?

Bad actors who attempt to undermine the democratic process and democratic institutions do so in various ways, including, but most likely not exclusively, online. Censorship and the closure of user accounts are not the right way to approach this.

 

 

13. As Index has stated: “Index believes that all speech – eccentric, contentious, heretical, unwelcome, provocative and even bigoted – should be protected unless it directly incites violence. Social media and tech companies – as private entities – have the right to set whatever terms they choose, but the patchwork, inconsistent and opaque terms of service approach to policing speech online leaves them open to political and societal pressures. We strongly encourage the adoption of terms of service policies that maintain the widest possible scope for free speech online. This means we – as users – will have to tolerate the fraudulent, the offensive and the idiotic. The ability to express contrary points of view, to call out racism, to demand retraction and to highlight obvious hypocrisy depend on the ability to freely share information across the evenest possible playing field. Any other course of action will – in the end – diminish everyone’s right to free expression.”[6]

 

14. (Question 10) What might be the best ways of reducing the effects of misinformation on social media platforms?

Increasing digital media literacy is a critical part of the solution. It is also important to distinguish between disinformation (which is deliberately manipulative) and misinformation (which does not aim to cause deliberate harm). The root causes of misinformation can be complex and very difficult to address, but tackling the issue through online censorship is not the answer. Index on Censorship also has significant concerns about proposals that would result in the “kitemarking” of news sources, because of the questions such schemes raise about who would decide what constitutes “approved news”. The dividing line between opinion and misinformation can be very difficult to draw: one person’s point of view can be another person’s misinformation.

 

15. (Question 11) How could the moderation processes of large technology companies be improved to better tackle abuse and misinformation, as well as helping public debate flourish?

As highlighted in Content Moderation is Broken. Let Us Count the Ways,[7] social media platforms engage in content moderation that involves depublication, downranking and sometimes outright censorship. This is usually based on the platforms’ community standards, which are often not clearly formulated and whose application is not transparent. Appeals processes may be non-existent or difficult to understand.

 

16. The Manila Principles on Intermediary Liability,[8] which aim to guide government, industry and civil society in the development of best practices, should be a starting point. Users should also have easily accessible information and tools to help them respond to unwanted communications, for example by muting other users.

 

17. (Question 13) How can elected representatives use technology to engage with the public in local and national decision making? What can Parliament and Government do to better use technology to support democratic engagement and ensure the efficacy of the democratic process?

Elected representatives and government officials should – as a matter of priority – undertake to increase their own digital media literacy. Successive consultations from various arms of parliament and government have highlighted that law and policy makers’ understanding of how social media platforms operate is often patchy or limited, and that data on their impact, for good or ill, is partial.

 


 


[1] See, for example, Committee of experts on human rights dimensions of automated data processing and different forms of artificial intelligence (MSI-AUT), Draft Recommendation of the Committee of Ministers to member States on the human rights impacts of algorithmic systems, MSI-AUT(2018)06rev1, Council of Europe, 26 June 2019.

[2] See, for example, Jillian C. York, David Greene and Gennie Gebhart, Censorship Can't Be The Only Answer to Disinformation Online, 1 May 2019, at https://www.eff.org/deeplinks/2019/05/censorship-cant-be-only-answer-disinformation-online

[3] Andrew Guess, Jonathan Nagler and Joshua Tucker, Less than you think: Prevalence and predictors of fake news dissemination on Facebook, Science Advances, 9 January 2019, Vol. 5, No. 1, eaau4586.

[4] Rachael Jolley, Anonymity: worth defending, 19 September 2016, at https://www.indexoncensorship.org/2016/09/anonymity-worth-defending/

[5] Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, UN Doc. A/HRC/38/35, 6 April 2018, para. 30.

[6] Index on Censorship statement, Alex Jones, Infowars and the internet, 4 August 2018, at https://www.indexoncensorship.org/2018/08/alex-jones-infowars-and-the-internet/

[7] Jillian C. York and Corynne McSherry, Content Moderation is Broken. Let Us Count the Ways, 29 April 2019, at https://www.eff.org/deeplinks/2019/04/content-moderation-broken-let-us-count-ways

[8] Available at https://www.manilaprinciples.org/