Digital technology has had a profound impact, both negative – as demonstrated by the Cambridge Analytica case – and positive, such as allowing politicians to communicate directly with the public and for the public to communicate directly with politicians (please also see our response to question 8 below). In its submission to the Online Harms White Paper consultation Index emphasised the importance of basing any future regulatory action on clear evidence, including evidence of harmful impacts.
Governments should have no role in the design of algorithms. However, increased transparency about the role that algorithms play in decision-making and in driving content to internet users is extremely important. This includes considering the impacts on human rights, including freedom of expression, privacy and the right to access information. This has been recognised, for example, by the Council of Europe, which has initiated a process to develop recommendations on the human rights impacts of algorithmic systems. The Electronic Frontier Foundation has highlighted the importance of empowering users as part of the solution.
Digital media literacy should include a component that helps individuals to understand and to exercise their civil and human rights in full. Often the focus is on technical knowledge rather than rights-based education. It is also important to recognise that efforts need to go beyond school-age children and young people. For example, research focused on Facebook and fake news indicated that over-65s are more likely to share fake news than younger age groups.
Disinformation can be spread through private groups, including ones using encrypted communications. However, rather than posing a challenge to the democratic process, encryption and private groups can play an important role in strengthening democracy. In countries with repressive regimes they can be essential tools for mobilising democratic opposition. Free media is an essential part of a functioning democracy: journalists who need to protect sources often need to use encrypted communications.
Index on Censorship strongly defends the right to anonymity online. Anonymity can be extremely important for human rights defenders working under repressive regimes, for members of minority groups and for journalists. Contributors to Index on Censorship’s magazine are often individuals for whom personal safety is a major concern, so they do not wish their names to be revealed. Some might choose to use a pseudonym.
As Index on Censorship has pointed out in a submission to the Joint Committee on Human Rights, social media has increased the ease with which the general public can communicate with members of parliament. This undoubtedly has positive outcomes – bringing parliamentary accountability closer to individual constituents, and allowing MPs to communicate directly with the public, without dilution by the mainstream media, and vice versa. Lively debates occur on social media with and between MPs and, overall, this has increased the volume of democratic participation. However, the flip side of the ease with which individuals can directly contact MPs is that it is more common for MPs to receive abuse. In some cases, this has resulted in MPs receiving a litany of racist or misogynistic abuse, and direct threats of violence.
Bad actors who attempt to undermine the democratic process and democratic institutions do so in various ways, including but most likely not exclusively online. Censorship and closing of user accounts are not the right way to approach this.
Increasing digital media literacy is a critical part of the solution. It is also important to distinguish between disinformation (deliberately manipulative) and misinformation (not aiming to cause deliberate harm). The root causes of misinformation can be complex and very difficult to address. However, tackling the issue through online censorship is not the answer. Index on Censorship also has significant concerns about proposals that would result in “kitemarking” news sources, because of the questions such proposals raise about who would decide what constitutes “approved news”. The dividing line between opinion and misinformation may be very difficult to define: one person’s point of view can be another person’s misinformation.
As highlighted in Content Moderation is Broken. Let Us Count the Ways, social media platforms engage in content moderation that involves depublication, downranking and, sometimes, outright censorship. This is usually based on the platforms’ community standards. The community standards are often not clearly formulated and their application is not transparent. Appeals processes may be non-existent or difficult to understand.
Elected representatives and government officials should – as a matter of priority – undertake to increase their own digital media literacy. Successive consultations from various arms of parliament and government have highlighted that the understanding among law and policy makers of how social media platforms operate is often patchy or limited, and that data on their impact, for good or ill, is partial.
 See for example Committee of experts on human rights dimensions of automated data processing and different forms of artificial intelligence (MSI-AUT), Draft Recommendation of the Committee of Ministers to member States on the human rights impacts of algorithmic systems, MSI-AUT(2018)06rev1, Council of Europe, 26 June 2019.
 See for example Jillian C. York, David Greene and Gennie Gebhart, Censorship Can't Be The Only Answer to Disinformation Online, 1 May 2019, at https://www.eff.org/deeplinks/2019/05/censorship-cant-be-only-answer-disinformation-online
 Andrew Guess, Jonathan Nagler and Joshua Tucker, Less than you think: Prevalence and predictors of fake news dissemination on Facebook, Science Advances, 9 January 2019, Vol. 5, no. 1, eaau4586.
 Rachael Jolley, Anonymity: worth defending, 19 September 2016, at https://www.indexoncensorship.org/2016/09/anonymity-worth-defending/
 Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, UN Doc. A/HRC/38/35, 6 April 2018, para. 30.
 Index on Censorship statement, Alex Jones, Infowars and the internet, 4 August 2018, at https://www.indexoncensorship.org/2018/08/alex-jones-infowars-and-the-internet/
 Jillian C. York and Corynne McSherry, Content Moderation is Broken. Let Us Count the Ways, 29 April 2019, at https://www.eff.org/deeplinks/2019/04/content-moderation-broken-let-us-count-ways
 Available at https://www.manilaprinciples.org/