The Human Rights, Big Data and Technology Project - written evidence (DAD0077)

 

 

Introduction

The Human Rights, Big Data and Technology Project (‘HRBDT’),[1] based at the University of Essex’s Human Rights Centre, welcomes the opportunity to participate in the House of Lords Select Committee on Democracy and Digital Technologies’ call for evidence. Established in 2015, HRBDT identifies the opportunities and risks posed to human rights by big data and emerging technologies and proposes policy, practical and regulatory responses at the national and international levels.[2] This includes extensive work on mis- and disinformation.[3] This submission addresses questions 1, 11 and 12.

The Impact of Misinformation on Democracy and Human Rights

  1. The internet and the growth of social media facilitate the spread of misinformation. Echo chambers, filter bubbles, memes, trolls and synthetic media (also known as deep fakes) all contribute to misinformation spreading faster and reaching a larger number of users than before. As many studies have shown,[4] misinformation can adversely affect the effective functioning of the democratic system. It can undermine public debate and influence the thoughts and choices of individuals and groups, including their voting preferences.
  2. Misinformation – shared widely and quickly through social media platforms – also poses significant threats to human rights. In contrast to its impact on democracy, the impact of misinformation on human rights has received relatively little attention. However, it can adversely affect a range of rights including freedom of expression, freedom of association and assembly, the right to participate in public affairs, the right to health, the right to security and, at its most serious, the right to life.
  3. The spread of misinformation and its adverse impact on human rights is further amplified by the techniques social media platforms use to profile behaviours and to target users directly. Targeting and profiling are used to provide users with information, such as news and other types of recommendations, deemed to be the most relevant to them on the basis of their online activity, known or inferred interests and preferences, online social network, geographic location and identity. Algorithmic content filters selectively funnel information, creating echo chambers and ‘filter bubbles’ through which individuals are exposed to a limited range of information. This stands in direct contrast to the healthy development of opinion, thought and debate within society, which requires access to a diverse range of information.
  4. Misinformation can have a specific impact on minorities and vulnerable groups. Profiling and targeting of these groups with selected information and messages can exacerbate societal fractures and incite feelings of hatred.[5] Filter bubbles and echo chambers created through such techniques can amplify the spread of hate speech.[6] This was seen in Myanmar, where the Facebook platform was used to carry out a long campaign against the Rohingya, spreading misinformation and inciting hatred and paving the way for the ethnic cleansing and alleged genocide carried out in summer 2017. The United Nations Fact-Finding Mission on Myanmar severely criticised Facebook for this, saying that ‘by creating an environment where extremists’ discourse can thrive, human rights violations are legitimised and incitement to discrimination and violence facilitated’.[7]
  5. Misinformation can also affect the right of individuals to take part in political activities and can affect the outcome of elections. A study on the 2016 US Presidential Election reported that African-American voters were targeted with disinformation about electoral procedures, causing some votes to be invalidated.[8] Similarly, studies show that misinformation has been used as a tool for both domestic and foreign interference in electoral campaigns, in democratic countries and more authoritarian regimes alike, to influence the outcome of elections.[9]
  6. Twitter and Facebook reported that the Chinese government has used misinformation as part of its responses to street protests that have taken place in Hong Kong since June 2019, attempting to discredit and ‘other’ protest leaders, casting them as cockroaches, and spreading rumours and untruths.[10] These allegations indicate how such tactics can be used elsewhere to discredit and delegitimize opponents to a ruling regime, affecting their freedom of expression, association and assembly.[11]
  7. Targeted misinformation campaigns against journalists, human rights defenders and political opponents can have a chilling effect on human rights. Such campaigns can affect how these groups choose to exercise their freedom of expression, and hate campaigns against them can ultimately threaten their safety and lives.[12] This also undermines opportunities for free and pluralistic debate within societies – a fundamental requirement for a stable democracy.
  8. So-called ‘conspiracy theories’ on the risks of vaccinations or misinformation about certain medical practices can compromise the right to health and, potentially, the right to life. As confirmed by the World Health Organisation, misinformation spread about vaccinations has an impact on public health. It has already led to the re-emergence of infectious diseases that had been eradicated in several parts of the world.[13] The United Kingdom and other European countries have already lost their measles-free status. There are similar risks for the re-emergence of other partially eradicated diseases.[14]
  9. As confirmed by recent data from the Office for National Statistics, although the number of people with access to high-speed internet is increasing, around 20% of UK citizens still have no basic digital skills.[15] For this group, the online space remains complex and difficult to access. Social media platforms are often their first and only point of entry. Consequently, without adequate digital literacy, this group can be more susceptible to misinformation campaigns, having fewer tools with which to verify and corroborate information they receive online. In such a situation, their right to receive information and their right to freedom of thought and opinion can be affected. Equally, however, the increasing sophistication of misinformation can mean that all users may struggle to identify false information, underscoring the need to avoid solutions which place the burden on users.

 

The Need for a Multi-Pronged and Multi-Stakeholder Approach

  1. States, digital platforms, media outlets, and civil society actors have proposed and implemented a range of approaches to tackling misinformation. These include technical approaches to fact-check and verify content and sources; media literacy campaigns; regulatory models; and content moderation. However, in our view, a focus on one approach alone is unlikely to effectively address the harm posed by misinformation. Rather, as recognised by the European Union, a plurality of strategies is needed.[16]
  2. In attempting to deal with misinformation, some states and technology companies have adopted approaches that, in themselves, create new threats to human rights and democracy. It is critical that solutions designed to deal with misinformation do not end up introducing new forms of harm to human rights. Democratic states also need to be aware that ‘whataboutism’[17] is a real risk in the misinformation space – meaning that any regulatory model adopted by a democratic state may be replicated in less democratic states with the justification that a democratic state introduced the same model first. This may ultimately result in adverse consequences for human rights in less democratic states.

States’ Role in Tackling Misinformation

  1. Under international and national human rights law, states have an obligation to respect, protect and fulfil human rights. This applies both in relation to their direct actions and omissions and with regard to third parties, such as business enterprises.
  2. Some states have adopted anti-misinformation laws as a means to address misinformation.[18] These laws often make the dissemination of misinformation a criminal offence. They pose serious threats to human rights, in particular to freedom of expression, assembly and association. The terms used in these laws are often vague, making it unclear what constitutes misinformation and opening the term to a plurality of subjective interpretations. Some commentators have described the effects of these laws by drawing parallels to censorship laws.[19] This is a particular risk in states in which misinformation is often conflated with critiques or questioning of the government and its officials. Such laws not only infringe upon freedom of expression but also the liberty, security and even the lives of journalists, human rights defenders and social media users more broadly.[20] Anti-misinformation laws can therefore introduce significant threats to human rights and do not offer an effective solution to addressing the effects of misinformation.
  3. Where states introduce regulation, it should be aimed at identifying and addressing the effects of misinformation, particularly where it involves targeted action by foreign states, and at examining how social media companies in particular can prevent their platforms from being used as a vehicle for the spread of misinformation without restricting the exercise of rights, as discussed below. This includes an assessment of how advertising models can incentivise the production and spread of misinformation and what the state can do to regulate that space. Furthermore, states should commit to securing an adequate infrastructure for a plurality of news and information. Indeed, ensuring a diversity of information and a free and pluralistic debate is another key step that states can take to counter the traction of misinformation.
  4. States can support education and awareness campaigns about misinformation through digital literacy programmes. Education campaigns can cover the technical aspects of how misinformation is spread online, what online targeting and profiling are, and how artificial intelligence is used in news dissemination, as well as the general knowledge that would increase individuals’ understanding of the world. However, it is critical that education campaigns are not viewed as the sole or main solution, particularly as the increasing sophistication of misinformation may mean that, in the near future, even advanced users find it difficult to detect.

Digital Platforms’ Role in Tackling Misinformation

  1. Companies, and especially digital media platforms, play a key role in tackling misinformation. As their platforms are often used to launch and spread misinformation, they are well placed to quickly identify potential misinformation, fact-check and verify it. However, the way in which a platform acts upon suspected misinformation has significant implications for human rights.
  2. In order to tackle misinformation, digital media companies should address the associated business model. In the digital space, one of the main sources of revenue is digital advertising, and host websites make their pages as captivating as possible to induce users to click on and spend time on them. Misinformation is often characterised by shocking and catchy headlines, thus potentially feeding into the business model of digital advertising.[21] Moreover, some advertisements themselves contain misinformation, and digital media platforms receive payments for these to appear on their websites.[22] In light of this, transparency from companies about the business model and advertising revenues behind published content is crucial for enabling users to make their own assessments of that content.
  3. Digital media companies have taken different approaches to tackling misinformation. These range from flagging content as misinformation, through removing content and suspending or removing accounts, to demoting and de-ranking content.[23] These measures, particularly at their most extreme – account suspension or deletion and content removal – can affect freedom of expression and association. Content moderation policies can be drafted in vague terminology, and it can often be unclear how an assessment of the ‘truthfulness’ of a given piece of content is reached, particularly where the initial assessment is made by an algorithm.
  4. Due to the high volume of content on social media platforms, the platforms have introduced automated fact-checking algorithms to identify suspect content and assess whether it constitutes misinformation. However, automated fact-checkers can bring new forms of harm to individuals when they do not operate under adequate human supervision capable of correcting biases and mistakes. The data on which the algorithms rely may be incomplete and potentially biased,[24] resulting in incorrect assessments that could seriously threaten a user’s freedom of expression. Several digital media platforms employ human supervision of their automated fact-checkers or use a mixed human-automated fact-checking system. Yet staff employed in this capacity need to be adequately trained and equipped to assess the reliability of the content, the possible harm its publication may cause and the suitability of the algorithm to make such an assessment.
  5. Moreover, users’ ability to challenge the removal or de-ranking of their content is limited, not least because of the lack of effective grievance processes outside the company. Companies’ internal processes for reviewing user complaints are generally opaque. In order to provide effective protection of users’ basic rights, it is fundamental that digital media platforms are fully accountable for their actions and provide users with accessible and transparent grievance mechanisms that include effective remedial processes.
  6. The initiative of some platforms to provide users with a comparison of misinformation with an ‘authoritative’ source, rather than removing or demoting the content, could be a viable solution to limit the adverse impact on freedom of expression. This approach could also ensure that a plurality of information and content remains on platforms, while ensuring that debate and discussion are fostered, thus strengthening a democratic society. Ensuring a plurality and diversity of information is fundamental to democracy and can be the strongest response to misinformation. Yet questions remain as to what content qualifies for such a comparison and what the comparison material should be.

 

Annex 1

About the Human Rights, Big Data and Technology Project

 

1.       The Human Rights, Big Data and Technology Project (HRBDT) began in 2015 with £4.7 million funding from the Economic and Social Research Council and further funding from the University of Essex. One of the largest of its kind in the world, the Project is based at the Human Rights Centre at the University of Essex with over 30 academics, and additional researchers based at Cambridge University, the Geneva Academy and Queen Mary University. The team addresses human rights and technology issues across a range of disciplines including computer science, economics, law, philosophy, political science, communication studies and sociology.

 

2.       The core objective of HRBDT is to identify and assess the risks and opportunities for human rights posed by big data, artificial intelligence and other emerging technologies and to propose solutions to ensure that new and emerging technologies are designed, deployed and regulated in a way that is enabling of, rather than threatening to, human rights. HRBDT’s research assesses the adequacy of existing ethical and regulatory approaches to big data and new and emerging technologies from a human rights perspective. HRBDT’s research also demonstrates how human rights standards are capable of adapting, and offering solutions to, rapidly evolving technological landscapes. We engage with responses to the risks and opportunities posed by data and technology at the multilateral and multi-stakeholder level as well as within specific sectors, such as the law enforcement, health, education, social care and humanitarian sectors, and at the national level in States such as Brazil, Germany, India, the UK and the US.

 

3.       Our cutting-edge research focuses on engaging with and informing the practices of transnational governance (particularly at the UN level) and of multiple national-level bodies in States such as Brazil, Germany, India, the UK and the US. Focused on producing evidence-based and innovative research to support decision-making in policy, regulatory and commercial settings, HRBDT has been at the forefront of national and international debates on the human rights impact and governance of big data, artificial intelligence and a range of emergent technologies since its inception. Our research provides greater insight into the range of opportunities and risks relating to the use of AI, and guidance as to how AI can be developed and deployed in a human rights compliant manner.

 

4.       HRBDT is regularly invited to speak on panels at the UN Human Rights Council, at expert meetings organised by the Office of the High Commissioner for Human Rights and the Office of the United Nations High Commissioner for Refugees, and at high-level multi-stakeholder global forums. HRBDT engages with major telecoms and technology businesses on the role of businesses in addressing the human rights impact of data and AI. HRBDT also frequently engages with national bodies, including the Investigatory Powers Commissioner’s Office, the Home Office, a range of law enforcement bodies, and Select Committees. HRBDT works in partnership with civil society organisations in the UK and overseas, including the American Civil Liberties Union, Amnesty International and Liberty.

 

5.       Working with both national and international actors, HRBDT is strategically positioned with insight on how State and business practice at the national level feeds into contemporary debates, as well as how the international human rights legal framework is translated and implemented domestically.

 

 

 

 

 



 


[1] The Human Rights, Big Data and Technology Project, available at <http://www.hrbdt.ac.uk>.

[2] For more information on the Project, please see Annex 1.

[3] In this submission, we refer to mis- and disinformation as ‘misinformation’ as this is the overall label used by the Committee in its call for evidence.

[4] See, for instance, Samuel C. Woolley & Philip N. Howard (eds.), Computational Propaganda (Oxford University Press, 2018); European Commission, A multi-dimensional approach to disinformation: Report of the independent High Level Group on fake news and online disinformation (March 2018); and House of Commons Digital, Culture, Media and Sport Committee, ‘Disinformation and ‘fake news’: Interim Report’, 24 July 2018, available at https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/363/363.pdf.

 

[5] Renee Di Resta et al., ‘The Tactics & Tropes of the Internet Research Agency’ New Knowledge Report, December 2018, available at https://cdn2.hubspot.net/hubfs/4326998/ira-report-rebrand_FinalJ14.pdf

[6] See, Paolo Costa, ‘Microtargeting and fake news: in the bubble of online advertising’ (Spindox, 13 November 2018) and Christina Fink, ‘Dangerous Speech, Anti-Muslim Violence, and Facebook in Myanmar’ (2018) 71 Columbia SIPA Journal of International Affairs 1.5, available at https://jia.sipa.columbia.edu/dangerous-speech-anti-muslim-violence-and-facebook-myanmar

[7] OHCHR, ’Myanmar: UN Fact-Finding Mission Releases Its Full Account of Massive Violations by Military in Rakhine, Kachin and Shan States’, available at https://www.ohchr.org/EN/HRBodies/HRC/Pages/NewsDetail.aspx?NewsID=23575&LangID=E; BSR, ‘Human Rights Impact Assessment: Facebook in Myanmar’, October 2018, available at https://fbnewsroomus.files.wordpress.com/2018/11/bsr-facebook-myanmar-hria_final.pdf.

[8] Philip N. Howard et al., ‘The IRA, Social Media and Political Polarization in the United States, 2012-2018’ (Working Paper 2018.2, Oxford Project on Computational Propaganda).

[9] Samuel C. Woolley & Philip N Howard (eds.), Computational Propaganda (Oxford University Press, 2018).

[10] As reported in ‘Hong Kong protests: YouTube takes down 200 channels spreading disinformation’ (The Guardian, 23 August 2019), available at https://www.theguardian.com/technology/2019/aug/23/hong-kong-protests-youtube-takes-down-200-channels-spreading-disinformation; Twitter Safety, ‘Information operations directed at Hong Kong’ (Twitter Blog, 19 August 2019), available at https://blog.twitter.com/en_us/topics/company/2019/information_operations_directed_at_Hong_Kong.html; and Nathaniel Gleicher, ‘Removing Coordinated Inauthentic Behavior from China’ (Facebook Newsroom, 19 August 2019), available at https://newsroom.fb.com/news/2019/08/removing-cib-china/.

[11] Steven Lee Myers and Paul Mozur, ‘China Is Waging a Disinformation War Against Hong Kong Protesters’ (New York Times, 13 August 2019); Emily Stewart, ‘How China used Facebook, Twitter, and YouTube to spread disinformation about the Hong Kong protests’ (Vox, 23 August 2019).

[12] See, for instance, Carly Nyst and Nick Monaco, ‘State-sponsored trolling: How Governments Are Deploying Disinformation as Part of Broader Digital Harassment Campaigns’ (Institute for the Future & Digital Intelligence-Futures Lab, 2018) and Claire Wardle & Hossein Derakhshan, ‘Information Disorder. Toward an interdisciplinary framework for research and policymaking’ (Council of Europe Report DGI 09-2017, 27 September 2017).

[13] World Health Organisation, ‘Vaccine Safety Communication in the Digital Age’, April 2019, available at https://www.who.int/vaccine_safety/publications/hi-light-vax-safe-digital-age/en/.

[14] Nicola Davis, ‘Lives at risk from surge in measles across Europe, experts warn’ (The Guardian, 29 August 2019).

[15] Office for National Statistics, Exploring the UK’s digital divide, 4 March 2019, available at https://www.ons.gov.uk/peoplepopulationandcommunity/householdcharacteristics/homeinternetandsocialmediausage/articles/exploringtheuksdigitaldivide/2019-03-04#why-does-digital-exclusion-matter.

[16] European Commission, A multi-dimensional approach to disinformation: Report of the independent High Level Group on fake news and online disinformation (March 2018).

[17] Natalie Nougayrede, ‘In this age of propaganda, we must defend ourselves. Here’s how’ (The Guardian, 31 January 2018), available at https://www.theguardian.com/commentisfree/2018/jan/31/propaganda-defend-russia-technology.

[18] Poynter, ‘A guide to anti-misinformation actions around the world’, available at https://www.poynter.org/ifcn/anti-misinformation-actions/.

[19] See also the Joint Declaration on ‘“Fake News,” Disinformation and Propaganda’, The United Nations (UN) Special Rapporteur on Freedom of Opinion and Expression, the Organization for Security and Co-operation in Europe (OSCE) Representative on Freedom of the Media, the Organization of American States (OAS) Special Rapporteur on Freedom of Expression and the African Commission on Human and Peoples’ Rights (ACHPR) Special Rapporteur on Freedom of Expression and Access to Information (3 March 2017), available at https://www.osce.org/fom/302796.

[20] Reporters Without Borders, 2019 World Press Freedom Index, available at https://rsf.org/en/ranking.

[21] ‘Cutting the Funding of Disinformation: The Ad-Tech Solution’ (Global Disinformation Index, May 2019), available at https://disinformationindex.org/wp-content/uploads/2019/05/GDI_Report_Screen_AW2.pdf.

[22] Douglas Guilbeault, ‘Digital Marketing in The Disinformation Age’, 71 Columbia SIPA Journal of International Affairs 1.5, 17 September 2018, available at https://jia.sipa.columbia.edu/digital-marketing-disinformation-age.

[23] See, for example, digital companies’ reports to the European Commission on the implementation of the Code of Practice against Disinformation (January 2019), available at https://ec.europa.eu/digital-single-market/en/news/first-results-eu-code-practice-against-disinformation.

[24] See Lorna McGregor, Daragh Murray, and Vivian Ng, “International Human Rights Law as a Framework for Algorithmic Accountability” (2019) 68(2) International & Comparative Law Quarterly, 309-343.