Written evidence submitted by SumOfUs (OSB0068)

 

 

  1.                Background

 

The Joint Committee on the Draft Online Safety Bill invites submissions on the Draft Bill and scrutinises it to identify the gaps and make recommendations to improve the drafting of the Bill. In this context, the Joint Committee will look at the scope of content that falls within the sphere of the Draft Bill and the nature of potential harms relating to misinformation or disinformation.

 

SumOfUs is a non-profit campaigning organisation that works on issues relating to the threats of online disinformation on global tech platforms. While we welcome the fact that the Draft Bill recognises the physical or psychological harm that misinformation or disinformation may cause to individuals, we believe further amendments and/or clarifications are necessary for the Draft Bill to achieve its objective of addressing the impact of misinformation or disinformation on society at large. It is imperative to recognise the wider societal harm of misinformation or disinformation, including to democratic processes, the protection of human rights, and public trust in institutions. Below we offer some specific points which we hope will help to focus the Committee’s discussions.

 

This submission has been prepared for SumOfUs by legal experts: Elif Mendos Kuşkonmaz (University of Portsmouth) and Susie Alegre (Research Fellow, University of Roehampton and Associate, Doughty Street Chambers).

 

 

  2.                Summary

 

 

 

 

 

 

 

  3.                The Problem

 

3.1.            The Draft Bill seeks to address many issues relating to online safety. This submission focuses on one area of concern: the negative cumulative and societal impacts of online disinformation or misinformation, and the Draft Bill’s adequacy in tackling that problem.

 

3.2.            The business model of social media platforms is based on capturing users’ attention and selling that attention to advertisers. Because of this attention-based model, the platforms favour content that goes viral and/or evokes emotional responses, irrespective of its veracity. This in turn means that the users of these platforms are constant recipients of content that influences their thoughts and opinions (subconsciously or consciously), because the platforms use digital techniques to maximise their users’ interaction on the platforms and to micro-target them. The main driving force behind these techniques is the profit that the platforms make from selling detailed profiles of their users. A 2021 report by the New Economics Foundation on online targeting revealed that the largest online platforms generated billion-dollar revenues from online advertising.[1] The report noted that $76 million in ad revenues flows each year to disinformation sites in Europe.[2] The platforms thus thrive on online misinformation and disinformation campaigns, which, as mentioned above, allow them to maximise their impact on their audience. The challenge that the platforms’ profit-driven goals pose for preventing the spread of online misinformation and disinformation campaigns has been observed by the House of Commons Digital, Culture, Media and Sport Committee, which has repeatedly highlighted that the need to tackle online harms, including misinformation or disinformation, runs at odds with the financial incentives of the platforms.[3]

 

3.3.            The issue becomes even more troubling when misinformation or disinformation is spread by foreign actors, including those that are state-sponsored, to deliberately manipulate the platforms’ audiences by exploiting the platforms’ overarching influence and reach. This is particularly concerning when these techniques are used in the context of elections. The House of Commons Digital, Culture, Media and Sport Committee’s 2019 report on disinformation and fake news referred to studies analysing foreign actors’ influence in the UK’s electoral process, such as the research by Cardiff University and the Digital Forensics Lab of the Atlantic Council, and considered that there is ‘strong evidence that points to hostile state actors influencing democratic processes.’[4] In a subsequent report on Covid-19 and misinformation, the Committee noted the repeated evidence put before it on the influence of foreign actors in spreading misinformation or disinformation about Covid-19.[5]

 

3.4.            It is imperative to realise that some online harms are difficult to identify based solely on individual harm, because they can be used over time to influence and manipulate collective opinions and emotions. The storming of the US Capitol following the 2020 US Presidential election showed that misinformation or disinformation can undermine confidence in the electoral process, with real-world consequences including violence.[6] The impacts of misinformation or disinformation campaigns, however, are not limited to electoral processes. A growing body of research shows that women parliamentarians are disproportionately targeted by gendered disinformation campaigns, which involve the spread of deceptive or inaccurate information and images against women political leaders, journalists, and other female public figures.[7] These campaigns draw on and compound misogynistic narratives and gender stereotypes. They play a significant role in driving women to withdraw from online and offline public life and from democratic processes. In the 2019 UK General Election, 18 women MPs stood down, which flagged issues around online abuse and hostility towards women parliamentarians.[8] There is also growing concern over disinformation campaigns against members of minority groups and the role of these campaigns in fuelling incitement to racial hatred and racial profiling.[9] For example, it has been reported that Facebook has become a platform for spreading disinformation or misinformation that damages Muslim communities and contributes to potential violence against members of these communities.[10] Disinformation campaigns on Facebook and other social media platforms have targeted the Black Lives Matter movement, seeking to suppress its effect by discouraging people from supporting it.[11] In most cases, specific content may not amount to either illegal content or content harmful to adults under the current definitions. But the harms resulting from that content are based on its cumulative impact on a large number of people. These examples demonstrate that misinformation or disinformation campaigns may compound problems of social cohesion, including gender inequality and racial hatred.

 

3.5.            In this context, we believe it is important that the risks posed by online misinformation or disinformation to collective interests, such as the wider protection of human rights, democracy, and the rule of law, are considered in the development of the Bill.

 

 

  4.                International Law Background

 

4.1.            According to section 3 of the Human Rights Act 1998 (HRA 1998), all UK laws must be read and given effect, so far as it is possible to do so, in a way that is compatible with the rights set out in the Act. This includes the absolute rights of individuals not to have their thoughts and opinions manipulated, as affirmed in Articles 9 and 10 set out in Schedule 1 to the HRA 1998. These Articles should be interpreted in light of the wider international human rights framework, including the European Convention on Human Rights and the International Covenant on Civil and Political Rights 1966, both of which affirm the rights to freedom of thought and freedom of opinion.

 

4.2.            The Draft Bill refers to privacy and freedom of expression, but while the rights to private life and freedom of expression may be limited in certain circumstances, the rights to freedom of thought and freedom of opinion inside one’s own head are protected absolutely in international human rights law.[12] These include the right to form opinions free from manipulation. One of the reasons why these rights are so important is that the manipulation of individual opinions can have wide-ranging consequences for society at large, as we have seen with the impact of anti-vaccination disinformation on public health in the context of the pandemic, and the impact of gendered disinformation on the perpetuation of misogynistic narratives. Following the campaign by the Everyone’s Invited movement, investigations into peer-on-peer sexual abuse and sexual violence at schools and colleges revealed the grim role of the online world in normalising sexual harassment and violence.[13] Because misinformation or disinformation reaches huge audiences and can be fed into the online information system through social media platforms over time, the problem does not stop at the manipulation of individual opinions through particular pieces of content. Rather, the main challenge posed by misinformation or disinformation campaigns is the swaying of public opinion, or the opinions of large groups, in one direction through manipulation and undue influence over information flows.

 

4.3.            The wide-ranging potential consequences of misinformation or disinformation campaigns for society can be compared to the risks of climate change in terms of their broad impact on our future societies. Recent climate change litigation in Europe based on human rights grounds (e.g., Urgenda Foundation v State of the Netherlands)[14] shows how human rights law requires action to be taken to prevent harms against large groups or the population as a whole, even if the harms might only materialise over the long term. This helps to show that cumulative and societal harms are just as important as individual harms from a human rights perspective.

 

4.4.            There are examples in other jurisdictions of recognition of some of the wider consequences of online content for collective interests such as democratic processes and values, including the protection of human rights. For example, Article 26 of the proposed EU Digital Services Act prescribes due diligence obligations for online platforms, according to which they ‘shall identify, analyse and assess ... any significant systemic risks stemming from the functioning and use made of their services in the Union’, including ‘intentional manipulation of their service, including by means of inauthentic use or automated exploitation of the service, with an actual or foreseeable negative effect on the protection of public health, minors, civic discourse, or actual or foreseeable effects related to electoral processes and public security.’

 

4.5.            The rights to freedom of thought and freedom of opinion provide protection against the wide-ranging consequences of misinformation and disinformation on society because they are rights that may be affected not only by a single piece of content but also, and perhaps more importantly, by the way that information is curated and delivered online over time to influence and manipulate individual and collective opinions and emotions. For this reason, we believe that the rights to freedom of thought and freedom of opinion are key legal tools in addressing the wide-ranging consequences of misinformation and disinformation, and that they should be considered in the development of the Bill.

 

4.6.            The Draft Bill and the harms it is designed to address will have significant consequences for human rights and democracy, both in the risk of interference with rights and in the potential to protect rights and support democratic processes. Given the extremely sensitive nature of the issues at stake and the potential impact of legislation in this area, it is particularly important that the legislation is clear and meets the requirements of legal certainty. The current draft does not, in our view, meet that requirement insofar as it describes duties to respond to risks and harms. The comments below flag some of the areas that we believe need to be tightened in the next stage of drafting.

 

 

  5.                Comments

 

5.1.            Addressing Mis/Disinformation based on ‘Individual’ Harm

 

5.1.1.      The draft Online Safety Bill imposes duties of care and other obligations on providers of ‘user-to-user services’ that enable user-generated content and on providers of ‘search services’. It envisages common duties as well as distinct duties for different categories of services. The criteria for determining those categories will be set out in regulations.

 

5.1.2.      In general, the duty-of-care obligations of different categories of services will arise if the content meets a certain threshold. The Draft Bill thus lays out three main categories of content and a threshold for each type of relevant content:

 

(i)                  illegal content, that is, content which amounts to a relevant offence specified in the Draft Bill, such as terrorist offences and child sexual exploitation and abuse offences;

(ii)                content that is harmful to children; and

(iii)              content that is harmful to adults.

Mis/disinformation falls within ‘content harmful to adults’, but duties relating to this category of content are only imposed on providers designated as Category 1, a designation which essentially targets the largest providers of user-to-user services, including social media platforms.

 

5.1.3.      Clause 46 defines ‘content that is harmful to adults’ as content for which ‘the provider of the service has reasonable grounds to believe that the nature of the content is such that there is a material risk of the content having, or indirectly having, a significant adverse physical or psychological impact on an adult of ordinary sensibilities’. The Secretary of State may designate categories of ‘priority content’, presumably to expedite the regulatory response. Under this definition, the harm must be either a direct or indirect consequence of the content. On this basis, the Draft Bill is limited in addressing the wider consequences of misinformation or disinformation on society that we have highlighted above, because in its current form the definition of harmful content in clause 46 only covers misinformation or disinformation leading to individual physical or psychological harm.

 

5.1.4.      The bigger problem with misinformation or disinformation is the societal harm it can cause. For example, misinformation or disinformation about Covid-19 and anti-vaccination campaigns might result in immediate physical and mental harm to people, mainly due to a poor understanding of the risks related to Covid-19 and immunisation campaigns. But it has also bred distrust in public institutions and undermined public health efforts in some places. In a joint statement, UN bodies including the WHO warned about the polarising effect of misinformation or disinformation on topics related to Covid-19 on democracy, human rights and social cohesion.[15] Following this warning, the DCMS Committee conducted an inquiry into the impact of misinformation or disinformation about Covid-19 and reported on the harms caused to public health and critical national infrastructure.[16] In its current form, the Draft Bill only partially addresses online harms arising from misinformation or disinformation about Covid-19 and anti-vaccination campaigns, because those harms would have to fit into the category of ‘physical and psychological harm’ to an individual to trigger the duty-of-care obligations of Category 1 providers.

 

5.1.5.      Clause 46 further provides that, in determining content that is harmful to adults, ‘in the case of content which may reasonably be assumed to particularly affect people with a certain characteristic (or combination of characteristics), or to particularly affect a certain group of people, the provider is to assume that A possesses that characteristic (or combination of characteristics), or is a member of that group (as the case may be).’ In this way, the Draft Bill prescribes the possibility that the adult believed to be at risk of harm may be assumed to have any characteristic, or combination of characteristics, or to be a member of a group, which would make them particularly susceptible to harm. While we welcome this attempt to address group harm, we note that harm is still identified at an individual level, because the provider would be expected to judge the likely physical or psychological harm of the content to a person of ordinary sensibilities who possesses certain characteristics.

 

5.1.6.      As we suggested above, the rights to freedom of thought and freedom of opinion are key to addressing the wider consequences of misinformation and disinformation on society, because they require action to be taken to prevent the manipulation of individual and collective opinions. For this reason, substantial impairment of human rights should be included in the definition of harm in clause 46. To achieve that, we suggest the introduction of a sub-category of harmful content in clause 46 with the following definition: ‘Content harmful to adults which is reasonably assumed to impair or indirectly impair the human rights of people, individually or collectively.’

 

5.1.7.      We recognise that some of the consequences of misinformation and disinformation may be captured by clause 112, which sets out the powers of the Secretary of State and OFCOM to issue a public statement notice to address threats to public health and national security. These powers may be relevant where misinformation or disinformation campaigns become a national security or public health issue, for example where they involve illegal influence over the electoral process or over anti-vaccination programmes. However, we note that in its current form the power to issue a public statement notice is weak. According to clause 112(1), the Secretary of State may require OFCOM to exercise its media literacy functions if there are reasonable grounds for believing that there is a threat to the health and safety of the public, or to national security. Following the Secretary of State’s directions, OFCOM may require a particular service provider, or all service providers, to make a statement on how they are responding to threats to national security, public health, or public safety. While we welcome the efforts to promote media literacy, we note the potentially limited impact of this mechanism in addressing collective harms caused by misinformation or disinformation. This is because there is no information about the minimum standards required for the public statement, or about the process or consequences should the provider’s response be deemed inadequate.

 

5.1.8.      Clause 112 shows recognition that misinformation or disinformation campaigns could indeed result in collective harms, including threats to public health and national security. However, given our observations on the inadequacy of addressing these harms by way of a public statement, we emphasise that harms to collective interests should be included in the definition of harm for the purpose of determining harmful online content, so that such harms trigger the duty-of-care obligations of providers.

 

 

5.2.            Cumulative Effects of Mis/Disinformation

 

5.2.1.      The challenge with misinformation or disinformation is not solely about the veracity of the information on online communication services. The potential for manipulation of, or undue influence on, users is related to the method of delivery of content, not just individual pieces of content. This potential is far greater in the way online information is delivered compared to traditional media outlets, as the targeted curation of content limits a user’s access to alternative sources of information. The online influence industry rests on a business model that uses dark patterns to encourage users to spend the maximum time on the platforms. The surveillance advertising model also allows for microtargeting, which can support the abuse of services by other actors to manipulate public opinion and emotions for their own interests, as revealed by the Cambridge Analytica scandal. National courts have already recognised the dangers that microtargeting, which involves the repeated and bulk delivery of content to a platform user based on the user’s browsing history, poses to a person’s right to respect for their autonomy and freedom of choice.[17] A report by researchers at the Computational Propaganda Research Project (COMPROP), based at the Oxford Internet Institute, University of Oxford, investigated the interference of Russia’s Internet Research Agency (IRA) in the US elections and demonstrated that the challenge is as much about the techniques and strategies used to spread misinformation or disinformation as it is about the content.[18]

 

5.2.2.      We observe that the Draft Bill partly addresses this aspect: according to clause 46(5), in determining whether content is harmful, service providers will consider how many users may be assumed to encounter the content by means of the service and how easily and widely content may be disseminated by means of the service. In this way, it covers situations where the content itself may not be harmful as per clause 46, but may be deemed harmful when it is repeatedly sent to a user. However, it is less clear whether an assessment of the harm would also consider the cumulative effect of so-called unharmful content because, as we mentioned above, the challenge with misinformation and disinformation rests not only on the content itself but also on the techniques used over time to sway public opinion in a particular direction. It is thus imperative to clarify how OFCOM will consider the operation of providers in determining the harm that content may cause.

 

 

5.3.            Duties of Care in relation to Mis/Disinformation

 

5.3.1.      In relation to content harmful to adults, the Draft Bill imposes several duty-of-care obligations on Category 1 providers. We would like to focus here on the duty of providers in clause 11 to specify in their terms of service how harmful content ‘is to be dealt with by the service’.

 

5.3.2.      With regard to this duty, we observe that it is left to the providers to draw their own lines in terms of action. The phrase ‘dealt with’ is particularly unclear in relation to the steps the providers should take to address the risks that misinformation or disinformation poses. This phrase differs from the phrase ‘mitigate and effectively manage risks’ used in relation to safety duties for illegal content and content likely to be accessed by children in clauses 9 and 10. Based on this comparison, we infer that the purpose is to impose different types of responses on the providers in proportion to the harm that the content may cause (e.g., reflecting the different harm thresholds for illegal content and content harmful to adults). While it may be appropriate to have differing responses according to the type of content, it is equally important to clarify what the providers’ duty to ‘deal with’ content that is harmful to adults, including mis- or disinformation, means. The Bill must have sufficient legal certainty to allow providers and users to understand how it will be applied and the duties, obligations, and remedies that it provides for them.

 

 

 

Contact: olivia@sumofus.org

 

17 September 2021

 



 


[1] New Economics Foundation, ‘I-Spy: The Billion Dollar Business of Surveillance Advertising to Kids’, available at: https://neweconomics.org/2021/05/i-spy.

[2] Ibid, p. 12.

[3] Digital, Culture, Media and Sport Committee, 2020, Misinformation in the Covid-19 Infodemic, (HC 234, 2019-21) available at: https://committees.parliament.uk/publications/1954/documents/19089/default/; Digital, Culture, Media and Sport Committee, 2019, Disinformation and ‘fake news’: Final Report, (HC 1791, 2017-2019), available at: https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/1791/1791.pdf.

[4] Digital, Culture, Media and Sport Committee, 2019, Disinformation and ‘fake news’: Final Report, (HC 1791, 2017-2019), available at: https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/1791/1791.pdf, p. 70.

[5] Digital, Culture, Media and Sport Committee, 2020, Misinformation in the Covid-19 Infodemic, (HC 234, 2019-21) available at: https://committees.parliament.uk/publications/1954/documents/19089/default/, p. 6.

[6] ‘Foreign Threats to the 2020 US Federal Elections’ (16 March 2021), available at: https://www.dni.gov/index.php/newsroom/reports-publications/reports-publications-2021/item/2192-intelligence-community-assessment-on-foreign-threats-to-the-2020-u-s-federal-elections.

[7] Inter-Parliamentary Union, ‘Sexism, Harassment and Violence Against Women Parliamentarians’ (ipu.org, 2018), available at: https://www.ipu.org/resources/publications/issue-briefs/2018-10/sexism-harassment-and-violence-against-women-in-parliaments-in-europe; Lucina Di Meco, ‘Online Threats to Women’s Political Participation and the Need for a Multi-Stakeholder, Cohesive Approach to Address Them’, available at: https://www.unwomen.org/-/media/headquarters/attachments/sections/csw/65/egm/di%20meco_online%20threats_ep8_egmcsw65.pdf?la=en&vs=1511.

[8] Genevieve Gorrell et al., ‘Which politicians receive abuse? Four factors illuminated in the UK General Election 2019’ (2020) 9(18) EPJ Data Science, available at: https://doi.org/10.1140/epjds/s13688-020-00236-9; Kim Barker and Olga Jurasz, ‘Gendered Misinformation & Online Violence Against Women in Politics: Capturing Legal Responsibility?’ (policyblog.stir.ac.uk, 23 March 2020), available at: https://policyblog.stir.ac.uk/2020/03/23/gendered-misinformation-online-violence-against-women-in-politics-capturing-legal-responsibility/.

[9] CoE Steering Committee on Anti-Discrimination, Diversity, and Inclusion, ‘Covid-19: An analysis of the anti-discrimination, diversity and inclusion dimensions in Council of Europe member States’ (17 July 2020), available at: https://rm.coe.int/item-7-cdadi-2020-9-covid-19-an-analysis-of-the-anti-discrimination-di/1680a040a4.

[10] Muslim Advocates and Global Project Against Hate and Extremism, ‘Complicit: The Human Cost of Facebook’s Disregard for Muslim Life’, available at: https://muslimadvocates.org/wp-content/uploads/2020/10/Complicit-Report.pdf.

[11] ‘Information is Power. Disinformation is Dangerous’ (medium.com, 8 July 2020), available at: https://medium.com/@BlackLivesMatterNetwork/information-is-power-disinformation-is-dangerous-ae808b7a0e69.

[12] Evelyn Aswad, ‘Losing the Freedom to be Human’ (2020) 52 Columbia Human Rights Law Review 206; Susie Alegre, ‘Rethinking Freedom of Thought for the 21st Century’ (2017) 3 EHRLR 221.

[13] Ofsted, ‘Review of Sexual abuse in schools and colleges’ (10 June 2021), available at: https://www.gov.uk/government/publications/review-of-sexual-abuse-in-schools-and-colleges/review-of-sexual-abuse-in-schools-and-colleges#print-or-save-to-pdf.

[14] Urgenda Foundation v State of the Netherlands (13 January 2020) available at: https://www.urgenda.nl/wp-content/uploads/ENG-Dutch-Supreme-Court-Urgenda-v-Netherlands-20-12-2019.pdf; Climate Change Litigation Database, available at: http://climatecasechart.com/climate-change-litigation/non-us-jurisdiction/european-court-of-human-rights/.

[15] ‘Managing the Covid-19 Infodemic: Promoting healthy behaviours and mitigating the harm from misinformation and disinformation’ (who.int, 23 September 2020), available at: https://www.who.int/news/item/23-09-2020-managing-the-covid-19-infodemic-promoting-healthy-behaviours-and-mitigating-the-harm-from-misinformation-and-disinformation.

[16] Digital, Culture, Media and Sport Committee, 2020, Misinformation in the Covid-19 Infodemic, (HC 234, 2019-21) available at: https://committees.parliament.uk/publications/1954/documents/19089/default/.

[17] Lloyd v Google LLC [2018] EWHC 2599 (QB).

[18] Philip N. Howard et al, ‘The IRA, Social Media and Political Polarization in the United States, 2012-2018’ available at: https://demtech.oii.ox.ac.uk/wp-content/uploads/sites/93/2018/12/The-IRA-Social-Media-and-Political-Polarization.pdf.