Written evidence submitted by the RSA (OSB0070)

About the RSA, background, and reason for submission

  1. The RSA (Royal Society for the Encouragement of Arts, Manufactures and Commerce) believes in a world where everyone is able to participate in creating a better future. Through our ideas, research and a 30,000-strong Fellowship, we are a global community of proactive problem solvers, sharing powerful ideas, carrying out cutting-edge research and building networks and opportunities for people to collaborate, influence and demonstrate practical solutions to realise change.

 

  2. Since 2018 the RSA has been investigating issues that arise at the intersection of technology and society. This line of work led us to our ongoing investigation into misinformation and disinformation, and what could meaningfully be done to remedy the individual and societal harms they cause. Our final report is forthcoming, and our response below is drawn and adapted from this research.

Response to the inquiry

  3. Our response to the Joint Committee (JC) covers the themes raised by the following questions, as taken from the call for evidence:
    1. Objectives. Will the proposed legislation effectively deliver the policy aim of making the UK the safest place to be online?
    2. Objectives. Does the Bill deliver the intention to focus on systems and processes rather than content, and is this an effective approach for moderating content? What role do you see, for example, for safety by design, algorithmic recommendations, minimum standards, default settings?
    3. Content in scope. Earlier proposals included content such as misinformation/disinformation that could lead to societal harm in the scope of the Bill. These types of content have since been removed. What do you think of this decision?
    4. Algorithms and user agency. What role do algorithms currently play in influencing the presence of certain types of content online and how it is disseminated? What role might they play in reducing the presence of illegal and/or harmful content?
    5. Algorithms and user agency. Are there any foreseeable problems that could arise if service providers increased their use of algorithms to fulfil their safety duties? How might the Bill address them?
    6. The role of Ofcom. [Our response gives a broad view as to the recommended role of Ofcom and its subsidiaries in combating misinformation and disinformation.]

Summary

  4. The draft Online Safety Bill is a welcome first step in controlling online harms. However, we argue that by not including societal, or collective, harms as specific areas of investigation within the Bill, it leaves both society and individuals unprotected. By societal harms we refer to online harms which lead to the erosion of important societal goals: for instance, trust in the democratic system, social trust and cohesion, trust in expertise, or belief in accepted facts. Evidence of societal harms caused by misinformation and disinformation has already been seen in the UK and around the world, such as distrust in electoral processes and distrust in science, and will be explored further below.
  5. We are experiencing an online environment in which there is an abundance of information and a proliferation of misinformation. It is an environment in which the responsibility for the quality of information is placed largely on platforms, resulting in poor outcomes, inconsistent oversight, and a deficit of legitimacy. The Bill in its current form does little to improve this. Yet the current state of affairs does contain kernels of good practice and good will: within platforms, within public bodies, and most particularly within civil society.
  6. It is our belief that we should use regulation to improve the quality of information online and to protect against societal harms caused by misinformation. This should be done in a way which is independent of both government and platforms, in order to protect goals such as freedom of speech while improving legitimacy. We believe this would be best done through a new independent and pluralist body, the Office for Public Harms, made up of stakeholders including citizens, platforms, civil society, Ofcom, and the media. The Office for Public Harms would have two primary functions:
    1. Investigate societal harms online. The Office would then publish evidence of any societal harms it identifies and advise Ofcom on them. Ofcom’s remit would be to influence systemic factors, such as changes to the codes of practice in regard to misinformation.
    2. Act as a misinformation ombudsman, investigating where individuals or organisations feel that misinformation has been allowed to spread on platforms, or where platforms are felt to have overreached into freedom of expression. It would then set means of redress where appropriate. The Office should retain protection of societal goals, including freedom of expression, as a primary objective and not seek to systematically remove content. Instead, it can exert influence through the promotion of good-quality content and through recommended changes to the amplification of content.

 

RSA full response

 

The scale of the problem

  7. Misinformation is a systemic threat to society and has been growing through the last decade. To give a sense of scale, consider:
    1. Per recent reports, 2.76 billion people use Facebook, Instagram, or Facebook Messenger daily.[1] Twitter averages 199 million monetisable daily users.[2] YouTube reaches an audience of 2 billion users monthly.[3]
    2. Social media is increasingly cited as an influential source of news in polls: a 2016 study from Pew Research found that 62 percent of adults in the US name social media as a news source.[4] In the UK, 49 percent of adults consume news via social media. Of those that do, 71 percent use Facebook and 46 percent use Twitter.[5]
    3. On social media platforms like Facebook or Twitter, false stories have a greater reach and likelihood of virality than true stories. Falsehoods are 70 percent more likely to be retweeted than truths, and reached 1,500 people on average six times quicker than accurate news stories.[6]
    4. Outputs from fact-checking organisations increased by 900 percent between January and March 2020.[7]
    5. Facebook states that between the start of the pandemic and August 2021 it removed 20 million pieces of content for breaking its rules on coronavirus misinformation.[8]

 

  8. The information we receive through social media, including misinformation, is on a previously unseen scale. In aggregate, individual cases of misinformation and disinformation can lead to collective or societal harms; we refer to this as the ‘erosion of the public square’. Societal harms in this sense may eventually lead to the erosion of important societal values and goals, such as trust in the democratic system, social trust and cohesion, trust in expertise, or belief in accepted facts. The very real impacts of misinformation at scale can already be seen. For example:
    1. Vaccine misinformation, from false claims that vaccines contain microchips to unfounded fears about fertility, has circulated widely during the pandemic,[9] at a time when vaccine uptake[10] and levels of vaccine hesitancy[11] remain pressing public concerns.
    2. Online falsehoods about the 2020 US presidential election contributed to the storming of the Capitol,[12] the QAnon conspiracy movement has built a growing following in the UK,[13] and 5G conspiracy theories have fuelled attacks on telecoms workers and infrastructure.[14], [15]

 

  9. Yet these movements can also have a slower and wider effect: one which erodes overall public trust in, for instance, science, technology, or the electoral process. This is another example of where collective or societal harm has occurred. RSA pre-pandemic polling estimated that around 6 percent of people believe, or are inclined to believe, in three major anti-scientific conspiracies: that 5G is harmful, that vaccines are harmful, and that global warming is fake.[16]

Existing self-regulation and proposals in the Bill

  10. Our current means of maintaining the quality of information online is platform self-regulation. Yet, as we have seen, information online is increasingly unreliable and also increasingly influential. Pre-pandemic polling by the RSA on the subject of misinformation shows that 71 percent of the public stated they want a “stronger independent regulator on the quality of news”.[17] This statistic should of course be taken with much caution, not least because a pandemic has occurred since, and because the public will have highly varying views on what a strong independent regulator means. Nevertheless, it speaks to a general anxiety about the quality of our current information environment and the insufficient means of improving it.
  11. The draft Online Safety Bill says relatively little on how it will deal with misinformation, and the remit to investigate societal harms has been removed from earlier iterations. The primary means the Bill uses to address misinformation and disinformation is the formation of an advisory committee on disinformation and misinformation. The committee will be expected to provide advice to Ofcom as to how “providers of regulated services should deal with” misinformation and disinformation, as well as advise on the exercise of Ofcom’s powers to request transparency reports, and on the promotion of media literacy.[18] While this committee is a necessary step, it is not yet clear how influential, if at all, it will be, because both the committee and Ofcom have fewer powers to deal with misinformation than with other online harms.
  12. Misinformation and disinformation should not, for instance, be treated on a par with CSEA or terrorism content in terms of the powers we afford Ofcom or any subsidiary. However, we believe that Ofcom could be given a clearer and proportionate remit to deal with societal harms, and to investigate the systemic factors which allow misinformation to proliferate. We set out below how we believe this is best done.

The role of civil society

  13. Civil society has, to date, been a critical but under-appreciated force in the fight against misinformation. By civil society actors we mean groups including fact-checkers, universities, researchers, specially convened commissions, media outlets, and others.
  14. For instance, universities, researchers, and commissions provide the data and frameworks to understand misinformation and its effects on society. Examples include work done by the Oxford Internet Institute, LSE’s Truth, Trust and Technology Commission, and The Royal Society’s investigation into digital technology and information.[19], [20], [21]
  15. Fact-checkers, on the other hand, provide clear facts on a subject of contention, or on outright false content. Of course, clear facts do not always exist; in such cases, fact-checkers often provide critical nuance, context, and history on the subject. This informing of the public discourse allows for greater critical thought and a variety of informed perspectives, and so strengthens the public square. Examples of major UK fact-checkers include Full Fact, Infotagion, BBC’s Reality Check, and Channel 4’s FactCheck.
  16. However, fact-checkers remain largely unknown by the public, and are underutilised by platforms themselves. For instance, only around 7 percent of our polling sample regularly used major fact-checkers and purposefully sought a range of news sources. Others have found that platforms can be highly inconsistent and untimely in their application of fact-checkers’ work: Avaaz found that on Facebook up to two-thirds of content which had already been fact-checked did not have a label applied by Facebook, and that it could take up to 22 days for a flag to appear.[22]

Our recommendations:

  17. To improve the quality of information online, and to strengthen the public square, we believe that new forms of oversight are needed which are independent of government and of platforms. The current system, an almost exclusively platform-controlled information environment, is not working and is ill-suited to meeting societal goals. New oversight would best ensure that these societal goals, including free speech, are safeguarded. We recommend that:

 

    1. The Bill should include an explicit remit to investigate societal or collective harms, ie beyond its current scope of individual harms.

 

    2. This should be done through an independent body, the Office for Public Harms, made up of a pluralist panel of stakeholders including citizens, Ofcom, platforms and wider industry, traditional media, fact-checkers, researchers, and other experts. The Office would work alongside Ofcom, though remain independent of it, and would supersede the advisory committee proposed in the Bill. The Office for Public Harms takes inspiration from existing independent regulators, including the Advertising Standards Authority.

The Office:

  18. Would have responsibility to investigate and analyse societal harms caused by misinformation and disinformation. It could do this through platform transparency reports, information requests, and evidence of harms submitted by the public or by organisations. The Office would then publish its findings publicly, inform platforms of issues it finds, and advise Ofcom.

 

  19. Should also act as a misinformation ombudsman. It would investigate and suggest remedies for individual cases of harm caused by misinformation or disinformation, or where content is felt to have been unfairly removed or demoted. This would occur only if the remediation processes on platforms themselves are felt to be unsatisfactory. We feel that this measure is more appropriate than the current system, or that suggested in the Bill, because it offers an independent and multi-stakeholder backstop to an otherwise primarily platform-controlled online information environment.

 

  20. Would retain freedom of expression as a primary objective, as well as protecting other important societal goals and improving the quality of information online. The Office should not seek to routinely, if ever, remove content, but instead suggest other forms of redress. These could include demanding that the codes of practice set by Ofcom are better followed (see below), the sharing of best practice between platforms, user targeting in the promotion of good-quality information, or changes to content’s algorithmic amplification.

 

  21. Should be funded in a similar arrangement to the Advertising Standards Authority, whereby platforms pay a levy towards its running and work. It should also be informed by citizens’ deliberation and panels to generate wider insight and legitimacy in its work.

 

  22. Ofcom’s responsibility would be to influence systemic factors, not to be involved in content. For instance, Ofcom would retain the right to request transparency reports and to set codes of practice for platforms to follow. Codes of practice could include strict protocols and timescales with regard to fact-checked content or repeatedly offending accounts, or could mean changes to advertising practices.

 

 

16 September 2021

 


[1] Facebook (2021) FB Earnings Presentation Q2 2021 [Online] Available at: s21.q4cdn.com/399680738/files/doc_financials/2021/q2/Q2-2021_Earnings-Presentation.pdf [Accessed 13 September 2021].

[2] Twitter (2021) Letter to Shareholders [Online] Available at: s22.q4cdn.com/826641620/files/doc_financials/2021/q1/Q1'21-Shareholder-Letter.pdf [Accessed 12 August 2021].

[3] YouTube (2021) YouTube for press [Online] Available at: blog.youtube/press [Accessed 12 August 2021].

[4] Gottfried, J and Shearer, E. (2016) News Use Across Social Media Platforms 2016 [Online] Washington DC. Pew Research Center. Available at: www.pewresearch.org/journalism/2016/05/26/news-use-across-social-media-platforms-2016/ [Accessed 3 September 2021].

[5] Ofcom (2021) News Consumption in the UK: 2021 [PPT] [Online] Available at: www.ofcom.org.uk/__data/assets/powerpoint_doc/0026/222479/news-consumption-in-the-uk-2021-report.pptx [Accessed 19 August 2021].

[6] MIT Sloan School (2018) Study: False news spreads faster than the truth [Online] Available at: mitsloan.mit.edu/ideas-made-to-matter/study-false-news-spreads-faster-truth [Accessed 3 September 2021].

[7] Brennen, JS, Simon, FM, Howard, PN and Nielsen, RK. (2020) Types, sources, and claims of COVID-19 Misinformation [Online] Available at: reutersinstitute.politics.ox.ac.uk/types-sources-and-claims-covid-19-misinformation.

[8] Bickert, M (2021) How We’re Taking Action Against Vaccine Misinformation Superspreaders. [Online] Facebook. Available at: about.fb.com/news/2021/08/taking-action-against-vaccine-misinformation-superspreaders/ [Accessed 3 September 2021].

[9] See: Goodman, J and Carmichael, F (2020) Coronavirus: Bill Gates ‘microchip’ conspiracy theory and other vaccine claims fact checked [Online] BBC News. Available at: www.bbc.co.uk/news/52847648 [Accessed 31 August 2021]. And see: Schraer, R (2021) Covid vaccine: Fertility and miscarriage claims fact-checked [online] BBC News. Available at: www.bbc.co.uk/news/health-57552527 [Accessed 31 August 2021].

[10] UK Government COVID Dashboard (2021) Vaccinations in England [Online] Available at: coronavirus.data.gov.uk/details/vaccinations?areaType=nation&areaName=England [Accessed 3 September 2021].

[11] Office for National Statistics (2021) Coronavirus and vaccine hesitancy, Great Britain [Online] Available at: www.ons.gov.uk/peoplepopulationandcommunity/healthandsocialcare/healthandwellbeing/bulletins/coronavirusandvaccinehesitancygreatbritain/28aprilto23may2021 [Accessed 3 September 2021].

[12] BBC Reality Check (2021) Capitol riots: Who broke into the building? [Online] BBC. Available at: www.bbc.co.uk/news/55572805 [Accessed 3 September 2021].

[13] Lawrence, D and Davis, G (2020) Qanon in the UK: The growth of a movement [Online] London: HOPE not Hate. Available at: www.hopenothate.org.uk/wp-content/uploads/2020/10/qanon-report-2020-10-FINAL.pdf [Accessed 02 September 2021].

[14] Hern, A (2020) 5G conspiracy theories fuel attacks on telecoms workers [Online] London: The Guardian. Available at: www.theguardian.com/business/2020/may/07/5g-conspiracy-theories-attacks-telecoms-covid [Accessed 02 September 2021].

[15] Full Fact (2020) Here’s where those 5G and coronavirus conspiracy theories came from [Online] London: Full Fact. Available at: fullfact.org/online/5g-and-coronavirus-conspiracy-theories-came/ [Accessed 14 September 2021].

[16] Full findings will be released in our forthcoming report.

[17] Emphasis has been added.

[18] HM Government (2021) Draft Online Safety Bill [Online] London: HM Government p90. Available at: assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/985033/Draft_Online_Safety_Bill_Bookmarked.pdf [Accessed 13 September 2021].

[19] Oxford Internet Institute (no date) Programme on Democracy and Technology [online] Available at: www.oii.ox.ac.uk/research/programme-on-democracy-and-technology/ [Accessed 24 August 2021].

[20] London School of Economics (2020) Tackling the information crisis: a policy framework for media system resilience [PDF] London: LSE. Available at: www.lse.ac.uk/media-and-communications/assets/documents/research/T3-Report-Tackling-the-Information-Crisis-v6.pdf [Accessed 24 August 2021].

[21] Royal Society (no date) Digital technology and information [Online] Available at: royalsociety.org/topics-policy/projects/digital-technology-and-information/ [Accessed: 24 August 2021].

[22] Avaaz (2020) How Facebook can Flatten the Curve of the Coronavirus Infodemic [PDF] Avaaz. Available at: avaazimages.avaaz.org/facebook_coronavirus_misinformation.pdf [Accessed 24 August 2021].