Written evidence, Carnegie UK (CCE0010)

 

 

  1. We welcome the Committee’s call for evidence in its inquiry on “Mobilising action on climate change and environment: behaviour change”. We note that the scope of the call is broad: our submission fits most neatly into the section on “the role of Government and other actors”, with a focus specifically on the actions the Government can take, via its draft Online Safety Bill, to address the impact of online disinformation on public understanding of, and attitudes to, climate change.

 

About our work

 

  1. Carnegie UK Trust was set up in 1913 by the Scottish-American philanthropist Andrew Carnegie to improve the wellbeing of the people of the United Kingdom and Ireland. Our founding deed gave the Trust a mandate to reinterpret our broad mission over the passage of time, responding to the most pressing issues of the day; we have worked on digital policy issues for a number of years. Carnegie UK is a signatory of the Association of Charitable Foundations’ Funder Commitment on Climate Change.[1]

 

  1. In early 2018, Professor Lorna Woods (Professor of Internet Law at the University of Essex) and former civil servant William Perrin started work to develop a model to reduce online harms through a statutory duty of care, enforced by a regulator. The proposals were published in a series of blogs and publications for Carnegie and developed further in evidence to Parliamentary Committees[2]. The Lords Communications Committee[3] and the Commons Science and Technology Committee[4] both endorsed the Carnegie model, as have a number of civil society organisations[5]. In April 2019, the government’s Online Harms White Paper[6], produced under the then Secretary of State for Digital, Culture, Media and Sport, Jeremy Wright, proposed a statutory duty of care enforced by a regulator in a variant of the Carnegie model, and this approach remains central to the Government’s plans for the Online Safety Bill.[7] The European Commission has included a duty of care in its proposal for a Digital Services Act. We talk frequently to our international counterparts about our work, for example in Ireland, Canada, Australia, New Zealand and the US, as well as representatives from the UN and the EU.

 

  1. In December 2019, while waiting for the Government to bring forward its own legislative plans, we published a draft bill[8] to implement a statutory duty of care regime, based upon our full policy document of the previous April[9]. We have since published analysis of[10], and a suite of proposed amendments to[11], the draft Online Safety Bill, and given written[12] and oral[13] evidence to the Joint Committee undertaking pre-legislative scrutiny.

 

  1. As we set out below in relation to climate change disinformation, we strongly believe that the Online Safety Bill needs to be both simplified and strengthened if it is to be effective, with its scope widened by a new definition of harm that will capture not just harms to individuals but to society as a whole. Climate change disinformation is one such societal harm that, at present, would not be captured by the scope of the Bill. We set out below how we think the Government should rectify this and hope that this proposal, and the analysis that informs it, is helpful to the Committee as they continue their deliberations on this important topic.

 

Tackling climate change disinformation through the Online Safety Bill

 

  1. Climate change is a serious threat to the safety and security of United Kingdom citizens. Malicious actors spreading false information on social media could undermine collective action to combat these threats. Yet the draft Online Safety Bill is not designed to tackle a threat to society like climate change disinformation.

 

  1. The Carnegie UK amendments to the Bill[14] provide the radical changes needed to bring climate change disinformation within scope in a proportionate manner. We describe here how they would work.

 

The hazard

 

  1. Prime Minister Boris Johnson noted, when chairing a 2021 United Nations Security Council session,[15] that climate change is “a threat to our security”; the WHO has identified climate change as one of the top 10 global health threats[16]. Authoritative reports indicate the issue is more pressing than previously thought. In 2019, the House of Commons declared an environment and climate change emergency, described as “the most important issue of our time”[17]. COP26[18] was seen as a critical opportunity to accelerate the global response in the light of an increasing threat. Yet the issue remains contentious, and some[19] - perhaps those who have economic interests at stake[20] - seek to sow confusion about the existence of climate change[21], its causes and the ways to tackle it, which “is likely to have disastrous consequences worldwide in the twenty-first century”[22]. Against this background, the question of what information is being made available to people to inform their choices (and perhaps to encourage behaviour change) is important. Media and social media information sources are central to this.

 

Existing media regulation on climate change disinformation

 

  1. On radio and TV, unchallenged assertions about climate that fly in the face of the well-established scientific consensus are now unlikely to happen. OFCOM has not banned climate disinformation but requires broadcasters to ensure the audience is not misled as to information’s veracity or its status in the scientific community. For example, OFCOM found the BBC in breach of the impartiality rules[23] when, in 2018, it gave a platform to Lord Lawson with no adequate challenge in the programme; the BBC had already found the programme in breach of its own rules.

 

  1. Given the scientific consensus around climate change, OFCOM might in future choose to deal with climate change denial through the application of its rules in the Broadcasting Code on harmful content[24]. OFCOM followed this route in tackling COVID misinformation[25], relying on pre-existing research[26] into the harms caused by unsound health or financial advice (“health and wealth claims”). OFCOM’s approach to scientific misinformation in the context of the pandemic was challenged, but the High Court[27] found that its approach complied with free speech requirements and Article 10 ECHR.

 

Threat from climate change disinformation on social media

 

  1. Social media is an important source of news and information for many people, and evidence is emerging of its role in climate change disinformation. Civil society groups have found that social media platforms are a route for amplification for key climate change deniers[28] and a source of funding for them[29]. Avaaz[30] reported an estimated 25 million views of climate change and environment misinformation on Facebook in the US in just over 60 days. Recent analysis of activity on Facebook[31], undertaken by ISD during COP26, demonstrates the scale of the challenge in dealing with climate change mis/disinformation. Their research compared the levels of engagement generated by reliable scientific organisations and climate sceptic actors respectively and found that posts from the latter frequently received more traction and reach than those from the former. For example, in the fortnight over which COP26 took place, sceptic content garnered 12 times the level of engagement of authoritative sources on the platform, and 60% of the “sceptic” posts they analysed could be classified as actively and explicitly attacking efforts to curb climate change.

 

  1. While some platforms have started to take some actions, climate change disinformation remains prevalent and there have been suggestions that services should do more.

 

Online Safety regulation and climate change

 

  1. As drafted, the draft Online Safety Bill does little to tackle climate change disinformation. Climate change disinformation is a good example of something that is not contrary to the criminal law but could be very harmful indeed to society at large, falling into well-known gaps in the proposed regime. [32]

 

  1. Carnegie UK’s detailed recommendations for changes to the draft Bill would address climate disinformation, amongst other serious threats to UK public health, safety and national security. Our approach would improve the systems and processes of social media and search companies so as to reduce the harm from climate dis- and misinformation. We suggest that:

 

    a) A definition of harm is adopted that can encompass climate disinformation by addressing ‘harm to public safety, public health and national security’.

 

    b) A duty is imposed on user-to-user and search services to take reasonable steps to prevent reasonably foreseeable harm arising from the operation of the platform, applied in a proportionate manner and only where appropriate. This duty would include harms arising from climate disinformation circulating on the platform, flowing from the definition in a) above. We emphasise that this is not about banning certain types of speech, but rather about looking at how the features of the platform and their design, as well as its policies, contribute to any harms.

 

    c) The duty of care in b) implies a risk assessment. The companies’ respective risk assessments should be based on OFCOM’s market overview risk assessment. That assessment, and the resulting risk profiles that OFCOM will draw up, underpin the regime. In the context of climate change misinformation, OFCOM should focus in particular on the role of platforms in distributing and amplifying such content and, in doing so, should work closely with expert civil society groups, bodies such as the Royal Society, and academic researchers.

 

    d) Service operators within scope should also pay particular attention to how malicious actors exploit their services to disseminate climate disinformation. This focus would be in addition to concerns about services’ systems and processes that might arise from any unintended amplification of climate change misinformation as a consequence of design focused on high levels of user engagement. OFCOM should be able to make companies repeat this risk assessment process if their assessment is inadequate, a point that is not clear in the draft Bill.

 

    e) Climate disinformation should then form part of companies’ risk of harm prevention and mitigation plans, with OFCOM monitoring their effectiveness. Where companies have already committed to voluntary action (see e.g. Facebook’s Climate Change Science Centre[33], Twitter’s ‘pre-bunking’[34] efforts, and Google’s ad policies), the regulator can help disseminate this. However, even the best corporate action may not have picked up all weaknesses in a company’s products, processes, policies and business model. The regulator (working with civil society) can help ensure that commitments are followed through, that best practice is assessed and disseminated, and that measures are improved over time. The assessment should cover all points of the content distribution cycle, from user onboarding, content creation, dissemination and user tools to curate their information environment, through to moderation, labelling disinformation[35], engaging with trusted flaggers, respecting fact checkers’ decisions[36] and dealing (transparently and effectively) with complaints.

 

    f) Advertising should be brought into the scope of the online safety regime. The draft Bill excludes advertising, with the consequent risk that the systems and processes related to advertising delivery and content monetisation might also lie outwith the regime. The Carnegie proposals include advertising, to ensure these processes fall within scope. Providers of advertising services on platforms should prevent monetisation of climate disinformation content and/or give advertisers the opportunity to opt out of appearing alongside such content. The regulator could also help assess the risks inherent in different definitions of climate change denial advertising. The study into Facebook’s approach to climate misinformation found 113 instances of adverts containing misinformation, from sources of disinformation that had already been flagged[37]; according to news reports[38], the problem was still ongoing during COP26. We note that Google is moving in this direction, but regulatory oversight would help ensure that it delivers.

 

    g) We suggest that, as part of its market risk assessment, OFCOM focus on areas where greater media literacy could prevent harms. This seems appropriate for tackling climate change disinformation: we note that Facebook currently provides considerable authoritative information to people searching for climate information. The regulator could ensure that such approaches actually work, and disseminate and require best practice.

 

  1. Taken together these measures would greatly strengthen companies’ systems and processes to combat climate disinformation.

 

  1. In our opinion the Carnegie approach would be far more effective than trying to use the draft Bill as it stands. The draft Bill is not designed to tackle societal harm: were the government to shoehorn in climate change disinformation as a ‘priority harm’, the structures of the draft Bill would not be strong enough to respond proportionately to the seriousness of the issue.

 

  1. Climate change and technology was a theme of the Future Tech Forum[39], a recent high-level multi-stakeholder event chaired by the Secretary of State, Nadine Dorries MP. If the government takes tackling climate change disinformation seriously, it could do worse than follow the Carnegie amendments.

 

  1. We would be happy to talk further to the Committee and its members about our work, and about the threat of online disinformation to attitudes on climate change, either formally or informally.

 

 

Carnegie UK

December 2021

Contact: maeve.walsh@carnegieuk.org

 


[1] https://www.acf.org.uk/ACF/Connect_collaborate/Funder-Commitment-on-Climate-Change.aspx

[2] Our work, including blogs, papers and submissions to Parliamentary Committees and consultations, can be found here: https://www.carnegieuktrust.org.uk/project/harm-reduction-in-social-media/

[3] https://publications.parliament.uk/pa/ld201719/ldselect/ldcomuni/299/29902.htm

[4] https://publications.parliament.uk/pa/cm201719/cmselect/cmsctech/822/82202.htm

[5] For example, NSPCC: https://www.nspcc.org.uk/globalassets/documents/news/taming-the-wild-west-web-regulate-social-networks.pdf; Children’s Commissioner: https://www.childrenscommissioner.gov.uk/2019/02/06/childrens-commissioner-publishes-astatutory-duty-of-care-for-online-service-providers/; Royal Society for Public Health: https://www.rsph.org.uk/our-work/policy/wellbeing/new-filters.html

[6] https://www.gov.uk/government/consultations/online-harms-white-paper

[7] https://www.gov.uk/government/publications/draft-online-safety-bill

[8] https://www.carnegieuktrust.org.uk/publications/draft-online-harm-bill/

[9] https://d1ssu070pg2v9i.cloudfront.net/pex/carnegie_uk_trust/2019/04/08091652/Online-harm-reduction-a-statutory-duty-of-care-and-regulator.pdf

[10] https://d1ssu070pg2v9i.cloudfront.net/pex/pex_carnegie2021/2021/07/01142144/draft-OSB-CUKT-response-FINAL-1.pdf

[11] https://d1ssu070pg2v9i.cloudfront.net/pex/pex_carnegie2021/2021/11/10133722/Amendments-Explanatory-Notes-Carnegie-UK-Revised-Online-Safety-Bill-1.pdf

[12] https://committees.parliament.uk/writtenevidence/39242/pdf/

[13] https://committees.parliament.uk/oralevidence/2794/pdf/

[14] https://d1ssu070pg2v9i.cloudfront.net/pex/pex_carnegie2021/2021/11/10133722/Amendments-Explanatory-Notes-Carnegie-UK-Revised-Online-Safety-Bill-1.pdf

[15] https://www.gov.uk/government/speeches/pm-boris-johnsons-address-to-the-un-security-council-on-climate-and-security-23-february-2021

[16] https://www.who.int/news-room/spotlight/ten-threats-to-global-health-in-2019

[17] https://www.parliament.uk/business/news/2019/may/mps-debate-the-environment-and-climate-change/

[18] https://ukcop26.org/

[19] https://www.merchantsofdoubt.org/

[20] https://www.sciencenews.org/article/climate-change-disinformation-denial-misinformation

[21] https://www.cambridge.org/core/journals/global-sustainability/article/discourses-of-climate-delay/7B11B722E3E3454BB6212378E32985A7

[22] https://royalsocietypublishing.org/doi/10.1098/rsos.190161

[23] https://www.ofcom.org.uk/__data/assets/pdf_file/0012/112701/issue-351-broadcast-on-demand-bulletin.pdf

[24] https://www.ofcom.org.uk/tv-radio-and-on-demand/broadcast-codes/broadcast-code/section-two-harm-offence

[25] https://blogs.lse.ac.uk/medialse/2020/12/17/coronavirus-and-harm-in-broadcast-content/

[26] https://www.ofcom.org.uk/research-and-data/tv-radio-and-on-demand/attitudes-to-potential-harm

[27] https://www.bailii.org/ew/cases/EWHC/Admin/2020/3390.html

[28] https://www.counterhate.com/toxicten

[29] https://www.newsguardtech.com/special-reports/brands-send-billions-to-misinformation-websites-newsguard-comscore-report/

[30] https://secure.avaaz.org/campaign/en/facebook_climate_misinformation/

[31] https://www.isdglobal.org/digital_dispatches/how-climate-sceptic-actors-mobilised-on-facebook-during-cop26-to-undermine-the-summit/

[32] https://www.carnegieuktrust.org.uk/blog-posts/the-draft-online-safety-bill-carnegie-uk-trust-initial-analysis/

[33] https://about.fb.com/news/2020/09/stepping-up-the-fight-against-climate-change/

[34] https://www.axios.com/exclusive-twitter-takes-aim-climate-misinformation-cop26-b8207414-3085-43b6-b7c7-fa23eaaf2208.html

[35] https://www.counterhate.com/toxicten

[36] https://www.scientificamerican.com/article/climate-denial-spreads-on-facebook-as-scientists-face-restrictions/

[37] https://influencemap.org/report/Climate-Change-and-Digital-Advertising-86222daed29c6f49ab2da76b0df15f76#2

[38] https://www.reuters.com/business/cop/during-cop26-facebook-served-ads-with-climate-falsehoods-skepticism-2021-11-18/

[39] https://www.gov.uk/government/publications/future-tech-forum-chairs-statement-london/future-tech-forum-chairs-statement-london