Written evidence submitted by Carnegie UK Trust (COR0153)
1. We welcome the Committee’s new inquiry into Online Harms and the opportunity to submit evidence. Our submission sets out the background to our work on the development of a statutory duty of care for online harms reduction and responds both to the terms of reference set out for this inquiry and to some of the recent statements by Ministers on the Government’s intentions, including in response to your Committee’s questioning on 13 May 2020.
2. We would be happy to provide further information on our work in writing or to discuss it with Committee members at a future evidence session.
About our work
3. The Carnegie UK Trust was set up in 1913 by the Scottish-American philanthropist Andrew Carnegie to improve the wellbeing of the people of the United Kingdom and Ireland. Our founding deed gave the Trust a mandate to reinterpret its broad mission over time, responding to the most pressing issues of the day, and we have worked on digital policy issues for a number of years.
4. In early 2018, Professor Lorna Woods (Professor of Internet Law at the University of Essex) and former civil servant William Perrin started work to develop a model to reduce online harms through a statutory duty of care, enforced by a regulator. The proposals were published in a series of blogs and publications for Carnegie and developed further in evidence to Parliamentary Committees[1]. The Lords Communications Committee[2] and the Commons Science and Technology Committee[3] both endorsed the Carnegie model, as have a number of civil society organisations[4]. In April 2019, the government’s Online Harms White Paper[5], produced under the then Secretary of State for Digital, Culture, Media and Sport, Jeremy Wright, proposed a statutory duty of care enforced by a regulator in a variant of the Carnegie model. France[6], and apparently the European Commission, are now considering duty of care models for online harms.
5. In December 2019, while waiting for the Government to bring forward its own legislative plans, we published a draft bill[7] to implement a statutory duty of care regime, based upon our full policy document of the previous April[8]. We are also supporting Lord McNally on his Private Member’s Bill, the Online Harms Reduction Regulator (Report) Bill[9], introduced into the House of Lords on 14 January 2020, which would provide an opportunity for full Parliamentary debate on the nature of the regulatory regime and, if passed, empower OFCOM to prepare for its introduction.
The nature, prevalence and scale of online harms during the Covid-19 period
6. Government action on online harms has been delayed, despite promises in the 2017 and 2019 manifestos and the two 2019 Queen’s Speeches. With rolling policy crises and a changing cast of Ministers, this is unsurprising. The recent Interim Response[10] to the White Paper consultation was helpful, but still leaves many questions unanswered. In her evidence to the Home Affairs Committee, the Minister of State for Digital and Culture, Caroline Dinenage, confirmed that the Government’s full response would follow “later this year” but was unable to give further details on many of the areas of concern to the Committee, nor to commit to introducing a Bill before the end of the current Parliamentary session.
7. We regret the delays to the Government’s proposals, particularly given the evidence of an upsurge in many of the harms that the Bill would cover. Many of the harms in scope of the proposed Online Harms legislation – in particular, child sexual abuse and exploitation (which are criminal offences) – have been exacerbated by the Covid-19 crisis, as more and more time is spent online at home and children and young people’s unsupervised social media activity increases. For example, the NSPCC and the Children’s Commissioner have recently flagged the increased risk of online grooming to lonely and vulnerable children during lockdown.[11] Meanwhile, the social media companies’ capacity to remove child sexual abuse imagery has fallen: the Internet Watch Foundation reported an 89% reduction in the number of images taken down between March and April, compared to the previous period.[12] In her evidence to the Committee, the Home Office Minister, Baroness Williams, referred to a “21% uptick” in hate crime during the lockdown period, while Minister Dinenage suggested that emerging evidence points to a rise in incidents of revenge porn and sexploitation.[13]
8. Many other online harms, outwith the current scope of the Government’s regulatory proposals, are also being exacerbated during this time. The National Crime Agency and Victim Support have both recently reported a surge in online scams targeting vulnerable or self-isolating people during the lockdown[14]. Evidence is emerging of widespread fraudulent activity on eBay; for example, a recent report charted significant scams relating to the sale of vehicles during lockdown[15]. Online scams are Britain’s biggest category of property crime but, we understand, will not be within the scope of the Government’s online harms proposals. Similarly, the Government has not made clear whether online marketplaces such as Amazon and eBay will be in scope of the online harms regime with respect to the sale of illegal weapons. It is possible that a discussion promoting the sale of an illegal weapon on Facebook might be caught by an online harms regime, while the actual sale on Facebook Marketplace or on eBay might not be. Scams and sale-of-goods offences are often referred to as ‘consumer harms’.
9. Finally, people’s exposure to misinformation and disinformation has increased during the pandemic. These harms are also outwith the proposed current scope of regulation, despite their potential to have real-world consequences. OFCOM’s weekly polling of people’s news consumption on Covid-19 has produced good data that are independent of the tech companies[16]. Exposure to misinformation remains high, at around 50%. The spread of misinformation and disinformation relating to the virus is affecting not just public health but, when combined with conspiracy theories about 5G, the security of essential infrastructure.
Steps that could be taken to mitigate these concerns
10. In their recent appearances in Parliament – whether before this Committee[17], the DCMS Select Committee[18], the Lords Democracy and Digital Technologies Committee[19], or on the floor of both Houses[20] – Government Ministers have been keen to emphasise and praise the actions taken by the social media companies during the Covid-19 pandemic to respond to the public health threats arising from the online spread of Covid-19 misinformation and disinformation, and the companies’ willingness to co-operate with the Government’s Counter-Disinformation Unit. Indeed, the Minister for Digital and Culture described the Government as a kind of “trusted flagger” to the tech companies, although figures on the volume, type and impact of the activity undertaken through this collaboration have not been forthcoming.
11. It is true that the social media companies have introduced a number of important measures during the course of the pandemic to respond to the spread of harmful disinformation – such as WhatsApp’s “velocity limiter” to reduce the number of times messages can be forwarded, which, it claims, has led to a 70% reduction in “highly forwarded” messages on its service.[21] Most social media companies have also changed the design of their services to promote information from authoritative sources and to reduce the prominence given to unverified information in their discovery and search functions. Some have followed the lead of Pinterest, which has had a health misinformation policy since 2017 and decided last year not to allow anti-vaccination material on its platform. Its very clear community guidelines do not allow content that might have “immediate and detrimental” effects on health or public safety, which allowed Pinterest easily to extend them to cover searches for Covid-19 and limit the results to material from authoritative sources.[22] In the absence of a regulatory system, however, we have no way to verify the claims of social media companies about the effectiveness of their actions.
12. It remains the case, however, that the platforms’ underlying design and business models encourage and facilitate the problem of mis/disinformation, and that much of the action Ministers are keen to praise is reactive: the tech companies have been forced by political pressure to fire-fight a crisis on their platforms that is damaging trust both in national governments’ handling of the pandemic and in previously authoritative sources of information and news. The actions taken are also reactive rather than systemic: superficial changes to platform design that attempt to reduce the spread of material already out of control; the promotion and signposting of authoritative sources; or the flagging, correction or takedown of untrue or harmful content once it has been established as such. These steps may be part of the solution but, as they stand, they are insufficient.
13. The Minister for Digital and Culture frequently referred to disinformation falling into a “harmful but legal” category and to the “difficult balance to strike” in dealing with harmful or damaging information content “within the realms of people’s freedom of speech”. This overlooks the fact that it is not only the expression of the harmful content in itself that causes problems but the speed and scale of its spread and promotion – a spread encouraged and facilitated by the platforms’ own system design. This includes, for example, their algorithms, recommender models, reliance on user profiling and micro-targeting[23], and nudges to users to like or share content without time for reflection. A significant part of the problem, in our view, relates to these information flows, an aspect that does not readily fit a framework designed around a distinction between content that is illegal and content that is legal but harmful.
14. We set out in more detail below how our proposal for a systemic duty of care, enforced by a regulator, enables regulation to bite at the level of platform design – tackling these information-flow issues – and requires risk mitigation rather than the regulation of individual pieces of content. Such a systemic approach should cover not just disinformation, whether resulting in electoral or public health harms, but also consumer harms (including online scams, fraud and the sale of unsafe products). As Professor Woods has argued in her comprehensive paper on the subject, this approach is entirely consistent with the protection of people’s fundamental rights, including the right to freedom of expression[24].
The adequacy of the Government’s online harms proposals to address issues arising from the pandemic, as well as issues previously identified
15. Until the Government publishes its full response to the White Paper and its own Bill, we have significant reservations about the adequacy of its proposals to deliver on its frequently restated ambition to “make the UK the safest place in the world to be online”. Last year’s White Paper described a regime largely framed around types of content, supported by a series of codes of practice addressing broad categories of harm. The Interim Response in February went some way towards describing a “systems-based” regime, but Ministers’ recent evidence has not provided any further clarity on its nature. They also continue to rule significant harms (such as harms to democracy,[25] consumer harms and disinformation) out of the potential scope of legislation, while recent statements by the Secretary of State and the Minister of State suggest a further narrowing of the scope of regulatory action to ‘illegal harms’ (whatever they may be).
16. We recommend that the Committee challenge the Government on this apparent limiting of the online harms regime to harms that lie within the boundaries of the criminal law. There are many areas where a regulatory system penalises people for things that are not criminal offences and where a regulator and the companies it regulates are trusted to make a judgement – eg advertising, radio and TV broadcasting, and utility regulation. In broadcasting, OFCOM is charged with ensuring ‘that generally accepted standards are applied to the content of television and radio services so as to provide adequate protection for members of the public from the inclusion in such services of offensive and harmful material.’[26] Such a limitation would also introduce a difference of approach between online and offline activity in these regulated areas.
17. The illegal/‘legal but harmful’ distinction is poorly defined and risks undermining the systemic nature of a duty of care. It removes responsibility from the platform and keeps the burden on society at large. The distinction creates an arbitrary line and a bias towards retrospective action rather than action against ‘reasonably foreseeable’ harm: it implicitly suggests waiting for something bad to happen, categorising it as criminal, and only then deciding whether anything should have been done about it. We set out in a short annex to this submission some examples that illustrate this problem.
18. In a further retreat from a comprehensive, systems-based approach, the Secretary of State has also suggested that regulation may primarily be about companies “enforcing their own terms and conditions” and being more transparent about their activities in relation to harmful content. While companies should enforce their community standards, an approach reliant solely on this would leave the reduction and management of harms in the hands of overseas companies that have, in any event, not proven effective at doing so to date. Rather than taking back control, this explicitly cedes control to foreign companies. (Indeed, if social media is now akin to a form of critical national infrastructure, there is a question as to whether it should be afforded the same level of formal scrutiny and safeguarding.[27])
19. The Government’s proposed approach will not be sufficient to reduce the harm already experienced by many users online; nor will it prevent the emergence of harm from whatever the next global, societal or democratic crisis may be. The Carnegie April 2019 policy document[28], ‘Online harm reduction – a statutory duty of care and regulator’, and our response[29] to the Government’s White Paper discuss the arguments for a systemic approach at length, building on a “precautionary principle” that places responsibility for the management and mitigation of the risk of harms (harms which they have had a role in creating or exacerbating) on the tech companies themselves. In summary:
“At the heart of the new regime would be a ‘duty of care’ set out by Parliament in statute. This statutory duty of care would require most companies that provide social media or online messaging services used in the UK to protect people in the UK from reasonably foreseeable harms that might arise from use of those services. This approach is risk-based and outcomes-focused. A regulator would have sufficient powers to ensure that companies delivered on their statutory duty of care. …
“Everything that happens on a social media or messaging service is a result of corporate decisions: about the terms of service, the software deployed and the resources put into enforcing the terms of service and maintaining the software. These design choices are not neutral: they may encourage or discourage certain behaviours by the users of the service … A statutory duty of care is simple, broadly based and largely future-proof. For instance, the duties of care in the Health and Safety at Work Act 1974 still work well today, enforced and with their application kept up to date by a competent regulator.
“A statutory duty of care focuses on the objective – harm reduction – and leaves the detail of the means to those best placed to come up with solutions in context: the companies who are subject to the duty of care. A statutory duty of care returns the cost of harms to those responsible for them, an application of the micro-economically efficient ‘polluter pays’ principle … The continual evolution of online services, where software is updated almost continuously, makes traditional evidence gathering such as long-term randomised control trials problematic. New services adopted rapidly that potentially cause harm illustrate long-standing tensions between science and public policy. For decades scientists and politicians have wrestled with commercial actions for which there is emergent evidence of harms: genetically modified foods, human fertilisation and embryology, mammalian cloning, nanotechnologies, mobile phone electromagnetic radiation, pesticides, bovine spongiform encephalopathy. In 2002, risk management specialists reached a balanced definition of the precautionary principle that allows economic development to proceed at risk in areas where there is emergent evidence of harms but scientific certainty is lacking within the time frame for decision making.[30]
“Emergent evidence of harm caused by online services poses many questions: whether bullying of children is widespread or whether such behaviour harms the victim; whether rape and death threats to women in public life have any real impact on them, or on society; or whether the use of devices with screens in itself causes problems. The precautionary principle provides the basis for policymaking in this field, where evidence of harm may be evident, but not conclusive of causation. Companies should embrace the precautionary principle as it protects them from requirements to ban particular types of content or speakers by politicians who may over-react in the face of moral panic. Parliament should guide the regulator with a non-exclusive list of harms for it to focus upon. Parliament has created regulators before that have had few problems in arbitrating complex social issues; these harms should not be beyond the capacity of a competent and independent regulator. Some companies would welcome the guidance.”[31]
20. The effectiveness of this approach is underlined by the challenge now faced by Governments and the social media companies in responding to Covid-19 misinformation, which William Perrin set out in a recent blog post:
“the design choices that lead to the viral spread of disinformation or misleading information on coronavirus – ease of liking, sharing or forwarding; algorithmic promotion of sensationalist content; lack of transparent community guidelines or enforceable terms and conditions; failure to address effectively the proliferation of fake accounts or botnets – are the same that lead to the prevalence of online scams, the intimidation and harassment of public figures, or the possible subversion of electoral and democratic processes. The durability of the systemic statutory duty of care model is demonstrated by Covid19 itself – neither the lengthy codes of practice on online harm detailed by the UK Government, nor our own work mentioned the concept of disinformation relating to a pandemic – yet if a duty of care had been in place, it would have caught this problem.” [32]
21. In relation to the significant new consumer harms that have emerged during the Covid-19 pandemic, we remain of the view that consumer harms should also be within the scope of the online harms proposals, whether or not the cause of those harms constitutes a criminal offence. The issue should be straightforward to deal with: if a competent regulator or other statutory body identifies a new vector for online harm that breaches its own specialist regulatory regime, it should be able to hand a dossier to OFCOM to assess and, if appropriate, process within the online harms regime. Such interlocking regulation would protect consumers and increase the effectiveness of regulators, such as the Financial Conduct Authority, that find it hard to get purchase on online companies. We are developing this idea further and will publish new thinking soon, which we will be happy to share with the Committee.
Conclusion
22. There appears to be a growing disconnect between an increasing public appetite for regulatory action to address online harms (see recent Doteveryone[33] and Open Knowledge Foundation[34] surveys) and the overtly conciliatory tone struck by the Secretary of State for Digital, Culture, Media and Sport towards the tech companies as the Covid-19 crisis has progressed. The Government’s long-held ambition to “make the UK the safest place to be online and the best place to start a business” is increasingly being replaced by a Ministerial mantra that its online harms regulation must be right from the perspective of businesses as well as protecting freedom of speech.
23. With this in mind, we wish to conclude by drawing the Committee’s attention to our letter – written pre-Covid-19 – to the Home Secretary and the DCMS Secretary of State on the opportunities for the UK Government in developing a “British model for regulation”, one which would set the pace and direction for the rest of the world to follow and provide a “gold standard” of digital regulation for businesses operating here.[35] The UK’s leadership on action to address illegal activity online – whether the Internet Watch Foundation’s work on child sexual exploitation and abuse, or our influence within the Five Eyes community on dealing with terrorist and extremist content online – can be built on. Moreover, we believe that there is Parliamentary support and enthusiasm for such a proportionate, systemic and world-leading approach.
24. As the Minister of State for Digital and Culture told your Committee in relation to the Government’s Online Harms proposals, “the eyes of the world are on us” and countries are “looking at how we are implementing this”. We remain confident that the adoption of a systemic duty of care, enforced by a regulator, is the best way to protect users from harm online while also supporting business growth and innovation, and we would urge the Government to commit to its introduction at the earliest opportunity.
25. We are happy to work with the Committee, in whatever way is most helpful, to chart a course towards the introduction of a Bill before the end of this Parliamentary session. Whether through pre-legislative scrutiny of a Government Bill, which the Secretary of State intimated may be a possibility, or the Second Reading of Lord McNally’s Bill, we hope that the many advocates for urgent action in Parliament will soon be able to contribute meaningfully to the development of world-leading online harms legislation here in the UK.
May 2020
ANNEX: Why an illegal/‘legal but harmful’ approach to categorising online harms is poorly defined
The Government seems convinced of the need to tackle the very worst harms online – CSEA and terrorism – via regulation. However, in the Interim Response to the White Paper consultation the Government took the following position, which seems to limit the scope of harms covered:
‘4. To ensure protections for freedom of expression, regulation will establish differentiated expectations on companies for illegal content and activity, versus conduct that may not be illegal but has the potential to cause harm, such as online bullying, intimidation in public life, or self-harm and suicide imagery.’
A strength of a statutory duty of care approach is that, within a regulatory regime, it focuses on reasonably foreseeable harm as an outcome of the platform’s actions, as opposed to focusing on types of content that trigger the usually high thresholds of illegality. Introducing the illegal/legal distinction without detailed explanation gives rise to a range of issues.
Criminal law issues
Regulatory issues
In regulatory practice, Parliament may have set up a regime to tackle issues without primary recourse to the courts, where an action in breach of the regulatory regime may not clearly be criminal but is nonetheless harmful.
Civil issues
Societal issues
[1] Our work, including blogs, papers and submissions to Parliamentary Committees and consultations, can be found here: https://www.carnegieuktrust.org.uk/project/harm-reduction-in-social-media/
[2] https://www.parliament.uk/business/committees/committees-a-z/lords-select/communications-committee/inquiries/parliament-2017/the-internet-to-regulate-or-not-to-regulate/
[3] https://publications.parliament.uk/pa/cm201719/cmselect/cmsctech/822/82202.htm
[4] For example, NSPCC: https://www.nspcc.org.uk/globalassets/documents/news/taming-the-wild-west-web-regulate-social-networks.pdf; Children’s Commissioner: https://www.childrenscommissioner.gov.uk/2019/02/06/childrens-commissioner-publishes-a-statutory-duty-of-care-for-online-service-providers/; Royal Society for Public Health: https://www.rsph.org.uk/our-work/policy/wellbeing/new-filters.html
[5] https://www.gov.uk/government/consultations/online-harms-white-paper
[6] http://www.iicom.org/images/iic/themes/news/Reports/French-social-media-framework---May-2019.pdf
[7] https://www.carnegieuktrust.org.uk/publications/draft-online-harm-bill/
[8] https://d1ssu070pg2v9i.cloudfront.net/pex/carnegie_uk_trust/2019/04/08091652/Online-harm-reduction-a-statutory-duty-of-care-and-regulator.pdf
[9] https://services.parliament.uk/bills/2019-21/onlineharmsreductionregulatorreportbill.html
[10] https://www.gov.uk/government/consultations/online-harms-white-paper/public-feedback/online-harms-white-paper-initial-consultation-response
[11] https://www.nspcc.org.uk/what-we-do/news-opinion/coronavirus-children-groomed-online/; https://www.childrenssociety.org.uk/news-and-blogs/press-releases/childrens-society-responds-to-report-on-lockdown-risks-facing
[12] https://www.theguardian.com/society/2020/apr/27/lockdown-hampering-removal-of-child-sexual-abuse-material-online
[13] We discuss below some of the difficulties around definitions of harms, particularly in relation to abusive and offensive behaviour online. Many of these are in scope of the Law Commission’s ongoing review: https://www.lawcom.gov.uk/law-commission-to-undertake-phase-2-of-the-abusive-and-offensive-online-communications-project/
[14] https://www.bbc.co.uk/news/uk-52288675; https://www.bbc.co.uk/news/business-52664539
[15] https://www.theguardian.com/money/2020/may/04/fraudsters-use-covid-lockdown-to-scam-motorhome-buyers
[16] https://www.ofcom.org.uk/research-and-data/tv-radio-and-on-demand/news-media/coronavirus-news-consumption-attitudes-behaviour
[17] https://parliamentlive.tv/Event/Index/e5ed9e46-6100-475e-9f29-c5918a096eed
[18] https://hansard.parliament.uk/Commons/2020-04-27/debates/824B1FA1-5616-42D1-B6D8-2AE0BC5C6E3C/DigitalCultureMediaAndSport; https://hansard.parliament.uk/Lords/2020-04-29/debates/48C8302C-6B49-446F-A735-A408EDBB84A3/SocialMediaFakeNews
[19] https://parliamentlive.tv/Event/Index/94a59f9e-6424-4aef-baaf-295020eccfae
[20] https://www.parliamentlive.tv/Event/Index/de3b564a-8dd2-4e59-86b1-17d6866c81f5
[21] https://techcrunch.com/2020/04/27/whatsapps-new-limit-cuts-virality-of-highly-forwarded-messages-by-70/?guccounter=1
[22] Pinterest’s community guidelines prohibit “medically unsupported health claims that risk public health and safety, including the promotion of false cures, anti-vaccination advice, or misinformation about public health or safety emergencies”. Pinterest also does not allow conspiracy theories or content that originates from disinformation campaigns. (See https://policy.pinterest.com/en-gb/community-guidelines)
[23] See the recent report by the Centre for Data Ethics and Innovation, which recommended that online targeting be subject to the duty of care: https://www.gov.uk/government/publications/cdei-review-of-online-targeting
[24] https://d1ssu070pg2v9i.cloudfront.net/pex/carnegie_uk_trust/2019/12/10111353/The-Carnegie-Statutory-Duty-of-Care-and-Fundamental-Freedoms.pdf
[25] https://d1ssu070pg2v9i.cloudfront.net/pex/carnegie_uk_trust/2019/07/01124836/Joint-submission-Dem-OHWP-final.pdf
[26] In this regard, OFCOM has recently taken decisions and imposed sanctions on broadcasters to address Covid-19 mis/disinformation: eg https://www.ofcom.org.uk/about-ofcom/latest/features-and-news/david-icke-and-eamonn-holmes-decision
[27] https://www.cpni.gov.uk/critical-national-infrastructure-0
[28] See https://d1ssu070pg2v9i.cloudfront.net/pex/carnegie_uk_trust/2019/04/08091652/Online-harm-reduction-a-statutory-duty-of-care-and-regulator.pdf
[29] https://www.carnegieuktrust.org.uk/publications/response-to-the-online-harms-white-paper/
[30] https://webarchive.nationalarchives.gov.uk/20190701152341/https://www.hse.gov.uk/aboutus/meetings/committees/ilgra/pppa.htm
[31] https://d1ssu070pg2v9i.cloudfront.net/pex/carnegie_uk_trust/2019/07/04163920/Online-Harm-White-paper-.pdf
[32] https://www.carnegieuktrust.org.uk/blog/regulation-misinformation-and-COVID19/
[33] https://www.doteveryone.org.uk/2020/05/people-power-and-technology-the-2020-digital-attitudes-report/
[34] https://blog.okfn.org/2020/05/05/brits-demand-openness-from-government-in-tackling-coronavirus/
[35] https://www.carnegieuktrust.org.uk/news/draft-online-harm-bill-dcms-letter/
[36] https://www.lawcom.gov.uk/law-commission-to-undertake-phase-2-of-the-abusive-and-offensive-online-communications-project/
[37] https://www.moneysavingexpert.com/news/2019/01/martin-lewis-drops-lawsuit-as-facebook-agreed-to-donate-p3m-to-a/