Written evidence from Carnegie UK Trust (TEC 36)


Public Administration and Constitutional Affairs Committee

The Work of the Electoral Commission inquiry


  1.  We welcome the Committee’s inquiry into the work of the Electoral Commission and the opportunity to submit evidence. Our response is limited to a specific issue regarding the introduction of a statutory duty of care for online harm reduction, which we expect the Government to bring forward in the Online Harms Bill early in the New Year, and how the work of the Electoral Commission might fit into a wider regulatory approach that includes the reduction of harms to democracy and electoral processes.  We would be happy to provide further information to the Committee if helpful.


About our work

  2. The Carnegie UK Trust was set up in 1913 by the Scottish-American philanthropist Andrew Carnegie to improve the wellbeing of the people of the United Kingdom and Ireland. Our founding deed gave the Trust a mandate to reinterpret our broad mission over time, so as to respond to the most pressing issues of the day, and we have worked on digital policy issues for a number of years.


  3. In early 2018, Professor Lorna Woods (Professor of Internet Law at the University of Essex) and former civil servant William Perrin started work to develop a model to reduce online harms through a statutory duty of care, enforced by a regulator. The proposals were published in a series of blogs and publications for Carnegie and developed further in evidence to Parliamentary Committees[1]. The Carnegie April 2019 policy document[2], ‘Online harm reduction – a statutory duty of care and regulator’, discusses the arguments for a systemic approach at length, building on a “precautionary principle” that places responsibility for the management and mitigation of the risk of harm - harms which the tech companies have had a role in creating or exacerbating - on the companies themselves.


  4. The Lords Communications Committee[3] and the Commons Science and Technology Committee[4] both endorsed the Carnegie model, as have a number of civil society organisations[5]. In April 2019, the government’s Online Harms White Paper[6], produced under the then Secretary of State for Digital, Culture, Media and Sport, Jeremy Wright, proposed a statutory duty of care enforced by a regulator in a variant of the Carnegie model. France[7], and apparently the European Commission, are now considering duty of care models for online harms. Our proposals for a systemic duty of care bite at the design level of platforms and services, rather than at the level of individual pieces of content. When considering the reduction of the reasonably foreseeable risk of harm to the electoral process or the functioning of democracy, this would, for instance, place a responsibility on platforms to ensure the reliability of information and timely fact-checking. It might also require them to risk-assess their systems and processes for the promotion and amplification of content, the use of microtargeting of individuals in relation to the content that they see, and the basis for their audience segmentation.


  5. As the nature of communications technology changes, more and more of the Electoral Commission’s work will involve issues that arise in online environments. The recent experience of the election in the USA is instructive: the major platforms applied new features and interventions in real time to address mis- and disinformation relating to the democratic process and the reporting of the election result.[8]


  6. We have long held that ‘electoral harms’ should be included in the scope of the online harms regime. We worked with Lord McNally in drafting his Private Member’s Bill (the Online Harm Reduction Regulator (Report) Bill)[9], in which we sought to define electoral harms as ‘threats which impede or prejudice the integrity and probity of the electoral process’. This is consistent with the scope of existing regulatory approaches. In our opinion the online harms regulator should act in concert with sector-specific regulators such as the Electoral Commission in carrying out its duties. The online harms regulator specialises in preventing harm arising from companies in its scope, the Electoral Commission in addressing harms to the electoral process – the two should interlock in the public interest. We set out an approach to regulatory interlock in a blog post earlier this autumn[10], which we have reproduced at Annex A.


December 2020

ANNEX A: Online Harms – Interlocking Regulation – draft for discussion[11]


The statutory duty of care approach to online harms gives regulators a new route to protect the vulnerable and to make markets work better where social media may have caused harm. The statutory duty approach focuses on systemic issues with social media services rather than on individual complaints or breaches. Many regulators encounter the use of social media to breach rules they administer, and such breaches could also constitute breaches of the statutory duty of care at a systemic level. The Online Harms White Paper uses the example of social media being used to sell knives or other age-restricted goods to minors – a problem that might be dealt with more effectively through the duty of care route. While action on individual user complaints remains with these regulators, enforcement of the statutory duty of care lies with OFCOM, the government’s likely online harms regulator. To avoid overburdening OFCOM, and to provide a simple path for other regulators and certainty for companies and victims, some sort of process is needed to manage the interlocking of regulatory regimes.


This blog post proposes a system of regulatory interlock based on existing principles of regulatory co-operation, which is light touch and maintains focus on systemic issues rather than individual cases. Over the operational lifetime of online harms legislation there will be pressure to add issues to (and possibly remove issues from) the scope of regulation; describing a system at the outset provides for orderly growth or shrinkage. We also flag issues that require further thought, which we will deal with in subsequent posts.


Online Harms Regime

We refer in this document to the regime we set out with Carnegie UK Trust in a draft Bill in late 2019 which proposed amendments to the Communications Act 2003. The Carnegie model is of online platform services subject to a statutory duty of care to prevent reasonably foreseeable harms arising to people as a result of the operation of those services. OFCOM is the regulator we proposed to enforce this systemic regime. A systemic regime does not generally deal with individual cases but looks at service design and business operation. The government’s Online Harms White Paper sets out a similar regime.


All of life (if not everyone in the world) is on the Internet, and more specifically on social media platforms. “In real life” Parliament has created many specialist regulatory systems to prevent harm to people and to make markets work better in many sectors – for example, food, trading standards, financial services, or elections. These specialist regimes complement, and sometimes engage, both the criminal and the civil law. The way social media services have been designed or are operated sometimes makes it hard for these regulatory regimes to protect people and to allow markets to function. Regulators complain that they do not have a route to influence social media platforms in relation to the operation of their regimes[i], even when harm appears to be caused by the operation of social media services. The e-Commerce Directive (which the UK appears likely to retain after Brexit) specifically allows duties of care to be imposed upon service providers.

How do a general online harms regulator like OFCOM, and the companies subject to the duty, address harms evidenced or foreseen in other specialist regulatory regimes? We know from OFCOM research[ii] that the top four online harms experienced by adults (spam, fraud/scams, hacking/security, data/privacy) are all addressed in part by other regimes (criminal, regulatory and civil). This data suggests that those regimes are not working well.

Conversely, how can OFCOM, the general online harms regulator, tap into the expertise of specialist regulators? How do specialist regulators that identify online harms tap into the statutory duty of care? There are of course precedents from other areas of regulation for nominating lead regulators and for regulators working together[iii], and OFCOM already works with many other regulators in many different ways, from full concurrency to simple co-ordination. From these examples, we have developed a model for interlocking regulation.


Interlocking regulation

We propose a mechanism that allows or requires regulators to work together on issues that fall within a specialist regime but also constitute or contribute to harm within the online harms regime. Allowing formal ‘interlocking regulation’ would give both victims and social media companies more certainty about how the regimes work than an ad hoc approach would. In such a system, OFCOM should consider only evidence of systemic harms presented by another regulator, not adjudicate an individual fraud, scam or other case, which remains the responsibility of the specialist regulator.


As part of the online harms legislation, OFCOM would be obliged to consider complaints about systemic issues from regulators designated in legislation.

National regulators such as the Financial Conduct Authority or the Food Standards Agency would likely be on the list, which could be added to from time to time by statutory instrument. Where many regulators operate in parallel at a local level, such as Trading Standards Services (TSS), OFCOM could follow a process analogous to that in the Regulatory Enforcement and Sanctions Act (RESA) and ask Trading Standards services to nominate a body to raise systemic issues on behalf of all TSSs. Some regulators, notably the ICO and the CMA, already have the competence to act systemically on social media services; they would not need to use this mechanism, and the existing arrangements for cooperation and concurrency will continue unaffected.


OFCOM would be empowered to determine the details of the process, including the format of complaints, supporting evidence and so on. The essential elements of any such submission would be an indication of the nature of the problem, together with evidence of the level of incidence and of how it arises. The specialist regulator could suggest which elements of the regulated service are contributing to the problem, but that determination lies within OFCOM’s remit. The specialist regulator should demonstrate evidence of dialogue with the regulated service, even if that dialogue has been one-sided, and set out an assertion of the systemic issue enabling the harm to happen.

The process could work like this: a local TSS identifies cases of a type of scam perpetrated repeatedly using a social media platform, which has led to complaints through its regulatory regime and harm to customers. The TSS has raised this with the platform (which is regulated under the online harms regime) but the scam continues; insofar as the platform has made any attempt to deal with the issue, it has been unsuccessful. The TSS suspects a systemic failure to prevent harm: perhaps weak know-your-customer (KYC) checks allow repeat offences by scammers (who republish on the platform under another name after being shut down), or a perpetrator is using the system as intended and causing harm (e.g. targeting ads at vulnerable groups) in a way the platform operator had not thought through at the design stage. The TSS presents this in a dossier to OFCOM through the nominated route, to begin a dialogue with OFCOM about the alleged systemic problems leading to harm. OFCOM’s role is to examine this systemic issue only; OFCOM does not adjudicate individual cases or instances of harm. The burden is on the specialist regulator to present the case clearly enough, and with sufficient evidence, that OFCOM can assess its strength and understand the nature of the problem.


An effective interlocking regulatory approach reduces the load on OFCOM: it would not have to maintain a standing force of experts in areas covered by other regulatory regimes, as it might under concurrency of powers. It provides a manageable route for OFCOM to work with other regulators, building on its track record of regulatory co-operation. Current OFCOM enforcement guidelines allow it to launch investigations on receipt of information from other regulators, and even to consider whether other regulators should conduct an investigation instead.[iv] OFCOM also has concurrent Competition Act powers over postal and communications markets and experience of liaising with the CMA. The new Digital Regulation Cooperation Forum, arising from the CMA report into digital advertising markets, heralds a new, substantial area of regulatory co-operation. The regulatory interlock process could feed into the new Forum, or vice versa.


This regulatory interlock approach would fit into the Carnegie draft Bill, which does not limit the scope of harms. Allowing for regulatory interlock on systemic issues as above would require a clause obliging OFCOM to define, after consultation, a process to receive and assess evidence from regulators established in law of systemic harms arising from the operation of regulated services. This could be similar to draft clause 8, where a ‘super complaint’ process is described. While the super complaint mechanism is different from regulatory interlock, it does provide a point at which recognised civil society actors can formally interact with the regulatory system; in this, there are similarities between the mechanisms.

The government suggests in the White Paper and the interim response that ‘consumer’ harms would be excluded. It has been explained to us that this is due to the perceived complexity of OFCOM’s task. We suggest that the model above removes concerns about complexity and would fit well with the government’s commitment to a systemic approach, which we understand might not define harms on the face of the Bill. If there is no limitation on the scope of harms, then a clause as described above could enable interlock in a manageable way.

This raises the following issues for future blog posts:

Regulators – which regulators should be in an interlock process, and how the relationship works between OFCOM, regulators with ‘digital competence’ or powers overlapping or proximate to OFCOM’s core function (ICO, CMA, BBFC), and industry bodies such as the Internet Watch Foundation.

Complaints processes – systemic versus individual (e.g. when an individual complaint is plainly a systemic issue); how, or even whether, evidence gathered under one regime can be furnished to another; and whether a systemic complaint dossier from another regulator actually shortens a process for OFCOM, or whether it would have to ‘re-do’ an investigation anyway.

Rights – regulators should not be allowed to regime-shop in order to undermine the rights of individuals or companies where those rights deliberately inhibit the regulator in its home territory. Is there a genuine double-jeopardy risk here?

Pre-existing regulatory tools – whether orders under the Regulatory Enforcement and Sanctions Act should be used to nominate a lead regulator, or whether it is more appropriate to extend Online Interface Orders to other regulators to give them hard backstop powers to prevent harm.

[1] Our work, including blogs, papers and submissions to Parliamentary Committees and consultations, can be found here: https://www.carnegieuktrust.org.uk/project/harm-reduction-in-social-media/

[2] See https://d1ssu070pg2v9i.cloudfront.net/pex/carnegie_uk_trust/2019/04/08091652/Online-harm-reduction-a-statutory-duty-of-care-and-regulator.pdf

[3] https://publications.parliament.uk/pa/ld201719/ldselect/ldcomuni/299/29902.htm

[4] https://publications.parliament.uk/pa/cm201719/cmselect/cmsctech/822/82202.htm

[5] For example, NSPCC: https://www.nspcc.org.uk/globalassets/documents/news/taming-the-wild-west-web-regulate-social-networks.pdf; Children’s Commissioner: https://www.childrenscommissioner.gov.uk/2019/02/06/childrens-commissioner-publishes-astatutory-duty-of-care-for-online-service-providers/; Royal Society for Public Health: https://www.rsph.org.uk/our-work/policy/wellbeing/new-filters.html

[6] https://www.gov.uk/government/consultations/online-harms-white-paper

[7] French-Framework-for-Social-Media-Platforms.pdf (thecre.com)

[8] https://www.nytimes.com/2020/11/04/technology/social-media-companies-election-misinformation.html

[9] https://services.parliament.uk/bills/2019-21/onlineharmsreductionregulatorreportbill.html

[10] https://www.carnegieuktrust.org.uk/blog/online-harms-interlocking-regulation/

[11] The online version of this blog post can be found here: https://www.carnegieuktrust.org.uk/blog/online-harms-interlocking-regulation/

[i] For example, see evidence given to the Home Affairs Select Committee on 3 June 2020 by the City of London Police and the National Economic Crime Centre https://www.parliamentlive.tv/Event/Index/6f8da59b-0daf-473d-90f7-4dde9509dfc7

[ii] See page 45 of chart pack https://www.ofcom.org.uk/research-and-data/internet-and-on-demand-research/internet-use-and-attitudes/internet-users-experience-of-harm-online

[iii] See for instance http://www.legislation.gov.uk/ukpga/2008/13/contents

[iv]   OFCOM ‘Enforcement Guidelines for Regulatory Investigation’ includes references to other regulators as being alternative routes of action and sources of information that might trigger an investigation ‘whether there are other alternative proceedings that are likely to achieve the same ends, or deal with the same issues, as the potential investigation. This could include, for example, whether other agencies may be better placed to investigate the complaint or whether planned market reviews may address the potential harm;’ …

‘and in response to information provided to us by other bodies (for example, where other regulatory bodies, MPs, consumer organisations or the press draw our attention to complaints they have received about a particular issue).’ https://www.ofcom.org.uk/__data/assets/pdf_file/0015/102516/Enforcement-guidelines-for-regulatory-investigations.pdf Recently the ICO, CMA and Ofcom announced a Digital Regulation Co-operation Forum (https://www.gov.uk/government/publications/digital-regulation-cooperation-forum).