Written evidence submitted by Carnegie UK Trust


1.       We welcome the Committee’s inquiry into Economic Crime and the opportunity to submit evidence. Our response is limited to the questions on consumers and economic crime, in particular the impact of online fraud and how the Government’s proposals for online harms reduction may address some of the challenges emerging here. Our submission also sets out the background to our work on the development of a statutory duty of care for online harms reduction and provides references to relevant material.

2.       We would be happy to provide further information on our work in writing or to discuss it with Committee members at a future evidence session.


About our work


3.       The Carnegie UK Trust was set up in 1913 by Scottish-American philanthropist Andrew Carnegie to improve the wellbeing of the people of the United Kingdom and Ireland. Our founding deed gave the Trust a mandate to reinterpret our broad mission over the passage of time, responding to the most pressing issues of the day, and we have worked on digital policy issues for a number of years.


4.       In early 2018, Professor Lorna Woods (Professor of Internet Law at the University of Essex) and former civil servant William Perrin started work to develop a model to reduce online harms through a statutory duty of care, enforced by a regulator. The proposals were published in a series of blogs and publications for Carnegie and developed further in evidence to Parliamentary Committees[1]. The Lords Communications Committee[2] and the Commons Science and Technology Committee[3] both endorsed the Carnegie model, as have a number of civil society organisations[4]. In April 2019, the government’s Online Harms White Paper[5], produced under the then Secretary of State for Digital, Culture, Media and Sport, Jeremy Wright, proposed a statutory duty of care enforced by a regulator in a variant of the Carnegie model. France[6], and apparently the European Commission, are now considering duty of care models for online harms.


5.       In December 2019, while waiting for the Government to bring forward its own legislative plans, we published a draft bill[7] to implement a statutory duty of care regime, based upon our full policy document of the previous April[8]. We are also supporting Lord McNally on his Private Member’s Bill (the Online Harm Reduction Regulator (Report) Bill)[9], introduced into the House of Lords on 14 January 2020, which would provide an opportunity for full Parliamentary debate on the nature of the regulatory regime and, if passed, empower Ofcom to prepare for its introduction.


Emerging trends in consumer-facing economic crime as a result of the COVID crisis


6.       Government action on online harms is delayed, despite promises in the 2017 and 2019 manifestos and the two 2019 Queen’s speeches. With rolling policy crises and a changing cast of Ministers, this is unsurprising. The Interim Response[10] to the White Paper consultation earlier this year was helpful, but still leaves many questions unanswered. DCMS Ministers continue to promise the full response will be published “later this year” with a Bill to follow “early next year”. Decisions on whether this will be subject to pre-legislative scrutiny or be introduced directly into the House have apparently not yet been made.


7.       We regret the delays to the Government proposals, particularly given the evidence of an upsurge in many of the harms that the Bill would cover. Many of the harms in scope of the proposed Online Harms legislation – in particular, child sexual abuse and exploitation (which are criminal offences) – have been exacerbated by the Covid-19 crisis as more time is spent online at home and children and young people’s unsupervised social media activity increases.


8.       However, there is a significant swathe of online harms, outwith the current scope of the Government’s regulatory proposals, that have also been exacerbated during this time, including fraud, scams and wider economic crime. The National Crime Agency and Victim Support have both reported on a surge in online scams targeting vulnerable or self-isolating people during the lockdown[11]. Evidence is emerging of widespread fraudulent activity on eBay; for example, a recent report charted significant scams relating to the sale of vehicles during lockdown[12].


9.       Online scams are Britain’s biggest category of property crime but, we understand, will not be in scope of the Government’s online harms proposals. It is notable that, in their evidence to the Home Affairs Committee hearings on Covid-19 preparedness, both Commander Karen Baxter (Head of Economic Crime, City of London Police) and Graeme Biggar (Director General, National Economic Crime Centre) unequivocally called for economic crime to be within scope of the Online Harms Bill:


Simon Fell MP: Do you think online fraud should sit within the proposed Online Harms White Paper?


Commander Baxter: Without a shadow of a doubt. We see the harm that fraud causes to people, individuals, families and the wider community. It absolutely needs to be in that paper. We were disappointed it was not included.


Graeme Biggar: I absolutely agree with that as well. We lobbied hard for that. We understand the challenges DCMS and the Home Office face. It is now a matter for Parliament, but we would strongly support that.[13]


10.   We have also joined Which? and UK Finance to write to the Digital Minister, Caroline Dinenage, to make the case for fraud and scams to be included in a systemic duty of care. Many online scams involve user-generated content on large platforms, but the platforms themselves have very limited responsibilities for preventing their users from being exposed to fraudulent or scam content, and the weaponisation of targeted advertising exacerbates the problem. There is currently no adequate framework in place to ensure people are protected from online scams, leaving growing numbers of people exposed to criminals. It is vital that the responsibilities of online platforms reflect their unique position within the markets in which they operate.


11.   Despite the strength of the evidence put forward by these four expert organisations (including one, the National Economic Crime Centre, that is a non-Ministerial Department of the Home Office), and despite its own emphasis on tackling illegal content (which would also cover fraud), the Government does not appear to accept that the Online Harms Bill is a vehicle to address the scale of the harm they deal with. In response to our letter, Ms Dinenage referred to the need for a “proportionate” approach that would not duplicate or conflict with the existing work of government, regulators and other bodies and referred to “the Home Office’s activity with law enforcement to tackle fraud and HM Treasury’s work with the financial sector on tackling economic crime.”


12.   We agree that the regulatory approach to tackle the spread of harm online needs to be coherent and proportionate, and we are also mindful that an unwieldy Bill would not just face problems in its Parliamentary passage but could also overwhelm Ofcom, if confirmed as the Online Harms regulator. However, the present situation protects neither consumers nor the many financial sector organisations that are often left to pick up the pieces when users are defrauded or scammed online. That is why we have set out a proposal for “regulatory interlock”: if any competent regulator or other statutory body identifies a new vector for online harm that breaches their own specialist regulatory regime, they should be able to hand a dossier to Ofcom to assess and, if appropriate, process under the online harms regime. Such interlocking regulation would protect consumers and increase the effectiveness of regulators such as the Financial Conduct Authority that find it hard to get purchase with online companies. We have set out how this approach should work in a recent blog post and would be happy to talk to Committee members further about its application to economic crime and consumer harms specifically[14].



Steps that could be taken to mitigate these concerns


13.   The systems, processes, design and business models that facilitate the spread of viral misinformation and disinformation are the same ones that facilitate the spread of fraud and scams online, as well as the targeting of vulnerable users who may be susceptible to such approaches. Social media companies are not doing enough to address this.


14.   We set out in more detail below how our proposal for a systemic duty of care, enforced by a regulator, enables regulation to bite at a platform design level – tackling these information flow issues – and requires risk mitigation rather than regulating individual pieces of content. Such a systemic approach should cover economic or consumer harms (including online scams, fraud and the sale of unsafe products). As Professor Woods has argued in her comprehensive paper on the subject, this approach is entirely consistent with the protection of people’s fundamental rights, including the right to freedom of expression[15].


15.   Until the Government publishes its full response to the White Paper and its own Bill, we have significant reservations about the adequacy of its proposals to deliver on its frequently restated ambition to “make the UK the safest place in the world to be online”. Last year’s White Paper described a regime that was largely framed around types of content, supported by a series of codes of practice addressing broad categories of harms. The Interim Response in February has gone some way to describing a “systems-based” regime but Ministers’ recent evidence has not provided any further clarity on its nature.


16.   The Carnegie April 2019 policy document[16] ‘Online harm reduction – a statutory duty of care and regulator’ and our response[17] to the Government’s White Paper discuss the arguments for a systemic approach at length, building on a “precautionary principle” that places responsibility for managing and mitigating the risk of harms - harms which the tech companies have had a role in creating or exacerbating - on the companies themselves. In summary:


“At the heart of the new regime would be a ‘duty of care’ set out by Parliament in statute. This statutory duty of care would require most companies that provide social media or online messaging services used in the UK to protect people in the UK from reasonably foreseeable harms that might arise from use of those services. This approach is risk-based and outcomes-focused. A regulator would have sufficient powers to ensure that companies delivered on their statutory duty of care. …


“Everything that happens on a social media or messaging service is a result of corporate decisions: about the terms of service, the software deployed and the resources put into enforcing the terms of service and maintaining the software. These design choices are not neutral: they may encourage or discourage certain behaviours by the users of the service … A statutory duty of care is simple, broadly based and largely future-proof. For instance, the duties of care in the Health and Safety at Work Act 1974 still work well today, enforced and with their application kept up to date by a competent regulator.

“A statutory duty of care focuses on the objective – harm reduction – and leaves the detail of the means to those best placed to come up with solutions in context: the companies who are subject to the duty of care. A statutory duty of care returns the cost of harms to those responsible for them, an application of the micro-economically efficient ‘polluter pays’ principle … The continual evolution of online services, where software is updated almost continuously makes traditional evidence gathering such as long-term randomised control trials problematic. New services adopted rapidly that potentially cause harm illustrate long standing tensions between science and public policy. For decades scientists and politicians have wrestled with commercial actions for which there is emergent evidence of harms: genetically modified foods, human fertilisation and embryology, mammalian cloning, nanotechnologies, mobile phone electromagnetic radiation, pesticides, bovine spongiform encephalopathy. In 2002, risk management specialists reached a balanced definition of the precautionary principle that allows economic development to proceed at risk in areas where there is emergent evidence of harms but scientific certainty is lacking within the time frame for decision making.[18]

“Emergent evidence of harm caused by online services poses many questions: whether bullying of children is widespread or whether such behaviour harms the victim; whether rape and death threats to women in public life has any real impact on them, or society; or whether the use of devices with screens in itself causes problems. The precautionary principle provides the basis for policymaking in this field, where evidence of harm may be evident, but not conclusive of causation. Companies should embrace the precautionary principle as it protects them from requirements to ban particular types of content or speakers by politicians who may over-react in the face of moral panic. Parliament should guide the regulator with a non-exclusive list of harms for it to focus upon. Parliament has created regulators before that have had few problems in arbitrating complex social issues; these harms should not be beyond the capacity of a competent and independent regulator. Some companies would welcome the guidance.”[19]




17.   The Government’s long-held ambition to “make the UK the safest place to be online and the best place to start a business” is increasingly being replaced by a Ministerial mantra that its online harms regulation must work for businesses as well as protect freedom of speech.


18.   With this in mind, we wish to conclude by drawing the Committee’s attention to our letter – written pre-Covid-19 – to the Home Secretary and the DCMS Secretary of State on the opportunities for the UK Government in developing a “British model for regulation”, which would set the pace and direction for the rest of the world to follow and which would provide a “gold standard” of digital regulation for businesses operating here.[20] The UK’s leadership on action to address illegal activity online, whether the Internet Watch Foundation’s work on child sexual exploitation and abuse, or our influence within the Five Eyes community on dealing with terrorism and extremist content online, can be built on. Moreover, we believe that there is Parliamentary support and enthusiasm for such a proportionate, systemic and world-leading approach.



December 2020


[1] Our work, including blogs, papers and submissions to Parliamentary Committees and consultations, can be found here:



[4] For example, NSPCC: documents/news/taming-the-wild-west-web-regulate-social-networks.pdf; Children’s Commissioner; Royal Society for Public Health


[6] French-Framework-for-Social-Media-Platforms.pdf










[16] See