Carnegie UK – Written evidence (FDF0060)

 

  1. We welcome the Committee's Inquiry into tackling the increase in cases of fraud, and the opportunity given to Professor Lorna Woods to provide oral evidence to the Committee on 17th March 2022.
  2. Professor Lorna Woods (Professor of Internet Law at the University of Essex), who - along with Carnegie UK Trustee, William Perrin - is one of the co-creators of the "duty of care" approach to social media regulation, gave evidence just before the final Online Safety Bill was published. This submission allows us to supplement her evidence with further recommendations on the ways in which that Bill can better tackle online fraud, and to propose some structural changes to the Bill which we believe would greatly support the Committee's objectives. Given the complexity of the Online Safety Bill, however, further thought is still required in places.
  3. We provide material below to answer the questions from the call for evidence that are closest to our areas of interest and expertise. Our primary recommendation with regard to fraud and the Online Safety Bill is that the scope and effect of the new fraudulent advertising powers in the Bill are too limited. Fraudulent advertising has the same impact on victims wherever they encounter it: the new powers should require all regulated companies to run effective systems to mitigate fraud, taking into account the risk posed by their ad delivery systems and processes.
  4. For reference, our full initial analysis of the Online Safety Bill can be found here.
  5. We would be happy to provide further information, in writing or in person, as the inquiry progresses.

 

Background

  6. Carnegie UK was established over 100 years ago with a focus on improving wellbeing and tackling harms to our collective wellbeing.
  7. Since early 2018, Carnegie UK, led by the work of Professor Lorna Woods (Professor of Internet Law at the University of Essex) and William Perrin (Carnegie UK Trustee), has shaped the debate in the UK on the reduction of online harm through the development of a proposal to introduce a statutory duty of care, enforced by an independent regulator. Our work has also influenced other international proposals and is reflected in the approach being adopted in the European Union's Digital Services Act.
  8. We have previously provided evidence, both written and oral, to a range of inquiries and consultations, including setting out our views on online fraud and scams to the Treasury Select Committee's Call for Evidence on Economic Crime.
  9. We have also been part of a broad coalition of organisations, including Which?, UK Finance and MoneySavingExpert, which has successfully called for changes to the way in which the proposed Online Safety regime treats fraud.
  10. In the draft version of the Online Safety Bill, paid-for ads were excluded from the scope of the regime. On 8th March 2022, the Government announced changes to the Online Safety Bill to include a new fraudulent advertising duty. This was confirmed by the publication of the Bill on 17th March 2022 and is a welcome concession.

 

2. What future economic and technological developments are likely to impact how fraudsters seek to commit crime over the next five to ten years, and how might these be prepared for and mitigated? What role can technology and tech companies play in combatting fraud across this timescale?

  11. We have routinely seen the types of fraud and scams committed online evolve, with fraudsters taking advantage of new products or services, of poor enforcement of terms of service, or of a poor understanding of the unintended consequences those products and services create.
  12. We are concerned that this trend will continue into new forms of technology such as the metaverse. Based on our analysis, it seems likely that the proposed user-to-user regime envisaged in the Online Safety Bill would encompass the metaverse - indeed, the Government, most recently in the Second Reading debate on the Bill in the Commons on 19th April, has confirmed that this is the case. But, given the Bill's current focus on content such as text and images, it is unclear how the regime may apply in predominantly livestreaming virtual environments, and therefore what the knock-on implications are for how fraudulent advertising activities may be identified and prevented.
  13. Risk assessments are important and effective tools to mitigate harm across different sectors. They support the identification of the types of harms that companies should pay attention to in the design of, and risk mitigation on, their services. It is this ability to support the anticipation of harms that makes them highly valuable in spaces of rapid development. Despite the many unknowns of the metaverse, there are safety features that are still likely to have a beneficial impact on user safety regarding fraud, and which services should consider as part of their risk assessment: enforcement of terms of service; identity checks, such as "Know Your Client" checks on advertisers; and targeting criteria for advertising (e.g. what characteristics are used, and who has paid for which ads) - however these may manifest in the metaverse.
  14. Effective co-operation between regulators will also be critical in mitigating future harms from fraud, given the often cross-cutting and technical nature of the incidents, which require the involvement of a range of specialist regulators. Professor Woods gave evidence on this in her Committee appearance and we return to it below (paras 28-32), in the light of the final text of the Online Safety Bill.

 

11. Is existing legislation effective in tackling the increase in modern forms of fraud? If not, is there a legislative remedy, or should fraud be addressed primarily through implementation of existing provisions? Answers may refer to existing mechanisms such as increasing the scope and powers of regulators. You may refer to any legislation and are not limited to the Fraud Act 2006

  15. Our response below sets out the improvements we believe should be made to the Online Safety Bill to reduce the prevalence of, and harm caused by, online fraud.

 

Fraudulent Advertising

  16. In terms of the approach that the Online Safety Bill takes to tackling fraud, the new rules on fraudulent advertising are good as far as they go. However, we believe there is a problem with scope: fundamentally, we do not believe the current distinctions between the platforms to which these rules apply are justified, given that the harm felt by a fraud victim is just as significant regardless of platform. As the Bill currently stands, the measures will not apply to all online advertising providers. The rules will only apply to services designated as 'Category 1' or 'Category 2A', which excludes the smaller user-to-user websites that host adverts. The list of companies that would qualify in each category is currently unknown; it is to be developed in secondary legislation, with the Secretary of State publishing the threshold conditions, after which OFCOM publishes the register of categorised services (cl 81). There is a risk that scammers will target consumers through paid-for content on these smaller sites. The new fraudulent advertising powers should apply evenly to all regulated user-to-user companies and be just as strong and systemic as those for illegal content.
  17. The legal requirement on search engines to tackle scam adverts seems less onerous than that on social media platforms. Throughout the Bill, search engines are subject to different, less onerous rules than user-to-user services, especially those in 'Category 1', which are large user-to-user sites such as Twitter and Facebook. Arguably, search engines play a specific role in users' right to search for and to receive information, which could explain this difference; the point is less convincing, however, as regards adverts that the search engines themselves serve up. The same difference can be seen here too, but, on top of that, the obligations on Category 2A services with regard to fraudulent ads are thinner than their obligations under the search service illegal content safety duty, which would seem an obvious point of comparison. Specifically, for fraudulent advertising there is no equivalent of cl 24(4), which requires Category 2A services to look at their design and the role that it plays in harm, as well as at staff policies and practices and risk management arrangements. This raises concerns about whether the legal duty on search engines in relation to fraudulent ads is stringent enough, even if we accept that there is a justifiable difference between search and user-to-user services.
  18. Moreover, the consequences of removing the boundary between advertising and other content are not clear. If adverts fall within the definition of user-generated content (and it seems that they would, per clauses 181 and 49(3)), then adverts are regulated content, and the machinery behind advert delivery comes within scope where the content is either criminal or harmful to children or to adults (noting that economic/financial loss is not a relevant harm). This inclusion is likely to be a step forward, though there will be awkward boundaries to navigate, especially given that the special regime for fraudulent ads is applicable only to some service providers. Fraudulent advertising can have a significant impact on the victim regardless of where they encounter it. For user-to-user content, the powers on fraudulent advertising should apply across all categories of service, recognising that proportionality - in terms of risk, size and resources - may affect what each service is required to do to satisfy the duty.
  19. We will continue to develop our thinking in this area.

 

Complexity of the Bill

  20. Taking a step back from the specific duties, the overall complexity of the Online Safety Bill is a key concern. The more challenging a Bill is to navigate, the more difficult comprehension and compliance become, increasing the regulatory burden. The Online Safety Bill's structure remains complex and opaque. As a recent concession and addition to the Bill, the coverage of fraud feels like a bolt-on: it has been introduced as a new section, and the impact of this addition (along with the other new duties) on the overall schema is unclear.
  21. While we have made suggestions about applying the same rules across all categories of service providers (paras 16-18), we feel that there is a simpler, more fundamental solution to the issue. We continue to recommend that the categories of providers are removed, and that risk assessment duties apply across the board, with mitigation obligations reflecting the risk posed. Removing the categorisation would not only improve the workability of the Bill and support the reduction of harm; it would also reduce the risk of issues slipping through the gaps. The current regime, for example, would not catch harms on fast-growing platforms - a key issue for fraud, which relies heavily on evolving methods.

 

Advertising delivery models

  22. Another helpful clarification would be to explicitly bring the characteristics of advertising delivery systems into scope for risk assessment. We have continually argued that systems, design features and processes, rather than content, should be the focus of regulation, as is the established approach in many other sectors.
  23. This includes examining business models, which often drive design and operational decision-making and can therefore be instrumental in continuing or amplifying certain types of harm - and, of specific relevance here, the role of advertising funding.
  24. The exclusion of "paid-for advertisements" found in the draft Bill has been removed. This suggests that ad delivery systems in general could be relevant to the risk assessment and risk mitigation duties, in addition to the specific provisions on fraudulent ads (which seem to relate to specific content-based rules).
  25. We feel this is potentially an important positive step, but clarification of the treatment of advertising business models within the new regime is needed.

 

Proactive measures

  26. In her oral evidence session, Prof Woods was asked for her view on the effectiveness of requirements on platforms around proactive measures to prevent people being exposed to fraud in the first place: whether she thought they went far enough in relation to facilitated fraud, or whether more should be done to make the platforms accountable for dealing with the problem.
  27. At that time, the final Bill had not yet been published, so Prof Woods could not comment on the detail. Having undertaken our initial analysis of the Bill, however, there is still little we can say with certainty about the impact of proactive measures. There are pre-emptive obligations on both Category 1 and Category 2A service providers in relation to fraudulent ads, but there is no detail as to what these might entail; further detail will be provided by OFCOM in the code of practice required by cl 37(4), which will take time to develop.

 

15. Can you suggest one policy recommendation that the Committee should make to the Government?

  28. In providing her oral evidence to the Committee, Prof Woods noted the recommendation to look at how the regulators work together to make sure that they have the appropriate competences and powers, as well as resources.
  29. To expand on that point and reiterate evidence given, we have made a recommendation in our analysis that the Bill should contain a requirement on OFCOM to define the terms of its relationships with other regulators and include the powers, if needed, to get them to work effectively together. We firmly believe in the effectiveness of interlocking matrix regulation: a mechanism that allows or requires regulators to work together on issues that fall within a specialist regime but also constitute or contribute to harm within the online harms regime. This approach is preferable to other concepts, such as a super-regulator, which we consider to be unworkable.
  30. An effective interlocking regulatory approach makes for more powerful and effective enforcement. It reduces the load on OFCOM, which would not have to maintain a standing force of experts in areas covered by other regulatory regimes, and it provides a manageable route for OFCOM to work with other regulators, building on its track record of regulatory co-operation. All regulators would be able to employ their own specialisms: OFCOM would only consider evidence of systemic harms presented by another regulator, with the other regulators retaining their powers and capabilities on specific issues. A formal 'interlocking regulation' approach would give both victims and regulated services more certainty about how the regimes work than an ad hoc approach, and would likely enable more harms to be identified.
  31. While OFCOM may not need a new "co-designation" power, it will need to work with other regulators. Regarding fraud specifically, how will the relationship work between OFCOM, in its role enforcing the new duty on fraudulent advertising, and, for example, the FCA? The clarity, formalisation and robustness of these working relationships have consequences not just for, for example, the sharing of information between regulators, clear lines of responsibility, and cooperation in relation to evidence-gathering, horizon-scanning or enforcement, but also for upstream policy oversight. For example, which department will ultimately be responsible for the policy oversight of, and related Ministerial advice on, the implementation of the duty on fraudulent ads? DCMS, as the sponsor of OFCOM? HMT, as the sponsor of the FCA? Or the Home Office, as the department with policy responsibility for combatting fraud?
  32. We recommend seeking much greater clarity, at a much more granular level, on the expectations on regulators regarding co-operation, decision-making, responsibilities, accountability, and information sharing.

 

25 April 2022