Written Evidence Submitted by Carnegie UK

(GAI0041)

 

  1. We welcome the opportunity to submit to the Committee’s inquiry. In recent years, Carnegie UK has been at the forefront of the debate in the UK on the governance of social media platforms. We created the approach that underpins the Online Safety Bill, which is shortly to return to Parliament: a statutory duty of care which focuses on systems and processes and is enforced by a regulator. Since 2018 we have been working with parliamentarians, civil society and the government to develop and refine proposals for online harm reduction. We have given evidence on the Bill to both the Public Bill Committee and the DCMS Select Committee in the past two months, and our work was cited over a dozen times in the OSB Public Bill Committee debates and in many Select Committee reports.[1]

 

  2. Our submission focuses on the fourth of the questions posed in your Terms of Reference – “How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?” – and draws on the concepts we have developed for regulation of social media platforms. We would be happy to provide further information, either in writing or at an oral evidence session, as your inquiry progresses.
     
  3. The Committee has asked about AI regulation, which implies a presumption that AI might require regulation. In our wide-ranging work on the regulation of software in social media we always go back to fundamentals. A need for regulation implies that there are costs arising from the use of AI in a production decision which do not fall on the company but on wider society – on workers, customers and third parties – causing allocative inefficiency or individual or social harm. There are many established regulatory mechanisms for addressing such external costs; these should be explored before reaching for new and untested models.
     
  4. A common and highly successful approach to returning external costs to the production decision is risk-based, proportionate regulation or self-regulation focused on the outcomes of a company process. Such a model should operate whether or not AI is involved. A strong and effective example of this model is the Health and Safety at Work etc. Act 1974: a statutory duty of care enforced by a regulator, which, as we describe below, firmly applies to AI in the workplace. This approach is flexible and future-proofed, focusing as it does on the outcomes that arise from service design – the company’s systems and processes – rather than on the specifics of the technology that underpins them. One advantage of this approach is that it could apply as a base-level principle across the use of AI generally, but could also be deployed in sector-specific contexts, allowing the particular characteristics and risks of those contexts to be taken into account. Indeed, insofar as the Online Safety Bill looks at the operation of filters, algorithmic recommendation tools and automated content moderation, it could be said to be a sector-specific example of AI regulation. A common approach also allows sector-specific rules and general AI rules to interconnect.
     
  5. Our proposal for a statutory duty of care for online harm reduction, which has been adopted in part by the Government in the Online Safety Bill, drew on the approach that for nearly 50 years has underpinned health and safety legislation in the UK.[2] We are of the view that this approach is applicable to AI and its application in many different industrial sectors, albeit that some of those sectors may require additional considerations or refinements to the regulatory framework to take account of their specific risks and potential harms.
     
  6. The Government, it would seem, agrees with our view. In 2018, we worked with Lord Stevenson of Balmacara on PQ UIN HL8200, tabled on 23 May 2018, asking whether the Health and Safety at Work etc. Act 1974 applied to ‘artificial intelligence’ and algorithms used in the workplace (which might cause safety concerns).[3] We set out the Government’s answer in full below:

Section 6 of the Health and Safety at Work etc. Act 1974 places duties on any person who designs, manufactures, imports or supplies any article for use at work to ensure that it will be safe and without risks to health, which applies to artificial intelligence and machine learning software. Section 6(1)(b) requires such testing and examination as may be necessary to ensure that any article for use at work is safe and without risks but does not specify specific testing regimes. It is for the designer, manufacturer, importer or supplier to develop tests that are sufficient to demonstrate that their product is safe.

The Health and Safety Executive's (HSE) Foresight Centre monitors developments in artificial intelligence to identify potential health and safety implications for the workplace over the next decade. The Centre reports that there are likely to be increasing numbers of automated systems in the workplace, including robots and artificial intelligence. HSE will continue to monitor the technology as it develops and will respond appropriately on the basis of risk.
 

  7. The fact that HSAW74 applies to AI has languished in obscurity. It may suit some to ignore it, while others prefer speculating about new AI laws to applying existing law that remains fit for purpose across so many other sectors.

 

  8. Similarly, the existing data protection regime applies to AI. What is true of user safety is true of data protection: the Information Commissioner has set out clearly how “the underlying data protection questions for even the most complex AI project are much the same as with any new project. Is data being used fairly, lawfully and transparently? Do people understand how their data is being used? How is data being kept secure?”[4]

 

  9. With regard to the second part of the question – which body or bodies should provide regulatory oversight – we would suggest that, in keeping with our contention that existing legal and regulatory frameworks are sufficient for the governance of AI, the regulatory bodies with oversight of individual industrial sectors should retain the lead in overseeing how industries and companies within those sectors are using AI. This might involve, for example, scrutiny of the risk assessment and mitigation processes in place for the development and updating of new industrial techniques using AI software, including risk assessments of (a) the potential for harm arising from the operation of AI, ML and other software that controls systems and processes, and (b) the monitoring and performance management of staff.[5]

 

  10. If the widespread deployment of AI in an industrial sector requires more resources and skills for the regulator then, on the ‘polluter pays’ principle, these should be raised from the regulated industry, or from the government where regulation is directly funded. A mechanism for the regulators in different sectors to cooperate with each other, share insights and information and undertake horizon-scanning would also be advisable, as is the case in the digital field more generally.[6]

 

  11. Using HSAW74 as the baseline regulation simplifies the regulatory approach and prevents companies from arguing that AI-driven technology is somehow novel or special in order to avoid scrutiny or oversight. It avoids the need for AI-specific primary legislation and/or multiple sector-specific regulatory frameworks with differing compliance requirements. Existing expert regulators in each sector will have the freedom as well as the authority to ensure that their oversight of industries within their remit keeps up with the pace of technological development, with the onus firmly on the regulated industries to design in AI risk assessments and safety testing alongside their existing regulatory compliance duties, rather than as an add-on or afterthought.

 

(November 2022)


[1] An introductory brief on our work and its impact can be found here: https://d1ssu070pg2v9i.cloudfront.net/pex/pex_carnegie2021/2022/05/17162436/TACKLING-ONLINE-HARM-AND-THE-ONLINE-SAFETY-BILL-INTRODUCING-CARNEGIE-UK-1.pdf

[2] https://www.legislation.gov.uk/ukpga/1974/37/contents

[3] https://questions-statements.parliament.uk/written-questions/detail/2018-05-23/HL8200

[4] https://ico.org.uk/for-organisations/guide-to-data-protection/key-dp-themes/guidance-on-artificial-intelligence-and-data-protection/

[5] 

[6] https://www.gov.uk/government/collections/the-digital-regulation-cooperation-forum