Written Evidence Submitted by The Institution of Engineering and Technology (IET)

(GAI0021)

About the Institution of Engineering and Technology (IET)

The IET is a trusted provider of independent, impartial, evidence-based engineering and technology expertise. We are a registered charity and one of the world’s leading professional societies for the engineering and technology community, with over 155,000 members in 148 countries.

Our strength is in working collaboratively with government, industry and academia to engineer solutions for our greatest societal challenges. Digital Futures is one of the IET’s societal challenges and underpins much of our work. We believe that professional guidance, especially in highly technological areas, is critical to good policy making. The IET’s panels bring together experts in AI from across multiple industries, and have recently published reports on the use of AI in digital, healthcare, and safety-critical applications. This inquiry response has been informed by previous IET research, along with direct input from AI experts in the IET’s membership and networks.

Introduction

The IET welcomes the opportunity to respond to this inquiry on the governance of AI. Innovations in AI have the potential to boost productivity and economic growth. In addition, embracing AI could reduce the depth and length of the recession the UK is currently facing. However, to harness the economic benefits of AI and ensure its safe and ethical use, the government needs to introduce more governance-related legislation and better guidance. The evidence presented in this submission focuses on Questions 1-2 and Questions 4-6. We would be happy to provide further detail on these findings; please contact Stephanie Baxter (SBaxter@theiet.org; 07702-332303). More information can also be found in the reports listed in the footnotes.

Executive Summary:

  - The UK’s approach to AI governance remains in its infancy, with little dedicated legislation and regulatory guidance only just starting to emerge.
  - Both more governance-related legislation and better guidance are required: legislation should clarify responsibility and accountability for the safety and security of AI systems and the legal use of data; guidance could take the form of BSI standards or HSE Approved Codes of Practice.
  - A regulatory oversight body, for example within the Health and Safety Executive (HSE), should co-ordinate guidance on good practice and deliver sanctions where misuse has occurred.
  - Transparency, explainability, and open dialogue with the public are essential to maintaining trust, which underpins the economic and social benefits AI can bring.

  1. How effective is current governance of AI in the UK?
    1. What are the current strengths and weaknesses of current arrangements, including for research?

The UK’s current approach to AI governance is still in its infancy, with little dedicated legislation, and regulatory guidance only just starting to emerge. While the National AI Strategy represents a step in the right direction, there remain gaps in the government’s strategic approach. In addition, there is a lack of clarity about how some extant guidance and laws – written for conventional systems – apply to AI systems.

The UK’s emerging approach acknowledges two characteristics of AI systems that distinguish them from conventional ones – ‘adaptiveness’ and ‘autonomy’.[1] There are challenges with respect to both characteristics that the current approach does not address sufficiently:

Adaptiveness refers to the fact that AI systems are ‘trained’ by data to solve problems, in ways which are not always intuitive to humans. Therefore, a major regulatory challenge is ensuring the ethical and safe use of data to train, test, and operate AI systems:

A major weakness of the current approach is that many organisations do not understand their legal obligations regarding permission to use data, or to take decisions based on that data. This lack of clarity hinders research and innovation. While there is much openly accessible data that could be used to ‘train’ and test algorithms in their early stages, it is unclear to what extent such data may legally be used, especially when it is not accompanied by explicit terms and conditions.

‘Autonomy’ refers to the fact that AI systems can operate independently of human oversight. Therefore, there is a challenge around which party is legally responsible if an AI system causes damage:

In human-operated systems, someone can often be held accountable for negligence in these situations. In AI systems, however, this is not always the case: the overseeing operator may not have been negligent. Victims of damage therefore have fewer obvious paths to legal action.


  2. What measures could make the use of AI more transparent and explainable to the public?

Transparency and explainability require that AI systems be designed and implemented to allow for oversight, including: 1) the translation of their operations into intelligible outputs; and 2) the provision of information about where, when, and how they are being used. The rationale for, and benefits of, using AI should be made clear wherever it is employed.

These requirements are especially important for maintaining trust where an AI system decides the availability or outcome of a public service. For example, if an AI system denies someone access to a service, one way of improving the provision of information would be to tell that person the smallest change to their circumstances that would have led to approval.
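To illustrate the kind of ‘smallest change’ explanation described above, the sketch below performs a simple counterfactual search over an applicant’s record. The eligibility rule, field names, and step sizes are hypothetical assumptions made for illustration only; they are not drawn from any real system or from the IET’s reports.

    # Illustrative sketch: find the smallest change to an applicant's
    # circumstances that would flip a (hypothetical) denial into approval.
    # The rule, fields, and step sizes are assumptions, not a real system.
    from itertools import product

    def eligible(applicant: dict) -> bool:
        # Hypothetical decision rule standing in for an opaque AI system.
        return applicant["income"] >= 20_000 and applicant["years_resident"] >= 3

    def minimal_counterfactual(applicant, steps, max_budget=50):
        """Search increasingly large combined adjustments and return the
        first (hence smallest) set of changes that yields approval."""
        if eligible(applicant):
            return {}  # already approved; nothing to change
        for budget in range(1, max_budget + 1):
            for deltas in product(range(budget + 1), repeat=len(steps)):
                if sum(deltas) != budget:
                    continue
                candidate = dict(applicant)
                for (field, step), d in zip(steps.items(), deltas):
                    candidate[field] += step * d
                if eligible(candidate):
                    return {f: candidate[f] - applicant[f]
                            for f in steps if candidate[f] != applicant[f]}
        return None  # no counterfactual found within the search budget

    applicant = {"income": 18_500, "years_resident": 2}
    print(minimal_counterfactual(applicant, {"income": 500, "years_resident": 1}))
    # -> {'income': 1500, 'years_resident': 1}: with £1,500 more income and
    #    one more year of residence, the decision would have been approval.

Communicating the concrete changes, rather than exposing the model’s internals, keeps the explanation intelligible to the person affected while remaining actionable.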

The government should also maintain an active, open dialogue with the public about its approach to AI. Public trust in the safety of AI is paramount, and the government should make clear that regulations are in place to ensure that safety cannot be compromised. The government should also emphasise to the public the economic and social benefits that AI can bring.

  4. How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?

The UK’s emerging approach to the governance of AI has already highlighted a series of ‘early’ cross-regulatory principles. Safety has correctly been identified as the foremost principle, with regulators encouraged to take an approach proportionate to risk.

Ensuring the safe and secure use of AI requires a regulatory oversight body that co-ordinates guidance on good practice and delivers sanctions where misuse has occurred. One option is to set up and fund such a body within the Health and Safety Executive (HSE), which has a track record of excellence, impartiality, credibility and accountability. This oversight is necessary to ensure AI is used safely and to help prevent incidents from occurring; it is also fundamental to maintaining public trust, which underpins the economic and social benefits AI can bring.

  5. To what extent is the legal framework for the use of AI, especially in making decisions, fit for purpose?
    1. Is more legislation or better guidance required?

Both more governance-related legislation and better guidance are required.

Legislation should clarify responsibility and accountability for the safety and security of AI systems, and outline powers to sanction misuse. Greater clarity is also required on the legal use of data in the research and development of AI systems (see our response to Question 1).

Guidance is needed to ensure the development of safe and ethical AI systems. The development of regulatory guidance should be a holistic, cross-sector process that accounts for perspectives from industry, academia, professional organisations, and the general public. The government should initiate and fund the development of guidance for the safe use of AI via a trusted institution, such as the British Standards Institution (BSI) or the HSE. This guidance could take the form of BSI standards or HSE Approved Codes of Practice (ACOPs). Whilst not legally binding, both approaches would provide robust benchmarks for users to adhere to.

A starting point for such guidance could be the IET’s AI safety policy position, which sets out ten key pillars that support the safe development and operation of AI systems in safety-critical applications.[2]


  6. What lessons, if any, can the UK learn from other countries on AI governance?

Some lessons can be learned from the emerging AI strategies of other countries which share high ambitions for the technology, such as the UAE’s National Strategy for Artificial Intelligence 2031[3] and the United States’ National Artificial Intelligence Initiative, which places particular emphasis on advancing trustworthy AI.[4]


(November 2022)


[1] DCMS, Establishing a pro-innovation approach to regulating AI: An overview of the UK’s emerging approach, 2022.

[2] The IET, Artificial intelligence and functional safety, 2022 (https://www.theiet.org/media/10033/artificial-intelligence-and-functional-safety.pdf).

* A more detailed version of this guidance with specific recommendations will be published soon.

[3] UAE National Strategy for Artificial Intelligence 2031 (https://ai.gov.ae/wp-content/uploads/2021/07/UAE-National-Strategy-for-Artificial-Intelligence-2031.pdf).

[4] National Artificial Intelligence Initiative: Advancing Trustworthy AI (https://www.ai.gov/strategic-pillars/advancing-trustworthy-ai/).