Written evidence submitted by the
Department for Digital, Culture, Media and Sport and the Department for Business, Energy and Industrial Strategy
(GAI0107)
Introduction
1. DCMS and BEIS welcome the opportunity to respond to the Select Committee’s Call for Evidence examining the effectiveness of AI governance, and to set out the Government’s ongoing policy analysis on this topic.
2. Artificial Intelligence (AI) has the potential to transform all areas of life, rewrite the rules of entire industries, and recalibrate the balance of power on the world stage. The UK is consistently ranked one of the best places in the world to start an AI business.[1] The AI market has grown rapidly, with annual investment in UK AI companies increasing from £252 million in 2015 to over £2.8 billion in 2021.[2]
3. The rapid growth of such transformative technologies brings huge opportunities but also new and accelerated risks. We need to make sure that our regulatory approach keeps pace with AI, so that businesses have the clarity they need to develop and deploy these technologies and the public can trust their adoption and use. Securing that trust is key to driving growth and innovation and to fully realising the benefits of AI.
4. We set out our early proposals to establish a pro-innovation regulatory approach through a policy paper in July 2022.[3] Our approach is in line with the principles of the Plan for Digital Regulation[4], the findings from the independent Taskforce on Innovation, Growth and Regulatory Reform[5], and the government’s broader commitment - reaffirmed in the Chancellor’s Autumn Statement - to improve regulation of emerging technologies, enabling their rapid, safe introduction.[6] It will strengthen the UK’s position as a global leader in AI, by ensuring the UK is the best place to develop and use AI technologies. We are currently analysing stakeholder responses ahead of publishing a White Paper setting out more detail on the Government’s approach. We welcome insights from the Science and Technology Committee in shaping our proposals.
Question 1: How effective is current governance of AI in the UK? What are the current strengths and weaknesses of current arrangements, including for research?
5. The UK has a world-leading regulatory regime, but we must take action to ensure it remains fit for purpose to address the unique opportunities and challenges that AI brings. There are currently no UK laws designed specifically to regulate AI; however, this does not mean that AI is unregulated. AI is currently governed through a complex patchwork of legal and regulatory requirements.[7]
6. To date, our regulatory framework, known for its effective rule of law and support for innovation, has in part been responsible for the success of the UK’s AI ecosystem. However, as AI continues to evolve and play an ever more important role across our economy and society, we need to consider how our regulatory approach should adapt.
7. Our regulators are already taking steps to respond to the implications of AI, and are collaborating in innovative ways to share knowledge and best practice relevant to its regulation. For example:
a. A partnership of UK regulators is working together through the Digital Regulation Cooperation Forum (DRCF) to understand the impact AI technologies could have on our economy and society.[8] The DRCF recently published the outputs of its first two research projects, examining the harms and benefits of algorithmic processing (including the use of AI) and the merits of algorithmic auditing.[9]
b. The Bank of England and the Financial Conduct Authority (FCA) established the Artificial Intelligence Public-Private Forum (AIPPF) to facilitate dialogue between the public and private sectors on AI innovation in financial services.[10]
c. The FCA and ICO have also been developing regulatory sandboxes to enable responsible innovation across a range of areas.[11]
8. In addition, the UK is home to a range of institutions that have significant expertise in AI governance issues and/or play a role in coordinating the regulatory landscape, including the Ada Lovelace Institute, the Alan Turing Institute, the Centre for Governance of AI and the Leverhulme Centre for the Future of Intelligence.[12]
9. Despite these strengths, stakeholders have noted a number of challenges with the existing AI regulatory landscape.
10. Building on the policy paper we published in July 2022, we are looking at how best to address these challenges through our forthcoming White Paper on AI regulation.
11. We have also taken important steps to support the UK’s AI assurance market, which will underpin how we govern AI. The Centre for Data Ethics and Innovation (CDEI) promotes cutting-edge AI assurance mechanisms to enable effective AI governance. Since the publication of the CDEI’s AI Assurance Roadmap,[13] the CDEI has been engaging with industry to improve these mechanisms in support of our wider pro-growth, risk-based approach to AI governance (findings to be published in December 2022).
12. Alongside this, we have established the groundbreaking AI Standards Hub to convene UK stakeholders across industry, academia, regulators and civil society to advance trustworthy and responsible AI. It will do this by enabling access to the practical tools and knowledge stakeholders need to engage effectively with the development and use of AI standards. Technical standards can play a key role in supporting AI developers and operators, and in providing regulators and assurers with benchmarks against which to assess products and services.
Question 2: To what extent is the legal framework for the use of AI, especially in making decisions, fit for purpose? Is more legislation or better guidance required?
13. A range of legal frameworks already apply to AI; however, work is needed to assess and clarify their application.
14. AI is partially regulated through a range of legal and regulatory requirements that were typically not designed with AI in mind. These include UK data protection law and equality legislation, as well as sector-specific regulation such as that covering medical research and financial services. We need to ensure there are no important gaps, overlaps or ambiguities in and between these frameworks.
15. Within their respective remits, UK regulators are taking action to support the responsible use of AI by clarifying the obligations of actors in the AI ecosystem. For example:
a. The Information Commissioner’s Office (ICO) has published guidance on AI and data protection, guidance on explaining decisions made with AI, and an AI and data protection risk toolkit.[14]
b. The Equality and Human Rights Commission (EHRC) has committed to addressing the equality implications of AI in its 2022-2025 strategic plan,[15] including through work on artificial intelligence in public services.[16]
c. The Medicines and Healthcare products Regulatory Agency (MHRA) is clarifying how medical device regulation applies to AI through its Software and AI as a Medical Device Change Programme[17] and has consulted on the future regulation of medical devices in the UK.[18]
d. The Health and Safety Executive (HSE) is considering the implications of AI for workplace health and safety through its Science and Evidence Delivery Plan 2020-2023.[19]
16. Despite this ongoing work, there are still important opportunities to assess and clarify legal frameworks, as well as to improve regulatory coordination. Our approach to remedying these issues, while minimising disruption to the AI ecosystem, is outlined below in our response to Question 4.
Question 3: What lessons, if any, can the UK learn from other countries on AI governance?
17. Working closely with international partners on AI governance is critical to establishing an effective approach. Digital supply chains and AI research collaborations often cross borders, and many UK AI companies are reliant on international trade. Ensuring international interoperability across AI governance mechanisms is essential to prevent unnecessary barriers to trade, enable commercial opportunities for UK businesses, and promote the responsible development of AI internationally.
18. Countries around the world are moving quickly to set the rules that govern AI. The UK is therefore working closely with like-minded partners to share best practice, influence the international conversation on AI and ensure coherence between international governance regimes, for example through the UK’s world-leading work on AI assurance.
19. As a respected global leader in AI, the UK attracts significant international interest in its approach, and we are playing a leading role in international discussions on AI governance. We are closely involved in collaborative conversations on this topic, including at the Council of Europe, the OECD, UNESCO and the Global Partnership on AI. The Government will continue to work with global partners to shape international norms and standards relating to AI, including those developed by multilateral and multistakeholder bodies at global and regional levels.
20. While our AI governance regime must be tailored to the particulars of the UK AI ecosystem, we will draw on international good practice to inform our approach. There is an encouraging degree of commonality between the UK’s approach and those being developed internationally: the cross-cutting principles at the heart of our framework (see Question 4) are based on the OECD AI Principles[20] and are closely comparable to those adopted internationally.
21. Our global approach to shaping technical standards reflects our commitment to international leadership and cooperation. Our work includes multilateral and bilateral collaboration with international partners and direct engagement in Standards Development Organisations (SDOs).[21]
Question 4: How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?
22. Our approach to AI regulation will have at its heart a set of principles that will apply to the use of AI in all sectors. We will implement these principles in a manner that allows us to adapt them should we need to, as we learn from working with regulators on implementation. The principles will complement existing regulation, with our vision being to increase clarity and reduce friction for businesses operating across the AI lifecycle. We have proposed the following principles:[22]
a. Ensure that AI is used safely
b. Ensure that AI is technically secure and functions as designed
c. Make sure that AI is appropriately transparent and explainable
d. Embed considerations of fairness into AI
e. Define legal persons’ responsibility for AI governance
f. Clarify routes to redress or contestability
23. Rather than create an extensive new framework of rights and obligations, the principles describe what we think well-governed AI use should look like on a cross-cutting basis, as part of our broader context-based, pro-innovation approach. We are engaging regulators and industry stakeholders on how best to put our approach into practice, and will set out further details through the forthcoming White Paper.
24. We will adopt a context-sensitive, adaptable, ‘test and learn’ approach to regulatory implementation. This involves working closely with regulators, and the central government departments that sponsor them, during the piloting and implementation phases to assess whether further powers, capabilities and coordination are required, and then developing solutions in a way that is sensitive to the particulars of the regulatory context in which those requirements arise. Stakeholders have expressed strong support for this approach, which allows for continuous improvement, encourages an agile response by government to emerging unacceptable risks, and minimises the risk of disruption to a complex, nascent AI ecosystem.
25. We are currently exploring how best to support the effective implementation of our approach.
Question 5: What measures could make the use of AI more transparent and explainable to the public?
26. The requirement for appropriate transparency and explainability is one of the six principles at the heart of our AI governance framework.[23] Not all decisions (whether made using AI or not) can or need to be transparent or explainable, but our principles recognise that in some contexts transparency and explainability are a core part of good AI governance. The Government will develop, and support the development of, practical tools, standards and guidance to support the implementation of transparency and explainability in such contexts.
27. Transparency is a key driver of public trust, and there is strong evidence that public trust is a highly significant driver of AI adoption.[24],[25] In polling commissioned by the CDEI about government data sharing, 78% of the public reported that it was important to have the option to see a detailed description of how their personal information is shared, compared to just 7% who responded that it did not matter.[26] Explainability of AI algorithms is a key enabler of such transparency.
28. Taking into account the need to protect confidential information and intellectual property rights, the July 2022 policy paper suggested that example transparency and explainability requirements could include proactively or retrospectively providing information about: the nature and purpose of the AI in question, including any specific outcomes; the data being used, including information relating to training data; the logic and process used and, where relevant, information to support the explainability of decision-making and outcomes; and accountability for the AI and any specific outcomes.
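By way of illustration only, the categories of information listed above could be captured in a structured, machine-readable record. The following sketch is hypothetical: the class and field names are illustrative inventions and do not represent any published government schema or proposed regulatory requirement.

```python
from dataclasses import dataclass, field

# Hypothetical illustration only: these names do not represent any
# published government schema or proposed regulatory requirement.
@dataclass
class AITransparencyRecord:
    # Nature and purpose of the AI system, including any specific outcomes
    system_name: str
    purpose: str
    intended_outcomes: list = field(default_factory=list)
    # Data being used, including information relating to training data
    data_sources: list = field(default_factory=list)
    training_data_description: str = ""
    # Logic and process used; information supporting explainability
    decision_logic_summary: str = ""
    explainability_methods: list = field(default_factory=list)
    # Accountability for the AI and any specific outcomes
    accountable_owner: str = ""
    contact_point: str = ""

# Example record populated with placeholder values.
record = AITransparencyRecord(
    system_name="Example triage tool",
    purpose="Prioritise incoming service requests",
    intended_outcomes=["Faster response to urgent cases"],
    data_sources=["Historical service requests (anonymised)"],
    training_data_description="Three years of labelled request data",
    decision_logic_summary="Classifier with rule-based overrides",
    explainability_methods=["Feature importance summaries"],
    accountable_owner="Head of Service Operations",
    contact_point="ai-queries@example.gov.uk",
)
print(f"{record.system_name}: {record.purpose}")
```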
29. We are currently refining the transparency and explainability principles and will set out fuller details through the forthcoming White Paper. We recognise the need to provide tools to enable compliance, as well as to set clear requirements.
30. We are also looking at how AI standards can support our work to improve the explainability and transparency of AI. The newly established AI Standards Hub aims to increase UK contributions to the development of these standards from a wide range of stakeholders, including SMEs, regulators, civil society and academia. The UK Government has also published the Algorithmic Transparency Recording Standard,[27] which provides government departments and public bodies with a standardised process for informing the public about their use of algorithmic tools in decision-making processes. It provides meaningful transparency into algorithm-assisted decision processes by presenting details about how the algorithmic tool works and the context within which it operates. Technical standards for AI explainability and transparency are also being developed at the international level.[28]
31. As well as contributing to the development of technical standards, the UK Government seeks to set an example on explainability and transparency by proactively publishing information on the use of algorithmic tools in the public sector and by publishing guidance on the transparent, explainable use of AI.[29],[30] Greater transparency will also promote trustworthy innovation by providing better visibility of the use of algorithms across the public sector and enabling unintended consequences to be mitigated early on.
Question 6: How should decisions involving AI be reviewed and scrutinised in both public and private sectors? Are current options for challenging the use of AI adequate and, if not, how can they be improved?
32. Effective scrutiny requires clarity about which parties are responsible for compliance with regulatory requirements and about what those requirements are, while transparency and explainability promote the availability of the information on which proper scrutiny depends. AI supply and usage ecosystems are complex, often involving a large number of actors operating across sectors and jurisdictions, and effective scrutiny of decisions can therefore be difficult to achieve.
33. Key existing mechanisms for scrutinising and challenging the use of AI include statutory requirements in data protection legislation (where AI involves personal data, and in specific circumstances).[31] Technical standards can also play an important supporting role. The UK’s approach to AI governance will ensure coherence between these requirements and our overarching principles, which are designed to set out the conditions under which effective scrutiny can be achieved.
34. Through our work to establish a new regulatory framework for AI, we will also consider how existing mechanisms need to be complemented to support AI developers in achieving compliance with the framework. We will develop and pilot tools and guidance to help overcome this challenge.
35. Standards are one such tool, playing a crucial role in enabling proper scrutiny by setting out best-practice benchmarks against which regulators and assurers can assess systems falling within their remit. For example, the Algorithmic Transparency Recording Standard (see Question 5, paragraph 30) looks to establish a common approach to publishing information about the use of algorithmic tools in decision-making, focusing on the public sector. In addition, formal technical standards addressing the management, governance and scrutiny of decision-making involving AI are being developed at the international level.[32] The UK Government will continue to ensure that our work on AI regulation is closely aligned with, and complemented by, our work on AI standards.
(November 2022)
[1] The Global AI Index, Tortoise Media (2021)
[2] Beauhurst data, DCMS analysis [Accessed 27/01/2022]
[3] A Pro-Innovation Approach to Regulating AI, DCMS (2022)
[4] Plan for Digital Regulation, DCMS (2021)
[5] The Taskforce on Innovation, Growth and Regulatory Reform independent report, 10 Downing Street (2021). The report argues for UK regulation that is: proportionate, forward-looking, outcome-focussed, collaborative, experimental and responsive.
[6] The Autumn Statement 2022, HM Treasury (2022)
[7] We provide further detail on the legal frameworks that apply to AI in response to Question 2.
[8] Digital regulation cooperation forum, 2021
[9] Findings from the DRCF algorithmic processing workstream - Spring 2022, DRCF (April 2022)
[10] The AI Public-Private Forum: Final Report, Bank of England and Financial Conduct Authority’s Artificial Intelligence Public-Private Forum (February 2022)
[11] ‘Regulatory Sandbox’, FCA (Accessed 2022), ‘Regulatory Sandbox’, ICO (Accessed 2022)
[12] https://www.adalovelaceinstitute.org/, https://www.turing.ac.uk/, https://www.governance.ai/, https://www.leverhulme.ac.uk/leverhulme-research-centres/leverhulme-centre-future-intelligence
[13] AI Assurance Roadmap, CDEI (2021)
[14] For example, Guidance on AI and data protection, Explaining decisions made with AI, How to use AI and personal data appropriately and lawfully, AI and data protection risk toolkit, ICO
[15] Strategic Plan 2022-2025, EHRC (March 2022)
[16] Artificial intelligence in public services, EHRC
[17] Software and AI as a medical device change programme, MHRA (September 2021)
[18] Consultation (now closed) on the future regulation of medical devices in the UK, MHRA (October/November 2021)
[19] Science and Evidence Delivery Plan 2020-2023, HSE
[20] The OECD AI Principles, adopted 2019.
[21] The UK’s global approach to AI standardisation is exemplified by our leadership in the International Organisation for Standardisation and International Electrotechnical Commission (ISO/IEC) on four active AI projects, as well as the UK’s initiation of and strong engagement in the Industry Specification Group on Securing AI at the European Telecommunications Standards Institute (ETSI).
[22] As set out in Establishing a pro-innovation approach to AI, July 2022. They are based on the OECD AI Principles, adopted 2019.
[23] Establishing a pro-innovation approach to AI, July 2022
[24] Trust in Artificial Intelligence: a five country study, KPMG 2021
[25] Alternative AI regulatory frameworks for the UK: preliminary impact assessment, Frontier Economics, 2022.
[26] BritainThinks: Complete transparency, complete simplicity, CDEI and CDDO, 2021.
[27] Algorithmic Transparency Recording Standard, CDEI & CDDO, 2021
[28] See ISO/IEC TS 6254, Information technology - Artificial intelligence - Objectives and approaches for explainability of ML models and AI systems; IEEE P7001, Standard for Transparency of Autonomous Systems. ISO/IEC 12792, Information technology - Artificial intelligence - Transparency taxonomy of AI systems, is also under development.
[29] Data ethics Framework, 2018
[30] A guide to using AI in the public sector, OAI and CDDO (2021)
[31] Article 22 UK GDPR, which provides specific safeguards for ‘solely automated decision-making with legal or similarly significant effects’.
[32] ISO/IEC DIS 42001, Information technology - Artificial intelligence - Management system (under development); ISO/IEC 38507:2022, Governance implications of the use of AI by organisations (published); ISO/IEC DIS 23894, Information technology - Artificial intelligence - Risk management (under development).