Written Evidence Submitted by

Professor David Leslie, Director of Ethics and Responsible Innovation Research, The Alan Turing Institute; Professor of Ethics, Technology and Society, Queen Mary University of London

(GAI0113)

The UK’s AI governance ecosystem:

At the beginning of the journey, but still ahead of the game

 

 

Executive Summary

The United Kingdom is in a remarkable position. Amidst the growing clamour of geopolitical uncertainty and international economic destabilisation, the UK continues to take substantial steps forward in maintaining its position as a global pacesetter and thought leader on the advancement of responsible AI innovation and on AI governance. The maturity of the UK’s current approach to governing AI has, in no small measure, resulted from several related factors. These include its visionary early focus on developing a robust and principles-led AI governance ecosystem, its continuous dedication to building a well-resourced research and collaboration infrastructure to generate innovative and internationally pioneering AI policy, standards, and best practices frameworks, and its pathbreaking role in championing an interdisciplinary and sociotechnical turn in AI standards development, governance, and regulation that is now influencing technology policy circles the world over.

 

The UK is, however, still at the beginning of its AI governance journey. Though its far-sighted early investments in, and dedication to, responsible AI have so far placed it at the top of the league in the global AI governance ecosystem, the UK’s early sprint of technology policy innovation and principles development has increasingly turned into a marathon of policy implementation, standards codification, and regulatory intervention. In the present era, this longer-term endeavour of “putting principles into practice” has exposed several challenges to the UK’s ability to realise effective AI governance. This evidence submission explores these challenges in detail, first elaborating on their causes and character and then presenting some suggestions about possible paths forward to overcoming them. Here is a summary of these challenges and recommendations:

 

Challenge: Readiness and capacity gaps at local and regional levels that have impaired public trust in, and community ownership over, the responsible AI innovation agenda pursued by central government and that have been exacerbated by digital accessibility and literacy issues.

Possible paths forward:
  • A proactive pursuit of the levelling-up agenda by central government in the area of AI and digital transformation
  • A cultivation and expansion of community-led and locally based approaches to building public consent and social licence into AI governance and policy
  • An active national effort to overcome digital divides and digital accessibility and literacy issues

Challenge: Readiness and capacity gaps in the regulatory environment that have created common obstacles across regulators to fulfilling their AI-related regulatory remits. Such obstacles include limitations in knowledge and skills, insufficient coordination between regulators, issues of leadership and management of organisational and attitudinal change, and resource constraints.

Possible paths forward:
  • Dedicating more resources to building readiness within and between regulatory bodies and to supporting cross-regulator initiatives
  • Evaluating, strengthening, and renewing regulatory collaborations
  • Promoting strategies to increase organisational agility, adaptivity, and ingenuity
  • Pursuing an inclusive and participatory approach that includes civil society
  • Creation of an AI and Regulation Common Capacity Hub (ARCCH), convened by an independent and authoritative body in AI, that would provide a trusted platform for the collaborative pursuit of common capacity while consolidating existing initiatives and avoiding unnecessary additional crowding of the landscape

Challenge: A widespread disjointedness and siloing of public sector agencies and departments that has obstructed the development and adoption of effective cross-government standards, structures, and processes to secure responsible AI innovation and procurement practices.

Possible paths forward:
  • Actively advancing work to set up clear cross-government accountability, governance, and funding infrastructure for AI innovation initiatives to support delivery of the National AI Strategy and the AI Roadmap
  • Actively advancing work, building on NAO recommendations, toward the development and adoption of effective cross-government standards, structures, and processes to secure responsible AI innovation and procurement practices
  • Greatly increasing the government’s dedication of policy energies and resources to meet the complex logistical, operational, and cultural challenges that must be confronted to join up the myriad moving public sector parts of the UK’s digital technology governance ecosystem

Challenge: A lack of overall policy coherence across the UK’s AI governance ecosystem that has produced divergences between national frameworks and devolved local frameworks and that has generated uncertainty among private, public, and third sector organisations about what lawful, responsible, and ethical AI research and innovation looks like.

Possible paths forward:
  • Central government taking the lead in a concerted effort to establish common normative and policy vocabularies (building on the national public sector AI ethics and safety guidance) across national frameworks and devolved local frameworks and across private, public, and third sectors
  • Joining up adjacent policy frameworks (e.g., data governance frameworks, data protection frameworks, cybersecurity frameworks, online safety frameworks, and AI ethics frameworks) so that innovators, policymakers, and technology procurers and users can better grasp the requirements of good practice
  • Making the regulation whitepaper’s cross-sectoral principles as clear and decisive as possible so that they align with widely understood and well-established policy vocabularies and standards as well as national and international frameworks


Challenge: Difficulties in bootstrapping AI governance protocols and regulatory requirements from existing legal and regulatory frameworks due to the inapplicability of existing statutory definitions and specifications to the novel issues raised by AI technologies.

Possible paths forward:
  • Taking a more considered position on establishing appropriate legal measures to combat the novel issues that AI poses to the UK’s governance and regulatory landscape
  • Scoping and working towards algorithmic transparency and accountability legislation that includes statutory duties for public consultation, stronger transparency requirements for the production and use of AI systems, mandated impact assessment, and the establishment of end-to-end chains of human accountability

 

 

Introduction

In this evidence submission we will explore the strengths and weaknesses of the UK’s AI governance ecosystem. We will begin by looking at how the UK’s position as a global thought leader on responsible AI has resulted from early and steady dedication of significant human, financial, and organisational resources to policy issues surrounding the ethical design, development, and deployment of AI systems. We will then examine how current legal, political, cultural, and institutional barriers to “putting AI principles into practice” may increasingly signal the need for new pathways of support and redress to maintain the UK’s global leadership in AI governance and responsible AI research and innovation.

 

The UK’s head start

The UK’s current position as ‘a global leader in good governance of AI’1 has depended upon two supporting determinants. First, Great Britain’s longstanding tradition of leadership in effective regulation, good governance, and best practices has motivated an orientation to well-governed and responsible research and innovation practices in the development of AI technology policy. This high level of maturity in the governance and regulatory environment—recognised as best in the world in the OECD’s 2018 Regulatory Policy Outlook2—has meant that the novel societal challenges presented by AI technologies have been met with a well-informed commitment to collaborative knowledge-building, multi-stakeholder exploration of policy transformation, and evidence-based reform.

 

Second, owing to this robust culture of good governance, early initiatives to steward the acceleration of AI innovation in the UK (for instance, the government’s 2012 identification of ‘robotics and autonomous systems’ (RAS) as one of the ‘Eight Great Technologies’3) have, from the start, included considerations of their ‘social, ethical and legal implications’.4 This has positioned the UK as a first entrant in the global effort to develop a robust AI governance ecosystem and a ‘secure regulatory environment’ that streamlines responsible technological advancement while simultaneously ‘building public trust’.5


1 The UK AI Council, AI Roadmap. 2021. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/949539/AI_Council_AI_Roadmap.pdf

2 OECD Regulatory Policy Outlook 2018; BEIS, Regulation and the Fourth Industrial Revolution. 2019.

3 UK Intellectual Property Office Informatics Team, Eight Great Technologies: Robotics and Autonomous Systems. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/318236/Robotics_Autonomous.pdf

4 House of Commons Science and Technology Committee, Robotics and artificial intelligence. Fifth Report of Session 2016–17.


As Figure 1 illustrates, the UK’s prescient initial investment of time, labour, and resources in the establishment of a policy and governance infrastructure that could support a responsible AI ecosystem has led to immense downstream benefits in the UK’s AI governance ecosystem that are now having ripple effects internationally.

 

For instance, major policy contributions—such as the 2016 House of Commons Select Committee inquiry on robotics, the 2017 House of Lords Select Committee inquiry on AI, the 2017 Hall and Pesenti Review, the 2018 AI Sector Deal, and the government’s 2019 white paper on Regulation for the Fourth Industrial Revolution—placed emphasis on the importance of building a governance infrastructure in the UK that could facilitate the responsible production and use of AI. Such policy interventions led to the launch of the Office for AI (OAI), the Centre for Data Ethics and Innovation (CDEI), the UK AI Council, and the Regulatory Horizons Council (RHC) between 2018 and 2019. National initiatives such as Project ExplAIn6 and Using AI in the Public Sector7 soon followed, yielding the world’s first, and now most cited, national public sector guidance on AI ethics, Understanding artificial intelligence ethics and safety (2019), and the world’s first, and now most cited, national guidance on AI explainability, Explaining decisions made with AI (2020). Alongside these pathbreaking AI ethics and governance frameworks, the CDEI published the landmark study Review into bias in algorithmic decision-making in 2020 and, in collaboration with the CDDO, launched the algorithmic transparency standard for all government and public departments in 2021. In a more forward-thinking vein, the UK AI Council published its AI Roadmap in 2021, which championed the importance of ‘best-in-class data governance standards’ and stressed the need for universal digital and data literacy so that mechanisms for generating social licence and informed public consent could be built into the UK’s AI ecosystem. In the same year, HM Government produced the National AI Strategy (2021), which has since spurred a collaborative effort between DCMS, the British Standards Institution (BSI), the National Physical Laboratory (NPL), and The Alan Turing Institute to pilot the world’s first AI Standards Hub (2022).

 

5 Ibid.

6 Project ExplAIn is a collaboration between the Information Commissioner’s Office and The Alan Turing Institute that was recommended in the Hall and Pesenti Review and supported by funding from the EPSRC from 2018 to 2023.

7 This initiative was led by the Office for AI and culminated in a collection of guides on the use of AI in the public sector. As part of this, the OAI and the Government Digital Service (GDS) partnered with The Alan Turing Institute’s public policy programme to produce Understanding Artificial Intelligence Ethics and Safety in 2019.


Figure 1. The development of the UK’s AI governance ecosystem: A selected timeline

 


 

 

2013: The government identifies ‘robotics and autonomous systems’ (RAS) as one of the ‘Eight Great Technologies’.

2014: A RAS ‘Special Interest Group’ (SIG), comprising academics and industrialists, is established with support from Innovate UK; the SIG makes ethical and responsible RAS a focus of its work. In March, the Government announces the dedication of £42 million to establish The Alan Turing Institute as the national institute for data science (artificial intelligence is added to the Institute’s remit in 2017).

2015: The Alan Turing Institute is established.

2016: The House of Commons Select Committee inquiry on robotics and AI calls for ‘careful scrutiny of the ethical, legal and societal dimensions of artificially intelligent systems’ in advance of the establishment of regulatory regimes. The British Standards Institution publishes BS8611, ‘Guide to the ethical design and application of robots and robotic systems’.

2017: The Hall and Pesenti review, Growing the Artificial Intelligence industry in the UK, is published.

2018: The AI Sector Deal is published alongside the launch of the Office for AI, as part of a £20 million investment in ‘making the UK a global centre for AI’ with a focus on the responsible and ethical production and use of AI. The House of Lords Select Committee inquiry on AI publishes AI in the UK: ready, willing and able?, urging the establishment of ‘a cross-sector ethical code of conduct, or “AI code”, suitable for implementation across public and private sector organisations which are developing or adopting AI’. The Centre for Data Ethics and Innovation (CDEI) is set up as an independent advisory body tasked by the UK Government to investigate and advise on how to maximise the benefits of AI.

2019: The UK AI Council is established to provide a big-picture vision of how the UK could continue to advance as a global leader in responsible AI. Following the direction of the government’s white paper on Regulation for the Fourth Industrial Revolution, the Regulatory Horizons Council (RHC) is established ‘to identify the implications of technological innovation and advise the government on regulatory reform needed to support its rapid and safe introduction’. In collaboration with the Office for AI and the Government Digital Service, The Alan Turing Institute publishes the world’s first, and now most cited, public sector guidance on AI ethics and safety, Understanding artificial intelligence ethics and safety.

2020: In collaboration with the ICO, The Alan Turing Institute co-publishes Explaining decisions made with AI, the world’s first, and now most cited, national guidance on AI explainability. The Competition and Markets Authority (CMA), the Information Commissioner’s Office (ICO) and the Office of Communications (Ofcom) form the Digital Regulation Cooperation Forum (DRCF); the Financial Conduct Authority (FCA) joins the DRCF the following year. The CDEI publishes its Review into bias in algorithmic decision-making. The Department for Digital, Culture, Media & Sport (DCMS) publishes the National Data Strategy.

2021: HM Government publishes the National AI Strategy; its third pillar focuses on ‘Governing AI effectively’. The Central Digital and Data Office (CDDO), in collaboration with the CDEI, launches its algorithmic transparency standard for all government and public departments. The UK AI Council produces its AI Roadmap, making recommendations across four areas: research, development and innovation; skills and diversity; data, infrastructure and public trust; and national, cross-sector adoption. The Alan Turing Institute and The Open Data Institute collaborate with the Global Partnership on AI to produce research and global policy guidance on AI governance (with a focus on data trusts and data justice); BEIS invests £1 million to support this internationally impactful work.

2022: The Alan Turing Institute, in collaboration with DCMS, BSI, and the National Physical Laboratory (NPL), launches the AI Standards Hub to coordinate UK engagement in AI standardisation globally and to explore with stakeholders the development of an AI standards engagement toolkit to support the AI ecosystem in engaging with the global AI standardisation landscape.


Just as important to the UK’s global leadership in AI governance as these cascading policy efforts has been its early, steady, and well-informed Research Council funding of programmes in responsible AI research and innovation. For over a decade, the UK has shown a continuous dedication to building a research and collaboration infrastructure that facilitates cutting-edge insights, partnerships across academia, industry, and the public sector, and network-building both nationally and internationally. Following the identification of RAS as one of the Eight Great Technologies, the UK government allocated a £25 million capital investment to support the establishment of the EPSRC UK Robotics and Autonomous Systems Network. This UK-RAS Network aimed ‘to provide academic leadership in RAS, expand collaboration with industry and integrate and coordinate activities at eight EPSRC funded RAS capital facilities, four Centres of Doctoral Training (CDTs) and with 35 partner universities across the UK’.8 From 2017 to 2022, UKRI also dedicated £112 million to the ‘Robots for a safer world’ challenge, an investment that funded four robotics hubs and over 160 projects including the development of smart robotics systems that assist with nuclear decommissioning and wind turbine maintenance.9 In 2014, the government announced support of £42 million to establish The Alan Turing Institute as the national institute for data science (with artificial intelligence added to its remit in 2017). This was soon followed by Strategic Priorities Fund (SPF) investments in AI and Data Science for Science, Engineering, Health and Government (a £38.8 million Turing-led programme ‘supporting the use of artificial intelligence and data science in priority areas of the UK economy, including engineering and urban planning, health, physical and life sciences, and criminal justice’) and Living with Machines (a £9.2 million Turing-led programme ‘helping data scientists, historians, computational linguists, geographers, and archivists to work together to better understand the social and cultural impact of the mechanisation of work during the first industrial revolution’).10 The SPF has also invested in UKRI’s Trustworthy Autonomous Systems (TAS) programme (a £33 million initiative that ‘will undertake fundamental, creative and multidisciplinary research in various areas key to ensure autonomous systems can be built in a way society can trust and use’). Other significant funding initiatives for responsible AI range from ESRC’s partnership with Administrative Data Research UK (ADR UK) (a £105 million programme running from 2021 to 2026 that aims to ‘facilitate safe and secure access for accredited researchers to newly joined-up and de-identified datasets to enable better informed policy decisions that improve people’s lives’) to AHRC’s £8.5 million programme to transform AI ethics and regulation and EPSRC’s £25–30 million Trustworthy and Responsible AI programme, which will run from 2023 to 2028.11

 

These concerted efforts to support a robust AI research and collaboration infrastructure have had numerous practical benefits for the development of the UK’s AI governance ecosystem. This has been due, in no small measure, to the close relationship in the UK’s unique research ecology between policy development spheres and basic academic research on responsible and ethical AI. Aside from the constructive policy impacts mentioned above, one of the subtler but principal advancements spurred by the UK’s proactive support of its research community has been the instigation of an interdisciplinary and sociotechnical turn in AI standards development, governance, and regulation.

 

Already in the early 2010s, the UK Research Councils initiated critical reflection on what responsible research and innovation (RRI) should look like for high-impact scientific areas like geoengineering and AI. During this time, ESRC and EPSRC funding supported the development of EPSRC’s 2013 AREA framework for RRI (which was built around four pillars: Anticipate, Reflect, Engage, Act). From this RRI perspective, enabling responsible research and innovation practices involves starting from the crucial awareness that all processes of scientific discovery and problem-solving possess sociotechnical aspects and ethical stakes.


8 https://www.ukras.org.uk/about/uk-ras

9  https://www.ukri.org/what-we-offer/browse-our-areas-of-investment-and-support/robots-for-a-safer-world/

10 https://www.ukri.org/what-we-offer/our-main-funds/strategic-priorities-fund/

11 https://www.ukri.org/what-we-offer/browse-our-areas-of-investment-and-support/administrative-data-research-uk-adr-uk/; https://www.ukri.org/news/8-5-million-programme-to-transform-ai-ethics-and-regulation/; https://www.ukri.org/opportunity/responsible-and-trustworthy-artificial-intelligence/


Rather than conceiving of research and innovation as independent from human values, RRI regards these activities as morally implicated social practices that are duly charged with a responsibility for critical self-reflection about the role that such values play in discovery, engineering, and design processes and in consideration of the real-world effects of the insights and technologies that these processes yield. For this reason, the RRI viewpoint operationalised in the AREA framework enjoins an interdisciplinary approach to scientific research and innovation which prioritises the integration of knowledge from ethics, philosophy, the humanities, and the social sciences with mathematical, engineering, and natural scientific insights. In particular, the AREA framework emphasises the importance of anticipating the societal risks and benefits of research and innovation through open, multidisciplinary, and inclusive dialogue, of engaging with affected stakeholders as a means to critique and co-creation at all stages of the design, development, and deployment of emerging technologies, and of ensuring that innovation processes, products, and outcomes are made transparent, accountable, and accessible through robust governance and reporting mechanisms.12

 

Not only has this RRI view of research and innovation as ‘science with and for society’ heavily influenced academic research on the responsible design, development, and deployment of AI systems, it has, over the past several years, spurred a sea-change in the governance of robotics and AI technologies, directly informing the UK’s visionary approach to AI policy, standards, and regulation. This has been especially evident in the domain of standards development. Whereas standards for (and the certification of) digital and software-driven innovation had previously concentrated on the technical dynamics and specifications necessary to assure the performance and safety of these sorts of technologies, new concerns about the social shaping and ethical consequences of the processes and products of AI and robotics research and innovation began to come to the forefront. In April 2016, the UK’s national standards body, BSI, published BS8611, its innovative ‘Guide to the ethical design and application of robots and robotic systems’.13 That same year, the Institute of Electrical and Electronics Engineers (IEEE), whose direction of travel is significantly shaped by the UK’s active participation, published the first version of its Ethically Aligned Design (EAD) guidance.14 The EAD guidance has formed the knowledge basis for the launch of the IEEE’s P7000 series of standards and certifications (now in production), which cover areas ranging from data governance, algorithmic bias, and AI nudging to the ‘transparency of autonomous systems’, personalised AI agents, and ‘wellbeing metrics for ethical AI’.15 Finally, the International Organization for Standardization (ISO), in collaboration with the International Electrotechnical Commission (IEC), has recently formed ISO/IEC JTC 1 SC42/WG3, a working group on the Trustworthiness of Artificial Intelligence, which has begun to think through how to construct standards for ‘characteristics of trustworthiness, such as accountability, bias, controllability, explainability, privacy, robustness, resilience, safety and security’.16 Apart from proactively shaping ISO/IEC JTC 1 SC42 as the UK’s representative at ISO/IEC, BSI has formed a mirror committee of stakeholders from civil society, academia, and industry to advance further policy insights in this area of sociotechnical standards. The Turing’s AI Standards Hub has also launched a work programme on Trustworthy AI as the first horizontal theme area on which its efforts to build knowledge, community, and collaboration will focus.17

 

 

 


12 Owen, R., Macnaghten, P., & Stilgoe, J. (2012). Responsible research and innovation: From science in society to science for society, with society. Science and Public Policy, 39(6), 751–760. https://doi.org/10.1093/scipol/scs093; Owen, R. (2014). The UK Engineering and Physical Sciences Research Council’s commitment to a framework for responsible innovation. Journal of Responsible Innovation, 1(1), 113–117. https://doi.org/10.1080/23299460.2014.882065

13 https://knowledge.bsigroup.com/products/robots-and-robotic-devices-guide-to-the-ethical-design-and-application-of-robots-and-robotic-systems/standard

14 https://standards.ieee.org/wp-content/uploads/import/documents/other/ead_v2.pdf

15 https://ethicsinaction.ieee.org/p7000/

16  https://standards.iteh.ai/catalog/tc/iso/2554a560-8d3b-4560-b9ba-ecd985ed1c64/iso-iec-jtc-1-sc-42-wg-3



 

Taken together, Great Britain’s early ground-breaking efforts to build the best practices frameworks and knowledge infrastructure needed to enable a robust and sociotechnically anchored AI governance ecosystem, and its global leadership in the international regulatory and standards development landscape, have provisionally put the UK in a good place to lead the world in responsible AI. However, as we will explore in the next section, AI policymakers, regulators, researchers, and innovators in the UK now face significant social, cultural, economic, political, and legal challenges as they endeavour to figure out how best to operationalise these frameworks and insights into practicable governance processes and actionable controls that are agile and adaptable enough to support rapid technological advancement yet vigorous enough to safeguard the public interest and to promote the common good.

 

Challenges faced by the UK’s AI governance ecosystem

Much of the UK’s early efforts at technology policy innovation and principles development focused on formulating appropriate and broadly accepted normative vocabularies that could be operationalised across AI innovation lifecycles in contextually responsive and domain-specific ways. Growing consensus over the past couple of years on relevant AI principles and top-level normative goals (such as safety, sustainability, accountability, fairness, explainability, transparency, and data privacy, protection, quality, and integrity) has meant that increasing attention could be paid to “putting principles into practice” through effective governance regimes and cogent innovation assurance protocols. The initiation of this longer-term labour of policy implementation, assurance management, standards codification, and regulatory intervention has, however, laid bare several barriers to the UK’s capacity to realise effective AI governance across its complex social and technological landscape.

 

In this section, we will explore these barriers in more detail, first elaborating on their causes and character and then presenting some suggestions about how to overcome them. Such barriers include:

  • Readiness and capacity gaps at local and regional levels
  • Readiness and capacity gaps in the regulatory environment
  • A widespread disjointedness and siloing of public sector agencies and departments
  • A lack of overall policy coherence across the UK’s AI governance ecosystem
  • Difficulties in bootstrapping AI governance protocols and regulatory requirements from existing legal and regulatory frameworks

17 https://aistandardshub.org/introducing-the-hubs-work-programme-on-trustworthy-ai/


These barriers will be considered in turn.

 

Readiness and capacity gaps at local and regional levels

 

In its 2020 publication, AI in the UK: No room for complacency, the House of Lords Liaison Committee noted that, in the wake of the increased reliance on technology prompted by the COVID-19 pandemic, there was an urgent need for greater public understanding of what AI technologies are and how they use personal data. Only with expanded public knowledge, the Committee report argued, could there be wider adoption, increased trust, and pathways to actionable recourse in cases where AI systems were ethically unsound.18 A 2021 report from the All-Party Parliamentary Group on Data Analytics (APPGDA) built on this observation, noting that the AI- and data-related public trust deficits that became apparent during COVID-19 (and that obstructed the effective use of digital technologies in the public interest) also reflected deeper readiness deficiencies and capacity gaps at the local level, i.e., within local authorities that lacked transparent governance frameworks and confidence-building mechanisms to connect with citizens and local businesses. The APPGDA concluded that ‘the balance of citizen involvement on data development and trust needs shifting to the local...What is needed is diverse efforts all around the country, encouraged by central government’.19

 

The APPGDA’s call for a ‘shift to the local’ signals an area of critical weakness in the UK’s current AI governance ecosystem. At present, there is a dearth of place-based and community-driven initiatives to build public understanding of responsible AI and data use. There is also a scarcity of bottom-up initiatives to create trust-building networks of local authorities, local businesses, and local residents, which can collaboratively take ownership over AI innovation and policy agendas.20 Resource asymmetries and imbalances between different local authorities and regions complicate such readiness deficits even further: the lack of a level playing field among authorities and regions exacerbates capacity shortfalls, draws limited resources away from digital initiatives, and hence makes the remediation of readiness and capacity gaps all the more difficult. Underlying all of this, and conditioning the possibilities for any kind of effective ‘shift to the local’, are the structural inequalities and digital divides that lead to digital accessibility and literacy issues within and among local authorities and regions.

 

Among possible paths forward to address readiness and capacity gaps at local and regional levels are:

 


18 House of Lords Liaison Committee, AI in the UK: No room for complacency. December, 2020. https://publications.parliament.uk/pa/ld5801/ldselect/ldliaison/196/196.pdf

19 All-Party Parliamentary Group on Data Analytics, Our Place Our Data: Involving Local People in Data and AI Based Recovery. March, 2021. https://www.policyconnect.org.uk/research/our-place-our-data-involving-local-people-data-and-ai-based-recovery

20 As the APPGDA report well expressed: ‘trust in data and AI requires the involvement of local people representative of their communities. Simon Madden, NHSX recommended mapping “Islands of Trust” - building trust in a smaller group and then working out from that. Other contributors suggested that such mapping could work effectively on both a sectoral and geographical basis, and pointed to the importance of local beliefs, culture and values, which are relevant to place and population. Trust frameworks developed nationally will not capture and respond to local values. For all these reasons it is regions and localities who need to lead on engaging citizens, albeit enabled by the strengthened governance and support described above’ (p. 30).


  • A proactive pursuit of the levelling-up agenda by central government in the area of AI and digital transformation. This would include the localisation of AI innovation and policy agendas and the support of the meaningful involvement of local businesses and residents through skills building and training programmes. As the APPGDA concluded in its 2021 report, this would also require an improved communication infrastructure to connect government Ministers with local areas: ‘A more equal partnership between central government and local areas towards a strong digital economy should include robust mechanisms for Ministers to hear from those making things happen on the ground, so the government better knows what it can do to help’. More concerted and deliberate Ministerial involvement would, on this view, generate greater public trust by ‘ensuring that data ethics is fully considered through the lens of “place”’.
  • A cultivation and expansion of community-led and locally based approaches to building public consent and social licence into AI governance and policy.21 A notable example is Camden Council’s Data Charter, which was developed with residents to set out principles for the safe and ethical use of data in the borough.22

 


21 The importance of robust multi-stakeholder and public engagement led approaches to AI governance, policy, and regulation has been stressed by multiple studies and policy papers. See, for instance, BEIS, Closing the gap: getting from principles to practice for innovation friendly regulation. 2022. https://www.gov.uk/government/publications/closing-the-gap-getting-from-principles-to-practice-for-innovation-friendly-regulation; NESTA, Renewing regulation: ‘Anticipatory regulation’ in an age of disruption. March 2019. https://media.nesta.org.uk/documents/Renewing_regulation_v3.pdf; and The World Economic Forum, Agile Regulation for the Fourth Industrial Revolution: A Toolkit for Regulators. 2020. https://www3.weforum.org/docs/WEF_Agile_Regulation_for_the_Fourth_Industrial_Revolution_2020.pdf

22 https://www.turing.ac.uk/news/camden-publishes-data-charter-promising-safer-and-ethical-use-data; https://involve.org.uk/resources/blog/news/developing-data-charter-residents; https://www.camden.gov.uk/data-charter


  • An active national effort to overcome digital divides and digital accessibility and literacy issues, so that all members of the public are equipped to engage in the use of AI technologies as confident and informed producers and consumers. In accordance with the AI Council’s 2021 recommendation, there should be a whole-of-UK effort to address AI and data literacy gaps, with The Alan Turing Institute actively engaging localities and regions to help steward their AI readiness journeys. As a preparation for this, the Turing’s ethics and skills teams have begun piloting The Turing Commons, an upskilling and training platform, which delivers a wide curriculum that covers AI ethics and governance, responsible research and innovation, and data science and public engagement, among other areas.23

 

Readiness and capacity gaps in the regulatory environment

 

In 2021, the Office for AI commissioned The Alan Turing Institute’s Public Policy Programme to engage in research into how regulators could meet the challenge of regulating activities transformed by AI and maximise the potential of AI for regulatory innovation. The resulting report, Common regulatory capacity for AI (published in July 2022), also investigated whether regulators perceived a need for common capacity in AI—mechanisms and structures that enable coordination, knowledge sharing, and resource pooling—to advance AI readiness across the UK’s regulatory landscape.24 A central finding of this research was that readiness and capacity gaps in the regulatory environment have created common obstacles across regulators both to fulfilling their AI-related regulatory remits and to marshalling AI technologies to carry out regulatory functions.

 

The report focused primarily on the concept of readiness as a way to understand what regulators might need to do in order to adapt effectively to the novel challenges posed to the regulatory environment by the emerging omnipresence of AI technologies across sectors. Readiness, most generally, refers to an individual’s, an organisation’s, or a larger system’s degree of preparedness to meet novel challenges or to successfully navigate change. In the context of AI and regulation, it refers to the conditions of preparedness (at the participant, organisational, and system levels) that enable the effective integration of AI technology and technology policy into the regulatory environment.25 More specifically, regulatory readiness in the context of AI must be scrutinised at three distinct levels:

 

1.       The readiness of individual people—namely, the motivational, attitudinal, and psychological antecedents of the successful adoption of new technologies or technology policy innovations. At this participant level, readiness involves the attitudes, perceptions, cognitive abilities, skills, and investments that enable individuals to embrace and integrate AI innovation and AI-prompted policy change.

2.       The readiness of organisations—namely, the institutional, cultural, and policy level antecedents of the successful adoption of new technologies or technology policy innovations by organisations and institutions. At this organisational level, readiness involves the way that the institutional culture, the availability of resources, and the environment of policies,

 


23 https://www.turing.ac.uk/research/research-projects/turing-commons; https://github.com/alan-turing-institute/turing-commons

24 Aitken, M., Leslie, D., Ostmann, F., Pratt, J., Margetts, H., & Dorobantu, C. (2022). Common Regulatory Capacity for AI. The Alan Turing Institute. https://doi.org/10.5281/zenodo.6838946. This section draws directly from this paper.

25 The report stressed that, in order to understand the essential determinants of readiness, we need to gain a full view of how the barriers and enablers of these kinds of innovation are situated in broader system-level, organisational, and motivational contexts and how these determinants are interrelated. From such a wide-angled standpoint, beyond considering any particular obstacle or catalyst to innovation in isolation, attention must also be paid to how such obstacles and catalysts are embedded in the broader social, cultural, economic, legal, political, and psychological contexts of regulation and regulatory practice.


procedures, and collective learning facilitates the uptake of AI innovation and AI-prompted policy change.

3.       The readiness of wider systems—namely, the socio-economic, political, and inter-organisational circumstances and the general legal, regulatory, and policy surroundings that operate as preconditions of the successful adoption of new technologies or technology policy innovations among organisations and wider social institutions. At the system level, readiness involves the way that structural factors such as educational infrastructure and mechanisms of inter-organisational cooperation and multi-stakeholder coordination allow organisations and people to adopt and integrate AI innovation and AI-prompted policy change.

 

As part of the research for Common regulatory capacity for AI, 28 individuals representing different roles and levels of seniority at regulatory bodies were interviewed (15 from large-sized regulators, 10 from medium-sized regulators, and 3 from small-sized regulators).26 These interviews revealed concerns that spanned all aspects of readiness, including regulators’ absorptive capacity,27 change readiness,28 and receptive context.29

 


26 Details of the methodology can be found on pp. 11-13 of the report.

27 Absorptive capacity has to do with an organisation’s capacity to grow with new knowledge introduced by a technology or technology policy innovation: The success of a technology or technology policy innovation will be affected by the degree to which an organisation is able to build upon a strong knowledge and skills base and assimilate new knowledge into existing practices and capabilities. This is often supported by established mechanisms for sharing and disseminating knowledge throughout the organisation.

28 Change readiness: the success of a technology or technology policy innovation will be affected by the degree to which an organisation’s members share confidence in their efficacy to implement change, value change as important and beneficial, reject institutional inertia, and share a resolve to initiate, persist, and cooperate in carrying out innovation.

29 Receptive context is about an organisational culture’s openness to ingenuity. The success of a technology or technology policy innovation will be affected by the degree to which the norms and shared expectations of an organisation create conditions of openness to change and lower the burdens of compliance and opposing demands. A receptive context is enabled in organisational environments that encourage ingenuity, demonstrate tolerance to novel or unconventional ideas, and accept conceptual risk-taking.


 

The Common regulatory capacity for AI research explored possible paths forward to address readiness and capacity gaps in the regulatory environment—taking into account, in particular, the readiness deficits that were emphasised by the interviewees who took part. These paths forward for government and regulatory bodies included:

  • Dedicating more resources to building readiness within and between regulatory bodies and to supporting cross-regulator initiatives
  • Evaluating, strengthening, and renewing regulatory collaborations
  • Promoting strategies to increase organisational agility, adaptivity, and ingenuity
  • Pursuing an inclusive and participatory approach that includes civil society

Aside from highlighting these high-level goals, the report identified the most promising avenue towards building common capacity as the creation of an AI and Regulation Common Capacity Hub (ARCCH), convened by an independent and authoritative body in AI. The Hub would provide a trusted platform for the collaborative pursuit of common capacity while consolidating existing initiatives and avoiding unnecessary additional crowding of the landscape. To act as a trusted partner for regulatory bodies, the ARCCH would have its home at a politically independent institution, established as a centre of excellence in AI, drawing on multidisciplinary knowledge and expertise from across the national and international research community.

 


A widespread disjointedness and siloing of public sector agencies and departments

 

The UK’s AI governance ecosystem is currently weakened by a lack of coherence and communication between multiple central government departments, subnational public sector agencies and authorities, standards bodies, and regulators working in the same space or closely related spaces. This has tended to obstruct the development and adoption of effective cross-government standards, structures, and processes to secure responsible AI innovation and procurement practices. Such a widespread disjointedness of public sector agencies and departments has been exacerbated by the prevalence of narrowly sector-driven approaches to AI governance which are out-of-step with the way that AI systems, as general-purpose technologies, cut across sectoral boundaries and create horizontal governance issues that demand cross-sectoral and cross-departmental cooperation and response.

 

Acknowledging the challenges that this tendency to cross-governmental disconnection and siloing poses to good AI governance, the OECD, in its 2021 paper, ‘Recommendation of the Council for Agile Regulatory Governance to Harness Innovation’, called for governments to ‘lay institutional foundations to enable co-operation and joined-up approaches within and across jurisdictions by: (1) Strengthening co-operation across policy-making departments and regulatory agencies as well as between national and sub-national levels of government; and (2) Stepping up bilateral, regional, and multilateral regulatory co-operation to address the transboundary policy implications of innovation.’30 Likewise, the World Economic Forum, in its 2020 report, Agile Regulation for the Fourth Industrial Revolution: A Toolkit for Regulators, emphasised the need for a ‘joined-up’, ‘whole-of-government’ approach to governance and regulation:

 

The Fourth Industrial Revolution is characterized by technological innovations that straddle sectors and institutions alike. Businesses can often find themselves navigating a patchwork of regulation whose complexity can deter them from introducing new ideas, products and business models. In one UK study, 69% of the businesses surveyed felt that regulators did not work closely enough with each other. A “whole-of-government” approach is needed to seize the opportunities and manage the risks of the Fourth Industrial Revolution.31

 

To address the disjointedness and siloing of public sector agencies and departments in the AI governance context, time, resources and labour need to be dedicated to proactively pursuing a whole-of-government approach to the establishment of an optimally agile AI policy and governance infrastructure in the UK. This would involve both joining up relevant bodies, agencies, and departments within and between national governments and developing a robust communication and knowledge-exchange infrastructure that could connect such a network with local and regional bodies, agencies, and departments. It would also involve undertaking deliberate efforts, at the central government level, to stitch together related, adjacent, or directly connected areas of technology policy and governance (like policy areas surrounding data quality, data integrity, and data infrastructure, cloud computing and compute infrastructure, online environments, cyberphysical systems and the IoT, informational privacy and data protection, and cybersecurity), so that coherence can be hardwired into the UK’s approach to wider digital transformation.

 

Some notable efforts toward a more joined-up AI governance ecosystem have already been underway in this connection. In the context of data governance, the National Audit Office (NAO) has made advances in bringing more coherence to the government’s approach to data stewardship and responsible data management in support of the advancement of the National Data Strategy.


30 OECD, Recommendation of the Council for Agile Regulatory Governance to Harness Innovation. October 2021. p. 3. https://www.oecd.org/mcm/Recommendation-for-Agile-Regulatory-Governance-to-Harness-Innovation.pdf

31 The World Economic Forum, Agile Regulation for the Fourth Industrial Revolution: A Toolkit for Regulators. p. 38. https://www3.weforum.org/docs/WEF_Agile_Regulation_for_the_Fourth_Industrial_Revolution_2020.pdf


In its 2019 report, ‘Challenges in using data across government’, the NAO made two recommendations specifically aimed at improving the operational coherence of the government’s approach to the governance of data practices:

 

1.       Set up clear cross-government accountability, governance and funding for data to support delivery of the data strategy. Joint working and cross-government groups need to have clearly assigned responsibilities that are aligned with the levers available including funding, controls and operational resources. These arrangements should be clearly communicated across government to alleviate confusion of where responsibilities lie.

2.       Develop cross-government rules, standards and common ways to collect, store, record and manage data. Where multiple standards are used, government should develop a consistent approach to balancing competing demands between standardisation and local requirements, including implications for future decision-making and costs. This should include a regular review of departments to ensure that they are applying these standards and principles to their data collection.32

 

In good practice guidance for senior leadership issued in July 2022, the NAO notes that the first of these recommendations has been implemented by DCMS and the Cabinet Office as of March 2020 and that the second recommendation is still a ‘work in progress’.33 There is a need to vigorously build on and extend these efforts into the area of AI governance, and, as the work of the CDDO and CDEI on establishing a cross-government algorithmic transparency standard demonstrates, significant steps are being taken toward this end. Be that as it may, the government will have to greatly increase its dedication of policy energies and resources to meet the complex logistical, operational, and cultural challenges that must be confronted to join up the myriad moving public sector parts of the UK’s digital technology governance ecosystem.

 

 

A lack of overall policy coherence across the UK’s AI governance ecosystem

 

In both the House of Lords Liaison Committee’s 2020 report and the APPGDA’s 2021 paper, it was noted that a lack of overall policy coherence across the UK’s AI governance ecosystem has produced divergences between national frameworks and devolved local frameworks and has generated


32 National Audit Office, Challenges in using data across government. June 2019. p. 12. https://www.nao.org.uk/wp-content/uploads/2019/06/Challenges-in-using-data-across-government.pdf

33 National Audit Office, Improving government data: a summary guide for senior leaders. July 2022. https://www.nao.org.uk/wp-content/uploads/2022/07/Improving-government-data-a-guide-for-senior-leaders.pdf. It should also be noted that there have been other central government initiatives undertaking more joined-up and cross-government approaches to data and AI. For instance, the Department for Work and Pensions, building on the work of the Open Data Institute (ODI) on Data Trusts and The Alan Turing Institute’s work on Data Safe Havens, has undertaken the establishment of a Labour Market Data Trust pilot that ‘has enabled [them] to explore improved data accessibility between the four government departments closest to everyday Labour Market activity, namely the Department for Work and Pensions (DWP), the Department for Education (DfE), the Department for Business, Energy and Industrial Strategy (BEIS) and Her Majesty’s Revenue and Customs (HMRC).’ https://dwpdigital.blog.gov.uk/2022/02/17/dwp-digital-is-improving-cross-government-data-sharing/. Likewise, the Ministry of Justice has initiated the Data First project (supported by ADR UK) that ‘aims to unlock the potential of the wealth of data already created by the Ministry of Justice (MOJ), by linking administrative datasets from across the justice system and enabling accredited researchers, from within government and academia, to access the data in an ethical and responsible way.’ https://www.gov.uk/guidance/ministry-of-justice-data-first


uncertainty among private, public, and third sector organisations about what lawful, responsible, and ethical AI research and innovation looks like. In the data governance context, the 2021 APPGDA paper called for a ‘coherent ethics framework for devolved public service delivery’ and observed that the ‘overwhelming message from businesses, academia and local public services during this inquiry was that there is already a plethora of ethical policies and frameworks at the national and international levels. If the government is serious – as it indicates in the draft National Data Strategy – about “the development of a clearer policy framework to identify where greater data access and availability across and with the economy can and should support growth and innovation”, it must simplify the regulatory and policy “thicket” which is impossible to navigate by individuals and SMEs.’ Similarly, the UK AI Council, in its AI Roadmap, called for the UK to lead in developing appropriate standards to frame the future governance of data in order to create an agile national data infrastructure that abides by the FAIR data principles and that is supported by ‘best-in-class data governance standards’.

 

These views on the necessity for a coherent national data policy framework extend to the UK’s AI governance ecosystem. Building a coherent AI ethics and policy framework that cuts across national governments and local authorities as well as private and public sectors is a precondition of creating the kind of commercial and regulatory certainty that can optimally support growth and innovation in the AI environment. While the UK has been moving towards this goal over the past several years, much more needs to be done to bring coherence and consistency across its AI ethics and governance landscape.

 

Already in 2018, the House of Lords Select Committee inquiry on AI recommended the establishment of ‘a cross-sector ethical code of conduct, or “AI code”, suitable for implementation across public and private sector organisations which are developing or adopting AI’. The following year, the government published its public sector guidance on AI ethics and safety, which included such a cross-sector AI ethics code, championing principles such as fairness, accountability, sustainability, and transparency as well as ethical values supportive of individual dignity, social connectedness, beneficence, justice, and the public interest.34 In 2020, the CDDO’s Data Ethics Framework was updated and aligned with the public sector AI ethics guidance, and the ICO and the Turing Institute co-published the national AI explainability guidance, which was likewise deliberately aligned with the public sector AI ethics code but additionally meant to apply across public, private, and third sectors. In its report, Artificial Intelligence and Public Standards, the Committee on Standards in Public Life urged the government to help the public better understand high-level ethical principles, to make the public sector guidance on AI ethics and safety more usable, and to ‘promote it extensively’.35 In response to this, and as recommended in the National AI Strategy, the Office for AI and The Alan Turing Institute have been developing a series of eight workbooks on AI ethics and governance in practice, which are now being piloted with various government departments and regulatory bodies and will be published in 2023.

 

Another government initiative to build coherence and consistency into the UK’s AI governance ecosystem has emerged in the upcoming regulation whitepaper. The preliminary policy statement on the whitepaper stresses that cross-sectoral principles should be put into place to create coherence:

 

 


34 https://www.gov.uk/guidance/understanding-artificial-intelligence-ethics-and-safety. The code had been developed in collaboration with UK civil servants and included a set of ethical values that were meant ‘to provide an accessible framework for consideration of the moral scope of the social and ethical impacts of AI projects and to establish well-defined criteria to evaluate their ethical permissibility’. These ‘SUM values’ were: Respect the dignity of individual persons; Connect with each other sincerely, openly, and inclusively; Care for the wellbeing of each and all; and Protect the priorities of social values, justice, and public interest. The code also included ‘a set of actionable principles that were meant to ensure AI projects were bias-mitigating, non-discriminatory, and fair, and to safeguard public trust in every project’s capacity to deliver safe and reliable AI innovation’. These ‘FAST Track’ principles included Fairness, Accountability, Sustainability, and Transparency.

35 Committee on Standards in Public Life, Artificial Intelligence and Public Standards. February 2020.


‘We propose to establish a set of cross-sectoral principles tailored to the distinct characteristics of AI…and to achieve coherence and support innovation by making the framework as easy as possible to navigate’…‘we will ensure the system is simple, clear, predictable and stable.’36 The policy statement presents the principles as follows:

  • Ensure that AI is used safely
  • Ensure that AI is technically secure and functions as designed
  • Make sure that AI is appropriately transparent and explainable
  • Embed considerations of fairness into AI
  • Define legal persons’ responsibility for AI governance
  • Clarify routes to redress or contestability

These principles (simplified as: safety, security, reliability, transparency and explainability, fairness, and accountability) are broadly aligned with the government’s public sector AI ethics guidance. They are meant to function as high-level principles that regulators across the UK then tailor to the specific needs and requirements of their remits. While this element of the whitepaper is still under development, the establishment of such cross-sectoral principles would improve the overall policy coherence of the UK’s AI governance ecosystem—though notable statutory deficits associated with this approach will be explored below.

 

Moving forward, there are several additional steps that must be taken to improve the consistency and coherence of the UK’s ethics and governance policies and frameworks:

  • Central government taking the lead in a concerted effort to establish common normative and policy vocabularies (building on the national public sector AI ethics and safety guidance) across national frameworks and devolved local frameworks and across private, public, and third sectors
  • Joining up adjacent policy frameworks (e.g., data governance frameworks, data protection frameworks, cybersecurity frameworks, online safety frameworks, and AI ethics frameworks) so that innovators, policymakers, and technology procurers and users can better grasp the requirements of good practice
  • Making the regulation whitepaper’s cross-sectoral principles as clear and decisive as possible so that they align with widely understood and well-established policy vocabularies and standards as well as national and international frameworks

Such a common normative vocabulary, building on the public sector AI ethics guidance, would cover principle clusters such as:

  • Safety and Sustainability: social and environmental sustainability, security, robustness, reliability, accuracy and performance


36 https://www.gov.uk/government/publications/establishing-a-pro-innovation-approach-to-regulating- ai/establishing-a-pro-innovation-approach-to-regulating-ai-policy-statement


Accountability: Traceability, answerability, auditability, clear data provenance and lineage, accessibility, reproducibility

Fairness: Bias mitigation, diversity and inclusiveness, non-discrimination, equality

Explainability and Transparency: Interpretability, responsible model selection, accessible rational explanation, implementation and user training

Data quality, integrity, protection, and privacy: Source integrity and measurement accuracy, relevance, appropriateness, and domain knowledge, attributability, consistency, completeness

 

Difficulties in bootstrapping AI governance protocols and regulatory requirements from existing legal and regulatory frameworks

 

With the accelerating spread of AI technologies across sectors and domains of human experience, difficulties have emerged in applying existing laws to the design and use of these technologies, because existing statutory definitions and specifications do not map onto the novel issues they raise. A recent study commissioned by the Office for Product Safety and Standards highlights that the use of AI in consumer products is ‘challenging the regulatory framework for both product safety and liability’:

 

The characteristics of more complex AI systems, in concert with general technological trends, pose challenges across all elements of the regulatory regime, including product safety and liability-related legislation, market surveillance regimes, standardisation, accreditation and conformity assessment. The key characteristics of AI systems…include mutability, opacity, data needs and autonomy. The general market trends of relevance include: the blurring of the lines between products and services; the increasing ability for consumer products to cause immaterial as well as material harm; the increasing complexity of the supply chains for consumer products; and issues related to built-in obsolescence and maintenance throughout a product’s lifecycle.37

 

The study points out that the novel problems introduced by AI-enabled products are challenging legal definitions of ‘product’, ‘producer’, and ‘placing on the market’. Emergent harms are being produced by the products themselves, because they can act ‘autonomously’ and change with changing environments, thereby nullifying the possibility of conformity constraints defined by design-time specifications. Moreover, downstream liability gaps are arising when the activities of employees and service providers are replaced by those of autonomously functioning AI-enabled products, which are not legal persons and hence can neither commit nor be held liable for a tort. This means that ‘normal conditions are not met for manufacturers, operators, or users to be liable to pay compensation to those injured by an autonomous system.’38 A source of this breach in legal responsibility is the ambiguous status of an autonomous system as both a manufactured product and a vicarious agent (one which acts on behalf of a legal person). The current lack of statutory clarity regarding the legal status of autonomous systems creates uncertainty as to whether and how forms of no-fault manufacturer liability, negligence liability, or vicarious liability may be applied when autonomous systems produce wrongful behaviour. Additionally, even under enforceable product liability regimes, the opaque and dynamic character of these systems makes it extremely difficult to establish defects and to pinpoint the source of a harm or damage that has resulted from machine behaviour.


37 Office for Product Safety and Standards, Study on the Impact of Artificial Intelligence on Product Safety. December 2021. pp. 7-8. https://www.gov.uk/government/publications/study-on-the-impact-of-artificial-intelligence-on-product-safety

38 Burton, S., et al. (2019a). Mind the gaps: Assuring the safety of autonomous systems from an engineering, ethical, and legal perspective. Artificial Intelligence, 279.


Beyond this aspect of liability, wider accountability gaps in the design, development, and deployment of AI systems are putting additional pressures on existing legal and regulatory frameworks. Automated, AI-enabled decisions and “autonomous” AI behaviours are not self-justifiable. Whereas human agents can be called to account for their judgements and decisions in instances where those judgements and decisions affect the interests of others, the statistical models and underlying hardware that compose AI systems are neither responsible nor accountable in the same morally relevant sense. This creates accountability gaps that must be addressed so that clear and imputable sources of human answerability can be attached to decisions assisted or produced by AI systems. Establishing human answerability is, however, not a simple matter in the design, development, and deployment of AI systems, owing to the complexity and multi-agent, distributed character of their production and use. Typically, AI project delivery workflows include department and delivery leads, technical experts, data procurement and preparation personnel, policy and domain experts, implementers, and others. Because of this production complexity, it can become difficult to determine who among these parties should bear responsibility if a system’s use has negative consequences and impacts.

 

These accountability gaps and areas of fraught answerability are putting unprecedented pressure on the legal sphere to adjust, so that sufficient and juridically codified mechanisms of algorithmic accountability and transparency can compensate for the new cracks that AI technologies have introduced into the UK’s governance ecosystem. Calls for legally mandated public transparency and end-to-end human chains of accountability are now arising in the UK, the EU, and the United States. For instance, Lord Tim Clement-Jones has introduced a private member’s bill (the Public Authority Algorithm Bill) which, in the public sector context, would require Algorithmic Impact Assessments and would mandate transparency, system logging, and staff training mechanisms. The Institute for the Future of Work has likewise proposed an Accountability for Algorithms Act that includes new statutory duties for public consultation and stronger transparency requirements for the production and use of AI systems.39 Legislators in the United States have also introduced an updated Algorithmic Accountability Act of 2022 that would mandate impact assessments for automated systems that make critical decisions and would create a public repository of these systems at the Federal Trade Commission to ensure a degree of public transparency.
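To give a concrete, purely illustrative sense of what ‘system logging’ in support of end-to-end human accountability could involve at the technical level, the short Python sketch below shows one possible shape for an append-only decision log that records which system and model version produced an output and which named official is answerable for it. All names and fields here are hypothetical assumptions made for the purpose of illustration; they are not drawn from the text of any of the bills discussed above.

    # Illustrative sketch only: a minimal append-only audit log of the kind that
    # 'system logging' duties might require. All names are hypothetical.
    import json
    import uuid
    from dataclasses import asdict, dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class DecisionRecord:
        """One logged output of an automated decision-support system."""
        system_name: str          # the registered system that produced the output
        model_version: str        # supports traceability and reproducibility
        input_summary: dict       # non-identifying summary of the input data
        output: str               # the decision or recommendation produced
        accountable_officer: str  # the named human answerable for the decision
        record_id: str = field(default_factory=lambda: str(uuid.uuid4()))
        timestamp: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )

    def log_decision(record: DecisionRecord, logfile: str = "decision_log.jsonl") -> None:
        """Append the record to an append-only JSON Lines audit file."""
        with open(logfile, "a", encoding="utf-8") as f:
            f.write(json.dumps(asdict(record)) + "\n")

    if __name__ == "__main__":
        log_decision(DecisionRecord(
            system_name="benefit-triage-assistant",   # hypothetical system name
            model_version="2022.11",
            input_summary={"claim_type": "housing", "documents_missing": 2},
            output="refer for manual review",
            accountable_officer="case.team.lead@example.gov.uk",
        ))

A log of this kind is, of course, only a technical substrate: the answerability it supports still depends on the named officer having genuine authority over, and understanding of, the system concerned.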

 

The regulation whitepaper, in its current position, is not yet sufficiently responsive either to the set of challenges that AI systems pose to the efficacy of existing legal and regulatory regimes (as outlined in this section) or to the possible statutory paths forward for ensuring sufficient transparency and accountability across sectors. Moving forward, the UK should take a more considered position on establishing appropriate legal measures to combat the novel issues that AI poses to its governance and regulatory landscape. Only in this way will it be able to quell the high degree of legal and regulatory uncertainty that is hampering the confident pursuit, among both public and private sector organisations, of opportunities to develop AI technologies that could have tremendous benefits for society. Without an effective, well-capacitated, and robust UK legal and regulatory regime, there will continue to be high levels of uncertainty and hesitation and a lack of verified public trust in the AI innovation domain. This could obstruct the advancement of a responsible AI ecosystem and trigger a race to the bottom of minimalist compliance and gap exploitation, creating path dependencies of bad behaviour with devastating long-term impacts on the UK’s stature as a global pacesetter in the domain of responsible innovation.

 

(December 2022)

 

 


39  https://www.ifow.org/publications/mind-the-gap-the-final-report-of-the-equality-task-force