WRITTEN EVIDENCE SUBMITTED BY ADA LOVELACE INSTITUTE
GAI0086
We welcome the Committee’s investigation of the current state of AI governance in the UK. The framing of the questions could suggest that there exists a clear and coherent framework of AI governance in the UK which can be meaningfully critiqued. We do not see that to be the case.
Rather than critique the current complex ecosystem, our response offers the Committee proactive feedback, presenting a bird's-eye view of the challenges at play. Our extensive research has equipped us with insights from experts on what the key components of AI governance could be. Drawing on this research and evidence, our response outlines what is needed to develop coherent governance and regulation of AI across the value chain. We outline some of the components of AI, the governance tools that will be required, potential mechanisms for enforcement and what regulatory oversight might look like. In doing so we provide answers to questions 1 through 5 of the Committee's call for evidence. In response to question 6 we provide an overview of international approaches to AI governance and regulation, and areas that could be explored further by the UK.
The Ada Lovelace Institute has undertaken a wide range of research and thinking considering the impact of AI and where governance and regulation ought to be developed.
Supplementary to our response we encourage the Committee to read our research in the field of AI. As ever, we are keen to support the Government in navigating the challenging question of getting AI right for people and society, and would be happy to discuss these points in further detail.
Our work includes:
Furthermore, our work in Europe has concentrated on the EU AI Act and includes:
And forthcoming research on participation in standards bodies.
(Answer to question 1. How effective is current governance of AI in the UK? What are the current strengths and weaknesses of current arrangements, including for research?)
There are numerous examples of non-statutory guidance from regulators, corporate bodies and Government. These cover varying elements of AI, such as AI and data protection guidance[1] or guidelines for AI procurement.[2] Within industry, there has been a proliferation of self-regulatory approaches, from ethics principles, such as Google’s AI principles[3] and the BBC’s Machine Learning Engine Principles and checklists[4], to the establishment of ethics boards, from policing technology company Axon[5] to Microsoft’s Aether committee.[6] Some development of technical standards for AI has begun, in particular from the Alan Turing Institute[7], CDEI and CDDO[8], which we welcome. We also welcome the consideration given to what regulation is needed for AI, published by relevant regulators (including recent papers from the Digital Regulation Cooperation Forum[9]), academics, lawyers and civil society.
These initiatives and attempts to fill legislative gaps with guidance have provided elements of governance in the AI value chain, but the ecosystem is complex and lacks coherent, joined-up thinking. Combined with the paucity of specific policy, regulatory or legislative proposals engaging with emerging AI specifically, the challenge of effective governance and oversight becomes clear.
In terms of existing legislation which speaks to the governance of AI, the UK GDPR is a powerful regulatory strategy which applies to a broad range of data-driven processes (such as AI, which today largely comprises machine learning models and pattern-recognition systems, for example facial recognition and fraud detection), as well as to the context in which AI can be used. For example, data that has been collected for one purpose cannot be used for another purpose if legal requirements are not met (e.g. the new purpose is inconsistent with the original one, or explicit and informed consent is lacking). This means data collected for one purpose cannot then be reused to train machine learning models if legal obligations are not met. Data protection law regulates AI at its foundations: data.
The current version of the Data Protection and Digital Information (DPDI) Bill, however, misses the opportunity to strengthen accountability rules and rights over how data is used in data processing, be it automated or not.[10] The Bill is currently on pause, apparently for further consultation. We welcome this, as the Bill in its current iteration would do little to enable the development of meaningful protections and opportunities for innovation in AI governance and regulation going forward. The UK needs to be clear and strong on accountability, transparency and safeguards in order for data use and AI innovation to flourish.[11]
(Answers to questions 2, 3, 4 and 5.
2. What measures could make the use of AI more transparent and explainable to the public?
3. How should decisions involving AI be reviewed and scrutinised in both public and private sectors? Are current options for challenging the use of AI adequate and, if not, how can they be improved?
4. How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?
5. To what extent is the legal framework for the use of AI, especially in making decisions, fit for purpose? Is more legislation or better guidance required?)
In the public discourse, AI is understood as covering a range of developing and evolving systems such as machine learning, robotics, automation and algorithms. In regulatory terms, ‘regulating AI’ means addressing issues such as data-driven or algorithmic social scoring, biometric identification and the use of AI systems in law enforcement, education and employment. In other words, it means making technological systems accountable for the significant impact they have on our lives and on society. Regulatory efforts are intensifying all over the world, with over 60 countries releasing AI policy documents. Some, such as the EU, are developing a risk-based approach resting on the protection of fundamental rights; others, such as the United States, have elaborated a set of high-level civil rights-based considerations to guide the activities of Government departments and regulators. China’s approach has been to implement a range of frameworks and rules emphasising the importance of social stability while in parallel promoting AI innovation in strategic areas[12].
The UK approach, trailed in the recent Establishing a pro-innovation approach to regulating AI[13] policy paper from DCMS, appears to be a high-level blend of three entry points: an overriding pro-innovation framing; some mild risk-reduction measures geared primarily towards business efficiencies; and an intention to protect people’s rights, including ensuring consideration is given to “fairness and transparency”. The paper’s high-level approach, however, only highlights the absence of values, of a clear vision for what AI should do and how it should shape British lives and society, and of pragmatism about the inevitable trade-offs that a successful governance regime will require.
The current emphasis by Government on light-touch regulation is a concern. Accountability and transparency have been persistently positioned as inhibitors of innovation, while the precautionary principle and existing data protections are cast as unnecessary, burdensome red tape designed to restrict growth. While these views may have resonance across some established sectors, the rapid rise of online harms makes clear that a light-touch regulatory approach to data-driven technologies, such as AI, is naive at best and dangerous at worst. Indeed, it runs counter to the expectations of leading AI businesses, who see responsible regulation as offering certainty and useful protection against potential backlash[14].
Regulation should be a positive mechanism for shaping society and industry, for defining what is important to society and for creating incentives to achieve it. We would caution therefore against viewing regulation, one of the tools for governing AI, as solely a tool for managing risk or limiting the worst business practices. The development of AI regulation will not only encourage oversight and enforcement of AI systems, requiring them to be transparent, explainable and accountable, but will encourage policy makers and regulators to consider what role AI should play in developing society.
Underpinning this should be a clear vision and set of values, rights and principles steering the development, deployment and use of AI systems in the UK, aligned with emerging international approaches. Developers, regulators, government and the public need clarity on what societal value AI innovation should have beyond being a driver and enabler of growth. The capacity of AI technologies and systems to influence and, in some cases, fundamentally change society and people’s lives has to be taken into consideration. The opportunities from AI should benefit society holistically. The expectation that unleashing AI will enable ‘trickle-down’ benefits is inadequate to this challenge.
There are, of course, well-established AI principles in operation elsewhere, which the UK has been involved in developing. The OECD AI Principles[15] provide five values-based principles to promote use of AI that is innovative and trustworthy and that respects human rights and democratic values, namely:
● inclusive growth, sustainable development and well-being
● human-centred values and fairness
● transparency and explainability
● robustness, security and safety
● accountability.
Dame Wendy Hall and Jerome Pesenti, in their report Growing the Artificial Intelligence Industry in the UK,[16] suggested that the overarching principle guiding the development of systems of data governance should be “the promotion of human flourishing”. In practical terms, they suggest, this means that data is managed and used in a way that:
● Protects individual and collective rights and interests
● Ensures that trade-offs affected by data management and data use are made transparently, accountably and inclusively
● Seeks out good practices and learns from success and failure
● Enhances existing democratic governance.[17]
The House of Lords Select Committee report AI in the UK: ready, willing and able?[18] suggested five overarching principles for an AI Code which mirrored in part the principles proposed by the OECD and Hall and Pesenti. The recommendation in the report makes reference to AI being for the common good, operating on principles of intelligibility and fairness, and ensuring that AI is not used to diminish data rights or privacy.[19]
There can be a tendency to cast AI governance as being the sole responsibility of a certain set of actors in the AI value chain (often the developers or providers of AI products and services), or as being located in only one part of the ecosystem (for example in health technologies). The diversity of the AI value chain and the actors involved means that it is more likely that a range of governance interventions or regulations will need to be considered and adhered to throughout.
The components of the AI value chain that will need to be governed can loosely be divided into four stages: research and development, commercialisation, deployment and post-deployment. We emphasise that this is neither a universally adopted rubric nor necessarily a perfectly characterised one; we use it here as a tool to assist thinking about the actors and activities that require governing across the AI ecosystem. In practice, the route to an AI product or service being put on the market and used may be much more circuitous than the linear process we describe below.
The development of AI systems, tools and services happens largely in the private sector rather than in academia or the public sector; this is particularly the case for large-scale AI (such as large language models), where close to 0% of development happens in academia.[20] However, there is no single path for AI development; while large models developed in private-sector research institutions such as DeepMind or OpenAI receive much attention in the public discourse, it is important to consider the many routes for AI tool development and implementation by SMEs, start-ups, academic partnerships and public-sector entities.
The overriding imperative for governance at this stage of the value chain should be to create an AI ecosystem that enables diversity and competition in AI development, ensuring that the AI products and services which emerge reflect as broad a range as possible of aspirations for how emerging technologies can better society.
Governing AI development is a function of governing and shaping access to the three main elements of AI development and research: computing power, expertise and data. Big Tech dominates AI development because it has the most unbounded access to all three elements - it invests in large computing systems, draws academic talent from universities through competitive salaries and benefits, and has access to monumental stores of data through the data-driven nature of the services it offers.
Governance interventions such as industrial strategy, state research and development investment, skills development, and legislative tools may all have the potential to change the proportion of AI development that happens in Big Tech versus that which happens in academia, start-ups or SMEs. For example, legislative interventions to mandate research access to Big Tech data stores for public-sector AI development, or competition regulation to minimise the monopolistic nature of Big Tech, could be upstream interventions which ultimately support a more diverse AI development ecosystem. R&D investment in universities to support the acquisition of more computing resources may also have the benefit of retaining expertise in academic settings, rather than AI researchers being attracted into the private sector by the superior access to compute and data it provides.
Key to governing AI research and development is considering the data pipeline for AI, and to what extent AI research and development itself incentivises harmful data collection and retention practices, or rests upon exploitative practices that incentivise and normalise surveillance. Data protection regulation is therefore a critical component of AI governance, both to ensure that the data used to train AI models is of good quality, accurate and up to date, and to limit incentives to collect and retain data indefinitely, which in itself creates concerns not only from the perspective of human rights but also in terms of information security.
We use the term ‘commercialisation’ to refer to the point in the AI value chain where AI developers shape systems into tools, products or services for externalisation. While interventions necessary at the R&D stage might be less onerous and should be designed to enable innovation, preparing a product or service for market implies a much greater responsibility to consider the direct and societal impacts thereof.
“Commercialisation” may not be the best term for this stage as it encompasses the preparation of an AI product or service for use outside the research setting, even where that product or service will be provided for free or on an open source basis (as is the case for some large language models).
The driving imperative behind governance of the commercialisation stage should be to ensure AI products and services are aligned with articulated values, ethics and rights. Articulating those values, ethics and rights is a matter of political leadership and policy ambition, and should be enshrined in law. Ensuring compliance with those values, ethics and rights can be achieved through a number of different tools and mechanisms, for example:
● Certain technical aspects of AI products and systems can be required to comply with technical specifications developed by technical standards bodies;
● Data governance mechanisms, such as dataset documentation and model cards, can be imposed to require AI developers to record and disclose factors such as data provenance, enabling auditing and compliance checks at a later point in the ecosystem (see the sketch after this list);
● AI developers can be required to undergo risk assessments and impact assessments in order to foresee, consider, mitigate and account for the potential direct and societal impacts of their products and services;
● Transparency mechanisms can be imposed to require the developers of products and systems to build in features of explainability, confidence assessment etc.;
● Human oversight features can be required in the technical build to ensure that meaningful human oversight is both possible and necessary within the system.
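To illustrate, below is a minimal sketch of what machine-readable model and dataset documentation could look like in practice. The field names and example values are our illustrative assumptions, drawn from the ‘model cards’ and ‘datasheets’ literature, rather than any prescribed schema:

# A minimal sketch of a machine-readable ‘model card’. All field
# names and example values are illustrative assumptions, not a
# prescribed schema.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    name: str
    intended_use: str
    data_provenance: list                         # where the training data came from
    known_limitations: list = field(default_factory=list)
    evaluation_groups: list = field(default_factory=list)

card = ModelCard(
    name="loan-screening-v2",
    intended_use="Triage of consumer credit applications",
    data_provenance=["2019-2021 UK application records (consented)"],
    known_limitations=["Not validated for applicants under 21"],
    evaluation_groups=["age band", "sex", "region"],
)

# The disclosure artefact a regulator or auditor could later inspect.
print(json.dumps(asdict(card), indent=2))

A disclosure of this kind, recorded at the commercialisation stage, gives auditors a fixed artefact against which the deployed system can later be checked.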
Ex ante regulatory oversight could be located at the commercialisation stage of the AI value chain. Developers could be required to get regulatory approval for products or services prior to their release into the market, such as is required of medical devices under the United States regulatory system overseen by the Food and Drug Administration (FDA).
We use “deployment” to refer to the stage at which an AI product or system is procured, purchased or implemented by a provider, developer or user. Governance of this component of the AI value chain should be focussed on ensuring deployment of an AI system is appropriate, transparent, accountable and contestable, safe and secure, and consistent with human rights and other societal considerations. Obligations at this stage should rest on a range of actors:
● the developers of the tool or system, who should be obliged to support its deployment in a safe and secure way and to respond to modifications;
● providers of the tool or system, including intermediaries;
● and users of the system, who have an obligation to assess and understand the impacts of the system on individuals, groups and societies; to maintain the system and address domain drift; and who may have additional obligations regarding data used for transfer learning or fine-tuning the AI models, or data on which the AI systems operate.
Regulation should speak to ensuring the outcomes of an AI product or service are consistent with articulated values, ethics and rights, as well as setting and shaping the structures for oversight, remedy, complaint and liability.
A continuation of the deployment stage, the post-deployment phase of an AI product or system is an iterative process of continual assurance, monitoring and reporting, particularly for systems which are continually retrained on new data arising from the use of the product or service, and thus evolve over time. This stage should imply obligations for the developers, providers and users of the system to assess the system and its functioning, as well as its impacts, which may change over time and context. Facilities to permit public contestation of, and complaint about, AI tools and products, to address misuse and market and systemic problems which may arise, and to examine unintended consequences and effects, must be built into governance mechanisms.
Existing laws and regulations will provide the backbone needed for regulation and governance of the components outlined. For example:
Data protection law - Data protection law regulates AI at its foundations and applies to a broad range of data-driven systems. Adherence to data protection regimes will be a fundamental tool for governing the development and deployment of AI. It will be vital that any iteration or development of data protection law establishes a strong regime of oversight, transparency and accountability; this must include algorithmic impact assessments (detailed below). Light-touch data regulation for AI will not suffice.
Competition law - Access to data and use of data is not just a data protection concern but a competition issue. Development and deployment of AI systems will need to ensure that future risks and/or existing harms from potential competition issues, such as anti-competitive behaviour or market dominance, are identified and addressed in advance and monitored throughout the system’s lifespan. Section 1 of our Rethinking data and rebalancing digital power[21] report considers interoperability measures as a way to correct market imbalances in digital markets, support alternative services for core platform functionalities and offer more choice to users.
Human rights and equality law - The risk to human rights from AI systems is a serious and widespread concern. Governance of AI must ensure systems protect people’s rights and fundamental freedoms by adhering to the Human Rights Act. AI has the potential to create or exacerbate inequality for groups across society. The risk of bias being built, knowingly or otherwise, into models or outcomes from AI systems leading to discrimination will need to be tackled. Governance will need to include consideration and adherence to equality law and guidance on collection and use of protected characteristic data.
Consumer law - Consumers are currently protected from unsafe and faulty products, including digital content and digital products. The introduction of AI into consumer products, however, brings a new set of harms. Consumers must be protected from bias and/or discrimination arising from algorithms and automated decision-making in consumer services and products that use AI. Consumer protection law will need to be developed to ensure it is fit for purpose.
This set of tools will need to be applied and enforced in coordination and comprehensively, using all the regulatory measures available.
Furthermore, as AI systems are dual use, there is potential for authoritarian states and hostile foreign actors to deploy AI for malicious and harmful purposes in military and law enforcement domains. AI systems available commercially could therefore be repurposed in ways that pose national security concerns. Consideration will therefore need to be given by Governments, in the development of AI regulation, to how foreign investment screening regimes and export controls can be applied to companies developing and deploying AI models, in order to prevent unwanted proliferation of capabilities to malicious actors.[22] Similarly, the development and deployment of AI systems depends upon access to compute hardware; foreign investment screening regimes and export controls applying to the intellectual property, designs and manufacturing equipment used to create semiconductor chips, and to the export of the chips themselves, will also need to be considered.
Legislative tools are one element of governance, but the toolbox should also include technical and procedural tools, such as standards and impact assessments, and should ensure public participation and oversight bodies are involved in the development, deployment and governance of AI. We describe these tools in more detail:
Standards - Standards hold an important role in any potential regulatory regime for AI. Standards have the potential to improve the transparency and explainability of AI systems, to detail data provenance and to improve procurement requirements.
Explainability would also be supported by standards that support reproducibility (being able to recreate a given result) and data versioning, whereby a snapshot can be taken of the AI in a specific state to enable a record of which input led to which output. It should be noted that explainability may not always be an option: the opaque nature of some AI systems places a hard limit on the pursuit of explainability and transparency. For example, popular approaches to AI, such as deep neural networks and reinforcement learning, often lack transparency and explainability, and thus it is difficult to audit the inner workings of these systems for robustness and trace a path from inputs to outputs.
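A minimal sketch of what such data versioning could look like is set out below, assuming a simple hash-based fingerprint of the model’s state; the record format is our illustration rather than any published standard:

# A sketch of ‘data versioning’: snapshotting a model’s state so each
# output can be traced back to the exact model version and input that
# produced it. The record format is illustrative.
import hashlib
import json
from datetime import datetime, timezone

def fingerprint(model_bytes: bytes) -> str:
    """Stable identifier for a model in a specific state."""
    return hashlib.sha256(model_bytes).hexdigest()[:16]

def log_decision(model_bytes: bytes, inputs: dict, output: str, audit_log: list) -> None:
    """Append a traceable (model version, input, output) record."""
    audit_log.append({
        "model_version": fingerprint(model_bytes),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": inputs,
        "output": output,
    })

audit_log: list = []
weights = b"serialised-model-weights"  # stand-in for a real model snapshot
log_decision(weights, {"income": 32000}, "approve", audit_log)
print(json.dumps(audit_log[0], indent=2))

Such a record would allow an auditor to establish which version of a model produced a given output, even after the model has been retrained.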
There is an emerging body of work on AI interpretability, even at the individual ‘neuron’ level, and companies such as Anthropic are investing heavily in developing methods and tools that would allow for greater explainability.[23] However, at present, these methods will need time before they mature into effective regulatory tools which can be standardised across industry. In the meantime, experts[24] have suggested that a regulatory system should place more emphasis on methods that sidestep the problem of explainability, looking at the outcomes of AI systems rather than the processes by which those outcomes are achieved.
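A minimal sketch of such outcome-focused scrutiny is set out below: comparing a system’s decision rates across groups without any access to its internals. The group labels and the 0.8 threshold (echoing the ‘four-fifths’ rule sometimes used in employment-discrimination contexts) are illustrative assumptions, not a prescribed regulatory test:

# Outcome-focused auditing: flag groups whose approval rate falls
# well below the best-served group's, without inspecting the model.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions, threshold=0.8):
    """Return groups whose rate is below `threshold` x the best group's rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items() if rate / best < threshold}

# Example audit log of (group, decision) pairs from a deployed system.
log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False)]
print(disparate_impact(log))  # {'B': 0.5}

An audit of this kind requires only a log of decisions, not proprietary model internals, which is precisely why outcome-focused methods can operate where explainability hits its hard limits.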
Furthermore, while standards are useful for safety, quality and security of products they are not especially well-suited to dealing with considerations of important and commonly held values such as agency, democracy, the rule of law, equality and privacy.
We welcome the work that is currently being done by the Alan Turing Institute on the AI Standards Hub[25]. On a less positive note, we are disappointed that the transparency standard developed by the Central Digital and Data Office (CDDO)[26] to assist with public-sector use of algorithms has been excluded from the DPDI Bill. While the standard is in its infancy, the exclusion of any form of transparency mechanism from the Bill is short-sighted. As the Bill is iterated we would like to see inclusion of this standard, with a view to iteration and development over time. Furthermore, we would like to see the inclusion of a transparency register specifically for public-sector organisations developing and deploying algorithmic tools. Such an approach would demonstrate that the Government takes transparency standards in the data and AI space seriously.
Impact assessments - Impact assessments (IAs) are a well-established method used to assess the human rights[27], equalities, data protection, financial and environmental impacts of a policy or technology ex ante. The process of an IA is in itself a critical tool for transparency. Public registers of IAs and the publication of IAs are also vital, providing the means for people, investigative journalists, researchers and civil society organisations representing groups within society to scrutinise what an AI system is doing, why and how, and what data is in use. Research by the Centre for Data Ethics and Innovation described different ‘tiers’ of transparency for different stakeholders.[28]
Algorithmic impact assessments (AIA) are being explored in a number of jurisdictions, notably Canada (see below), and across different sectors. Our report Algorithmic impact assessment: a case study in healthcare[29] makes recommendations to NHS AI Lab on how to use AIAs to ensure uses of public-sector data are evaluated and governed. This process should produce benefits across all of society; from the people affected by the technology and outcomes, to those developing and regulating it.
To support the development of algorithmic IAs we would call on Government to seriously reconsider the approach being taken in the current iteration of the DPDI Bill. We hope the current review and consultation of the Bill will see the Government roll back its proposed changes in relation to IAs and retain and strengthen the UK GDPR’s approach to accountability. For example, data IAs should be undertaken by all organisations considering data processing that is likely to result in a risk to the rights and freedoms of data subjects, not just by organisations that employ more than 250 individuals, as problematic applications of data-driven technologies and AI can equally come from small teams of innovators.
Public participation and external oversight bodies - involving public voices and external oversight bodies can bring more diverse perspectives to the development, deployment and governance of AI. Examples of approaches to take include:
Public participation and civic engagement in the governance of algorithmic systems assists with the development of systems that will benefit society and serve a necessary purpose. Furthermore, engagement with the communities which will be affected by the deployment of an AI system, for example within healthcare, law enforcement or financial services, will help ensure a diversity of views, expertise and experience can be considered in order to challenge bias or discrimination. We worked with NHSX to describe how such public participation can play a valuable and practical role in assessing the potential impacts of algorithms, published in our report Algorithmic Impact Assessment.[30] For more detail we recommend our report Algorithmic Accountability for the Public Sector[31] and the work of the Data Justice Lab[32].
Participatory data stewardship, namely involving people in the decision-making and governance of how personal data is used within specific sectoral systems and for specific uses, has the potential to support good use of data for innovative AI systems. Drawing on different established models of public participation, we describe how this could work for data governance in our report Participatory data stewardship.[33]
External oversight bodies can be beneficial as part of a regulatory framework in scrutinising AI systems that are of specific relevance to their sector or domain. For example, the West Midlands Police and Crime Commissioner uses an external ethics committee[34] to scrutinise the force’s procurement of AI-based technologies. A further benefit of external oversight bodies is the opportunity to develop public engagement and community participation.
Bringing people into the conversation, hearing their thoughts and working with them to develop better or different approaches and outcomes will not only create a more diverse oversight process, but will aid transparency and education of how private or public sector systems are being used in the community.
Ensuring compliance with, and enforcement of the tools for governance, will require auditing and regulator inspection.
We are still learning about the opportunities and risks of AI systems. The complexity of ensuring transparency of a system, the inability to guarantee explainability, the need to ensure streamlined ways of contesting decisions and seeking redress, the issues with embedded bias, and the challenge of predicting or determining outcomes are all areas that will continue to pose challenges for the foreseeable future. The need for strong oversight mechanisms across the board - not just for self-determined ‘high risk’ AI - is therefore rational, logical and necessary to ensure that the ongoing development of AI systems does not fall into the trap of ‘move fast and break things’, which can lead to serious harm to society and, in the long term, to growth and innovation.
The recent harms we have seen from algorithmic decision making tools such as the Ofqual grading algorithm[35] used to determine A-level grades during the Covid-19 pandemic in 2020, and the problems surrounding the Home Office’s visa streaming tool,[36] distinctly demonstrate that harms can appear even when the intentions are good. The requirement to undertake an impact assessment will enable ex ante review of an AI system, while auditing and regulatory inspection will provide an assessment of an AI system’s behaviour and impact ex post and over time.
Our report Regulate to Innovate[37] recommends that, for an inspection to be meaningful, regulators will need access to policies, processes and outcomes. This will enable review and monitoring of:
● the goals of the AI system,
● what the system seeks to achieve and where potential weaknesses lie,
● a company’s process for creating the system, including the evaluation metrics used, and
● the outcomes of the system for a range of different users.
Regulators will need access to the specific technical infrastructures, code and data which underlie the platform or algorithmic system. Ensuring they are given the necessary statutory powers to enable access to these systems will be vital. While some UK regulators already have powers to inspect AI systems developed by regulated entities, the inspection of systems becomes much more difficult when those systems are provided by third parties, as much of the information required for inspection is proprietary, and AI developers and tech companies are often unwilling to share information that they see as integral to their business model. Indeed, many prominent developers of AI systems have cited intellectual property and trade secrets as reasons to actively disrupt or prevent attempts to audit or assess their systems. With this in mind it will be paramount to ensure that access to third-party systems and information is included in any statutory regulation.
Understanding what is happening within many AI systems in order to understand the decisions being made is complex. The black box problem is well known. It is possible to review and scrutinise the data entering the system but the process of machine learning that takes place in order to present a decision is often opaque at best. Research is still developing as to what approach should be taken. Our report Examining the Black Box[38] identifies four approaches to assessing algorithms (one of the many areas covered by the term AI) via two methodologies, namely algorithm audit and algorithmic impact assessment. This dual approach would enable scrutiny of a system’s compliance through every stage of use, before, during and after.
Regulators - Our report Regulate to Innovate[39] proposed a route to regulation based on workshops with experts, developing thinking around the National AI Strategy. The report made a number of recommendations regarding regulatory capacity and coordination, based on the current relevant regulators who may be called upon to consider AI within their specific regulatory responsibilities. The experts consulted for that report recommended:
● expanded funding for regulators to help them deal with analytical and enforcement challenges posed by AI systems
● expanded funding and support for regulatory experimentation and the development of anticipatory and participatory capacity within individual regulators
● the development of formal structures for capacity sharing, coordination and intelligence sharing between regulators dealing with AI systems
● consideration of what additional powers regulators may need to enable them to make use of a greater variety of regulatory mechanisms.
Considering the approach needed to support existing sectoral regulators is one approach. There may also be benefit in establishing an entirely new cross-cutting regulatory body that takes on the ex ante requirements of regulation, providing oversight, certification and registration support for general purpose AI, and enabling the sectoral regulators to concentrate on and provide necessary oversight, guidance and enforcement when specific sectoral issues arise.
There are a number of models for ex ante regulators which could be applied. The US regulator the FDA already applies ex ante oversight to AI systems which form part of the class of ‘medical devices’, to which it applies a risk-based classification and ex ante oversight scheme. The UK’s Medicines and Healthcare products Regulatory Agency (MHRA) has similarly begun to develop regulation and standards (alongside the British Standards Institution) for software and AI as a medical device[40]. The approach being taken by these bodies could serve as a model for a regulator overseeing general purpose (non-sector-specific) AI systems, services and products, with responsibilities including:
● Overseeing foundational AI models and general purpose AI systems used across a range of sectors;
● Setting standards and rules about how AI products should be designed and deployed;
● Defining classes of products and classifications of risk, with specific ex ante oversight requirements and processes;
● Certifying products and systems before they go to market;
● Receiving and investigating complaints;
● Exercising statutory enforcement powers; and
● Managing a publicly available register of ADM systems[41] and a register of organisations, ensuring a named person is identified and verified (this may require Know Your Customer-style responsibilities/processes).
We do not believe that one overarching regulator for general and sector-specific AI would be beneficial; rather, a combination of cross-cutting and sectoral regulators, regulating horizontally and vertically, is essential.
Regardless of the approach taken, there will need to be substantial investment to ensure the right staffing, experience, knowledge and training is established. It is not just technical or digital skills that will be required: an array of expertise in areas such as law, ethics, philosophy, mathematics, science, competition, markets, politics and psychology will also be needed. Furthermore, clarity will be needed on concurrency of decision-making and enforcement should there be a need for cross-sectoral regulatory involvement.
Ultimately, legislation to determine regulators and regulatory involvement will be necessary to best serve the innovators who are seeking to develop AI in their domains. Regulation of AI is not an enemy of innovation; rather, it provides a necessary set of guardrails, guidance and support to enable the best ideas and opportunities to flourish and create a better society.
(Answer to question 6. What lessons, if any, can the UK learn from other countries on AI governance?)
Countries across the globe are at different stages of developing AI governance and primary legislation. There is much that can be learned from the differing approaches being taken.
European Union
The European Commission presented its proposal for an AI Act in April 2021. Its risk based approach:
● bans certain AI systems (“prohibited” systems);
● gives certain obligations to systems classified as “high risk” based on the sector they are deployed in;
● sets some minimal transparency requirements on “limited risk” systems (e.g. chatbots); and
● suggests minimal-risk AI systems follow codes of conduct.
Prohibited systems include, for example, AI capable of manipulation through subliminal techniques and some forms of remote biometric identification. High-risk systems are designated based on risk to health, safety and/or fundamental rights (e.g. systems used for recruitment or credit scoring), or use in risky areas (e.g. law enforcement). The bulk of the regulation is therefore focused on high-risk systems.
Obligations for these systems include compelling the provider to:
● have a risk management system in place
● follow data governance requirements
● keep technical documentation
● enable human oversight of the system
● test for accuracy, robustness and cybersecurity.
These obligations will be operationalised into technical standards by European standards bodies. Complying with these standards will mean the AI system is deemed compliant with the AI Act.
The approach taken by the EU to high-risk obligations, however, is seen by some as “pretty light-weight”,[42] essentially compelling due diligence via self-assessment by the companies themselves. This is expected to occur in roughly 90% of use cases. The possibility of third-party checks by regulatory authorities would usually only come further down the line.
There are lessons to be learned from the EU’s approach. We have published papers on improving and strengthening the proposals,[43] [44] [45] including eighteen recommendations revolving around three areas.
Other experts in the field have commented on the lack of “public engagement when formulating policy”,[46] most evident in the use of standards-setting organisations to operationalise the AI Act’s requirements: these bodies are strongly dominated by industry players,[47] who lack the ethical and legal expertise to tackle the fundamental rights and societal-level implications of AI systems. Pursuing a more holistic, socio-technical approach is recommended, potentially by establishing a standing panel of representative users (a type of ‘citizens’ assembly’) which has a say in ex ante and ex post requirements for AI systems, and which has representation in standards-setting organisations.
Greater consideration must also be given to systemic risks. It is interesting that the AI Act does not follow the approach taken in the EU’s Digital Services Act (DSA), which compels very large online platforms to conduct risk assessments about systemic risks (Article 26) and take steps to mitigate those risks (Article 27). We would recommend that a future governance regime for AI considered this risk explicitly, as the threat of widespread manipulation or discrimination is one of the main threats posed by AI systems.
We would also suggest that future AI regulation learns from the EU’s experience the importance of considering the AI value chain. The European Commission’s proposal did not consider ‘general purpose’ AI systems, or explicitly address regulation of open source AI models. Both have become among the most contentious points in the legislative process, as there is recognition of the risks they pose, but the complexity of the AI value chain makes it difficult to cleanly allocate responsibility: for example, who should be responsible for data governance requirements under the AI Act - the developer (“provider” in EU parlance) or the deployer (the “user”)?
The AI Act, as proposed by the European Commission, does not impose any transparency or accountability requirements on systems that pose less than high risk (with the exception of AI systems that may deceive or confuse consumers), which include the dominant commercial business-to-consumer (B2C) services (e.g. search engines, social media, some recommendation systems, health monitoring apps, insurance and payment services). Regardless of the type of risk (high risk or limited risk), this approach leaves a significant gap in accountability requirements for both large and small players that could be responsible for creating unfair AI systems. Responsibility measures should aim both at regulating the infrastructural power of large technology companies that supply most of the tools for ‘building AI’ (such as large language models, cloud computing power, text and speech generation and translation), and at creating responsibility requirements for smaller downstream providers who make use of these tools to construct their underlying services.
And should open source models comply with these obligations if they are potentially used in high-risk areas? These questions are in the process of being answered through the EU’s ordinary legislative process, but a key lesson is to think about them from the outset. Whether and how research and development should be regulated is likewise worth considering: the EU will not include R&D rules, as EU ‘product legislation’ only captures products that go to market, making R&D exempt. This poses particular challenges for AI, which is not a traditional product: for example, some AI systems can arguably always be in the development stage, as they may be updated hundreds of times a day. In addition, many of the most powerful AI systems are used for R&D and not placed on the market, but should that mean they are forever out of scope? These are some of the questions worth considering for future governance of cutting-edge AI systems.
United States
An Algorithmic Accountability Act (US AAA) was introduced in February 2022, and, in a similar manner to the EU, it suggests a risk-based approach: it proposes that organisations deploying such systems must take several concrete steps to identify and mitigate their social, ethical and legal risks. However, the Act has yet to win support in the Senate or the House. It is still worth reflecting briefly on its approach, which is a very top-level one: it defines critical terms, but then delegates implementation to the Federal Trade Commission (FTC). There are some similarities between the US AAA and the approach employed by the EU: the AAA would compel impact assessments, considering effects both before and after deployment, thus mirroring the conformity assessments and post-market monitoring obligations mandated by the EU AI Act.
One lesson to be learned when comparing the US AAA with the EU AI Act relates to how it addresses the definition of AI. This has been a very contentious point in the EU, and there is yet to be a satisfactory answer. The US AAA’s “primary merit”[48] is that it is framed in terms of automated decision systems (ADS), rather than ‘AI systems’, thus avoiding the tricky question of what an AI system is, and instead accounting for the fact that “the level of automation, in decision-making processes, is best understood as a difference of degree on a spectrum.” This, according to some academics,[49] gives the US AAA the benefit of being more future-proof than the EU’s definition based on a list of techniques (as proposed by the Commission), which runs the risk of future systems falling out of scope as the state of the art in AI development changes over time. The US AAA avoids this by focusing simply on automated decisions.
The other key development from the USA is the recent Blueprint for an AI Bill of Rights, a set of non-binding principles outlined by the White House. It contains several vital elements that we support, including:
● Consultation through public participation with those affected by the utilisation and deployment of AI systems,
● Requirement for algorithmic impact assessments to be undertaken to protect against discrimination, the emphasis on consent and restrictions around continuous surveillance monitoring,
● Requirement for notice and explanation of when an automated system is being used and how a decision/outcome has been reached with the right to seek explanation in order to challenge or seek redress, and
● Right to opt out or have access to a person who can consider and remedy problems.
Many of these points are familiar to the UK already through the UK GDPR. It is disappointing that, just as the USA is beginning to focus on and encourage a rights-based approach, the UK is seeking to weaken citizens’ current rights and protections.
China
In July 2017, China released a national AI strategy: the New Generation Artificial Intelligence Development Plan (AIDP). As its name suggests, the document sets developmental milestones for 2020, 2025 and 2030. The strategy sets the ambition for China to be the world leader in AI and focuses primarily on innovation. It has since been followed by ethical principles outlined by the National Governance Committee for the New Generation Artificial Intelligence, first in June 2019 and then in a later ethics code in 2021, which looked at ensuring ethics across the “entire lifecycle of AI.” These soft measures are expected to be followed by harder ones, as China aims to codify ethical standards into law by 2025.
The Chinese strategy therefore clearly considers both innovation and the ethical implications of AI. The latter have yet to be made into hard rules, and multiple bodies have set different guidelines. One lesson that has therefore been drawn is the need to “delineate responsibility more clearly within government.”
Another lesson drawn by researchers[50] is that the Chinese approach is much less comprehensive than the EU’s, including because the use of AI by the public sector has in general not been covered (to the extent that it has in the EU and the US). Some of the most significant systemic-level risks posed by AI can come from public sector (mis-)use, so this is a glaring oversight which should not be replicated in UK AI governance.
Canada
In 2019, Canada released its Directive on Automated Decision-Making[51]. It is one of the first pieces of legislation to identify and mitigate risks from AI systems, considering how they impact individual rights, economic interests, health and well-being, and sustainability. It focuses largely on public sector usage, and came into effect in 2020. It includes the requirement to complete an impact assessment[52] prior to the development of any automated decision-making system. Depending on the risk identified in the impact assessment, the Directive sets increasingly rigorous mitigation requirements, such as extensive peer review, notice, human intervention, the provision of a “meaningful explanation”, or training for personnel.
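The questionnaire-and-tiers shape of that regime can be summarised in a short sketch. The scores, cut-offs and mitigation lists below are illustrative assumptions rather than the Directive’s actual values:

# A sketch of a tiered impact-assessment regime: a scored
# questionnaire maps to an impact level, and each level triggers
# increasingly rigorous mitigations. All values are illustrative.
def impact_level(score: int) -> int:
    """Map a raw assessment score to an impact level (1-4)."""
    for level, ceiling in enumerate((25, 50, 75), start=1):
        if score <= ceiling:
            return level
    return 4

MITIGATIONS = {
    1: ["plain-language notice"],
    2: ["notice", "human intervention on request"],
    3: ["notice", "peer review", "meaningful explanation of decisions"],
    4: ["notice", "external peer review", "meaningful explanation",
        "human review before decisions take effect", "staff training"],
}

score = 62  # hypothetical questionnaire result
print(impact_level(score), MITIGATIONS[impact_level(score)])  # 3 [...]

The appeal of this structure is proportionality: lightweight obligations for low-stakes systems, escalating to human-in-the-loop requirements where the assessed impact is greatest.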
While there has been criticism of the lack of enforcement and of the failure of agencies to complete the IAs, the Government of Canada undertakes an annual review[53] of the Directive and makes recommendations for improvement.
In conclusion, whatever path the UK takes to the development of regulation and standards, we should ensure we are internationally aligned where possible. We want to see the UK continue to lead in innovative AI, but the determination to make the UK a ‘superpower’ in AI will require clear regulation based on the precautionary principle. Across the world we are seeing attempts to establish ex post regulation to protect society from harms embedded over the past two decades by unregulated technologies. The luxury of hindsight we now have should ensure that our quest for growth from AI is not built on the same insecure foundations.
As the world now seeks to mend the damage done, a collaborative and harmonised approach with international partners, as opposed to a standalone light-touch approach, would be preferable. We believe this will better support domestically produced AI systems, give companies seeking to operate beyond the UK’s domestic AI sector the very best chance of success and opportunity for growth, and enable citizens to participate in the design, development and deployment of systems that will impact all our lives. We hope the UK continues to be a flag bearer for human-centred values, fairness, transparency and well-constructed regulation.
(November 2022)
[1] Information Commissioner’s Office. Guidance on AI and data protection. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/key-dp-themes/guidance-on-ai-and-data-protection/
[2] Office for Artificial Intelligence, Department for Digital, Culture, Media & Sport, and Department for Business, Energy & Industrial Strategy. (2020). Guidelines for AI procurement. Available at: https://www.gov.uk/government/publications/guidelines-for-ai-procurement
[3] Google. Artificial Intelligence at Google: Our Principles. Available at: https://ai.google/principles/
[4] BBC. Machine Learning Engine Principles and checklist. Available at: https://www.bbc.co.uk/rd/projects/bbc-machine-learning-engine-principles
[5] NYU School of Law Policing Project. Axon ethics board. Available at: https://www.policingproject.org/axon-ethics-board
[6] Microsoft. How Microsoft drives responsible AI. Available at: https://www.microsoft.com/en-us/ai/our-approach?activetab=pivot1%3aprimaryr5
[7] The AI Standards Hub. About the AI Standards Hub. Available at: https://www.aistandardshub.org/the-ai-standards-hub/
[8] Central Digital and Data Office. (2022). Algorithmic Transparency Standard. Available at: https://www.gov.uk/government/collections/algorithmic-transparency-standard
[9] Competition and Markets Authority, Information Commissioner's Office, Ofcom, and Financial Conduct Authority. (2021). Digital Regulation Cooperation Forum. Available at: https://www.gov.uk/government/collections/the-digital-regulation-cooperation-forum
[10] AWO Agency. Data protection and digital information Bill - Impact on data rights. Available at: https://www.awo.agency/files/Briefing-Paper-3-Impact-on-Data-Rights.pdf
[11] Ada Lovelace Institute. (2021). Regulate to Innovate. Available at: https://www.adalovelaceinstitute.org/report/regulate-innovate/
[12] Digital Futures Lab. Reframing AI Governance: Perspectives from Asia. Available at: https://assets.website-files.com/62c21546bfcfcd456b59ec8a/62fdf28844227200c89d3ffc_%E2%80%A2Reframining_AI_Governance-Perspectives_from_Asia.pdf
[13] Department for Digital, Culture, Media and Sport. (2022). Establishing a pro-innovation approach to regulating AI. Available at: https://www.gov.uk/government/publications/establishing-a-pro-innovation-approach-to-regulating-ai/establishing-a-pro-innovation-approach-to-regulating-ai-policy-statement
[14] AI Applied. (2020). Davos Panel Explores Precision Regulation of AI & Emerging Technology. Available at: https://ai-applied.net/2020/01/22/davos-panel-explores-precision-regulation-of-ai-emerging-technology/
[15] OECD. OECD AI Principles overview. Available at: https://oecd.ai/en/ai-principles
[16] Hall and Pesenti. Growing the Artificial Intelligence Industry in the UK. Available at: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/652097/Growing_the_artificial_intelligence_industry_in_the_UK.pdf
[17] Ibid
[18] House of Lords Select Committee. AI in the UK: ready, willing and able?. Available at: https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf
[19] Ibid
[20] Benaich and Hogarth. (2022). State of AI Report. Available at: https://docs.google.com/presentation/d/1WrkeJ9-CjuotTXoa4ZZlB3UPBXpxe4B3FMs9R9tn34I/edit#slide=id.g164b1bac824_0_2748
[21] Ada Lovelace Institute. (2022) Rethinking data and rebalancing power. Available at: https://www.adalovelaceinstitute.org/wp-content/uploads/2022/11/Ada-Lovelace-Institute-Rethinking-data-and-rebalancing-digital-power-FINAL.pdf
[22] Department for Business, Energy and Industrial Strategy. (2022). National Security and Investment Act: details of the 17 types of notifiable acquisitions. Available at: https://www.gov.uk/government/publications/national-security-and-investment-act-guidance-on-notifiable-acquisitions/national-security-and-investment-act-guidance-on-notifiable-acquisitions#ai
[23] Anthropic. Available at: https://www.anthropic.com/#papers
[24] Ada Lovelace Institute. (2021). Regulate to Innovate. Available at: https://www.adalovelaceinstitute.org/report/regulate-innovate/
[25] AI Standards Hub. Available at: https://aistandardshub.org
[26] Central Digital and Data Office. Last updated 2022. Algorithmic Transparency Standard. Available at: https://www.gov.uk/government/collections/algorithmic-transparency-standard
[27] Access Now. Nonnecke, B., Dawson, P. Human rights impact assessments for AI: analysis and recommendations. Available at: https://www.accessnow.org/cms/assets/uploads/2022/11/Access-Now-Version-Human-Rights-Implications-of-Algorithmic-Impact-Assessments_-Priority-Recommendations-to-Guide-Effective-Development-and-Use.pdf
[28] Centre for Data Ethics and Innovation. (2021) Britain Thinks: Complete Transparency, Complete Simplicity. Available at: https://www.gov.uk/government/publications/cdei-publishes-commissioned-research-on-algorithmic-transparency-in-the-public-sector
[29] Ada Lovelace Institute. (2022) Algorithmic impact assessment: a case study in healthcare. Available at: https://www.adalovelaceinstitute.org/project/algorithmic-impact-assessment-healthcare/
[31] The Ada Lovelace Institute and Open Government Partnership. (2020). Algorithmic Accountability for the Public Sector. Available at: https://www.adalovelaceinstitute.org/project/algorithmic-accountability-public-sector/
[32] Data Justice Lab. Exploring social justice in an age of datafication. Available at: https://datajusticelab.org
[33] The Ada Lovelace Institute. (2021). Participatory data stewardship. Available at: https://www.adalovelaceinstitute.org/wp-content/uploads/2021/11/ADA_Participatory-Data-Stewardship.pdf
[34] West Midlands Police and Crime Commissioner. Information Ethics Committee. Available at: https://www.westmidlands-pcc.gov.uk/ethics-committee/
[35] Ada Lovelace Institute. Jones, E, Safak. C. (2020). Can algorithms ever make the grade? Available at: https://www.adalovelaceinstitute.org/blog/can-algorithms-ever-make-the-grade/
[36] The Guardian, McDonald, H. (2020). Home Office to scrap ‘racist algorithm’ for UK visa applicants. Available at: https://www.theguardian.com/uk-news/2020/aug/04/home-office-to-scrap-racist-algorithm-for-uk-visa-applicants
[37] Ada Lovelace Institute. (2021). Regulate to Innovate. Available at: https://www.adalovelaceinstitute.org/report/regulate-innovate/
[38] Ada Lovelace Institute. (2020). Examining the Black Box. Available at: https://www.adalovelaceinstitute.org/wp-content/uploads/2020/04/Ada-Lovelace-Institute-DataKind-UK-Examining-the-Black-Box-Report-2020.pdf
[39] Ada Lovelace Institute. (2021). Regulate to Innovate. Available at: https://www.adalovelaceinstitute.org/report/regulate-innovate/
[40] Medicines and Healthcare products Regulatory Agency. Updated 17th October 2022. Software and AI as a Medical Device Change Programme – Roadmap. Available at: https://www.gov.uk/government/publications/software-and-ai-as-a-medical-device-change-programme/software-and-ai-as-a-medical-device-change-programme-roadmap
[41] Ada Lovelace Institute, (2020) What forms of mandatory reporting can help achieve public-sector algorithmic accountability? Available at: https://www.adalovelaceinstitute.org/event/mandatory-reporting-public-sector-algorithmic-accountability/
[42] Sciences Po. (2022). Interview: The EU Artificial Intelligence Act: 4 Questions with Joanna Bryson. Available at: https://www.sciencespo.fr/public/chaire-numerique/en/2022/11/07/interview-the-eu-artificial-intelligence-act-4-questions-with-joanna-bryson/
[43] Ada Lovelace Institute, Lilian Edwards, (2022) Expert opinion: Regulating AI in Europe. Available at: https://www.adalovelaceinstitute.org/report/regulating-ai-in-europe/
[44] Ada Lovelace Institute, Alexandru Circiumaru, (2022). People, risk and the unique requirements of AI. Available at: https://www.adalovelaceinstitute.org/policy-briefing/eu-ai-act/
[45] Ada Lovelace Institute. (2022). Expert explainer: AI liability in Europe. Available at: https://www.adalovelaceinstitute.org/resource/ai-liability-in-europe/
[46] Huw Roberts, Josh Cowls, Emmie Hine, Jessica Morley, Vincent Wang, Mariarosaria Taddeo & Luciano Floridi. (2022).Governing artificial intelligence in China and the European Union: Comparing aims and promoting ethical outcomes, The Information Society, DOI: 10.1080/01972243.2022.2124565 Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3811034
[47] Politico. (2022). Harmful AI rules: Now brought to you by Europe & Co., Inc. Available at: https://www.politico.eu/article/harmful-ai-rules-european-union-corporate-influence/
[48] Mökander, J., Juneja, P., Watson, D.S. et al. The US Algorithmic Accountability Act of 2022 vs. The EU Artificial Intelligence Act: what can they learn from each other?. Minds & Machines (2022). Available at: https://doi.org/10.1007/s11023-022-09612-y
[49] Ibid
[50] Aneja, U (Ed.). (2022). Reframing AI Governance: Perspectives from Asia. Digital Futures Lab; Konrad-Adenauer-Stiftung. Available at: https://assets.website-files.com/62c21546bfcfcd456b59ec8a/62fdf28844227200c89d3ffc_•Reframining_AI_Governance-Perspectives_from_Asia.pdf
[51] Government of Canada. Directive on Automated Decision-Making. Available at: https://www.tbs-sct.canada.ca/pol/doc-eng.aspx?id=32592&section=html
[52] Government of Canada. Basics of Impact Assessments. Available at: https://www.canada.ca/en/impact-assessment-agency/services/policy-guidance/basics-of-impact-assessments.html
[53] Government of Canada Wiki. Third Review of the Directive on Automated Decision-Making. Available at: https://wiki.gccollab.ca/Third_Review_of_the_Directive_on_Automated_Decision-Making