Written evidence submitted by the LSE Law, Technology and Society Group (LTS)

(GAI0036)

Prepared by Dr. Giulia Gentile and Professor Andrew Murray

 

How effective is current governance of AI in the UK?

Currently, the development and deployment of AI in the UK is deregulated. There is no technology-specific UK regulation: the development and deployment of AI is regulated, if at all, according to its use or application. For example, the use of AI in financial trading is regulated by the Financial Conduct Authority and the Bank of England, which are jointly consulting on AI in the financial sector in DP5/22.[1] In their consultation paper they suggest that the lack of a coherent AI regulatory framework is holding back the adoption of AI in their sector: ‘one of the challenges to the adoption of AI in UK financial services is the lack of clarity surrounding the current rules, regulations, and principles, in particular, how these apply to AI and what that means for firms at a practical level.’ This is reflected across industry sectors, with the NHS setting up the multi-agency advisory service (MAAS)[2] to share experience and develop ‘a clear set of rules around the evidence and safety standards that innovators need to meet when developing technologies like AI.’

 

This is not the only model. The EU has prepared the draft AI Act, which takes a mostly technology-neutral, risk-based approach, and Canada has taken a similar approach in its Artificial Intelligence and Data Act. With that said, how effective is the UK approach and what benefits or disbenefits does it bring? Its effectiveness is limited. The only enforcement route open to individuals who object to being profiled, or to their data being used in training datasets or elsewhere, is through the UK GDPR. This is a generic framework designed to regulate the use and misuse of personal data, not the specific application of AI systems. The only truly helpful provision is Article 22, which allows an individual (if they are aware of the processing) to opt out of solely automated decision-making. It does not regulate the use of personal data in training datasets or, more widely, the risk of error or bias in machine learning. The EU approach will provide greater protection to individuals through the prohibition of certain systems and the management of high-risk applications, although, as an approach built on standards-based models, it is perhaps not the best model for regulation either.

 

The current UK regulatory model affords researchers considerable scope. Beyond the data protection framework, there are almost no limitations on investment in AI research. This creates a regulatory gap: beyond data protection law, we currently rely on developers and researchers to apply ethical models. There is a strong movement in AI ethics, but as LSE LTS members Julia Black and Andrew Murray wrote in 2019, this is a ‘retreat from regulation’ and risks ‘ethics washing’.[3] We note that ethics can never be a substitute for regulation. Universities and other research institutions that apply ethical values and employ ethics boards risk losing out to private sector researchers who may follow a profits-led approach. In an ethical model the risk is that good (ethical) research is crowded out by poor (non-ethical) research. We as university researchers are concerned that a reliance on ethical models may therefore be damaging to ethical research by UK university researchers.

 

We believe the deregulated approach cannot flourish in the medium term. It risks harming ethical research; it risks harm to individuals, both through the use of their data and through bias and error in application; and it perpetuates the myth of ethical compliance.

 

What measures could make the use of AI more transparent and explainable to the public?

There are many reasons for enhancing AI transparency.[4] Strengthening the explainability of AI systems would accordingly increase transparency. Transparency in AI decision-making procedures would in turn improve the procedural fairness of outputs and, by extension, the legitimacy of AI usages. AI explainability is also crucial in pursuing remedies against AI-based decisions, and therefore supports the protection of individuals’ rights. Additionally, explainable AI would aid the activities of judges and administrators involved in resolving disputes over AI-based decisions. Scholars have long advanced demands for transparency in the field of AI.[5]

What is meant by ‘explainable AI’ is debated. The concept can be interpreted as concerning the explainability of AI decisional processes (procedural explainability), but also the full intelligibility of AI by operators and users (functional explainability).[6] While the first approach focuses on disentangling the AI decisional mechanisms, the second entails allowing the average user to understand the entire functioning of an AI system, from data selection to the production of output decisions. Whichever definition we choose, setting the right ‘amount’ of explainability remains challenging: users with different levels of understanding of the technology will be able to comprehend more or less about an AI system’s processes and functioning.

A subsequent question concerns the quantum of the explanation. To give an example, a model predicting that a patient has the flu may clarify that the symptoms of sneezing and headache contributed to the prediction. However, it is questionable whether such an explanation satisfies a doctor’s need to understand the AI, or adds significant value to a clinical decision-support tool. The nature and the ‘quantity’ of explanations is therefore crucial and is ultimately context-dependent.
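To make the point concrete, below is a minimal, purely hypothetical sketch in Python (our own illustration; the symptom data, feature names and model choice are invented and not drawn from any deployed clinical system). It shows the kind of feature-level explanation described above: the model can report which symptoms contributed to a flu prediction, but that output alone does not tell a doctor whether the prediction deserves clinical weight.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical symptom features and training data (0 = absent, 1 = present).
features = ["sneezing", "headache", "fever", "fatigue"]
X = np.array([
    [1, 1, 1, 0],
    [1, 0, 0, 0],
    [0, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = flu, 0 = no flu

model = LogisticRegression().fit(X, y)

# 'Explain' a single prediction by listing each symptom's contribution
# (model coefficient multiplied by the symptom value) to the model's score.
patient = np.array([[1, 1, 0, 0]])  # presents with sneezing and headache only
print("Predicted probability of flu:", round(model.predict_proba(patient)[0, 1], 2))
for name, contribution in zip(features, model.coef_[0] * patient[0]):
    print(f"{name}: contribution {contribution:+.2f}")

# The output identifies which symptoms pushed the prediction towards 'flu',
# but says nothing about whether the prediction is clinically reliable,
# illustrating why the appropriate quantum of explanation is context-dependent.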

These complexities linked to the fuzziness of AI could be tackled by imposing a general legal duty to provide explainability techniques when designing and marketing AI systems. Nevertheless, due to the highly complex and fragmented AI landscape, clearly stipulating what techniques should be imposed on AI systems appears difficult, if not impossible. Different explainability techniques may be best suited for different AI usages.

For this reason, we suggest that a principle-based approach to the explainability of AI systems would prove more effective. On the one hand, it would offer a list of objectives guiding the design and operationalisation of explainable AI. On the other hand, such principles would not become obsolete with the advancement of technology and could adapt to emerging or future technologies.[7]

It has been suggested that making visible the three levels of context in which AI operates, (1) technological, (2) decisional and (3) organisational, would increase the social explainability of AI.[8] For instance, the technological context of AI could be made more visible by tracing the AI’s past decision outputs and people’s interactions with those outputs; the decision-making dimension could be made more visible by sharing the local context of past decisions and in-situ access to decision-related (crew) knowledge; and the organisational context by sharing organisational meta-knowledge and practices.

It has further been suggested that, to make AI more explainable, the social context in which AI operates should shape the design of AI systems. AI systems should be designed bearing in mind the ‘4Ws’ questions, with the ‘What’ question ranking first in terms of importance.

Accordingly, we propose a series of principles that should guide the design of explainability techniques:[9]

  1. Explainability techniques should be effective in enhancing human understanding of the decision-making process used by the AI system, the training data deployed, and how the decision was reached.
  2. The datasets used to train AI systems should be made publicly available on request. While recognising the tension with trade secrecy and with investment in algorithms, it is in the public interest that researchers, regulators and the public are aware of how algorithms are trained.
  3. The algorithms embedded in AI systems should be subject to transparency audits carried out by the competent authority. While recognising the tension with trade secrecy and with investment in algorithms, it is important that a competent audit programme be developed. The European AI Act allows for this and, as a result, AI developers will likely be ready to comply.
  4. The competent authority shall have the power to access the algorithms used in AI-systems. We suggest that this solution strikes a fair balance between the right of the public to obtain access to algorithms affecting their interests and the interest of market operators to protect their trade secrets.
  5. Explainability techniques should be operator- and user-centred. Explainable AI should not only be concerned with technical explainability, but also with the context in which the explanation will be used and the ability of AI operators and users to understand these explanations.
  6. Explainable techniques should feature operator- and user-friendly interfaces. The interface design should be easily understandable and accessible by AI operators, users and/or addressees of AI-decisions.
  7. Explainability techniques should be subject to risk assessment to minimise risks of errors. Research shows that users tend to misuse explainability techniques and blindly trust these systems.[10]
  8. Explainability techniques should include the right to receive a statement of reasons for an AI-based decision. This must be sufficiently clear to allow parties to take further action, including seeking review of the decision.
  9. Explainability techniques are required for both public- and private-sector use of AI.
  10. Explainability techniques should include the right to obtain independent expert advice on the AI-system. 

 

How should decisions involving AI be reviewed and scrutinised in both public and private sectors?

Currently, there is no uniform approach to the scrutiny or the review of decisions involving AI in both the private and public sectors. We see three possible regulatory options.

  1. A first approach is to subject decisions involving AI to the principles that exist for any other legal instrument or Act impacting the legal position of an individual. Under this option, the principles to be applied in reviewing AI-shaped decisions would depend on the area of law (e.g. employment, administrative law, etc) and the context (e.g. private or public) in which the decision operates. Such a path, which is the currently applicable one in the absence of specific AI regulation, may have the drawback of not capturing the specificities of AI when used in decision-making contexts.
  2. A second approach is to devise specific principles applicable to AI-shaped decisions. This more horizontal approach would specifically address the challenges posed by AI systems, but its effectiveness may prove limited due to the interaction between AI and the rules governing the substantive field of law in which AI operates. In other words, it may be challenging to devise horizontal principles applicable to AI-shaped decisions regardless of the area of law and the context in which these decisions are adopted. To give an example, AI used in the employment law field should take into account the rights and obligations of employees and employers, as well as all other rules governing the relationships between the two as provided under employment law.
  3. A third and final option is a hybrid model combining the first and second options: a mix of specific and general principles applicable to AI-shaped decisions. Challenges remain, including identifying the AI-specific principles and ensuring that they operate coherently with those applicable to other legal acts.

We argue that the third option is the most effective one: it best tackles the hybrid nature of AI-shaped decisions, which are different in origin and procedure but manifest their effects in the physical world like any other legal act.

The current options for challenging the use of AI in decision-making are determined by the substantive areas of law in which AI systems operate. For instance, the use of facial recognition technology in the context of police operations, which may lead to arrests and monitoring, has been challenged under data protection rules, but also under fundamental rights and equality law.[11] Moreover, Brown v BCA Trading Ltd[12] offers an instance of a judgment stemming from a challenge concerning the application of e-discovery rules. We therefore submit that current usages of AI in the public sphere may be subject to scrutiny, especially by the courts. While this approach is valuable, one may wonder whether involving administrative authorities as a preliminary stage before initiating litigation would be beneficial in decreasing the potential costs of challenging AI-shaped decisions.

The use of AI by private entities seems to be somewhat shielded from public scrutiny. The use of facial recognition, for instance, by supermarkets or other private entities, is prima facie less open to challenge because of the Data Protection Act 2018 rules on legitimate interest and legal obligation purposes.[13] Further, due to the lack of explainability requirements, AI tools employed in the private sector may lack transparency, and, as a result, individuals may not be able to understand their functioning and accordingly identify potential violations of individual rights.

We argue that the avenues for challenging AI-shaped decisions, in both the private and public sectors, could be improved in two ways. First, by introducing a duty of explainability, as discussed above. AI users would then be in a position to understand how AI tools have reached decisions and therefore identify issues with the AI decision-making process. Second, by creating an AI authority acting as an administrative body overseeing the use of AI in both the public and private sectors. This body could act as a first gateway for challenges to AI-shaped decisions, lessening the burden on courts and the potential court fees for applicants.

 

How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?

The UK’s National AI Strategy proposes a pro-innovation, sectoral approach. This has benefits and disadvantages. As the strategy notes, this allows sector regulators to map their regulatory response onto the application of AI technology (as the financial regulators and the NHS are doing). It allows tailored guidance to developers and creates a flexible regulatory environment. It fails, though, to address overlaps and gaps. Technology that cuts across sectors (say, from the financial sector to employment) can be subject to uncoordinated regulation by different regulators. Worse, general-purpose technology falls into a gap and emerges unregulated.

 

What is needed is (a) regulatory cooperation and (b) long-stop regulation. Regulatory cooperation addresses the lack of coordination. This suggests the need for a forum similar to the Digital Regulation Cooperation Forum or a macro-regulator such as the AI Standards Hub or the Alan Turing Institute. The need for long-stop regulation suggests the need for a data protection-style regulator and framework to regulate (non-personal) data abuses and data transfers (as the proposed EU Data Act does), to capture general-purpose technology within a regulatory scheme, and to regulate the use of AI in public services. We propose an integrated regulatory model in which sectoral regulators provide granular responses to the application of AI, with oversight and coordination by an AI regulator who might also provide long-stop and public sector regulation.

[Diagram: simplified integrated regulatory model, with sectoral regulators overseen and coordinated by an AI regulator]

 

The above is a simplified model showing the sector-specific roles of the Financial Conduct Authority/Bank of England/Prudential Regulation Authority for FinTech, the NHS and the Medicines and Healthcare products Regulatory Agency for HealthTech, and the Gambling Commission for gaming. We would add many more sectoral regulators, such as the Solicitors Regulation Authority, Bar Council and Law Society for England and Wales for LawTech, and the various institutes of taxation and accounting for ReportingTech.

 

There is clear overlap between, say, the Bank of England, the Solicitors Regulation Authority and the Institute of Chartered Accountants in England and Wales in AI tools applicable to financial products, mergers and acquisitions. The AI regulator would have a responsibility, like the DRCF, to coordinate sectoral regulators where overlap occurs.

 

The AI regulator should also take on responsibility for long-stop regulation of general AI tools and regulatory gaps. This can be done in two phases. The first is an immediate role as the oversight regulator for public applications of automated decision-making by AI tools. Currently there is no external oversight of the use of automated decision-making by government departments (except by the ICO where the UK GDPR is engaged). We believe there is a need for a specific oversight body. The second phase would be to develop a legal-regulatory ‘playbook’ for the development and use of general-purpose AI in the style of the FCA conduct rules, with a regulatory sandpit for developers. The AI regulator would be responsible for developing these rules in consultation with sector-specific regulators. This would require a sufficiently funded regulator with enforcement powers, created either by extending the role and authority of the ICO (as the Online Safety Bill has done with Ofcom) or by establishing a new regulatory authority.

 

What lessons, if any, can the UK learn from other countries on AI governance?

The proposed EU regulation on AI and the proposed EU directive on AI civil liability[14] offer interesting examples from which the UK could learn. We start by discussing the EU regulation on AI and then move on to the EU directive on non-contractual civil liability for AI.

EU proposal for a regulation on AI

While there are issues with the current version of the proposal,[15] there are also interesting approaches and regulatory techniques. First, the proposal stresses the importance of ensuring the protection of fundamental rights in the field of AI regulation.[16] This reflects a human-centric attitude towards the deployment of AI: fundamental rights ensure that individuals remain centre stage in the face of technological advancement. Second, the proposal lays down human oversight requirements.[17] It details how such oversight is to be carried out, and ultimately protects individuals from AI-only-shaped decisions. This offers an additional layer of protection for individuals and enhances general trust in the use of AI. Finally, the proposal identifies areas in which AI systems are to be prohibited.[18] The regulation prevents the use of AI systems for social scoring mechanisms and the indiscriminate use of real-time biometric identification systems. Both deployments of AI are deemed contrary to the values of democracy and fundamental rights.

EU proposal for a directive on AI civil liability

This proposal is of interest in so far as it provides evidentiary rules in the context of AI civil liability claims. It grants courts the power to order the disclosure of evidence that is necessary and proportionate to support a potential claim.[19] In deciding how disclosure should be carried out, courts should consider the interests of all parties, and if a defendant refuses to disclose evidence, the Directive imposes a presumption of non-compliance with the relevant duty of care on the part of the AI provider. The Directive also introduces a rebuttable presumption of a causal link between the fault of the defendant operating the AI system and the output produced (or not produced) by that system.[20] The presumption will apply when a series of cumulative conditions are met. These include when it may be considered reasonably likely that the fault of the defendant has influenced the output produced by the AI system, or when the claimant has demonstrated that the output produced by the AI system, or the failure of the AI system to produce an output, gave rise to the damage.

 

 

 

 

(November 2022)

 


[1] https://www.bankofengland.co.uk/prudential-regulation/publication/2022/october/artificial-intelligence

[2] https://transform.england.nhs.uk/ai-lab/ai-lab-programmes/regulating-the-ai-ecosystem/

[3] Julia Black & Andrew Murray, ‘Regulating AI and Machine Learning: Setting the Regulatory Agenda’ (2019) 10(3) European Journal of Law and Technology https://www.ejlt.org/index.php/ejlt/article/view/722

[4] Many AI systems embed machine learning or algorithmic tools acting as ‘black boxes’, in so far as they give a result or reach a decision without explaining or showing how they did so. Examples are AI systems relying on convolutional neural networks, for which understanding and explaining how the AI system processed data and reached the decision is currently impossible.

[5] Cary Coglianese & David Lehr, ‘Transparency and Algorithmic Governance’ (2019) 71 Administrative Law Review 1.

[6] A. Holzinger, ‘From Machine Learning to Explainable AI’, 2018 World Symposium on Digital Intelligence for Systems and Machines (DISA), pp. 55-66, doi:10.1109/DISA.2018.8490530.

[7] We should recall that data protection law pivots around principles that have survived the passage of time.

[8] U. Ehsan et al., ‘Expanding Explainability: Towards Social Transparency in AI Systems’, CHI ’21, May 8–13, 2021, Yokohama, Japan, https://dl.acm.org/doi/pdf/10.1145/3411764.3445188.

[9] These principles are drawn from literature highlighting the inefficiency of AI explainability measures, such as saliency maps, see A. Alqaraawi et al, ‘Evaluating Saliency Map Explanations for Convolutional Neural Networks: A User Study’ (2020) https://arxiv.org/abs/2002.00772.

[10] S. Stumpf et al., ‘Explanations Considered Harmful? User Interactions with Machine Learning Systems’, https://www.yumpu.com/cs/document/view/55465112/explanations-considered-harmful-user-interactions-with-machine-learning-systems.

[11] R. (Bridges) v Chief Constable of South Wales Police [2020] EWCA Civ 1058.

[12] [2016] EWHC 1464 (Ch).

[13] Data Protection Act 2018, Chapter 2.

[14] https://ec.europa.eu/info/sites/default/files/1_1_197605_prop_dir_ai_en.pdf

[15] M Veale & F Borgesius, ‘Demystifying the Draft EU Artificial Intelligence Act’ (2021) 22(4) Computer Law Review International 97.

[16] See Articles 7 and 13.

[17] Article 14.

[18] Title 2.

[19] Article 3.

[20] Article 4.