AIG0007

 

Written evidence submitted by Mr Tom Whittaker (Director at Burges Salmon LLP); Professor Rebecca Williams (Professor of Public Law and Criminal Law at University of Oxford); Mr Azeem Suterwalla (Barrister at Monckton Chambers); Mr Will Perry (Barrister at Monckton Chambers)

 

 

The authors of this evidence submission are the authors of the chapter on public law, procurement law and artificial intelligence (AI) in the second edition of the practitioners’ textbook The Law of Artificial Intelligence (Sweet & Maxwell, 2024). The chapter aims to provide practical, evidence-based guidance for practitioners covering: a brief overview of the technology; issues and themes relevant to public sector use of AI; the use of AI in the public sector; the current legal and regulatory framework applicable to public sector procurement, development and deployment of AI; and potential legal challenges.

 

1                     Observations

1.1               During the research and drafting of the chapter the authors made a series of observations relevant to practitioners which they thought may also assist with the specific questions set by the Committee.  Of course, the authors also have other observations regarding public law, procurement law and AI which they would be happy to discuss if of assistance.

1.2               We make a few overarching observations before turning to the questions raised by the Committee.

1.3               First, it is unclear whether the public sector uses a single, consistent definition of “artificial intelligence” or of specific types of AI technology, such as generative AI.

(a)                The UK’s AI Sector Deal (2018), National AI Strategy (2021), White Paper on AI regulation (2023), government response to the White Paper (2024) and various guidance notes (for example, on procurement) all describe AI loosely, differently, and by reference to thematic issues, such as adaptability and autonomy.

(b)                Further, those who analyse the public sector also appear to use different definitions or have different focuses.  For example, the National Audit Office (NAO) report of 15 March 2024 on “Use of artificial intelligence in government” looked at this issue but did so by focussing on specific types of AI technology, rather than AI more broadly.[1] By way of comparison, the Alan Turing Institute surveyed ‘To what extent is generative AI already in use in the public sector?’.  To put it another way, it is not clear that “apples are always being compared with apples”.

(c)                 That said, the varied definitions had a limited impact on the authors’ research and drafting relative to the other observations we discuss below. We treated AI and automated decision-making (ADM) as covering a broad remit, given the relatively little attention that has been paid to public sector use of AI.

(d)                However, the current varied approach to defining AI may prove to be an issue in the future for government, for example, should it seek to comprehensively and precisely identify when and where the public sector is using AI and where it is not[2] for policy development and evaluation purposes. 

1.4               Second, there is no publicly available, comprehensive, updated register[3] of uses of AI in the public sector.

(a)                There is no general duty on public authorities in the UK to reveal the use of an AI system, or to provide more detailed information about its development or operation.  Such a duty could, in principle, exist.  Other countries and public bodies have developed AI registers. The EU AI Act includes transparency requirements for all high-risk AI systems, and requirements in specific circumstances for AI systems to be added to a register of AI systems.

(b)                The authors relied upon academic studies[4], media reports (often reliant upon requests under the Freedom of Information Act 2000), or voluntary publications, such as those in the Algorithmic Transparency Recording Standard (ATRS). The ATRS is expected to expand to additional government bodies in 2024 and beyond; however, the exact extent of its expansion and uptake is not yet known.[5] The authors consider it would be useful for the ATRS also to record where entries are withdrawn and why, in particular to avoid the suggestion that the algorithm/AI system remains in use when it does not, and so that the public sector can learn from what does not work.

(c)                 Consequently, it was not possible to comprehensively identify the various types of AI system being considered or used, the various public sector organisations doing so, or the current or anticipated use cases.  This limits the ability to anticipate the areas where public law and procurement law may be tested as a result of public sector use of AI, and where laws may require updating.  Further, this may cause practical issues for government, such as its ability to support the adoption of AI in the public sector as part of its 2021 National Strategy (an issue raised in the NAO report)[6] or for effective ownership and accountability for its delivery.[7]

1.5               Third, up-to-date guidance for public sector use of AI was relatively limited. 

(a)                Guidance, and collation of existing guidance, on public sector procurement and deployment of AI was published in Procurement Policy Note (PPN) 02/24 ‘Improving Transparency of AI use in Procurement’.  However, the PPN focused on the use of AI in the procurement process, rather than on the procurement and/or deployment of AI itself, and so it referred to other guidance only in passing.  Further, some of that guidance appears fragmented and has not been updated; for example, A guide to using artificial intelligence in the public sector was published in June 2019, last updated October 2019, and makes no reference to generative AI.[8]

(b)                Based on our experience, we expect that guidance is important for effective and risk-managed use of AI in the public sector.  For example, the Alan Turing Institute’s survey of public sector use of generative AI found that more people disagree than agree with the statement ‘my workplace has clear guidance on how to use generative AI systems’.[9] Further, it found that some government bodies interviewed described finding it difficult to navigate the range of guidance available and being unclear on where to go for a definitive view of what they need to consider.

1.6               Fourth, the ability to identify when and where challenges are brought to the use of AI by public bodies is limited.

(a)                Various forms of legal challenge against public sector procurement and use of AI may be brought; in particular, through judicial review, procurement law, and data protection law (notably regarding automated decision-making).

(b)                However, the Administrative Court (in which judicial reviews are brought) does not have a publicly available register of claims and court documents.  Consequently, it is not possible to identify new cases, or to monitor live cases or those which have settled, prior to any public judgment.  In contrast, this type of monitoring and analysis is available for litigation in other parts of the High Court, and technology companies are providing services to assist practitioners.[10]

(c)                 Publicly available information about challenges to public sector use of AI is limited primarily to court judgments[11] and media reports (often by a campaign group raising awareness).[12] This means that the scope, nature and volume of challenges against public sector use of AI are not known with precision.

(d)                We anticipate that central government will have (or could have) some understanding of litigation concerning public sector procurement and use of AI due to the work by the Government Legal Department and how government funds and monitors litigation. However, practitioners and the wider public (and potentially government) are unlikely to have this visibility. 

1.7               The authors of the chapter are neutral as to what policy or legal developments are preferable.  The nature of artificial intelligence, both in terms of technology and use cases, especially within the public sector, is factually and potentially legally complex. Consequently, any potential policy options will have pros, cons and unintended consequences which will require careful consideration.

2                     Committee Questions

2.1               The authors now turn to the specific questions raised by the Committee:

2.2               Departmental accountability on AI delivery, funding and implementation.

2.3               This is primarily a policy matter which we do not comment on.

2.4               Progress on strategy development and government arrangements

2.5               This is primarily a policy matter which we do not comment on, save that we look forward to seeing the development of specific policies identified in the government response to the White Paper, including the wider use of the ATRS.

2.6               Risks and opportunities of AI adoption and government

2.7               The chapter notes various legal risks with respect to AI adoption in, by and for government, including those concerning: lack of transparency and explainability; procedural failings in the design and implementation of AI systems, including the extent of human involvement; unfair, irrational and discriminatory outputs and/or decisions; and procedural issues in disputes, including standing, candour and confidentiality, and the use of expert evidence.

2.8               The common law is capable of evolving with emerging technologies and use cases.  However, there remains a risk that - without greater transparency of public sector use of AI and review of the potential legal issues arising - increasing and evolving public sector use of AI will raise novel legal risks which the current law may be able to accommodate but which the parties will not be able to foresee with certainty.  We expect that such legal risk would inhibit effective public sector uptake of AI.

2.9               Data and skills issues in government

2.10           We make one brief comment. Successful public sector procurement of emerging technologies depends upon the public sector having appropriately qualified individuals capable of designing procurements and evaluating bids. Without appropriate skills in government there is a risk of a ‘knowledge gap’ between the buyer (the public sector) and the seller that makes it more difficult for the public sector to procure emerging technologies effectively.

2.11           We would be delighted to support the work of the Committee further, if helpful.  

 

May 2024

 


[1] Use of artificial intelligence in government (nao.org.uk). Namely, the report focussed on AI that uses machine learning for tasks including language processing, predictive analytics and image or voice recognition.

[2] For example, distinguishing AI technology from simpler forms of analytics which do not necessarily raise the same legal risks.

[3] We are not suggesting that there should be a ‘live’ tracker and recognise that there will be exceptions to any register, such as for national security.

[4] See in particular the Public Law Project’s (PLP) Tracking Automated Government (TAG) Register https://trackautomatedgovernment.shinyapps.io/register/, 78.6% of whose tools were only uncovered or more fully understood through submission of Freedom of Information requests. See also L Dencik, A Hintz, J Redden and H Warne, Cardiff Justice Data Lab, ‘Data Scores as Governance: Investigating uses of citizen scoring in public services’ December 2018 https://datajusticelab.org/wp-content/uploads/2018/12/data-scores-as-governance-project-report2.pdf and more recently J Redden, J Brand, I Sander and H Warne, Data Justice Lab ‘Automated Public Services: Learning from Cancelled Systems’ 2022. See also J Maxwell and J Tomlinson, Experiments in Automating Immigration Systems (2022 Bristol, Bristol University Press).

[5] See the government response to the UK White Paper on AI regulation.

[6] See NAO report key finding 8.

[7] See NAO report key finding 9.

[8] https://www.gov.uk/government/collections/a-guide-to-using-artificial-intelligence-in-the-public-sector. This is not to say that the guidance should be updated regularly, but it is unclear to the reader whether the content remains relevant or is outdated.

[9] ‘Generative AI is already widespread in the public sector’, The Alan Turing Institute; arXiv:2401.01291, https://doi.org/10.48550/arXiv.2401.01291.

[10] For example, Solomonic www.solomonic.co.uk

[11] Such as R (Bridges) v Chief Constable of South Wales Police [2020] EWCA Civ 1058

[12] For example, Public Law Project.