Written Evidence Submitted by
Professor Albert Sanchez-Graells, Professor of Economic Law, University of Bristol and Co-Director, Centre for Global Law and Innovation
(GAI0004)
A. Introduction
01. This submission addresses two of the questions formulated by the House of Commons Science and Technology Committee in its inquiry on the ‘Governance of artificial intelligence (AI)’.
02. This submission focuses on the process of AI adoption in the public sector and, particularly, on the acquisition of AI solutions. It evidences how the UK is consolidating an inadequate approach to ‘AI regulation by contract’ through public procurement. Given the level of abstraction and generality of the current guidelines for AI procurement, major gaps in public sector digital capabilities, and potential structural conflicts of interest, procurement is currently an inadequate tool to govern the process of AI adoption in the public sector. Flanking initiatives, such as the pilot algorithmic transparency standard, are unable to address and mitigate governance risks. Contrary to the approach in the AI Regulation Policy Paper,[1] plugging the regulatory gap will require (i) new legislation supported by a new mechanism of external oversight and enforcement (an ‘AI in the Public Sector Authority’ (AIPSA)); (ii) a well-funded strategy to boost in-house public sector digital capabilities; and (iii) the introduction of a (temporary) mechanism of authorisation of AI deployment in the public sector. The Procurement Bill would not suffice to address the governance shortcomings identified in this submission.
B. ‘AI Regulation by Contract’ through Procurement
03. Unless the public sector develops AI solutions in-house, which is extremely rare, the adoption of AI technologies in the public sector requires a procurement procedure leading to their acquisition. This places procurement at the frontline of AI governance because the ‘rules governing the acquisition of algorithmic systems by governments and public agencies are an important point of intervention in ensuring their accountable use’.[2] In that vein, the Committee on Standards in Public Life stressed that the ‘Government should use its purchasing power in the market to set procurement requirements that ensure that private companies developing AI solutions for the public sector appropriately address public standards. This should be achieved by ensuring provisions for ethical standards are considered early in the procurement process and explicitly written into tenders and contractual arrangements’.[3] Procurement is thus positioned as a public interest gatekeeper in the process of AI adoption by the public sector.
04. However, to effectively regulate by contract, it is at least necessary to have (i) clarity on the content of the obligations to be imposed, (ii) effective enforcement mechanisms, and (iii) public sector capacity to establish, monitor, and enforce those obligations. Given that the aim of regulation by contract would be to ensure that the public sector only adopts trustworthy AI solutions and deploys them in a way that promotes the public interest in compliance with existing standards of protection of fundamental and individual rights, exercising the expected gatekeeping role in this context requires a level of legal, ethical, and digital capability well beyond the requirements of earlier instances of regulation by contract to eg enforce labour standards.
05. On a superficial reading, it could seem that the National AI Strategy tackled this by highlighting the importance of the public sector’s role as a buyer and stressing that the Government had already taken steps ‘to inform and empower buyers in the public sector, helping them to evaluate suppliers, then confidently and responsibly procure AI technologies for the benefit of citizens’.[4] The National AI Strategy referred, in particular, to the setting up of the Crown Commercial Service’s AI procurement framework (the ‘CCS AI Framework’),[5] and the adoption of the Guidelines for AI procurement (the ‘Guidelines’)[6] as enabling tools. However, a close look at these instruments will show that they fail to provide clarity on the content of the procedural and contractual obligations needed to secure the goals stated above (para 03), and that they risk widening the existing public sector digital capability gap. Ultimately, they do not enable procurement to carry out the expected gatekeeping role.
C. Guidelines and Framework for AI procurement
06. Despite setting out to ‘provide a set of guiding principles on how to buy AI technology, as well as insights on tackling challenges that may arise during procurement’, the Guidelines provide high-level recommendations that cannot be directly operationalised by inexperienced public buyers and/or those with limited digital capabilities. For example, the recommendation to ‘Try to address flaws and potential bias within your data before you go to market and/or have a plan for dealing with data issues if you cannot rectify them yourself’ (guideline 3) not only requires a thorough understanding of eg the Data Ethics Framework[7] and the Guide to using Artificial Intelligence in the public sector,[8] but also detailed insights on data hazards.[9] This leads the Guidelines to stress that it may be necessary ‘to seek out specific expertise to support this; data architects and data scientists should lead this process … to understand the complexities, completeness and limitations of the data … available’.
07. Relatedly, some of the recommendations are very open ended in areas without clear standards. For example, the effectiveness of the recommendation to ‘Conduct initial AI impact assessments at the start of the procurement process, and ensure that your interim findings inform the procurement. Be sure to revisit the assessments at key decision points’ (guideline 4) is dependent on the robustness of such impact assessments. However, the Guidelines provide no further detail on how to carry out such assessments, other than a list of some generic areas for consideration (eg ‘potential unintended consequences’) and a passing reference to emerging guidelines in other jurisdictions. This is problematic, as the development of algorithmic impact assessments is still at an experimental stage,[10] and emerging evidence shows vastly diverging approaches, eg to risk identification.[11] In the absence of clear standards, algorithmic impact assessments will lead to inconsistent approaches and varying levels of robustness. The absence of standards will also require access to specialist expertise to design and carry out the assessments.
08. Ultimately, understanding and operationalising the Guidelines requires advanced digital competency, including in areas where best practices and industry standards are still developing.[12] However, most procurement organisations lack such expertise, as a reflection of broader digital skills shortages across the public sector,[13] with recent reports placing vacancies for data and tech roles across the civil service alone at close to 4,000.[14] This not only reduces the practical value of the Guidelines in facilitating responsible AI procurement by inexperienced buyers with limited capabilities, but also highlights the role of the CCS AI Framework for AI adoption in the public sector.
09. The CCS AI Framework creates a procurement vehicle[15] to facilitate public buyers’ access to digital capabilities. CCS’ description for public buyers stresses that ‘If you are new to AI you will be able to procure services through a discovery phase, to get an understanding of AI and how it can benefit your organisation.’[16] The Framework thus seeks to enable contracting authorities, especially those lacking in-house expertise, to carry out AI procurement with the support of external providers. While this can foster the uptake of AI in the public sector in the short term, it is highly unlikely to result in adequate governance of AI procurement, as this approach focuses at most on the initial stages of AI adoption but can hardly be sustained throughout the lifecycle of AI use in the public sector—and, crucially, would leave the enforcement of contractualised AI governance obligations in a particularly weak position (thus failing to meet the enforcement requirement at para 04). Moreover, it would generate a series of governance shortcomings, the avoidance of which requires an alternative approach.
D. Governance Shortcomings
10. Despite claims to the contrary in the National AI Strategy (above para 05), the approach currently followed by the Government does not empower public buyers to responsibly procure AI. The Guidelines cannot be operationalised by inexperienced public buyers with limited digital capabilities (above paras 06-08). At the same time, the Guidelines are too generic to support sophisticated approaches by more advanced digital buyers. The Guidelines do not reduce the uncertainty and complexity of procuring AI and do not include any guidance on eg how to design public contracts to perform the regulatory functions expected under the ‘AI regulation by contract’ approach.[17] This is despite existing recommendations on eg the development of ‘model contracts and framework agreements for public sector procurement to incorporate a set of minimum standards around ethical use of AI, with particular focus on expected levels of transparency and explainability, and ongoing testing for fairness’.[18] The Guidelines thus fail to meet the first requirement for effective regulation by contract, namely clarity on the content of the relevant obligations (para 04).
11. The CCS Framework would also fail to ensure the development of public sector capacity to establish, monitor, and enforce AI governance obligations (para 04). Perhaps counterintuitively, the CCS AI Framework can further disempower public buyers seeking to rely on external capabilities to support AI adoption. There is evidence that reliance on outside providers and consultants to cover immediate needs further erodes public sector capability in the long term,[19] and that it creates risks of technical and intellectual debt in the deployment of AI solutions, as consultants come and go and there is no capture of institutional knowledge and memory.[20] This can also exacerbate current trends of pilot AI graveyard spirals, where most projects do not reach full deployment, at least in part due to insufficient digital capabilities beyond the (outsourced) pilot phase. This tends to result in self-reinforcing institutional weaknesses that can limit the public sector’s ability to drive digitalisation, not least because technical debt quickly becomes a significant barrier.[21] It also runs counter to best practices towards building public sector digital maturity,[22] and to the growing consensus that public sector digitalisation first and foremost requires prioritised investment in building up in-house capabilities.[23] On this point, it is important to note the large size of the CCS AI Framework, which was initially pre-advertised with a value of £90 mn,[24] later revised to £200 mn over 42 months.[25] Procuring AI consultancy services under the Framework can thus funnel significant amounts of public funds to the private sector, rather than channelling those funds into building in-house capabilities. It can also result in multiple public buyers contracting for the same expertise, thus duplicating costs, and in a cumulative lack of institutional learning by the public sector owing to atomised and uncoordinated contractual relationships.
12. Beyond the issue of institutional dependency on external capabilities, the cumulative effect of the Guidelines and the Framework would be to outsource the role of ‘AI regulation by contract’ to unaccountable private providers, which can then introduce their own biases into the substantive and procedural obligations to be embedded in the relevant contracts—which would ultimately negate the effectiveness of the regulatory approach as a public interest safeguard. The lack of accountability of external providers would result not only from the weakness (or absolute inability) of the public buyer to control their activities and challenge important decisions—eg on data governance, or algorithmic impact assessments, as above (paras 06-07)—but also from the potential absence of effective and timely external checks. Market mechanisms are unlikely to deliver adequate checks due to market concentration and structural conflicts of interest affecting providers that sometimes act as consultants and at other times are involved in the development and deployment of AI solutions,[26] as well as insufficiently effective safeguards against conflicts of interest arising from quickly revolving doors. Equally, broader governance controls are unlikely to be facilitated by flanking initiatives, such as the pilot algorithmic transparency standard.
13. To try to foster accountability in the adoption of AI by the public sector, the UK is currently piloting an algorithmic transparency standard.[27] While the initial six examples of algorithmic disclosures published by the Government provide some details on emerging AI use cases and the data and types of algorithms used by publishing organisations, and while this information could in principle foster accountability, there are two primary shortcomings. First, completing the documentation requires resources and, in some respects, advanced digital capabilities. Organisations participating in the pilot are being supported by the Government, which makes it difficult to assess to what extent public buyers would generally be able to adequately prepare the documentation on their own. Moreover, the documentation also refers to some underlying requirements, such as algorithmic impact assessments, that are not yet standardised (para 07). In this respect, the pilot standard replicates the same shortcomings discussed above in relation to the Guidelines. Algorithmic disclosure will thus only be carried out by entities with high levels of capability, or it will be outsourced to consultants (thus reducing the scope for the disclosure of governance-relevant information).
14. Second, compliance with the standard is not mandatory—at least while the pilot is developed. If compliance with the algorithmic transparency standard remains voluntary, there are clear governance risks. It is easy to see how precisely the most problematic uses may not be the object of adequate disclosures under a voluntary self-reporting mechanism. More generally, even if the standard were made mandatory, it would be necessary to implement an external quality control mechanism to mitigate the problems with the quality of self-reported disclosures that are pervasive in other areas of information-based governance.[28] Whether the Central Digital and Data Office (currently in charge of the pilot) would have the capacity (and powers) to do so remains unclear, and it would in any case lack independence.
15. Finally, it should be stressed that the current approach of transparency disclosure following the adoption of AI (ex post) can be problematic where the implementation of the AI is difficult to undo and/or the effects of malicious or risky AI are high-stakes or impossible to reverse. It is also problematic in that the current approach places the burden of scrutiny and accountability outside the public sector, rather than establishing internal, preventative (ex ante) controls on the deployment of AI technologies that could be very harmful to fundamental and individual socio-economic rights—as evidenced by the classification of some fields of AI application in the public sector as ‘high risk’ in the proposed EU AI Act.[29] Given the particular risks that AI deployment in the public sector poses to fundamental and individual rights, the minimalistic and reactive approach outlined in the AI Regulation Policy Paper is inadequate.
E. Conclusion: An Alternative Approach
16. Ensuring that the adoption of AI in the public sector operates in the public interest and for the benefit of all citizens will require new legislation supported by a new mechanism of external oversight and enforcement. New legislation is required to impose specific minimum requirements of eg data governance and algorithmic impact assessment, and related transparency, across the public sector. Such legislation would then need to be developed through statutory guidance of a much more detailed and actionable nature than the current Guidelines. These developed requirements could then be embedded into public contracts by reference. Without such clarification of the relevant substantive obligations, the approach to ‘AI regulation by contract’ can hardly be effective other than in exceptional cases.
17. Legislation would also be necessary to create an independent authority—eg an ‘AI in the Public Sector Authority’ (AIPSA)—with powers to enforce those minimum requirements across the public sector. AIPSA is necessary, as oversight of the use of AI in the public sector does not currently fall within the scope of any specific sectoral regulator and the general regulators (such as the Information Commissioner’s Office) lack procurement-specific knowledge. Moreover, units within Cabinet Office (such as the Office for AI or the Central Digital and Data Office) lack the required independence.
18. It would also be necessary to develop a clear and sustainably funded strategy to build in-house capability in the public sector, including clear policies on the minimisation of expenditure directed at the engagement of external consultants and the development of guidance on how to ensure the capture and retention of the knowledge developed within outsourced projects (including, but not only, through detailed technical documentation).
19. Until sufficient in-house capability is built to ensure adequate understanding and ability to manage digital procurement governance requirements independently, the current reactive approach should be abandoned, and AIPSA should have to approve all projects to develop, procure and deploy AI in the public sector to ensure that they meet the required legislative safeguards in terms of data governance, impact assessment, etc. This approach could progressively be relaxed through eg block exemption mechanisms, once there is sufficiently detailed understanding and guidance on specific AI use cases and/or in relation to public sector entities that could demonstrate sufficient in-house capability, eg through a mechanism of independent certification.
20. The new legislation and statutory guidance would need to be self-standing, as the Procurement Bill would not provide the required governance improvements. First, the Procurement Bill pays little to no attention to artificial intelligence and the digitalisation of procurement.[30] An amendment (46) that would have created minimum requirements on automated decision-making and data ethics was not moved at the Lords Committee stage, and it seems unlikely to be taken up again at later stages of the legislative process. Second, even if the Procurement Bill created minimum substantive requirements, it would lack adequate enforcement mechanisms, not least due to the limited powers and lack of independence of the envisaged Procurement Review Unit (which would also sit within the Cabinet Office).
Biographical information
Professor Albert Sanchez-Graells is a Professor of Economic Law at the University of Bristol Law School and Co-Director of its Centre for Global Law and Innovation. He is a British Academy Mid-Career Fellow and is currently carrying out the project Digital technologies and public procurement. Gatekeeping and experimentation in digital public governance. His current research focuses on the governance mechanisms controlling the purchase of digital technologies, including AI, by the public sector, as well as the deployment of those technologies to transition to a new model of digital procurement governance.
Professor Sanchez-Graells’ influential publications include the leading monograph Public Procurement and the EU Competition Rules, 2nd edn (Bloomsbury-Hart, 2015). He has also co-authored Shaping EU Public Procurement Law: A Critical Analysis of the CJEU Case Law 2015–2017 (Wolters-Kluwer, 2018), edited Smart Public Procurement and Labour Standards. Pushing the Discussion after RegioPost (Hart, 2018), and co-edited Reformation or Deformation of the Public Procurement Rules (Edward Elgar, 2016), Transparency in EU Procurements. Disclosure Within Public Procurement and During Contract Execution (Edward Elgar, 2019) and European Public Procurement. Commentary on Directive 2014/24/EU (Edward Elgar, 2021). Most of his working papers are available at http://ssrn.com/author=542893 and his analysis of current legal developments is published on his blog http://www.howtocrackanut.com.
(October 2022)
[1] Note: all websites last accessed on 25 October 2022.
Department for Digital, Culture, Media and Sport, Establishing a pro-innovation approach to regulating AI. An overview of the UK’s emerging approach (CP 728, 2022).
[2] Ada Lovelace Institute, AI Now Institute and Open Government Partnership, Algorithmic Accountability for the Public Sector (August 2021) 33.
[3] Committee on Standards in Public Life, Artificial Intelligence and Public Standards (2020) 51.
[4] Department for Digital, Culture, Media and Sport, National AI Strategy (CP 525, 2021) 47.
[5] AI Dynamic Purchasing System < https://www.crowncommercial.gov.uk/agreements/RM6200 >.
[6] Office for Artificial Intelligence, Guidelines for AI Procurement (2020) < https://www.gov.uk/government/publications/guidelines-for-ai-procurement/guidelines-for-ai-procurement >.
[7] Central Digital and Data Office, Data Ethics Framework (Guidance) (2020) < https://www.gov.uk/government/publications/data-ethics-framework >.
[8] Central Digital and Data Office, A guide to using artificial intelligence in the public sector (2019) < https://www.gov.uk/government/collections/a-guide-to-using-artificial-intelligence-in-the-public-sector >.
[9] See eg < https://datahazards.com/index.html >.
[10] Ada Lovelace Institute, Algorithmic impact assessment: a case study in healthcare (2022) < https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ >.
[11] A Sanchez-Graells, ‘Algorithmic Transparency: Some Thoughts On UK's First Four Published Disclosures and the Standards’ Usability’ (2022) < https://www.howtocrackanut.com/blog/2022/7/11/algorithmic-transparency-some-thoughts-on-uk-first-disclosures-and-usability >.
[12] A Sanchez-Graells, ‘“Experimental” WEF/UK Guidelines for AI Procurement: Some Comments’ (2019) < https://www.howtocrackanut.com/blog/2019/9/25/wef-guidelines-for-ai-procurement-and-uk-pilot-some-comments >.
[13] See eg Public Accounts Committee, Challenges in implementing digital change (HC 2021-22, 637).
[14] S Klovig Skelton, ‘Public sector aims to close digital skills gap with private sector’ (Computer Weekly, 4 Oct 2022) < https://www.computerweekly.com/news/252525692/Public-sector-aims-to-close-digital-skills-gap-with-private-sector >.
[15] It is a dynamic purchasing system: a list of pre-screened potential vendors that public buyers can use to carry out their own simplified mini-competitions for the award of AI-related contracts.
[16] Above (n 5).
[17] This contrasts with eg the EU project to develop standard contractual clauses for the procurement of AI by public organisations. See < https://living-in.eu/groups/solutions/ai-procurement >.
[18] Centre for Data Ethics and Innovation, Review into bias in algorithmic decision-making (2020) < https://www.gov.uk/government/publications/cdei-publishes-review-into-bias-in-algorithmic-decision-making/main-report-cdei-review-into-bias-in-algorithmic-decision-making >.
[19] V Weghmann and K Sankey, Hollowed out: The growing impact of consultancies in public administrations (2022) < https://www.epsu.org/sites/default/files/article/files/EPSU%20Report%20Outsourcing%20state_EN.pdf >.
[20] A Sanchez-Graells, ‘Identifying Emerging Risks in Digital Procurement Governance’ in idem, Digital Technologies and Public Procurement. Gatekeeping and experimentation in digital public governance (OUP, forthcoming) < https://ssrn.com/abstract=4254931 >.
[21] M E Nielsen and C Østergaard Madsen, ‘Stakeholder influence on technical debt management in the public sector: An embedded case study’ (2022) 39 Government Information Quarterly 101706.
[22] See eg Kevin C Desouza, ‘Artificial Intelligence in the Public Sector: A Maturity Model’ (2021) IBM Centre for the Business of Government < https://www.businessofgovernment.org/report/artificial-intelligence-public-sector-maturity-model >.
[23] A Clarke and S Boots, A Guide to Reforming Information Technology Procurement in the Government of Canada (2022) < https://govcanadacontracts.ca/it-procurement-guide/ >.
[24] < https://ted.europa.eu/udl?uri=TED:NOTICE:600328-2019:HTML:EN:HTML&tabId=1&tabLang=en >.
[25] < https://ted.europa.eu/udl?uri=TED:NOTICE:373610-2020:HTML:EN:HTML&tabId=1&tabLang=en >.
[26] See S Boots, ‘“Charbonneau Loops” and government IT contracting’ (2022) < https://sboots.ca/2022/10/12/charbonneau-loops-and-government-it-contracting/ >.
[27] Central Digital and Data Office, Algorithmic Transparency Standard (2022) < https://www.gov.uk/government/collections/algorithmic-transparency-standard >.
[28] Eg in the context of financial markets, there have been notorious ongoing problems with ensuring adequate quality in corporate and investor disclosures.
[29] < https://artificialintelligenceact.eu/ >.
[30] P Telles, ‘The lack of automation ideas in the UK Gov Green Paper on procurement reform’ (2021) < http://www.telles.eu/blog/2021/1/13/the-lack-of-automation-ideas-in-the-uk-gov-green-paper-on-procurement-reform >.