Written Evidence Submitted by
Kit Fotheringham, Postgraduate Researcher, Centre for Global Law and Innovation, University of Bristol Law School
(GAI0042)
Summary of recommendations:
- For public sector uses of automated decision-making, GOV.UK could include a dedicated dashboard which provides interactive explanation tools and offers individuals the choice to opt out [paragraph 6].
- Publication of DPIAs could improve transparency and explainability by outlining the expected functioning of the AI system and detailing the potential consequences of automated decision-making for different types of individuals [paragraph 7].
- The Department for Digital, Culture, Media and Sport (DCMS) should consider allowing the ICO to set up a commercial subsidiary which could develop and operate its own certification schemes, using the ICO as a trusted brand to stimulate uptake [paragraph 8].
- Reform of Article 22 GDPR should enhance the ability of individuals to obtain an explanation of automated decision-making in all instances, not just those where the decision was taken ‘solely’ by an algorithm [paragraph 13].
- To prevent undue influence from automation bias, reconsideration of automated decisions should follow the ‘double blind data entry’ methodology [paragraph 14].
- The UK should exert influence over international AI policy by tabling treaty initiatives to prevent a race to the bottom on global standards for public sector AI, both technical and ethical [paragraph 18].
- Government departments should actively elicit proposals from existing regulators as to how their powers might be improved to keep pace with technological development [paragraph 19].
- DRCF should consider establishing one or more working groups of regulatory officials below management level to increase co-ordination of efforts and share best practice in regulating AI across sectors [paragraph 20].
- All regulatory bodies should be advised to develop an AI strategy if they have not already done so. AI regulatory strategies should include how the regulator proposes to use their existing powers to ensure good governance of AI in their given sector, as well as identifying gaps in the relevant frameworks where further powers are required [paragraph 21].
- To prevent a potential conflict of interest, the terms of reference for the OAI should be reformulated, so that it becomes more of a consultative forum, drawing in expertise from industry and the civil service [paragraph 23].
- CDEI should be spun off, whilst retaining its public funding, perhaps operating under the auspices of the Alan Turing Institute [paragraph 24].
- The Public Accounts Committee and the Public Administration and Constitutional Affairs Committee should commission the NAO and the PHSO to produce regular reports on the use of AI in the public sector. Government should welcome such scrutiny and ensure that the NAO and PHSO are fully resourced and empowered to investigate government use of AI and ADM in public administration [paragraph 26].
- The statutory definition of ‘maladministration’ should be amended to explicitly include automated administrative actions taken by computer systems [paragraph 27].
- Procedural rules should make provision for disclosure of the operation and implications of the AI system as part of pre-action protocol. Expert testimony on how AI may have influenced outcomes in a particular case should be allowed. The Judicial College should also provide training to judges on the implications of AI in administrative decision-making, so that judges are prepared to engage with claims of improper use of AI or unreasonable reliance on AI outputs [paragraph 28].
- To foster a collaborative research culture on all aspects of AI, including technical and normative issues, UK Research and Innovation (UKRI) should work across Research Council groupings to encourage grant applications from multi-disciplinary research teams [paragraph 29].
A. Introduction
1. There is a danger that we overgeneralise when talking about governance arrangements for AI. AI is a broad category of different technologies, and those technologies are, in turn, used in a wide variety of applications. Intuitively, it does not make sense to apply identical standards in every scenario because the risks associated with the use of AI vary so dramatically. For instance, safety standards in robotics applications are substantive in character, whereas automated decision-making (ADM) systems require predominantly procedural standards. Furthermore, effective sectoral regulation may already be in place in many domains where AI could be used. An overarching AI super-regulator may unnecessarily duplicate existing regulatory frameworks, leading to confusion and reluctance to intervene.
2. By way of illustration, consider three deployment scenarios for AI. Firstly, AI software is a common feature in autonomous vehicles. In this scenario, regulation should be treated as holistic safety regulation of the whole vehicle. Parliament has already made some progress in this area by legislating for the liability framework in the Automated and Electric Vehicles Act 2018.[1] Professor Winfield, formerly of the Bristol Robotics Lab at UWE Bristol, has recommended that safety regulation for autonomous vehicles and robotics be strengthened by creating an accident investigation agency for autonomous machines, modelled on the success of the Air Accidents Investigation Branch.[2]
3. The second scenario involves AI systems used on social media platforms. In this context, such systems may be commonly referred to as ‘the algorithm’ (even though such systems are often an assemblage of several, interconnecting algorithms – some of which are AI-powered and some of which draw upon conventional technologies). Whilst the Information Commissioner’s Office is responsible for patrolling information security online,[3] the data protection framework cannot address harmful content. Instead, mitigation of the undesirable outcomes of social media algorithms is the domain of the Online Safety Bill,[4] with Ofcom as the expert regulatory body.
4. The third scenario is AI-enabled automated decision-making (ADM). ADM systems might be encountered when applying for credit, where the system takes data including the applicant’s credit score to determine eligibility for a credit card, loan, or mortgage. In the financial sector, firms which use AI systems are subject to regulation by the Financial Conduct Authority (FCA), and those firms are responsible for ensuring that their automated processes comply with FCA standards.[5] ADM applications could also be developed for the public sector. Regulation of ADM in the public sector is less clear, with much scope for improvement. Hence, the main thrust of this submission focuses on public sector applications of AI and recommends improvements to the governance framework for ADM in public administration.
B. Beyond the ‘right to an explanation’ of automated decision-making
5. Early indications suggested that the GDPR would introduce a ‘right to an explanation’ of automated decision-making, but on closer inspection of the legislation any such notion turned out to be hollow.[6]
6. Regardless, under the UK GDPR, data controllers must disclose the existence of ADM at the earliest opportunity.[7] Many organisations meet this requirement by including such disclosures in their privacy policy, but this is not the only viable option. Some scholars argue that AI explainability is best met through the adoption of transparency enhancing techniques.[8] For instance, organisations could instead choose to fulfil their disclosure duties through pop-up boxes when collecting data through an application form, or through interactive dashboards.[9] Greater use of alternatives to ‘burying’ disclosures in privacy policies should be encouraged. For public sector uses of automated decision-making, GOV.UK could include a dedicated dashboard which provides interactive explanation tools and offers individuals the choice to opt out.
7. The ICO also recommends the completion of a Data Protection Impact Assessment (DPIA) where the use of AI, including automated decision-making, could be considered ‘high risk’.[10] However, there is no corresponding instruction to publish the completed DPIA. Publication of DPIAs could improve transparency and explainability by outlining the expected functioning of the AI system and detailing the potential consequences of automated decision-making for different types of individuals. Whilst the content of a DPIA might include matters of national security or commercially sensitive information, at the very least a redacted version of the DPIA should be made available on request.
8. High standards for AI and automated decision-making could be incentivised through the use of certification schemes. The ICO has approved a handful of accreditation bodies, but overall, the rate of progress in encouraging widespread adoption of certification schemes is disappointing. At present, there are no certification schemes which address automated decision-making, despite its growing prevalence in both the private and public sectors. The Department for Digital, Culture, Media and Sport (DCMS) should consider allowing the ICO to set up a commercial subsidiary which could develop and operate its own certification schemes, using the ICO as a trusted brand to stimulate uptake.
C. Limitations of reliance on the data protection framework
9. The opportunity to challenge the use of AI in the public sector is limited by the lack of procedural protections. For example, if the Environment Agency wanted to deploy AI in monitoring pollution, there are no formal mechanisms for citizens to challenge the accuracy of the algorithm or even comment on the appropriateness of using an AI system. Furthermore, as was witnessed through the case of the Teacher Assessed Grades adjustment algorithm used during the Covid-19 pandemic, even where personal data are involved, the options open to individuals are dependent upon respect for data protection standards at the design stage and timely action to halt further harm when it becomes apparent.
10. Recent interventions by the ICO have further highlighted the limitations of reliance on the data protection framework to regulate AI. For instance, the Royal Free NHS Trust failed to comply with data protection standards in its collaboration with DeepMind, even though the Caldicott Review had previously recommended that Data Guardians scrutinise data-intensive projects.[11] Furthermore, despite the ICO’s enforcement action,[12] Clearview cannot guarantee that it does not use images of UK citizens in its facial recognition system.
11. Article 22 GDPR is often described as a ‘right’ not to be subject to automated decision-making and profiling, but this is by no means an absolute right. Its provisions only apply if the decision has been taken ‘solely’ by machine. Article 22 GDPR also has broad exceptions, rendering it near useless as a mechanism for challenging the use of AI in most situations where the technology is encountered. For example, if a benefits claimant wanted to challenge the use of automated decision-making, under the current rules any objection could be overridden on the grounds that automated decision-making is authorised by statute.[13]
12. Due to the limitations of the data protection framework, individuals are left out in the cold. In fact, the lack of available options to obtain redress for wrongs caused by AI and ADM means that issues which could otherwise have been dealt with administratively are escalated to the courts. In the cases of Johnson[14] and Pantellerisco[15] the only option open to the claimants was to seek a remedy through judicial review. The claimants were recipients of Universal Credit who suffered acute financial hardship because the salary cycles used by their employers did not align with the ‘assessment period’, which is fixed by the automated system for calculating Universal Credit awards. But instead of addressing the ‘elephant in the room’, namely that the problems had been caused by an inflexible algorithm, judicial deference meant that the court refrained from passing comment as to whether automation was reasonable and proportionate in the first place.
D. Reform of the regulatory framework
13. One of the recommendations of the Taskforce on Innovation, Growth and Regulatory Reform (TIGRR) was to abolish Article 22 GDPR, arguing that the existing rules mandated the presence of a ‘human in the loop’ or human review of automated decisions.[16] However, this is based on a misapprehension; Article 22 GDPR contains major exceptions which cover many foreseeable uses of automated decision-making. Wholesale abolition of the Article 22 GDPR rules on automated decision-making would be to head in the ‘wrong direction’[17] and DCMS appears to have acknowledged this in the Data Protection and Digital Information Bill.[18] But DCMS should go further. Reform of Article 22 GDPR should enhance the ability of individuals to obtain an explanation of automated decision-making in all instances, not just those where the decision was taken ‘solely’ by an algorithm.
14. One of the safeguards retained under the proposed reforms to Article 22 GDPR is that the data subject can request a ‘reconsideration’ of the automated decision. However, to avoid ‘automation bias’, where human reviewers are reluctant to overturn the determination of an ADM system, the human reviewer should not be granted access to the ADM output. To prevent undue influence from automation bias, reconsideration of automated decisions should follow the ‘double blind data entry’ methodology.[19] Under this model, the ADM result should only be disclosed to a second-stage reviewer, who can compare the results of the ADM process and the initial human review.
15. An illustration of this model can be seen in the commonly accepted practice for transcription of census records for genealogical research. Here, each record is transcribed twice, either by a machine and a human, or by two humans acting independently. If both transcriptions match, they are accepted as accurate. However, if there is a discrepancy, an experienced transcriber arbitrates between the proposed transcriptions, or substitutes one of their own. Ideally, the second-stage reviewer will not know whether the output has been produced by machine or by hand.
16. Adapting this model for ADM reviews, the human review would only be commenced when requested as part of a complaint, but would essentially be a de novo decision. If the outcomes match, then the decision could be treated as final. Alternatively, the complainant could appeal, and it would only be at this stage that the ADM output could be considered. This model balances the demands of efficiency against the need to minimise the opportunity for unfair bias to influence the final outcome.
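By way of illustration only, the short Python sketch below encodes the sequencing described in paragraphs 14 to 16: the first-stage human review is taken afresh, without sight of the ADM output, and the ADM result is only compared with the human decision at a second, appeal stage. The function names, reviewer callables and outcome values are hypothetical placeholders, not a description of any existing departmental system.

```python
# A minimal sketch of the two-stage, double-blind reconsideration model.
# Reviewer callables and outcome strings are hypothetical placeholders.
from typing import Callable


def blind_reconsideration(first_reviewer: Callable[[], str]) -> str:
    """Stage 1: a de novo human review taken without sight of the ADM output,
    so that automation bias cannot anchor the reviewer's judgement."""
    return first_reviewer()


def appeal_review(adm_outcome: str, first_review: str,
                  second_reviewer: Callable[[str, str], str]) -> str:
    """Stage 2, reached only if the complainant appeals: the second-stage
    reviewer sees both outcomes and arbitrates between them."""
    return second_reviewer(adm_outcome, first_review)


# Example usage with stub functions standing in for human decision-makers.
adm_outcome = "refuse"                                  # output of the ADM system
first_review = blind_reconsideration(lambda: "award")   # blind human review disagrees
final = appeal_review(adm_outcome, first_review,
                      lambda adm, human: human)         # arbitrator prefers the human review
print(final)  # prints "award"
```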
17. The EU’s AI Act, and its broader Digital Markets Initiative, are worth monitoring. The ‘Brussels effect’, whereby EU standards effectively become the international benchmark as the price of access to the EU single market, will undoubtedly have an effect on technological development. The UK government should consider carefully how its approach to AI regulation will align with or diverge from the EU model, including the potential costs for companies wishing to export AI-related goods and services to the EU.
18. Early engagement in setting international standards for AI is therefore vital for the UK to maintain its relevance as an international player in the AI industry. Technical standards are being drafted under the auspices of the IEEE[20] and the ISO.[21] The UK should exert influence over international AI policy by tabling treaty initiatives to prevent a race to the bottom on global standards for public sector AI, both technical and ethical. Given existing alignment on data protection under the Data Protection Convention,[22] the Council of Europe may be a suitable forum for taking up this work. Building on previous collaboration through the World Economic Forum to produce the pioneering Guidelines for AI procurement,[23] the UK should propose that the World Trade Organisation introduce a protocol on AI and automated decision-making to the Government Procurement Agreement.
E. Institutional design
19. AI technology evolves quickly, so ideally, applicable legislation and guidance should evolve at a similar pace. In keeping with the principle that sectoral regulation is the more logical approach, government departments should actively elicit proposals from existing regulators as to how their powers might be improved to keep pace with technological development.
20. The UK approach to AI governance has evolved along sector-specific lines, as recommended by the House of Lords in their 2017 inquiry, AI in the UK.[24] My view is that sector-specific regulation ought to be maintained and strengthened. It would be a categorical error to regulate all instances of AI identically. Nevertheless, increased collaboration between existing regulators through the Digital Regulation Co-operation Forum (DRCF) is welcome.[25] DRCF should consider establishing one or more working groups of regulatory officials below management level to increase co-ordination of efforts and share best practice in regulating AI across sectors.
21. Establishing a meta-regulator for AI is not recommended. AI technologies are used in a wide variety of practical applications, hence the sectoral approach to regulation is the more logical option. Existing regulators have formed relationships with their relevant sectors over many years and fostering mutual trust between regulators and regulatees is key to achieving good regulatory outcomes. All regulatory bodies should be advised to develop an AI strategy if they have not already done so. AI regulatory strategies should include how the regulator proposes to use their existing powers to ensure good governance of AI in their given sector, as well as identifying gaps in the relevant frameworks where further powers are required.
22. Some cross-cutting groups have been established to assist in developing AI policy. Among these are the Office for Artificial Intelligence (OAI) and the Centre for Data Ethics and Innovation (CDEI), which formed part of the government response to the challenges of AI in its AI Sector Deal.[26] There is indeed a need for cross-government coordination on AI applications. However, OAI and CDEI should not be viewed as proto-regulators but should instead work with sectoral regulators to develop mature governance frameworks for AI applications in the given domains.
23. The OAI appears to be an appropriate forum for expert advice on policy formation. However, its role as a promoter of AI within government sits uneasily alongside its consultative function. To prevent a potential conflict of interest, the terms of reference for the OAI should be reformulated, so that it becomes more of a consultative forum, drawing in expertise from industry and the civil service.
24. CDEI is, in effect, a publicly-funded consultancy. There is little reason why much of this consultancy work could not be done independently of government. Formal independence would allow for more critical scrutiny of the ethics of AI applications and the use of automated decision-making in public administration. CDEI should be spun off, whilst retaining its public funding, perhaps operating under the auspices of the Alan Turing Institute.
25. In the case of automated decision-making in public administration, regulation should be seen as a collaborative effort between existing institutions. My research[27] has found that the foundation for a comprehensive regulatory framework for ADM already exists in the form of three main institutions: namely the ICO, Parliament and the courts. The role of the ICO needs little introduction, but its regulation of AI is limited. However, many of the ‘gaps’ in the regulation of AI in the public sector are filled by the activities of Parliament and the courts.
26. Parliament performs a regulatory function through its scrutiny of government and superintendence of public administration. For example, parliamentary debate on the Home Office’s use of an algorithm to ‘sift’ through visa applications led to a change in the government’s approach. The Science and Technology Committee has already established itself as the natural home for detailed scrutiny on government AI policy. Parliament may also call upon the expertise of its independent officers; namely, the Comptroller & Auditor General at the National Audit Office (NAO) and the Parliamentary and Health Service Ombudsman (PHSO). The Public Accounts Committee and the Public Administration and Constitutional Affairs Committee should commission the NAO and the PHSO to produce regular reports on the use of AI in the public sector. Government should welcome such scrutiny and ensure that the NAO and PHSO are fully resourced and empowered to investigate government use of AI and ADM in public administration. The NAO and the PHSO could co-ordinate their efforts through a joint programme of investigatory work, in a manner akin to the EU’s AI Watch initiative,[28] which would be citizen-facing and could empower individuals to challenge inappropriate uses of AI and automated decision-making in a ‘virtuous’ cycle of scrutiny.
27. Furthermore, in the case of automated decision-making in the delivery of public services, the statutory definition of ‘maladministration’[29] should be amended to explicitly include automated administrative actions taken by computer systems. This would enhance the ability of citizens to seek redress for harms caused by defective algorithms or poor implementation of AI systems through the ombudsman service.
28. Finally, the courts are the ultimate authority on the legality of administrative conduct, through the judicial review process. The flexibility offered by the common law system means that the courts can adapt existing rules to confront issues emerging from the use of new technologies. There are a few practical steps that government could take to facilitate the development of the case law. Procedural rules should make provision for disclosure of the operation and implications of the AI system as part of pre-action protocol. Expert testimony on how AI may have influenced outcomes in a particular case should be allowed. The Judicial College should also provide training to judges on the implications of AI in administrative decision-making, so that judges are prepared to engage with claims of improper use of AI or unreasonable reliance on AI outputs.
F. Research landscape
29. Multidisciplinary input into developing new AI models is crucial to avert costly design mistakes that might otherwise render the software unusable in the public sector. The Alan Turing Institute and other AI research bodies appear to be receptive to this idea and include technologists, sociologists, lawyers, philosophers, and ethicists in their teams. To foster a collaborative research culture on all aspects of AI, including technical and normative issues, UK Research and Innovation (UKRI) should work across Research Council groupings to encourage grant applications from multi-disciplinary research teams.
Biographical Information
Kit Fotheringham is a Postgraduate Researcher at the University of Bristol Law School and a member of its Centre for Global Law and Innovation. His current research project, Regulatory perspectives on automated decision-making in public administration,[30] looks at the engagement of various regulatory frameworks with the issues highlighted by the automation of decisions on eligibility for public services.
Kit has published articles on tort liability for AI in collaboration with Helen Smith, a colleague at the Centre for Ethics in Medicine, part of the University of Bristol Medical School. These include Artificial intelligence in clinical decision-making: rethinking liability (2020),[31] which was cited in research briefing PN637 produced by the Parliamentary Office of Science and Technology (POST),[32] and Exploring remedies for defective artificial intelligence aids in clinical decision making in post-Brexit England and Wales (2022).[33] Kit’s research profile can be found on ORCiD.[34]
(November 2022)
[1] For further commentary on the ‘enterprise’ liability model introduced in the Automated and Electric Vehicles Act 2018 and its potential application to other domains, such as medical AI devices, see Smith and Fotheringham, ‘Artificial intelligence in clinical decision-making: rethinking liability’ (2020) <https://doi.org/10.1177/0968533220945766>.
[2] Winfield et al., ‘Robot accident investigation: a case study in responsible robotics’ (2020) <https://arxiv.org/abs/2005.07474>.
[3] See e.g. https://ico.org.uk/your-data-matters/online/social-networking/
[4] https://bills.parliament.uk/bills/3137
[5] Further detail on the FCA’s work on AI is available at https://www.fca.org.uk/firms/data-analytics-artificial-intelligence-ai
[6] Sandra Wachter, Brent Mittelstadt and Luciano Floridi, ‘Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation’ (2017) 7 International Data Privacy Law 76
[7] Articles 13(2)(f), 14(2)(g) GDPR.
[8] Lilian Edwards and Michael Veale, ‘Slave to the Algorithm: Why a Right to an Explanation Is Probably Not the Remedy You Are Looking For’ (2017) 16 Duke L. & Tech. Rev. 18; Bryan Casey, Ashkon Farhangi and Roland Vogl, ‘Rethinking Explainable Machines: The GDPR’s “Right to Explanation” Debate and the Rise of Algorithmic Audits in Enterprise’ (2019) 34 Berkeley Technology Law Journal 143.
[9] https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/the-right-to-be-informed/what-methods-can-we-use-to-provide-privacy-information/#how3
[10] https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/data-protection-impact-assessments-dpias/examples-of-processing-likely-to-result-in-high-risk/
[11] Information Commissioner’s Office, ‘Royal Free London NHS Foundation Trust Undertaking’ (2017) <https://web.archive.org/web/20170705104345/https://ico.org.uk/action-weve-taken/enforcement/royal-free-london-nhs-foundation-trust/>
[12] Information Commissioner’s Office, ‘Clearview AI Inc.: Enforcement Notice’ (2022) <https://web.archive.org/web/20220617180654/https://ico.org.uk/action-weve-taken/enforcement/clearview-ai-inc-en/>; Information Commissioner’s Office, ‘Clearview AI Inc.: Penalty Notice’ (2022) <https://web.archive.org/web/20220617180655/https://ico.org.uk/action-weve-taken/enforcement/clearview-ai-inc-mpn/>
[13] Social Security Act 1998, s 2 in conjunction with Art 22(2)(b) GDPR.
[14] R (on the application of Johnson and ors) v Secretary of State for Work and Pensions [2020] EWCA Civ 778.
[15] R (on the application of Pantellerisco) v Secretary of State for Work and Pensions [2020] EWHC 1944.
[16] https://www.gov.uk/government/publications/taskforce-on-innovation-growth-and-regulatory-reform-independent-report
[17] Ariane Adam and Tatiana Kazim, ‘Data: the wrong direction’ (Law Gazette, 23 June 2022) <https://www.lawgazette.co.uk/commentary-and-opinion/data-the-wrong-direction/5112886.article>
[18] https://bills.parliament.uk/bills/3322
[19] For a technical description, see https://www.ibm.com/docs/en/datacap/9.1.8?topic=passes-example-double-blind-data-entry.
[20] https://ethicsinaction.ieee.org/
[21] https://www.iso.org/committee/6794475/x/catalogue/p/0/u/1/w/0/d/0
[22] https://www.coe.int/en/web/data-protection/home
[23] https://www.gov.uk/government/publications/guidelines-for-ai-procurement/guidelines-for-ai-procurement
[24] Committee on Artificial Intelligence, AI in the UK: ready, willing and able? (HL 2017-19, 100), <https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/10002.htm>.
[25] https://www.gov.uk/government/collections/the-digital-regulation-cooperation-forum
[26] https://www.gov.uk/government/publications/artificial-intelligence-sector-deal/ai-sector-deal
[27] Project description available at https://research-information.bris.ac.uk/en/projects/regulatory-perspectives-on-automated-decision-making-in-public-ad.
[28] https://ai-watch.ec.europa.eu/index_en
[29] Parliamentary Commissioner Act 1967, s 5.
[30] https://research-information.bris.ac.uk/en/projects/regulatory-perspectives-on-automated-decision-making-in-public-ad
[31] https://doi.org/10.1177/0968533220945766
[32] https://post.parliament.uk/research-briefings/post-pn-0637/
[33] https://doi.org/10.1177/09685332221076124
[34] https://orcid.org/0000-0001-5042-2410