Written Evidence Submitted by Public Law Project
(GAI0069)
Introduction
1. Public Law Project (PLP) is an independent national legal charity founded in 1990 with the aim of improving access to public law remedies for marginalised individuals. Our vision is a world in which the state acts fairly and lawfully. Our mission is to improve public decision-making, empower people to understand and apply public law, and increase access to justice. We deliver our mission through five programmes: litigation, research, advocacy, communications, and training. One of PLP’s five strategic priorities for 2022-25 is ensuring that Government use of new technologies is transparent and fair. Scrutinising the use of automated decision-making (ADM) systems and big data by Government is a key part of our work. Given our specific expertise, this evidence focuses on the use of ADM systems by public, rather than private, bodies.
2. We are not opposed in principle to Government use of ADM systems and we recognise their potential benefits. But given that it is a rapidly expanding practice, and of increasing importance in the lives of our beneficiaries, we are focused on ensuring that such systems operate fairly, lawfully and in a non-discriminatory way.
3. In 2022, PLP ran two roundtables on regulating Government use of AI, with over twenty participants, including civil society organisations, grassroots organisations, academics and individuals affected by ADM tools. Our response draws on the key themes arising from the roundtable discussions, as well as our research.
Executive summary
4. PLP is concerned about the impact of ADM on the lives of our beneficiaries and is focused on ensuring that such systems operate transparently, reliably, lawfully, fairly, in a non-discriminatory way, and with adequate authorisation in law for their use. Public trust in ADM systems with wide societal impacts requires that these tools are shown to work, and that they do so in accordance with the law and in respect of non-discrimination and other individual rights.
5. The existing legal framework governing the use of AI provides a patchwork of vital provisions that require ADM systems to be transparent and not to discriminate and/or breach other individual rights. However, we consider the governance of AI, which is necessary to ensure that the legal framework works in practice, to operate less effectively. Current governance of AI in the UK does not provide an adequate level of transparency, accountability, or protection against unlawfulness, unfairness, and discrimination.
6. We recognise the potential of the ‘Algorithmic Transparency Standard’ (ATS) to increase transparency of the public sector’s use of algorithms. However, this potential will remain limited if engagement with the ATS is not made compulsory and if improvements are not made to the detail of information required to be published to allow individuals to properly understand the decision-making process to which they are subjected (paras 7-11).
7. Although fragmented, provisions under the Freedom of Information Act 2000, Data Protection Act 2018, UK GDPR, Equality Act 2010, Human Rights Act 1998 and ECHR provide a patchwork of vital, albeit imperfect, safeguards. If the legal framework were to be reformed, the focus should be on fortifying existing safeguards, ensuring clarity and coherence between existing laws and securing quick and effective ways of enforcing individual rights (paras 12-14).
8. In our view, a compulsory algorithmic transparency regime should be overseen by an independent regulator, such as the ICO or a dedicated AI regulatory body. The regulation of AI should be based on the key principles of: anti-discrimination, reflexivity, respect for privacy and data rights, meaningful transparency, accountability, and avenues for redress. Quick and effective avenues for redress for affected individuals are essential and could be achieved through a specialist, yet accessible, regulator and a forum for complaints relating to Government use of AI (paras 15-18).
9. Positive examples of AI governance can be found within the EU Commission’s AI Act proposal, Canada’s Directive on Automated Decision Making (DADM), and France’s Law for a Digital Republic. In our view, it is important that the UK draws inspiration from the compulsory transparency regimes found in Canada and France, under which individuals are notified when automation is used to reach a decision, and enough information is provided for individuals to understand the decision-making process to which they are subjected.
10. If the Government’s White Paper on AI is to reform the legal framework, the focus should be on fortifying existing safeguards and ensuring clarity and coherence between existing laws. In our view, the law already requires transparency in Government use of automation, but much more needs to be done to secure meaningful transparency from public bodies operating tools that can have significant social impact. In this regard, inspiration could be drawn from other jurisdictions with compulsory transparency regimes, such as Canada and France. Furthermore, it is essential that there are quick and effective ways of enforcing existing rights within any reform to the governance of AI.
How effective is current governance of AI in the UK?
11. PLP has found that Government uses ADM in a wide range of high impact areas, including immigration, welfare benefits, policing and prisons, education, and more. To date, PLP has gathered more than forty examples of Government ADM systems through our investigative research. We will be publishing full details of these systems in our forthcoming ‘Tracking Automated Government’ register.
12. Government use of ADM technologies can mean quicker and more consistent decision-making. However, it also comes with significant risks and drawbacks. In our view, current governance of AI in the UK is ineffective to the extent that it has not succeeded in securing an adequate level of transparency, accountability, and protection against unlawfulness and unfairness, including ensuring privacy and data rights and protection against discrimination.
a. Lack of transparency – To have trust in algorithms, particularly those with a wide societal impact, we need transparency and explainability: a trustworthy algorithm should be able to “show its working” to those who want to understand how it came to its conclusions.[1] However, we are observing that Government use of ADM systems in the UK is marked by a high degree of opacity. For example, PLP is concerned about the use by the Home Office’s Marriage Referral and Assessment Unit (MRAU) of an automated ‘triage tool’ to decide whether to investigate potential ‘sham’ marriages. For convenience, we will refer to this as the ‘sham marriages algorithm’. The triage tool uses eight risk factors that are unknown to the individuals subject to its processing.[2]
PLP is also concerned by the Department for Work and Pensions’ (DWP) lack of transparency around their use of automation. Their 2021-2022 accounts revealed that they are trialling a new machine learning-driven predictive model to detect possible fraud in Universal Credit claims. This model has already been used in relation to advance claims already in payment, and the DWP expects to trial the model on claims before any payment has been made early in 2022-23. However, we are concerned that the DWP has not published any Equality Impact Assessments, Data Protection Impact Assessments or other evaluations completed in relation to its automated tools. Given the lack of transparency, and especially the DWP’s failure to publish equality analyses of any of its automated tools, we consider that the roll-out of their machine-learning model is premature.
These are just two examples of a broader trend. PLP’s experience is that when asked to disclose further information on the development and operation of ADM tools through requests under the Freedom of Information Act 2000 (FOIA), both the Home Office and the DWP often refuse disclosure. Such refusals purport to rely on exemptions under FOIA, most often section 31(1)(a), which exempts information which, if released, would be likely to prejudice the prevention and detection of crime (we expand on this further below under ‘To what extent is the legal framework for the use of AI fit for purpose?’). Specifically, both departments rely on the vaguely formulated argument that disclosure of further information would permit individuals to ‘game the system’. However, to our knowledge, the risk of gaming has not been further substantiated or particularised. Literature on the subject suggests that some features are not capable of being gamed, for example, when the features used by the algorithm are fixed and unalterable, such as race, sex, disability or nationality. Commentators note that disclosure of criteria not based on user behaviour offers ‘no mechanism for gaming from individuals who have no direct control over those attributes’.[3] Furthermore, disclosure of the criteria alone is generally insufficient for a user to ‘game’ the algorithm. Authors Cofone and Strandburg state:[4]
Because effective gaming requires fairly extensive information about the features employed and how they are combined by the algorithm, it will often be possible for decision makers to disclose significant information about what features are used and how they are combined without creating a significant gaming threat.
We are concerned that neither the Home Office nor the DWP have engaged with whether the information requested is in fact gameable and instead tend to give a blanket refusal to disclose on the basis of the risk of ‘gaming’. We also emphasise that section 31 is subject to the public interest test, meaning that not only does the information have to prejudice one of the purposes listed but, before the information can be withheld, the public interest in preventing that prejudice must outweigh the public interest in disclosure. Our concern is that Government departments are not sufficiently engaging with a consideration of the public interest in transparency and disclosure, which we expand on further in this submission.
Opacity around the existence, details and deployment of ADM systems is a major challenge of working in this area. There is opacity not only around the detail and impacts of such systems but also, in some cases, around their very existence. Doubtless, there are many other instances of Government automation, beyond the examples PLP has collected. But finding out about them is very difficult. The fruitfulness of PLP’s investigative research has been largely dependent upon the willingness of Government departments to engage meaningfully with requests for information. We have found this to be patchy. Further, engagement with the Cabinet Office’s Algorithmic Transparency Standard thus far appears to have been limited, with only six reports published to date despite the fact that there are many more than six tools currently in use in Government (see further below – ‘What measures could make the use of AI more transparent and explainable to the public?’).
Opacity is a cost in and of itself, undermining public trust in the use of ADM tools. It also comes with costs in terms of ensuring appropriate scrutiny and evaluation of such systems, including individuals being able to enforce their rights when decisions made using them are unlawful. We consider that transparency is a prerequisite for accountability (see further below).
b. Lack of accountability – Transparency is a necessary starting point for evaluating new AI technologies and for accountability and redress.[5] Most importantly, individuals, groups, and communities whose lives are impacted should know they have been subjected to automated decision-making, especially given the financial, emotional, and physical costs that a flawed AI system can impose on an individual’s life.
As it currently stands, affected individuals are rarely aware of the ADM system they have been processed by, making it much more difficult for them to obtain redress for an unfair or unlawful decision. Currently, PLP is particularly concerned about the ‘sham marriages algorithm’ in this regard. An Equality Impact Assessment disclosed by the Home Office in response to one of PLP’s FOIA requests suggests that the algorithm appears to affect some nationalities more than others (see further below – ‘Risk of discrimination’). But affected individuals rarely know about its existence.
Public bodies ought not to be immunised from review, evaluation, scrutiny and accountability for their decisions. It cannot be the case that public bodies are capable of becoming so immunised in practice through adopting ADM systems that are so opaque as to make it practically impossible to assess how they are operating, whether they are doing so lawfully and fairly, and (if not) what may be done about it.
c. Privacy and data protection costs – Government ADM systems generally involve the processing of data on a large scale. This comes with widely recognised costs when it comes to privacy and data protection. In October 2019, the UN Special Rapporteur on extreme poverty and human rights noted that the digitisation of welfare systems poses a serious threat to human rights, raising significant issues of privacy and data protection.[6] In February 2020, a Dutch court ruled that a welfare fraud detection system, known as SyRI, violated article 8 (right to respect for private life, family life, home and correspondence) of the European Convention on Human Rights.[7] In September 2021, the UN High Commissioner for Human Rights produced a report devoted to privacy issues arising because of the widespread use of artificial intelligence. The report sets out problems including: the incentivization of collection of personal data, including in intimate spaces; the risk of data breaches exposing sensitive information; and inferences and predictions about individual behaviour, interfering with autonomy and potentially impacting on other rights such as freedom of thought and expression.[8]
In this regard, large scale data-matching exercises conducted by the UK Government, such as the DWP’s General Data Matching Service[9] and the Cabinet Office-led National Fraud Initiative,[10] are of particular concern.
d. Risk of discrimination – Bias can be ‘baked in’ to ADM systems for various reasons, including as a result of problems in the design or training data. If the training data is unrepresentative then the algorithm may systematically produce worse outcomes when applied to a particular group. If the training data is tainted by historical injustices then it may systematically reproduce those injustices.
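By way of illustration only, the short Python sketch below shows this mechanism in miniature: a simple fraud ‘risk score’ rule is fitted on training data in which one group is heavily under-represented, and legitimate cases in that group are then wrongly flagged far more often. The groups, numbers and ‘signals’ are entirely invented for the example and do not correspond to any real Government system.

```python
import random

random.seed(0)

# Toy illustration only (not any real Government system): a fraud "risk
# score" rule is fitted on training data in which Group B is under-represented,
# and its false-positive rate is then compared across groups.

def make_cases(group, n, fraud_rate, signal_shift):
    """Generate toy cases: (group, whether genuinely fraudulent, observed signal)."""
    cases = []
    for _ in range(n):
        is_fraud = random.random() < fraud_rate
        base = 0.8 if is_fraud else 0.2
        # Group B's legitimate cases happen to produce higher signals (e.g.
        # different patterns of account activity) that the model rarely sees
        # during training.
        cases.append((group, is_fraud, base + signal_shift + random.gauss(0, 0.15)))
    return cases

# Unrepresentative training data: 95% Group A, 5% Group B.
train = make_cases("A", 950, 0.1, 0.0) + make_cases("B", 50, 0.1, 0.3)

def accuracy(threshold, cases):
    return sum((signal >= threshold) == is_fraud
               for _, is_fraud, signal in cases) / len(cases)

# "Fit" the simplest possible model: the single threshold that maximises
# accuracy on the (skewed) training data.
threshold = max((t / 100 for t in range(101)), key=lambda t: accuracy(t, train))

# Evaluate on a balanced set: legitimate Group B cases are flagged far more often.
test = make_cases("A", 5000, 0.1, 0.0) + make_cases("B", 5000, 0.1, 0.3)
for g in ("A", "B"):
    legit = [s for grp, is_fraud, s in test if grp == g and not is_fraud]
    fpr = sum(s >= threshold for s in legit) / len(legit)
    print(f"Group {g}: share of legitimate cases wrongly flagged = {fpr:.1%}")
```

On this toy data, the fitted threshold flags only a tiny fraction of Group A’s legitimate cases but a substantial share of Group B’s, even though the underlying rate of genuine fraud is identical in both groups.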
The possibility of problems with the training data was highlighted in the well-known Bridges[11] litigation, concerning the South Wales Police’s (SWP) use of facial recognition technology. Before the High Court, there was evidence that, due to imbalances in the representation of different groups in the training data, such technologies can be less accurate when it comes to recognising the faces of people of colour and women.[12] The appeal was allowed on three grounds, one of which was that the SWP had not “done all that they reasonably could to fulfil the PSED [public sector equality duty]”: a non-delegable duty requiring public authorities to actively avoid indirect discrimination on racial or gender grounds.[13]
There can also be bias in the design of an algorithm’s rules. Following the initiation of a legal challenge by the Joint Council for the Welfare of Immigrants (JCWI) and Foxglove, the Home Office had to suspend a visa streaming tool used to automate the risk profiling it undertook of all applications for entry clearance, which used ‘suspect nationality’ as a factor in red-rating applications. Red-rated visa applications received more intense scrutiny, were approached with more scepticism, took longer to determine, and were more likely to be refused than green or amber-rated applications. JCWI and Foxglove argued that this amounted to racial discrimination and was a breach of the Equality Act 2010.[14]
Many of the ADM systems PLP is aware of appear to have an unequal impact on marginalised groups and/or groups with protected characteristics. For example, the Equality Impact Assessment disclosed for the sham marriages algorithm, referred to above, includes a graph showing that couples of Bulgarian, Greek, Romanian, and Albanian nationality are flagged for investigation at a rate of between 20% and 25%. This is higher than the rate for any other nationality. By contrast, around 10% of couples involving someone of Indian nationality and around 15% of couples involving someone of Pakistani nationality fail triage and are thus subject to a sham marriage investigation. No other nationalities are labelled on the graph.
To give another example, it appears that the DWP’s automated tool(s) for detecting possible fraud and error in Universal Credit claims may have a disproportionate impact on people of certain nationalities. The Work Rights Centre have told us that, since August 2021, they have been contacted by 39 service users who reported having their Universal Credit payments suspended. Even though the charity advises a range of migrant communities, including Romanian, Ukrainian, Polish, and Spanish speakers, as many as 35 of the service users who reported having their payments suspended were Bulgarian, with three Polish and one Romanian-Ukrainian dual national.
e. Other risks of unlawfulness and unfairness – In addition to the risks of discrimination and data protection/privacy violations identified above, there are a number of common problems with ADM systems giving rise to other risks of unlawful and/or unfair decision-making. These problems include:
Of course, an ADM system can make many more decisions within a given timeframe than a single human decision-maker. While this can be beneficial in that affected individuals may receive decisions more quickly, it also means that the negative impacts of a flawed ADM system are likely to be much greater.
f. Lack of authorisation in law for use of these tools – In order that public officials remain accountable to Parliament and to the public, any actions taken by public decision-makers must be authorised by law.[20] It is vital that ‘the manner in which executive functions will be carried out and to whom they are to be delegated is published, transparent and reliable’.[21] Where an automated system is used to make a decision, there is a question mark over whether that decision-making is lawful if use of the system is not expressly provided for in law. For example, section 2 of the Social Security Act 1998 allows for any decision made by an officer to also be made ‘by a computer for whose operation such an officer is responsible’. In the case of Khan Properties Ltd v The Commissioners for Her Majesty's Revenue & Customs,[22] the First-tier Tribunal (Tax Chamber) found that use of an automated system to issue a penalty notice was unlawful as the law only provided for the decision to be made by an individual officer and not a computer. After the decision, Parliament passed section 103 of the Finance Act 2020, which states that anything capable of being done by an officer of HMRC can also be done by a computer. However, other departments are using automated systems without clear authorisation for such use in primary or secondary legislation. It is possible that these departments consider that no express authorisation is necessary because they are only using these systems as tools. If that is the case, however, it is even more important that how they are using these tools, when such tools are being used in the decision-making process, and who retains ultimate decision-making control is publicly available information.
What measures could make the use of AI more transparent and explainable to the public?
13. The Cabinet Office is currently piloting an ‘Algorithmic Transparency Standard’ (ATS). The ATS asks public sector organisations across the UK to provide information about their algorithmic tools. It divides the information to be provided into Tier 1 and Tier 2. Tier 1 asks for high-level information about how and why the algorithm is being used. Tier 2 information is more technical and detailed. It asks for information about who owns and has responsibility for the algorithmic tool, including information about any external developers; what the tool is for and a description of its technical specifications (for example ‘deep neural network’); how the tool affects decision making; lists and descriptions of the datasets used to train the model and the datasets the model is or will be deployed on; any impact assessments completed; and risks and mitigations.
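Purely for illustration, the sketch below (in Python) sets out the kind of record the ATS appears to contemplate, based only on the fields described in the paragraph above; the field names are our own hypothetical labels rather than the Standard’s actual schema.

```python
# A purely illustrative sketch (not the Standard's actual schema) of the kind
# of record the ATS asks public bodies to publish, based only on the fields
# described above. Every key name is a hypothetical label of our own.
ats_record_sketch = {
    "tier_1": {
        "overview": "High-level, plain-English account of how and why the algorithm is used",
    },
    "tier_2": {
        "ownership_and_responsibility": "Owning team and any external developers",
        "purpose": "What the tool is for",
        "technical_specification": "A brief descriptor only, e.g. 'deep neural network'",
        "effect_on_decision_making": "How the tool feeds into or affects decisions",
        "training_datasets": "Lists and descriptions of datasets used to train the model",
        "deployment_datasets": "Datasets the model is or will be deployed on",
        "impact_assessments": "Any impact assessments completed",
        "risks_and_mitigations": "Identified risks and how they are mitigated",
    },
}
# Notably absent, at either tier: the criteria or rules actually applied to an
# individual case, which is the gap discussed below.
```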
14. Our view is that the ATS does not go far enough to meet a minimum viable standard of transparency. At present, it is not compulsory for public sector organisations to engage with the ATS. This does not appear likely to change in the near future. In its response to the consultation ‘Data: a new direction’, the Government stated that it “does not intend to take forward legislative change at this time”, despite widespread support for compulsory transparency reporting, and, indeed, the Data Protection and Digital Information Bill did not include any such requirements.
15. Moreover, even if it were placed on a statutory footing, the ATS does not ask for sufficient operational detail. The purpose of transparency bears on its meaning in the context of ADM. The Berkman Klein Center conducted an analysis of 36 prominent AI policy documents to identify thematic trends in ethical standards.[23] They found there is convergence around a requirement for systems to be designed and implemented to allow for human oversight through the “translation of their operation into intelligible outputs”. In other words, transparency requires the ‘translation’ of an operation undertaken by an ADM system into something that the average person can understand. Without this, there can be no democratic consensus-building or accountability. Another plank of meaningful transparency is the ability to test explanations of what an algorithmic tool is doing. In our view, meaningful transparency requires that people lacking specific technical expertise—i.e., the vast majority of us—are able to understand and test how an algorithmic tool works.[24]
16. Against this definition of transparency, the ATS falls short. Neither Tier 1 nor Tier 2 of the ATS requires sufficient operational details for individuals properly to understand the decision-making process to which they are subjected. At Tier 1, organisations are asked to explain ‘how the tool works’, but nowhere is there a reference to any criteria or rules used by simpler algorithmic tools. At Tier 2, a ‘technical specification’ is requested, but this appears to mean nothing more than a brief descriptor of the type of system used, e.g., ‘deep neural network’.[25]
17. We propose the following measures for improving transparency:
To what extent is the legal framework for the use of AI, especially in making decisions, fit for purpose?
18. The current legal framework for the use of AI and ADM systems is fragmented. Few existing laws are AI-specific. Nonetheless, the existing legal framework contains a number of crucial safeguards which, even if they were not created with AI in mind, can be interpreted to regulate its use. Laws regulating the use of ADM systems include: the Freedom of Information Act 2000 (FOIA 2000); the Data Protection Act 2018 (DPA 2018) and UK General Data Protection Regulation (UK GDPR); public law doctrines developed through the common law, such as the duty to give reasons; the Equality Act 2010 (EA 2010); and the Human Rights Act 1998 (HRA 1998) and European Convention on Human Rights (ECHR), particularly article 8, which protects the right to private and family life, home and correspondence, and article 14, which ensures equal enjoyment of Convention rights and protection from discrimination. In what follows, we give some additional detail on the requirements of the FOIA 2000, DPA 2018 and UK GDPR, and the EA 2010:
We think it is vital that data subject access requests are free of charge. This is because the ability of an individual to access their own data is a fundamental right. Moreover, some protected characteristics, including race, sex (in the case of single mothers), and disability, are associated with an increased risk of poverty.[29] The more protected characteristics someone holds, the greater their statistical risk of poverty.[30] As explained above, many known ADM systems, such as immigration enforcement systems and welfare fraud and error detection systems, appear to have a disproportionate effect on these groups. It is especially important that they have adequate, accessible options for finding out about the ADM systems to which they may have been subjected.
In our experience, it appears that Article 22 is capable of placing a meaningful limit on the deployment of ADM systems. For example, in response to PLP’s request for information about Her Majesty’s Prison and Probation Service’s (HMPPS) use of algorithmic decision-making, a representative said: “I hope it is helpful if I explain that HMPPS does not, and will not use computer generated algorithms to automate or replace human decisions in HMPPS that have any significant impact on staff, people in prison or on probation. We may seek to automate low-level administrative decisions but this will always be deployed with human oversight and stringent quality assurance measures. We do not have any examples where ‘a computer automatically performs all of the decision-making process without a human's direct involvement’”.
This suggests that at least some Government departments may be interpreting Article 22 broadly, so as to meaningfully restrict the role of automation in decision-making.
Nonetheless, PLP considers that Article 22 could be made more effective through clarification of its key terms – especially “a decision based solely on automated processing” – to ensure that it has broad practical application. Article 22, properly defined, should prohibit de facto solely automated decision-making where, due to automation bias[31] or for any other reason, the human official is merely rubber-stamping a score, rating or categorisation determined by the computer. It should require meaningful human oversight, rather than a token gesture.
However, the EA 2010 is not solely concerned with fairness on a statistical level but with the treatment of individuals. Dee Masters and Robin Allen KC have given the following example: “Suppose a situation in which a recruitment tool is used to identify 10 candidates for a particular role. There are 1000 applicants, 300 are men and 700 are women. “Outcome fairness” might be used to dictate that 30% of people identified as suitable candidates for the role must be men and 70% must be women meaning that the final recommended pool should consist of 3 men and 7 women… if there were 8 women who were most suitable, one woman would need to be “held back” so that 3 men could be put forward and the “right” statistical outcome achieved.”[36] This example, judged by the standard of ‘outcome fairness’ alone, may seem unproblematic. But, under section 13 of the EA 2010, the woman who is “held back” is subject to direct discrimination on the basis of sex. We should not lose sight of the unfairness of direct discrimination, simply because we are in an ADM context. In PLP’s view, the courts and their interpretation of all the requirements of the EA 2010 have a valuable role to play in assessments of fairness in the context of ADM. We agree with Dee Masters and Robin Allen KC that the DPA 2018 and UK GDPR should not be “siloed” off from the EA 2010. We endorse their recommendation that the DPA 2018 and UK GDPR should be amended to state “unequivocally, and without any exceptions, that data processing which leads to breaches of the EA 2010 is unlawful”.[37]
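To make the arithmetic in the example above concrete, the following short Python sketch uses invented scores, arranged so that eight of the ten highest-scoring applicants are women, and shows how enforcing the 3/7 ‘outcome fairness’ quota forces the eighth-ranked woman to be held back in favour of a lower-scoring man. It is a toy illustration of the scenario quoted above, not a model of any real recruitment tool.

```python
# Toy, purely illustrative sketch of the Masters/Allen recruitment example
# quoted above: 1,000 applicants (300 men, 700 women) are scored and 10 must
# be shortlisted. The scores are invented so that 8 of the 10 highest belong
# to women; enforcing a 3/7 "outcome fairness" quota then forces the
# eighth-ranked woman to be held back in favour of a lower-scoring man.
applicants = []
applicants += [("woman", 100 - i) for i in range(8)]         # top 8 scores: women
applicants += [("man", 92 - i) for i in range(3)]            # next 3 scores: men
applicants += [("woman", 80 - i * 0.1) for i in range(692)]  # remaining women
applicants += [("man", 70 - i * 0.1) for i in range(297)]    # remaining men

by_score = sorted(applicants, key=lambda a: a[1], reverse=True)

merit_shortlist = by_score[:10]                       # shortlist purely on score
top_men = [a for a in by_score if a[0] == "man"][:3]
top_women = [a for a in by_score if a[0] == "woman"][:7]
quota_shortlist = top_men + top_women                 # enforce the 3/7 quota

held_back = [a for a in merit_shortlist if a not in quota_shortlist]
print("Women in the top 10 by score:",
      sum(1 for sex, _ in merit_shortlist if sex == "woman"))   # prints 8
print("Held back to meet the quota:", held_back)                # the 8th-ranked woman
```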
19. In summary, the existing legal framework provides a patchwork of vital, albeit imperfect, safeguards. If the legal framework were to be reformed, the focus should be on fortifying existing safeguards and ensuring clarity and coherence between existing laws.
20. Furthermore, it will be necessary to ensure that there are quick and effective ways of enforcing existing rights. At PLP’s roundtables, one common view was that there is no need for new digital rights, but there is a need for effective enforcement mechanisms for affected individuals.
How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?
21. As mentioned above, we think that a compulsory algorithmic transparency regime should be operated by an independent regulator, such as the ICO or, potentially, a dedicated AI regulatory body. If the ICO were to carry out this function, it would be important to ensure that it was equipped with the necessary technical expertise and sufficient funding.
22. More generally, we think that AI regulation must be based on the following principles: anti-discrimination; reflexivity; respect for privacy and data rights; meaningful transparency; accountability; and avenues for redress.
23. In practice, the realisation of these principles – especially the principles of anti-discrimination and respect for privacy and data rights – may require the prohibition of certain types or uses of AI technologies (see further below, ‘What lessons, if any, can the UK learn from other countries on AI governance?’ para.19).
24. Further, it will be vital to create quick and effective avenues for redress for affected individuals. One way to do this could be through a specialist regulator and forum for complaints relating to Government use of AI. However, roundtable attendees were concerned that a specialist forum may not be accessible for affected individuals. Another option could be sector- or system-specific avenues for redress. For example, if welfare benefits are suspended through the use of an automated system, an affected individual should be specifically informed that an automated system was used in the decision-making process and there should be a dedicated complaints procedure if they suspect unfairness.
What lessons, if any, can the UK learn from other countries on AI governance?
25. It is worth considering the EU Commission’s proposed artificial intelligence (AI) regulation, adopted on 21 April 2021. The proposed regulation would include:
26. In our view, there is merit in some of these proposals. First, the proposal that the public register of AI systems be managed by the EU Commission, rather than by providers or users of the systems in question: as articulated above, we consider an independent regulator to be an effective aspect of a compulsory algorithmic transparency regime in the UK. Second, the proposal that the design of AI systems must allow for effective human oversight. If a similar principle were incorporated into the UK’s regulation of AI, it would encourage a broader practical application of Article 22, prohibiting de facto solely automated decision-making where, due to automation bias or for any other reason, the human official merely rubber-stamps the computer’s output, and requiring meaningful human oversight rather than a token gesture. Third, the proposal to ban particularly problematic types or uses of AI to protect against specific harms to individuals and communities. However, we do not believe the UK should follow suit in devising an exhaustive list of prohibitions, as such a list may allow loopholes to develop or may quickly become outdated in light of new and unforeseen technological developments. Many attendees at our roundtables considered that a more fruitful approach may be the prohibition of harmful uses, rather than types, of new technologies.
27. Whilst offering many positive examples, it is also important to acknowledge the limitations of the EU AI regulation proposal. For example, amongst attendees at our roundtable events, there was widespread concern about using ‘high risk’ as a touchstone for regulation and, in particular, as the touchstone for compulsory transparency. This is due to the difficulty of coming up with a satisfactory definition of ‘high risk’ and the potential for the threshold requirement to be manipulated or abused, such that AI systems that should be made transparent remain opaque. In the proposed EU regulation, ‘high risk’ is defined with reference to a list of functions set out at Annex III. The list includes AI systems intended to be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons, and AI systems intended to be used by law enforcement authorities for making individual risk assessments of natural persons in order to assess the risk of offending or reoffending, or the risk to potential victims of criminal offences. This list can be added to by the EU Commission (by virtue of Article 7), but there is no “catch all” provision. The worry is that the list may quickly become outdated as new technologies and new uses of technologies emerge, and that it will be too slow and cumbersome to amend. Similar concerns were raised by attendees at our roundtables with regard to a closed list of prohibitions, which may allow loopholes to develop or quickly become outdated in light of new and unforeseen technological developments.
28. In Canada, the Directive on Automated Decision-Making (DADM) has been in effect since 1 April 2019, requiring compliance as of 1 April 2020. The Directive seeks to make data and information on the use of ADM systems in federal institutions available to the public. The Directive creates a number of mandatory requirements:
29. In our view, the DADM offers positive examples of AI governance in the public sector – particularly the compulsory requirement for all system operators to complete an algorithm-specific impact assessment and release the results on the Open Government Portal, in an accessible format and in both official languages. As set out above, we believe the UK ATS could be more effective if participation was compulsory, as it is under the Canadian regime. The DADM also sets a positive standard in its requirement to inform individuals that a decision will be undertaken in whole or in part by an ADM system before the decision is made, and to provide explanations in advance of how a decision will be made. Other positive features include the requirement to test the data and information used by the ADM system for unintended data biases or other factors that may unfairly impact outcomes (and to continually monitor outcomes against these same standards), and the requirement of human intervention in decision-making.
30. That being said, there are limitations to the Canadian regime. First, the Directive is very limited in scope – it only regulates systems in the Federal Government and federal agencies. It does not apply to systems used by provincial governments, municipalities, or provincial agencies such as police services, child welfare agencies and/or many other important public institutions. Nor does the Directive apply to private sector systems. Further still, requirements apply only to ‘external services’ of federal institutions. ADM systems used internally by federal institutions fall outside the scope of the DADM - a significant gap when one considers the expanding use of ADM systems in the employment context. The UK must consider this shortfall of the Canadian system to ensure that the scope of its own AI regulation is sufficiently broad and places a mandatory requirement on all public sector organisations.
31. Second, the Directive applies only to systems used to “recommend or make an administrative decision about a client”. Canadian professor Teresa Scassa has emphasised that “there may be many more choices/actions that do not formally qualify as decisions and that can have impacts on the lives of individuals or communities” which would fall outside the scope of the Directive and “remain without specific governance”. The UK must take note of this limitation and ensure that its regulation of AI is not drafted so narrowly as to leave many forms of automation without specific governance.
32. Lessons may also be learnt by looking to France’s Loi pour une République numérique (Law for a Digital Republic) 2016.[39] Under this mandatory regime, Article L.312-1-3 mandates transparency of all algorithms and ADM systems used by public agencies. The law amends France’s Administrative Code to include a right to an explanation of algorithmic decision-making. All agencies are required to publicly list any algorithmic tools they use and to publish their rules, including for systems where AI is only part of the final decision.
33. Like the Canadian DADM, the French regime requires administrations implementing ADM systems to provide notice that a decision is made or supported by an algorithm, but it goes further by requiring this of all ADM systems within the scope of the Loi pour une République numérique, not only those seen as high-risk. Administrations are also required to publish the rules on which the ADM system operates, as well as the purpose of such processing.[40] Further still, if requested by the person concerned, the implementing authority must also disclose the extent to which the algorithm contributed to the decision-making process, the data processed and their sources, and the processing criteria and their weighting.[41]
Conclusion
34. To a large extent, the existing legal framework for the governance of AI is fit for purpose. It provides a patchwork of vital, albeit imperfect, safeguards, including public law doctrines developed through the common law, such as the duty to give reasons. The law that makes up this patchwork includes the FOIA 2000, DPA 2018, UK GDPR, EA 2010, HRA 1998 and the ECHR, particularly Articles 8 and 14. If the legal framework were to be reformed, the focus should be on fortifying existing safeguards, ensuring clarity and coherence between existing laws, and guaranteeing quick and effective ways of enforcing individual rights.
35. However, we consider the practical governance of AI, and specifically ADM, within this legal framework to be less effective. In our view, the law already requires transparency in Government use of automation. However, in practice, public bodies operating tools with a significant social impact, such as recommending who is to be investigated before they are allowed to marry, or whose benefits should be suspended, adopt an approach of secrecy by default.
36. While transparency is far from sufficient to secure the fair and lawful use of new technologies, PLP considers it to be a vital first step. Without transparency, there can be no evaluation of whether systems are working reliably, efficiently, and lawfully, including assessment of whether or not they unlawfully discriminate. Without the necessary evaluations, there can be no accountability when automated decision making goes wrong or causes harm. Nor can there be democratic consensus-building about the legitimate use of new technologies.
37. In our view, securing meaningful transparency should be the first port of call when considering the regulation of AI. In this regard, inspiration could be drawn from other jurisdictions with compulsory transparency regimes, such as Canada and France.
(November 2022)
[1] See further D. Spiegelhalter, ‘Should We Trust Algorithms’, Harvard Data Science Review, Issue 2.1, Winter 2020. This paper was referenced by the Centre for Data Ethics and Innovation in its report ‘Review into bias in algorithmic decision-making’, November 2020.
[2] Three of the eight criteria have been disclosed in an Equality Impact Assessment received by PLP in response to a Freedom of Information request, however we consider that this disclosure is not sufficient in helping individuals understand how the algorithm is operating.
[3] N Diakopoulos, ‘Accountability in algorithmic decision making’ Communications of The Acm, February 2016, Vol. 59, No. 2
[4] Ignacio Cofone and Katherine J Strandburg, ‘Strategic Games and Algorithmic Secrecy’ (2019) 64:4 McGill LJ 623.
[5] See Justice and Home Affairs Committee, ‘Technology rules? The advent of new technologies in the justice system’ (30 March 2022), available at: https://committees.parliament.uk/publications/9453/documents/163029/default/.
[6] UNCHR, ‘Report of the Special rapporteur on extreme poverty and human rights’ (11 October 2019) UN Doc A/74/493, available at https://undocs.org/A/74/493.
[7] NJCM v The Netherlands C-09-550982-HA ZA 18-388.
[8] UNCHR, ‘Report of the United Nations High Commissioner for Human Rights: The right to privacy in the digital age’ (13 September 2021), UN Doc A/HRC/48/31 available at https://www.ohchr.org/EN/HRBodies/HRC/RegularSessions/Session48/Documents/A_HRC_48_31_AdvanceEditedVersion.docx.
[9] Information about the General Data Matching Service can be found, for example, in the DWP’s ‘Fraud Investigations Staff Guide - Part 1’, pages 135-6, available at https://www.gov.uk/government/publications/fraud-investigations-staff-guide.
[10] Information about the National Fraud Initiative is available at https://www.gov.uk/government/collections/national-fraud-initiative.
[11] R (Bridges) v Chief Constable of South Wales Police and others [2020] EWCA Civ 1058.
[12] See the expert report of Dr Anil Jain, available at https://www.libertyhumanrights.org.uk/wp-content/uploads/2020/02/First-Expert-Report-from-Dr-Anil-Jain.pdf.
[13] R(Bridges) v Chief Constable of South Wales Police [2020] EWCA Civ 1058, [201].
[14] See 'We won! Home Office to stop using racist visa algorithm', JCWI Latest News blog, available at https://www.jcwi.org.uk/news/we-won-home-office-to-stop-using-racist-visa-algorithm.
[15] See for example Jordan Hayne and Matthew Doran, ‘Government to pay back $721m as it scraps Robodebt for Centrelink welfare recipients’ (29 May 2020) ABC News, available at https://www.abc.net.au/news/2020-05-29/federal-government-refund-robodebt-scheme-repay-debts/12299410. The Australian Government established the Royal Commission into the Robodebt Scheme on 18 August 2022 and their final report is due in April 2023. Information about the Royal Commission is available at https://robodebt.royalcommission.gov.au/.
[16] See: https://gordonlegal.com.au/robodebt-class-action/
[17] See, for example, L.J. Skitka and others, ‘Does automation bias decision-making?’(1999) 51 International Journal of Human-Computer Studies 991.
[18] Independent Chief Inspector of Borders and Immigration, ‘An inspection of entry clearance processing operations in Croydon and Istanbul: November 2016 – March 2017’ (July 2017) at 3.7, 7.10 and 7.11, available at https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/631520/An-inspection-of-entry-clearance-processing-operations-in-Croydon-and-Istanbul1.pdf.
[19] Independent Chief Inspector of Borders and Immigration, ‘An inspection of the Home Office’s Network Consolidation Programme and the “onshoring” of visa processing and decision making to the UK September 2018 – August 2019’ (February 2020) at 7.26, available at https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/863627/ICIBI_An_inspection_of_the_Home_Office_s_Network_Consolidation_Programme.pdf.
[20] A Le Sueur ‘Robot Government: automated decision-making and its implications for parliament’ in A Horne and A Le Sueur (eds) Parliament: Legislation and Accountability (Oxford: Hart Publishing, 2016).
[21] R(Bridgerow Ltd) v Cheshire West and Chester Borough Council [2014] EWHC 1187 (admin) at para 36.
[22] [2017] UKFTT 830 (TC)
[23] Jessica Fjeld, Nele Achten, Hannah Hilligoss, Adam Nagy and Madhulika Srikumar, ‘Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI’, Berkman Klein Center Research Publication No. 2020-1 (15 January 2020), available at https://ssrn.com/abstract=3518482 or http://dx.doi.org/10.2139/ssrn.3518482.
[24] Mia Leslie and Tatiana Kazim, ‘Executable versions: an argument for compulsory disclosure (part 1)’ (The Digital Constitutionalist, 03 August 2022), available at: https://digi-con.org/executable-versions-part-one/.
[25] See also Tatiana Kazim and Sara Lomri, ‘Time to let in the light on the Government’s secret algorithms’ (Prospect, 2 March 2022), available at https://www.prospectmagazine.co.uk/politics/time-to-let-in-the-light-on-the-governments-secret-algorithms; and Mia Leslie and Tatiana Kazim, ‘Executable versions: an argument for compulsory disclosure (part 1)’ (The Digital Constitutionalist, 3 August 2022), available at https://digi-con.org/executable-versions-part-one/.
[26] Mia Leslie and Tatiana Kazim, ‘Executable versions: an argument for compulsory disclosure (part 2)’ (The Digital Constitutionalist, 03 November 2022), available at: https://digi-con.org/executable-versions-part-two/.
[27] FNV, Lighthouse Reports, Argos, and NRC news’ reconstruction of the Dutch ‘Fraud Scorecard’ algorithm (14 July 2022), available at: https://www-fnv-nl.translate.goog/nieuwsbericht/sectornieuws/uitkeringsgerechtigden/2022/07/ben-jij-door-je-gemeente-mogelijk-als-fraudeur-aan?_x_tr_sl=auto&_x_tr_tl=en&_x_tr_hl=en-US&_x_tr_pto=wapp
[28] Gabriel Geiger and others, ‘Junk Science Underpins Fraud Scores’ (Lighthouse Reports, 22 June 2022), available at: https://www.lighthousereports.nl/investigation/junk-science-underpins-fraud-scores/.
[29] See, for example, Sara Davies and David Collings, ‘The Inequality of Poverty: Exploring the link between the poverty premium and protected characteristics’ (February 2021), University of Bristol Personal Finance Research Centre, available at https://fairbydesign.com/wp-content/uploads/2021/02/The-Inequality-of-Poverty-Full-Report.pdf.
[30] Ibid.
[31] See, for example, L.J. Skitka and others, ‘Does automation bias decision-making?’ (1999) 51 International Journal of Human-Computer Studies 991.
[32] See ‘ICO fines facial recognition database company Clearview AI Inc more than £7.5m and orders UK data to be deleted’ (23 May 2022), available at https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2022/05/ico-fines-facial-recognition-database-company-clearview-ai-inc/.
[33] R (Bridges) v Chief Constable of South Wales Police and others [2020] EWCA Civ 1058.
[34] R (Bridges) v Chief Constable of South Wales Police and others [2020] EWCA Civ 1058.
[35] R (Bridges) v Chief Constable of South Wales Police and others [2020] EWCA Civ 1058 at [201].
[36] Robin Allen KC and Dee Masters, Joint second opinion in the matter of the impact of the proposals within “Data: a new direction” on discrimination under the Equality Act 2010, available at https://research.thelegaleducationfoundation.org/wp-content/uploads/2021/11/TLEF-Second-Opinion-5-November-2021.pdf.
[37] Robin Allen KC and Dee Masters, Joint second opinion in the matter of the impact of the proposals within “Data: a new direction” on discrimination under the Equality Act 2010, available at https://research.thelegaleducationfoundation.org/wp-content/uploads/2021/11/TLEF-Second-Opinion-5-November-2021.pdf.
[38] R (Bridges) v Chief Constable of South Wales Police and others [2020] EWCA Civ 1058.
[39] Law No. 2016-321 of 7 October 2016 for a Digital Republic <https://www.legifrance.gouv.fr/loda/id/JORFTEXT000033202746/> accessed 25 May 2022.
[40] Article L.312-1-3, CRPA.
[41] Article R.311-3-1-2, CRPA.