Archie Drake & Perry Keller, King’s College London — Written evidence (NTL0011)
Thank you for the opportunity to submit evidence to this Inquiry. We are making this submission on an individual basis, drawing on recent professional experience.
Our submission is based on a research investigation undertaken over the first half of 2021 in partnership with the British Institute of International and Comparative Law (BIICL). The research sought views on recent legal contestation of automated decision-making, including artificial intelligence, across various sectors including criminal justice, from civil society activists, legal practitioners and academics in the UK. A public panel event in February established the scope of work, which was then investigated in full through a ‘policy lab’ expert workshop in May involving 40 participants and co-chaired by a network of researchers from across the UK. The workshop findings are under peer review, with the objective of publication before the end of 2021.
This work was funded by the Engineering & Physical Sciences Research Council under the Trust in Human-Machine Partnership (THuMP) project (EP/R033722/1).
Question 1. Do you know of technologies being used in the application of the law? Where? By whom? For what purpose?
1. Policymakers have tended to neglect legal actions as a source of evidence on the issues involved in automating decision-making processes. Examples identified in our investigation as relevant to the application of the law in a criminal justice context included:
2. It was also apparent from examples encountered during our investigation that there are serious concerns about uses of technology in the application of the law in the field of immigration. Notable legal actions included:
Question 2. What should new technologies used for the application of the law aim to achieve? In what instances is it acceptable for them to be used? Do these technologies work for their intended purposes, and are these purposes sufficiently understood?
3. Those making decisions about any new technology used for the application of the law should ensure that the technology itself complies with applicable law. If the law is unclear in relation to a new technology, either in substance or from the perspective of specific decision-makers, then clarification should be sought before proceeding to use. The law may require or authorise certain legal obligations to be balanced with other legitimate objectives, but in no case is it acceptable for uses of technology to disregard potentially applicable law or to consider lack of legal clarity as an appropriate context in which to proceed.
4. With the passage of time, understanding has improved of the various purposes intended for relevant technologies and of the often-unintended ways in which they fail to meet legal standards. Four related bodies of law are now considered most relevant:
Question 3. Do new technologies used in the application of the law produce reliable outputs, and consistently so? How far do those who interact with these technologies (such as police officers, members of the judiciary, lawyers, and members of the public) understand how they work and how they should be used?
5. Our investigation suggested that the biggest problem in terms of reliability is poor quality in the technology products and services used by public bodies, rather than issues inherent to the technologies themselves. Third-party vendors over-claim system capabilities for commercial advantage and fail to flag the need to invest sufficiently in high-quality systems or data management. It is all too rare that authorities decline to adopt technology because of quality concerns, as in the West Midlands NDAS example given above. More often, the issues emerge later in the form of large-scale injustices, as in the Post Office and Home Office English language examples.
6. The question of how far people understand these technologies when interacting with them is very difficult to answer, not least because of significant variation among the people involved and in the nature of their interactions. For example, our investigation suggested that judges’ application of the law to facts involving technology is perceived to be uneven, especially in the lower courts as they interpret principles established at superior levels.[8] It also suggested that the public have become increasingly familiar with rogue algorithms and other ‘computer says no’ frustrations, but that there is very little systematic governance attention to these burdens on the public or to related complaints or disputes.
7. There are, however, also indications that understanding relevant to public policy is beginning to advance. Our investigation suggested three insights in particular:
Question 4. How do technologies impact upon the rule of law and trust in the rule of law and its application? Your answer could refer, for example, to issues of equality. How could any negative impacts be mitigated?
8. A vicious cycle of trust is now apparent in legal contestation of relevant technologies in the UK. A proliferation of legal actions and a growing community of committed professionals and advocacy organisations are exposing weaknesses in the implementation of government technology policy, driving processes for building confidence and trust in technology in an unintended direction. This exacerbates the already low level and negative trend in public trust in relevant technology.[10]
9. The questions of impacts on the rule of law, and on trust in the rule of law, are more difficult to answer. On the one hand, growing legal contestation may tend to demonstrate the relevance of law to technological change and its effectiveness as a principles-based constraint on power. On the other, policy may tend to undercut any such institutional development if government continues to authorise, tacitly or carelessly as well as explicitly or deliberately, technology implementations that are of poor quality, disregard the law, exploit legal uncertainty and/or evade public disclosure altogether.
10. Unhelpfully from a trust perspective, government has used legal actions to refine authorisations for uses of technology (for example, on AFR, the Surveillance Camera Commissioner responded quickly to Bridges by issuing guidance explicitly aimed at supporting its use by forces).
11. The government’s own AI Council advisory group has observed that the government's strategic direction on AI is ‘plagued by public scepticism and lack [of] widespread legitimacy’ in ways that suggest a ‘fundamental mismatch between the logic of the market and the logic of the law’.[11]
Question 5. With regards to the use of these technologies, what costs could arise? Do the benefits outweigh these costs? Are safeguards needed to ensure that technologies cannot be used to serve purposes incompatible with a democratic society?
12. Our investigation suggested that the risk of harm derives mainly from operator misuse, confusion or incompetence rather than from inherent qualities of the technology. The main sources of risk of harm are as follows:
13. It is certainly possible that these harms, if allowed to develop unchecked, may prove incompatible with democratic society. Two safeguards are needed to mitigate this risk:
Question 6. What mechanisms should be introduced to monitor the deployment of new technologies? How can their performance be evaluated prior to deployment and while in use? Who should be accountable for the use of new technologies, and what accountability arrangements should be in place? What governance and oversight mechanisms should be in place?
14. The first and most important mechanism that should be introduced to monitor the deployment of new technologies is broader public awareness. The Committee on Standards in Public Life has observed that ‘the government is failing on openness’ and the ‘lack of transparency is particularly pressing in policing and criminal justice’.[14] At a minimum, relevant authorities should be obliged to disclose which new technologies are in use (for example, through an annual procurement information return compiled and published by a central authority, or by obliging prior publication of and consultation on DPIAs).
15. Performance evaluation and advisory oversight functions should be independent of implementing organisations, as in the ‘West Midlands’ example, with particular attention to the mobilisation of technological expertise and robust ethical standards (including diverse representation from local communities as well as legal inputs to support compliance). In particular, the protection of commercial confidentiality and trade secrets in the use of new technologies by public authorities needs robust, independent mechanisms of evaluation and oversight to ensure that resort to this legitimate ground for confidentiality is not abused.
16. Accountability arrangements need to start at ministerial level and extend to the general policy environment established for relevant sections of the criminal justice system. The problems described above start with ministers and the policy ‘signals’ they send. The Secretary of State for the Home Department, the Lord Chancellor and Secretary of State for Justice, and the Minister for Crime and Policing should be called upon to articulate how the government’s vision of technological change in the system safeguards its effectiveness and legitimacy.
17. There are three essential governance mechanisms that should be in place:
Question 7. How far does the existing legal framework around new technologies used in the application of the law support their ethical and effective use, now and in the future? What (if any) new legislation is required? How appropriate are current legal frameworks?
18. The main problem is compliance with the existing legal framework, not the framework itself. Many of the participants in our investigation considered that government policy is irresponsible in that it has not sought genuine engagement with the issues. Criminal justice organisations, and police forces especially, have a reputation in the field for uncritical technology implementation encouraged by permissive central policy. Regulators are chronically under-resourced and are effectively deciding not to enforce relevant law where there are political sensitivities. Rather than encouraging innovation, legal uncertainty tends to harm business and innovation as well as public trust in the criminal justice system (and technology).
19. Two specific new pieces of legislation might be appropriate. First, it would be appropriate to legislate to ban clearly harmful or high-risk applications of technology (such as AFR) where these are not accompanied by robust arrangements to improve public accountability.[15] Second, it would be appropriate to introduce a measure firmly aimed at establishing improved arrangements for transparency and public participation in setting standards for relevant uses of technology (eg the ‘Public Interest Data Bill’ proposal mentioned above).
Question 8. How can transparency be ensured when it comes to the use of these technologies…?
20. As part of broader research project work focused on fostering trust in human-machine interactions,[16] our investigation highlighted the value of making transparency and other legally relevant standards an integral part of the technology design process, so that these standards have a meaningful presence in subsequent applications.
21. Proactively providing very clear, basic information about the existence of relevant technology systems is the best starting point for transparency. This accords with a recent survey for the CDEI stressing the need for ‘active, up-front communication that the algorithm is in use, to those affected’.[17]
2 September 2021
[1] Christie, J. (2020). The Post Office Horizon IT Scandal and the Presumption of the Dependability of Computer Evidence. Digital Evidence and Electronic Signature Law Review, 17, 49.
[2] Oswald, M. (2021). A Three-Pillar Approach to Achieving Trustworthy and Accountable Use of AI and Emerging Technology in Policing in England and Wales: Lessons From the West Midlands Model (SSRN Scholarly Paper ID 3812576). Social Science Research Network. https://doi.org/10.2139/ssrn.3812576
[3] National Audit Office. (2019). Investigation into the response to cheating in English language tests. https://www.nao.org.uk/wp-content/uploads/2019/05/Investigation-into-the-response-to-cheating-in-English-language-tests.pdf
[4] The Joint Council for the Welfare of Immigrants. (2020). We won! Home Office to stop using racist visa algorithm. https://www.jcwi.org.uk/News/we-won-home-office-to-stop-using-racist-visa-algorithm
[5] For example, in the government’s recent applications of technology to monitor behaviour relating to Covid-19 rules: Veale, M. (2020). Analysis of the NHSX Contact Tracing App ‘Isle of Wight’ Data Protection Impact Assessment. LawArXiv. https://doi.org/10.31228/osf.io/6fvgh; Joint Committee on Human Rights. (2020). The Government’s response to COVID-19: Human rights implications. https://publications.parliament.uk/pa/jt5801/jtselect/jtrights/265/26509.htm
[6] See for example Allen, R., & Masters, D. (2019). In the Matter of Automated Data Processing in Government Decision Making: Joint Opinion. https://www.cloisters.com/wp-content/uploads/2019/10/Open-opinion-pdf-version-1.pdf
[7] See for example the cases discussed in Maxwell, J., & Tomlinson, J. (2020). Public law principles and secrecy in the algorithmic state. https://www.lag.org.uk/article/207441/public-law-principles-and-secrecy-in-the-algorithmic-state
[8] Participants in our investigation referred in particular to the way in which principles established in Bridges were interpreted in cases such as The Motherhood Plan v HMT & HMRC [2021] EWHC 309 (Admin) and The 3million Ltd, R (On the Application Of) v Secretary of State for the Home Department [2021] EWHC 1159 (Admin)
[9] See for example The Law Society. (2019). Algorithm use in the criminal justice system report. https://www.lawsociety.org.uk/support-services/research-trends/algorithm-use-in-the-criminal-justice-system-report/
[10] Edelman. (2021). Trust Barometer Tech Sector Report. https://www.edelman.com/sites/g/files/aatuss191/files/2021-03/2021%20Edelman%20Trust%20Barometer%20Tech%20Sector%20Report_0.pdf
[11] AI Council. (2021). AI Roadmap. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/949539/AI_Council_AI_Roadmap.pdf
[12] As is widely noted, this issue has been a focus of attention for relevant debates in the United States. See for example Liu, H.-W., Lin, C.-F., & Chen, Y.-J. (2019). Beyond State v. Loomis: Artificial Intelligence, Government Algorithmization, and Accountability. International Journal of Law and Information Technology, 27. https://doi.org/10.1093/ijlit/eaz001
[13] Fussey, P., Davies, B., & Innes, M. (2021). ‘Assisted’ facial recognition and the reinvention of suspicion and discretion in digital policing. The British Journal of Criminology, 61(2), 325–344. https://doi.org/10.1093/bjc/azaa068
[14] The Committee on Standards in Public Life. (2020). Artificial Intelligence and Public Standards: Report. https://www.gov.uk/government/publications/artificial-intelligence-and-public-standards-report
[15] Williams, R. (2019). Accountability key to the adoption of surveillance technology. Oxford Law Faculty. https://www.law.ox.ac.uk/centres-institutes/centre-criminology/blog/2019/05/accountability-key-adoption-surveillance
[16] See for example Canal, G. et al. (2020). Building Trust in Human-Machine Partnerships. Computer Law & Security Review, 39, 105489. https://doi.org/10.1016/j.clsr.2020.105489
[17] BritainThinks. (2021). Complete transparency, complete simplicity. https://www.gov.uk/government/publications/cdei-publishes-commissioned-research-on-algorithmic-transparency-in-the-public-sector/britainthinks-complete-transparency-complete-simplicity