Written Evidence Submitted by CENTRIC
(GAI0043)
Organisational profile: CENTRIC
CENTRIC (Centre of Excellence in Terrorism, Resilience, Intelligence and Organised Crime Research) is a multi-disciplinary, end-user-focused centre of excellence located within Sheffield Hallam University. The global reach of CENTRIC links academic and professional expertise across a range of disciplines, providing unique opportunities to progress ground-breaking research. The strategic aim of CENTRIC is to facilitate triangulation among the four key stakeholders in the security domain: government, private industry, academia and the public. The mission of CENTRIC is to provide a platform for researchers, practitioners, policy makers and the public to focus on applied research in the security domain.
Reason for submitting evidence
CENTRIC is at the forefront of ongoing developments in AI governance, leading the development of a Framework for AI accountability in the Law Enforcement and Justice domain: AP4AI (Accountability Principles for AI). The initiative is co-led by Europol and supported by EU JHA agencies including EUAA, Eurojust, CEPOL and FRA. The ambition and impact of AP4AI is, however, much wider than the EU, engaging with experts and citizens worldwide. So far AP4AI has collected expert input from 28 countries (UK, USA, 22 EU Member States, Australia, Canada, Norway and Ukraine) and citizen input from 30 countries (UK, USA, Australia and 27 EU countries), on the basis of which it has developed universal principles and mechanisms to assess and ensure AI accountability for current and future applications of AI across the full lifecycle (i.e., research, design, procurement and deployment, as well as evidence of appropriate use). Further engagements are ongoing.
We believe that the approach, measures and knowledge created within this initiative can provide crucial insights to inform AI governance efforts in the UK, in accordance with the government’s Digital Strategy. AP4AI focuses on applications of Artificial Intelligence (AI) in the Law Enforcement and Justice domain; however, comparable considerations apply to all other sectors with potentially high-impact applications of AI.
The evidence provides information on the following three call questions, which we believe are of fundamental importance to the work of the Committee:
- How should decisions involving AI be reviewed and scrutinised in both public and private sectors?
- How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?
- What measures could make the use of AI more transparent and explainable to the public?
For full information on the initiative:
- AP4AI website: www.ap4ai.eu
- Reports: Akhgar et al. (2022a), AP4AI Report on Expert Consultations (https://www.ap4ai.eu/node/6); Akhgar et al. (2022b), AP4AI Framework Blueprint (https://www.ap4ai.eu/node/14); Bayerl et al. (2022), Citizen Perspectives on AI and Accountability (forthcoming at www.ap4ai.eu)
Funding information: AP4AI has not received specific funding from any public, private, governmental or non-governmental body.
Evidence: “Accountability for Artificial Intelligence in the Law Enforcement and Justice Domain: approach, instruments and conceptual guidance from the AP4AI project”
A. Accountability challenges addressed by AP4AI
AI deployments in the Law Enforcement and Justice domain rightly deserve heightened scrutiny given their high-impact nature. The challenge for both internal decision-makers in the domain and bodies of democratic oversight is how to capitalise on the new technological capabilities of AI, whose use derives from legitimate societal expectations and demands for safety and security, while at the same time assuaging societal concerns about the use of such technologies.
The central issue at the heart of AI deployments in the Law Enforcement and Justice sector is not whether, or which, AI should be used by the police and other law enforcement agencies, but rather how to ensure that their use of available technology, in all its forms and operational use cases, is accountable. In any jurisdiction it is usually clear where police accountability lies, but what its component parts consist of, and how accountability can be measured, reviewed and improved, is often far less defined.
The AP4AI (Accountability Principles for Artificial Intelligence) Project addresses this situation and offers a practice-oriented framework that supports Law Enforcement and Justice organisations, as well as policy developments at local, national and even global levels, in the accountable use of AI. For this purpose, the AP4AI Project created a comprehensive and validated Framework for AI Accountability for Policing, Security and Justice. The AP4AI Framework consists of two core elements: a set of 12 universal Accountability Principles, and guidance and instruments for their implementation in practice, most notably the AI Accountability Agreement (described in Sections C and D below).
AP4AI offers a step-change in the application of AI by the Law Enforcement and Justice community by defining a robust and application-focused Framework that integrates security, legal, ethical, technical and citizen perspectives towards the accountable design and deployment of AI.
B. Conceptualising Accountability as a core element of AI governance
We consider demonstrable AI Accountability a core element of AI governance: any use of algorithms and AI-based systems and platforms must be open to auditable scrutiny by, and responsive to, the relevant public and oversight authorities. This is especially the case in organisations which have security and justice as a core mandate, where demonstrable accountability is essential to a successful relationship with citizens. Indeed, in many instances establishing the necessary arrangements for democratic accountability, for example in the context of biometrics and surveillance, is a legal requirement.[1]
We argue for the primacy of accountability as the guiding framework for AI use in the Law Enforcement and Justice domain, as it is the only concept that binds organisations to citizen-enforceable obligations and thus provides a foundation with actionable procedures at its core.
Definition of Accountability used in AP4AI: “Accountability is the acknowledgement of an organisation’s responsibility to act in accordance with the legitimate expectations of stakeholders and the acceptance of the consequences – legal or otherwise – if they fail to do so. In this context liability or rather ‘answerability’[2] is the basis for meaningful accountability as it creates a foundation for the creators and users of AI to ensure that their products are not only legally fit for the legitimate purpose(s) in the pursuit of which they are used (attracting the appropriate claims for negligence or other breach of duty as fixed in law), but also invite scrutiny and challenge and accept the consequences of using AI in ways that their communities find morally or ethically unacceptable. There is further responsibility to ensure the avoidance of misuse and malicious activity in whatever form by both the relevant security practitioners and their contractors, partners and agents. AP4AI, by focusing on AI Accountability, is a framework designed to underscore the importance of legal, ethical and societal duties of responsible organisations using AI in a security context, which explicitly includes consequence for misuse and breaches in conduct.” (Source: Akhgar et al., 2022a)
In a law enforcement or security context, discussions of accountability tend to focus on police accountability towards citizens. Given the complexity and scale of the effects that security applications of AI have on individuals, communities, societies and organisations (Law Enforcement and others), not only at local and national levels but increasingly at a global level,[3] this is insufficient.
Instead, AP4AI’s work is informed by the conviction that all AI stakeholders (citizens, security practitioners, judiciary, policy makers, industry, academia, etc.) have to be active constituents in the accountability process, and that this process needs to be grounded in broad and sustained engagement.[4]
The innovative potential of AP4AI lies in establishing the extent, form and nature of accountability in relation to society (including the needs and legitimate expectations of individuals and specific groups), Law Enforcement and Justice organisations, and law and ethics, and in translating these into (a) overarching, universal principles, guided by fundamental human rights, for current and future AI capabilities of the internal security community, and (b) methods and instruments for their context-sensitive and adaptive implementation.
C. The AP4AI Accountability Principles
AP4AI conducted a broad expert consultation to define AI Accountability and to contextualise it for the Law Enforcement and Justice domain, engaging with subject-matter experts from law enforcement agencies, the judiciary, industry, ethics, civil society and academia in 28 countries (UK, USA, 22 EU Member States, Australia, Canada, Norway and Ukraine).
The contextualisation of AI Accountability is vital to ensure its applicability within diverse cultural, social and political domains, as well as to establish its specific form and nature in the Law Enforcement and Justice domain. More generally, contextualisation helps to establish which principles and implementation processes for AI Accountability are meaningful across operational and national contexts.
The expert consultations led to 12 universal principles that together define AI Accountability. The 12 principles, their full descriptions and the methodology for their development can be found in Akhgar et al. (2022a).
Further details: Akhgar et al. (2022a). AP4AI Report on Expert Consultations. https://www.ap4ai.eu/node/6
D. Guidance for the implementation of AI Accountability into practice
To be productive, AI Accountability needs to be translated into actionable steps. For this purpose, AP4AI created an evidence-based methodology for the implementation of the Accountability Principles and their adaptation to the disparate stakeholder groups in the AI ecosystem (i.e., Law Enforcement organisations, citizens, data protection officers, local or national policy makers, technology providers, researchers, etc.).
The translation into actionable steps and processes comprises two main instruments:
AI Accountability Agreement (AAA): AP4AI advocates for an AI Accountability Agreement (AAA) that identifies the relevant accountability provisions for each application of AI. While not a legal document or enforceable contract, the AAA commits parties to the approach that each will take towards a formal and implementable process for the application of the Accountability Principles to different uses of AI within the internal security domain.
“An AI Accountability Agreement (AAA) should be viewed as a social contract underpinned by legal obligations between internal security organisations and their stakeholders, including citizens, oversight bodies, suppliers, consumers of AI services (e.g., other agencies) and others, as applicable. The AAA can thus be understood as an implementation container or reference architecture, which drives implementation of the Principles in the practical and operational settings of internal security organisations. It hence serves as a mechanism to bring the abstract nature of the principles into the implementable environment of internal security organisations and their wider ecosystem (e.g., oversight bodies and government agencies).” (Source: Akhgar et al., 2022b)
The AAA clearly sets out and formalises four steps through which the AP4AI Principles help to identify the relevant accountability provisions for a given application of AI.
The AAA should be signed at an executive level and published as a formal decision to the relevant bodies.
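To illustrate how such an “implementation container” might be rendered operational, the sketch below shows one possible machine-readable representation of an AAA record, capturing the elements named above (parties, per-principle provisions, executive sign-off and publication). This is our own illustrative assumption, not a structure defined by the AP4AI Framework; all field names and example content are hypothetical.

```python
# Hypothetical sketch of an AAA as a structured record. Field names are
# illustrative assumptions, not part of the AP4AI Framework.
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class Provision:
    principle: str        # e.g. "Explainability" -- one of the 12 AP4AI principles
    obligation: str       # what the organisation commits to do
    oversight_body: str   # who scrutinises compliance with this provision

@dataclass
class AIAccountabilityAgreement:
    ai_application: str                              # the specific AI use case covered
    parties: List[str]                               # organisation, oversight bodies, suppliers, etc.
    provisions: List[Provision] = field(default_factory=list)
    signed_by: str = ""                              # executive-level signatory
    signed_on: Optional[date] = None
    published: bool = False                          # published as a formal decision

# Hypothetical usage:
aaa = AIAccountabilityAgreement(
    ai_application="Video analytics for protection of public spaces",
    parties=["Police force", "Independent oversight body", "Technology supplier"],
)
aaa.provisions.append(Provision(
    principle="Explainability",
    obligation="Document how the system reaches its outputs in a form citizens can understand",
    oversight_body="Independent oversight body",
))
```

A structured record of this kind would make the executive sign-off and publication steps auditable alongside the provisions themselves.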
Contextualisation for specific application cases: The AAA and AI Accountability Principles will be most effective if they are contextualised for an application domain. Contextualisation will ensure that AI Accountability can be assessed confidently for AI deployments in specific use cases. The areas chosen for contextualisation are: AI deployments against child sexual exploitation (CSE) and the categorisation of Child Sexual Exploitation Materials (CSEM); cyber-dependent crime; serious and organised crime activities, including cross-border issues; detection of harmful internet content, such as terrorist-generated internet content; protection of public spaces and communities; and investigation of terrorism-related offences (including countering violent extremism, CVE), with possible expansion to further areas.
AP4AI is currently conducting validation and contextualisation sessions with subject-matter experts, the results of which will be published throughout 2023.
Further details: Akhgar et al. (2022b). AP4AI Framework Blueprint. https://www.ap4ai.eu/node/14
E. Citizen validation and perspectives on AI regulation
The AI Accountability approach has been validated through a citizen consultation in 30 countries (UK, USA, Australia and 27 EU countries; 6,674 citizens in total). Current mechanisms of accountability were rated as too weak (26%[5]), too restrictive (8%) or not well understood (34%), compared with only 32% deeming them ‘just right’. The overwhelming majority of participants, however, confirmed that a universal mechanism to ensure accountable AI use by police is required (82%, compared with 3% who disagreed).
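As a minimal illustration of the population weighting described in footnote [5], the sketch below shows how country-level results might be combined so that each country contributes in proportion to its population rather than its sample size. The function and all figures are hypothetical placeholders, not the AP4AI methodology or data.

```python
# Minimal sketch of population weighting (cf. footnote [5]): each country's result
# contributes in proportion to its population share rather than its sample size.
# All figures below are invented placeholders, not AP4AI data.

def weighted_share(rates: dict, populations: dict) -> float:
    """Population-weighted share of respondents giving a particular answer."""
    total_pop = sum(populations.values())
    return sum(rates[c] * populations[c] / total_pop for c in rates)

rates = {"A": 0.30, "B": 0.20, "C": 0.40}    # per-country share rating mechanisms 'too weak'
pops = {"A": 67e6, "B": 10e6, "C": 83e6}     # country populations
print(f"{weighted_share(rates, pops):.1%}")  # -> 34.6%
```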
The AP4AI Principles received overwhelming confirmation, with between 75% and 87% of participants validating their importance. Of highest importance to citizens were Conduct (87%), Legality (86%) and Explainability (84%).
Participants were also asked which oversight bodies they would prefer for monitoring accountability and for redressing cases of AI misuse (multiple answers were possible); the detailed results are reported in Bayerl et al. (2022).
Further details: Bayerl et al. (2022). Citizen Perspectives on AI and Accountability. Shortly to be available online at www.ap4ai.eu
F. Relevance of AP4AI for AI Governance in the UK
Every public service already depends to some degree on AI. In administrative and mechanical settings, data has become commoditised, and every functioning organisation needs AI to manage that data. While this basic functionality is nowhere near the top level of deep learning or recursively self-improving machines, AI has become another utility, helping to manage iterative tasks efficiently at scale and speed and freeing up resources for other purposes.
Law enforcement is a strong context for the development of a robust AI Accountability concept and as such can act as an important starting point, as well as a test case, for an overarching AI Governance Framework. Insights from the AP4AI Project therefore offer well-tested conceptual and practical tools for a core tenet of AI Governance: AI Accountability.
In the context of policing by consent, as enjoyed in the UK, public perception of AI being deployed in support of decision-making is critical: there is a qualitative difference between a chief officer using AI to order new uniforms off the shelf and using it to order people off the streets. It is therefore important that the public can understand exactly which forms of AI are being used in their neighbourhoods, and in pursuit of what legitimate purpose. Beyond a minimally functional level, there are some high-risk areas in the police use of AI; examples include biometric surveillance functions, which raise legitimate concerns about their proper role, alongside a dearth of guidance and regulation as to how that role might be overseen and scrutinised.
AI-driven video analytics in particular have revolutionised the power of surveillance, which can now combine multiple images captured from a range of sources (CCTV, GoPros, dashcams, Ring doorbells, body-worn devices, etc.) to help the police understand what happened during an incident or investigation and to support a prosecution. AI has also enhanced the capabilities of others, including organised crime groups, hostile state actors and individual offenders. At the same time, deep suspicions exist about algorithmic decision-making: the potential for inherent bias, differential reliability across groups with protected characteristics, early experimentation with inferential algorithms purporting to interpret mood and intent, and even, in the case of actuarial systems, claims that AI can anticipate criminal conduct before it has happened.
Further, significant efforts are ongoing in the ‘training’ of algorithms, which means scanning as many manifestations of physiognomy as possible, including those of children and other ‘categorisations’ of intersectionality. How far people are even aware of these features and functions, in what they see as simply ‘cameras’ but are in fact powerful computers, is far from clear; yet these are essential considerations for any organs tasked with governance, assurance and accountability. Finally, the importance of having trusted technical partners has been highlighted by the concerns leading to the UK government’s ban on Chinese state-sponsored surveillance equipment.[6]
There are currently calls for the legislative framework governing biometrics to be revisited, not just as proposed within the government’s data reform consultation, but also in broader terms.[7]
The detailed insights from the subject-matter experts and members of the public convened in AP4AI can be part of this conversation. The AP4AI approach and instruments are intended to demonstrate accountability in a way that can be tested, by presenting available evidence against a carefully researched and accessible standard. In particular, the conceptualisation of AI Accountability as a process of mutual obligations across stakeholder groups, with citizens at its core, may form a fruitful and productive addition to current initiatives. Overall, we believe that the project insights have the potential to guide and inform legislative bodies in the UK in creating future-proof legislation and enforcement directives that are agnostic of particular technological developments and changes.
Responsible contact/evidence submitted by:
Prof Babak Akhgar OBE, Director of CENTRIC
Prof Fraser Sampson, UK Commissioner for the Retention and Use of Biometric Material and Surveillance Camera Commissioner
Prof P. Saskia Bayerl, CENTRIC
(November 2022)
[1] See, for example, Part 1 of the Police Reform and Social Responsibility Act 2011 in England and Wales.
[2] E.g., Duff, R. A. (2017). Moral and Criminal Responsibility: Answering and Refusing to Answer. Available at SSRN: https://ssrn.com/abstract=3087771 or http://dx.doi.org/10.2139/ssrn.3087771
[3] https://www.hrw.org/world-report/2022/autocrats-on-defensive-can-democrats-rise-to-occasion
[4] The decision of the Court of Appeal for England & Wales on 11 August 2020 serves to underscore the importance of this project. In R (on the application of Bridges) v Chief Constable of South Wales Police and Ors [2020] EWCA Civ 1058, the court identified the key legal risks and attendant community/citizen considerations in the police use of Automated Facial Recognition (AFR) technology in December 2017 and March 2018, and whether those deployments constituted a proportionate interference with Convention rights within Article 8(2) ECHR. The judgment emphasises the critical importance of LEAs having an “appropriate policy document” in place in order to be able to demonstrate lawful and fair processing of personal AFR data. Further, it emphasised that having “a sufficient legal framework” for the use of the AI system includes a legal basis that must be ‘accessible’ to the person concerned, meaning that it must be published and comprehensible, and it must be possible to discover what its provisions are. The measure must also be ‘foreseeable’, meaning that it must be possible for a person to foresee its consequences for them (R (on the Application of Catt) v Association of Chief Police Officers [2015] UKSC 9). Each of these elements is covered within this project.
[5] Reported results are weighted to account for disparities in population sizes across countries (population weights).
[6] https://www.dailymail.co.uk/news/article-11466071/Government-departments-ordered-STOP-installing-new-Chinese-security-cameras.html
[7] House of Lords Justice and Home Affairs Committee, Technology rules? The advent of new technologies in the justice system, 30 March 2022; Matthew Ryder QC, The Ryder Review: Independent legal review of the governance of biometric data in England and Wales, Ada Lovelace Institute.