Written Evidence Submitted by

Dr Elena Abrusci, Brunel University London,[1] and Dr Richard Mackenzie-Gray Scott, Bonavero Institute of Human Rights and St Antony’s College, University of Oxford[2]

(GAI0038)

2.   This submission addresses the following question:

(i)  How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?

Summary

3.    Our evidence does not seek to make any value judgment on whether automated decision-making systems that may be considered Artificial Intelligence (AI) should be used. Instead, we highlight opportunities for better implementing existing legal frameworks within the UK to engage with the use of such systems, and offer recommendations for regulatory development.

4.   We recommend that regulatory bodies be granted further powers and resources to oversee the design and use of AI systems by providers and deployers in the public and private sectors. In particular, we consider it crucial that ex ante and post-audit impact assessments be submitted regularly to independent monitoring bodies so as to inform those bodies of the risks, impacts and potential harms of specific AI systems. The content of these assessments should take into consideration the substance derived from the Data Protection Act 2018, the Equality Act 2010, and the Human Rights Act 1998. The law on data protection, equality, and human rights should be used to determine whether and how AI systems should be designed and used in a particular domain.

5.  In addition, we recommend that providers and deployers of AI systems have in place complaints procedures to address claims by individuals and groups who may have been adversely affected by such systems, in addition to providing mechanisms for effective human oversight. These mechanisms should be bolstered by independent public institutions overseeing their effectiveness and compliance with data protection, equality, and human rights law. Individuals' and groups' access to remedy should not be limited to avenues provided by the judicial branch of the State.

How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?

6.   Any regulation of AI should be grounded in data protection, equality, and human rights law. The UK Government's National AI Strategy[3] and policy on 'Establishing a pro-innovation approach to regulating AI'[4] should be aligned with these legal frameworks. Regulation of AI and innovation are not in conflict, and it would be misleading to present this relationship as a binary trade-off. Economic growth and innovation can be fostered by regulation that offers meaningful and practical protection against the risks, impacts and potential harms of AI.

7.  Public bodies should play a central role in ensuring that the existing legal frameworks enshrining data protection, equality, and human rights are implemented by providers and deployers of AI systems. In particular, the Information Commissioner's Office (ICO), the Equality and Human Rights Commission (EHRC), and the Joint Committee on Human Rights (JCHR) should be granted further powers to oversee the design and use of AI systems. These three bodies should also coordinate their scrutiny and ensure that harmonised proposals are offered to providers and deployers of AI systems so as to minimise the risks, impacts and potential harms to individuals and groups.

8.  As AI impacts individuals, groups, and society across several sectors and regulators’ remits, a holistic approach to its governance and oversight is needed. The Digital Regulation Cooperation Forum (DRCF) has the potential to be a key actor in coordinating and carrying out comprehensive, cross-sector oversight of AI systems. Its workplan for 2022-2023 already lists algorithmic transparency among the key challenges to address.[5] Yet its focus should be broadened to include addressing the risks, impacts and potential harms arising from the design and use of AI, throughout the AI lifecycle.

9.   AI impacts the exercise and enjoyment of human rights, and the DRCF should play a more prominent role in ensuring the UK respects, protects, and fulfils its human rights obligations. The DRCF currently lacks a focused human rights component, which is a critical part of ensuring the design and use of AI that is safe and trustworthy. The EHRC included addressing the equality and human rights impacts of digital services and AI in its Strategic Plan 2022-2025.[6] Further steps should be taken to ensure that this body, along with the ICO and JCHR, has effective powers to do so.

10. Oversight carried out by regulators should itself be accountable to Parliament, to ensure respect for democratic accountability principles. The Select Committee on Artificial Intelligence and/or the Science and Technology Committee should establish dedicated procedures to review and scrutinise the oversight activities of the relevant regulators as they pertain to AI systems, including their engagement, or lack thereof, with data protection, equality, and human rights law.

11. While there is considerable substantive law across data protection, equality, and human rights law that is applicable to the design and use of AI, what is currently missing in the UK is a robust implementation of these applicable legal rules through procedural machinery that is capable of providing meaningful and practical protection to individuals and groups against the risks, impacts and potential harms of AI. Public and private providers and deployers of AI systems, working with regulators and other public oversight bodies, have an opportunity to develop related procedural safeguards so as to ensure the effective, safe, and legally compliant development of AI.

November 2022 


[1]   Lecturer in Law: elena.abrusci@brunel.ac.uk

[2] Postdoctoral Fellow. Research funded by the British Academy (grant no. BAR00550-BA00.01): richard.mackenzie-grayscott@law.ox.ac.uk

[3] UK Government, National AI Strategy (22 September 2021) <https://www.gov.uk/government/publications/national-ai-strategy>

[4] UK Government, Department for Digital, Culture, Media and Sport, Establishing a pro-innovation approach to regulating AI, Policy Paper (20 July 2022) <https://www.gov.uk/government/publications/establishing-a-pro-innovation-approach-to-regulating-ai/establishing-a-pro-innovation-approach-to-regulating-ai-policy-statement>

[5] Digital Regulation Cooperation Forum, Workplan 2022 to 2023, Policy Paper (28 April 2022) <https://www.gov.uk/government/publications/digital-regulation-cooperation-forum-workplan-2022-to-2023>

[6] Equality and Human Rights Commission, Strategic Plan: 2022 to 2025 (29 March 2022) <https://www.equalityhumanrights.com/en/publication-download/strategic-plan-2022-2025>