AIG0002
Written evidence submitted by Dr Tetyana Krupiy
(Lecturer at Newcastle University)
Reason for submitting the evidence: I have been conducting research on the legal and social issues arising from the use of artificial intelligence for over seven years. I am concerned by public authorities’ rapidly expanding use of artificial intelligence to make predictions about people and to reach decisions about them, given the inherent limitations of this technology.
Summary: Artificial intelligence is not a reliable tool for making predictions about people’s behaviour. People cannot exercise adequate oversight over artificial intelligence technology. The government should enact a law prohibiting the use of artificial intelligence to produce predictions about people and to make decisions about them. This ban should cover both partial and full reliance on artificial intelligence as part of the decision-making process. Moreover, government authorities should not use artificial intelligence without people giving informed consent to such use of the technology.
Risks and opportunities of artificial intelligence adoption in government
- Artificial intelligence is suitable for use by public authorities in contexts where fixed parameters determine what is possible. Although there can be different pathways to achieving a desired outcome, those parameters determine which pathways can be pursued and what results are possible. People’s behaviour does not fall into this category.
- Government authorities should not employ artificial intelligence to predict people’s behaviour or to make decisions about them. People are inherently complex and unpredictable, and they make decisions and choose actions within a very complex social context. Computer scientists acknowledge that artificial intelligence has innate limitations and cannot achieve full accuracy.[1] Its performance does “not correspond to lay understandings of ‘prediction.’”[2] The subjective, discretionary choices of the programmer shape what predictions the system produces about a person and what decisions it generates.[3] Given this state of affairs, public authorities should not use artificial intelligence to make predictions or to reach decisions about people. For example, the government should not use artificial intelligence to determine students’ educational opportunities.[4] Nor should it use artificial intelligence to assess whether intervention is needed in relation to children, such as when deciding whether to take a child into care.[5]
- International human rights law treaties to which the UK is a party[6] place limitations on the use of artificial intelligence as part of the decision-making process. The full automation of the decision-making process using artificial intelligence breaches the prohibition of discrimination in the Convention on the Elimination of All Forms of Discrimination against Women (CEDAW).[7] The prohibition of discrimination in the Convention on the Rights of Persons with Disabilities obliges states to seek the consent of persons with disabilities before subjecting them to the partial or full automation of the decision-making process using artificial intelligence.[8] Persons with disabilities can object to the use of artificial intelligence as part of the decision-making process without needing to request reasonable accommodation.[9] Martin Scheinin showed that this requirement to obtain the consent of the subjects of decision-making before employing artificial intelligence extends to other human rights treaties, including CEDAW, the International Convention on the Elimination of All Forms of Racial Discrimination[10] and the International Covenant on Civil and Political Rights.[11]
- In my opinion, there is a need for legislation prohibiting partial or full reliance on artificial intelligence to make predictions about individuals and to reach decisions about them.[12] More broadly, government authorities should not use artificial intelligence without the prior informed consent of the individuals affected by its deployment.
Departmental accountability in the context of artificial intelligence: delivery, funding and implementation
- Researcher Riikka Koulu demonstrated that human beings lack the capacity to oversee the workings of complex systems, including artificial intelligence.[13] Matilda Arvidsson and Gregor Noll argue that the use of artificial intelligence shifts decision-making towards a less democratic, accessible and transparent process.[14] Explanations that artificial intelligence generates for a decision do not give the subject of the decision meaningful information about what they could change to achieve a positive outcome.[15] Some types of explanation can hide the fact that a decision was based on a protected characteristic, such as race.[16] Computer scientists can make arbitrary choices that change what kind of explanation the artificial intelligence generates for a decision.[17] Consequently, it is hard to achieve appropriate human oversight and accountability when public authorities use artificial intelligence as part of the decision-making process.
- For this reason, public authorities should not use artificial intelligence to predict people’s behaviour or to issue decisions about people. They should treat partial automation of the decision-making process as posing the same degree of risk of breaching fundamental rights and social values as full automation.
Progress on strategy development and governance arrangements
- There is a need for legislation prohibiting the use of artificial intelligence to make predictions about people and to reach decisions about them.
Data and skills issues in government
- Evaluating whether an artificial intelligence system produced an appropriate prediction about a person, and an appropriate decision, requires collaboration between experts in law, computer science and social scientists from numerous disciplines, including decolonial computing, critical data studies, sociology and anthropology, among others. This process of evaluating the outputs of artificial intelligence is very time-consuming, laborious and expensive, and it is hindered by the opaqueness of these systems.[18] Samir Rawashdeh explains that “we can’t trace the system’s thought process and see why it made this decision.”[19] Given this state of affairs, I do not think that training staff members will address the problems associated with using artificial intelligence to make predictions about people and to make decisions about them.
April 2024
[1] Momin M Malik, ‘A Hierarchy of Limitations in Machine Learning’ (2020) arXiv:2002.05193 1, 45.
[2] Ibid.
[3] Solon Barocas and Andrew Selbst, ‘Big Data’s Disparate Impact’ (2016) 104 California Law Review 671, 679-680; Sorelle Friedler, Carlos Scheidegger and Suresh Venkatasubramanian, ‘On the (Im)possibility of Fairness’ (2016) arXiv:1609.07236 1, 3; Matilda Arvidsson and Gregor Noll, ‘Decision Making in Asylum Law and Machine Learning: Autoethnographic Lessons Learned on Data Wrangling and Human Discretion’ (2023) 92 Nordic Journal of International Law 56, 90-91; Giovanni De Gregorio, ‘The Normative Power of Artificial Intelligence’ (2023) 30(2) Indiana Journal of Global Legal Studies 55, 63.
[4] Tetyana (Tanya) Krupiy, ‘The Need to Update the Artificial Intelligence Act to Make it Human Rights Compliant’ (The Digital Constitutionalist, 25 March 2024) <https://digi-con.org/the-need-to-update-the-artificial-intelligence-act-to-make-it-human-rights-compliant> (accessed 19 April 2024).
[5] Virginia Eubanks, Automating Inequality: How High-tech Tools Profile, Police, and Punish the Poor (St Martin’s Press 2018).
[6] United Nations, ‘Convention on the Elimination of All Forms of Discrimination against Women’ (United Nations, 2024) <https://treaties.un.org/pages/ViewDetails.aspx?src=IND&mtdsg_no=IV-8&chapter=4&clang=_en> (accessed 18 March 2024); United Nations Human Rights Office of the High Commissioner, ‘International Convention on the Elimination of All Forms of Racial Discrimination’ (United Nations Human Rights Office of the High Commissioner, 21 February 2023) <https://indicators.ohchr.org/> (accessed 18 March 2024); United Nations Human Rights Office of the High Commissioner, ‘International Covenant on Civil and Political Rights’ (United Nations Human Rights Office of the High Commissioner, 21 February 2023) <https://indicators.ohchr.org/> (accessed 18 March 2024); United Nations Human Rights Office of the High Commissioner, ‘Convention on the Rights of Persons with Disabilities’ (United Nations Human Rights Office of the High Commissioner, 21 February 2023) <https://indicators.ohchr.org/> (accessed 18 March 2024).
[7] Tetyana (Tanya) Krupiy, ‘Meeting the Chimera: How the CEDAW Can Address Digital Discrimination’ (2021) 10 International Human Rights Law Review 1, 34.
[8] Tetyana (Tanya) Krupiy and Martin Scheinin, ‘Disability Discrimination in the Digital Realm: How the ICRPD Applies to Artificial Intelligence Decision-Making Processes and Helps in Determining the State of International Human Rights Law’ (2023) 23 Human Rights Law Review 1, 10.
[9] Ibid.
[10] Ibid., 25.
[11] Ibid., 23.
[12] Tetyana (Tanya) Krupiy, ‘The Need to Update the Artificial Intelligence Act to Make it Human Rights Compliant’ (The Digital Constitutionalist, 25 March 2024) <https://digi-con.org/the-need-to-update-the-artificial-intelligence-act-to-make-it-human-rights-compliant> (accessed 19 April 2024).
[13] Riikka Koulu, ‘Human Control over Automation: EU Policy and AI Ethics’ (2020) 12(1) European Journal of Legal Studies 9, 31.
[14] Matilda Arvidsson and Gregor Noll, ‘Decision Making in Asylum Law and Machine Learning: Autoethnographic Lessons Learned on Data Wrangling and Human Discretion’ (2023) 92 Nordic Journal of International Law 56, 91.
[15] Solon Barocas, Andrew Selbst and Manish Raghavan, ‘The Hidden Assumptions Behind Counterfactual Explanations and Principal Reasons’ (Conference on Fairness, Accountability, and Transparency, Barcelona, Spain, 27-30 January 2020) 84.
[16] Ibid.
[17] Ibid.
[18] Samir Rawashdeh, ‘Artificial Intelligence Can Do Amazing Things That Humans Can’t, But in Many Cases, We Have No Idea How AI Systems Make Their Decisions’ (University of Michigan-Dearborn News, 6 March 2023) <https://umdearborn.edu/news/ais-mysterious-black-box-problem-explained> (accessed 19 April 2024).
[19] Ibid.