WRITTEN EVIDENCE SUBMITTED BY DR ALINA PATELLI
(GAI0095)
Data privacy law is an excellent start towards formulating fit-for-purpose AI policy; in that sense, it is a strength. Significant work remains to be done to develop this legislative foundation into a comprehensive regulatory framework that is sufficiently straightforward and well-enforced to enable trust in AI. Although it is counterproductive to call them weaknesses, I recommend the following points of improvement be addressed as a matter of priority.
- Strengthen data privacy laws. Currently, users of online platforms are asked for their consent to have their personal data shared with third parties. This is a move in the right direction, yet it has limited impact in terms of protecting user privacy. Not all consent forms are displayed in the same format: some have a “Reject all” button, whilst others do not; some have the “Legitimate interest” options enabled by default, requiring users to disable them manually one by one, whilst others do not, etc. Although the cognitive switch between different formats may be acceptable under normal conditions, the data privacy onus can no longer fall on the user in safety-critical circumstances, e.g., when looking up medical advice on the NHS website or purchasing flight tickets to attend a family emergency (it is unreasonable to ask people to pore over terms and conditions or click on numerous consent radio buttons in such high-stakes situations). Failing to consider this may lead to personal data being used to the detriment of the user (e.g., the purchase of a plane ticket to visit a parent suffering from a genetic condition may be shared with the traveller’s health insurance company, which may then raise their premium on grounds of genetic illness in the family history). This ultimately erodes the public’s trust in AI [1] and encourages the general perception that such technologies are vehicles of social injustice. AI legislation should therefore mandate that lack of consent be the default setting for all data privacy options.
- Hold AI creators, distributors and users accountable for AI failures. Current provisions place the legal burden of responsibility on the AI manufacturer; yet it is all the actors in the production, distribution and deployment chain that should be held accountable for their handling of AI. The algorithm that awarded inadequate A-level grades to UK students unable to attend examinations in person at the height of the COVID-19 pandemic is not culpable; the responsible ones are the human decision makers who determined the algorithm’s specification, what data it should take as input and how its output should be used in practice. Yet, legislators and public opinion alike were content to “blame the AI” with no significant consequence for the humans behind it [2]. This is not to say that all those who produce, distribute and deploy AI are inherently careless or ill-intentioned. An example of responsible authorship and deployment of AI (one of many) is differential privacy [3], an AI-based approach that adds a controlled amount of noise to individual data samples, making it impossible to trace those back to the people to whom they refer, whilst leaving the relevance of the aggregated data unaffected (a brief illustrative sketch of this mechanism is given after this list). Also deployed during the pandemic, differential privacy allowed local authorities to track the collective movement of people through public spaces, thus determining which were busiest, and therefore in need of stricter social distancing restrictions, without being able to isolate the whereabouts of any one specific individual. This is a perfect illustration of AI used in the service of public good without compromising fundamental rights. AI law should hold all human actors involved in AI creation, distribution and deployment to a similar ethical standard; the trustworthiness of the AI handlers will automatically transfer onto the AI itself.
- Require AI creators, distributors and users to fully disclose the nature of their AI products. It is not uncommon for AI manufacturers and promoters to actively or indirectly encourage the perception of AI as an intricate process taking place inside a black box, beyond human comprehension. This disingenuous depiction of AI is then exploited to avoid liability when AI fails. This erodes the public’s trust in AI; appropriate legislation is needed to prevent that. Facial recognition software that misclassifies ethnic minorities more often than white faces does not behave this way because of some unknown cause that cannot be determined given the complexity of the underlying algorithms; its disproportionate accuracy rates are caused by imbalanced training data (there are more photos of white faces than there are of people of colour in the online repositories the algorithm used in order to learn how to detect patterns).
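For illustration only, the short Python sketch below conveys the core idea behind differential privacy as described above: an aggregate count is released only after calibrated random noise is added, so that busy locations remain identifiable while the contribution of any single individual is obscured. The names and figures used (noisy_count, epsilon, the footfall numbers) are hypothetical and do not describe any real deployment.

    import random

    def noisy_count(true_count, epsilon, sensitivity=1.0):
        """Release a count with Laplace noise of scale sensitivity/epsilon.

        One person can change the count by at most `sensitivity` (here 1), so
        noise of this scale hides any individual's presence while leaving
        large aggregates broadly accurate.
        """
        scale = sensitivity / epsilon
        # A Laplace sample is the difference of two exponential samples.
        noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
        return true_count + noise

    # Hypothetical hourly footfall counts for one public square.
    hourly_footfall = [520, 610, 480, 950, 1020]
    epsilon = 0.5  # smaller epsilon means stronger privacy and noisier counts
    released = [round(noisy_count(c, epsilon)) for c in hourly_footfall]
    print(released)  # e.g. [524, 607, 483, 948, 1016]: the busiest hours remain evident

The busiest periods thus remain visible in the released aggregates, which is the property the local authorities relied on, whilst the presence or absence of any one person cannot be inferred with confidence.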
Lewis et al. introduce democratic AI [2]. In that context, all AI technologies and the products (applications) they power should be accompanied by a specification that clearly answers a series of fundamental questions:
- What service is the AI tool meant to provide?
- How does the AI tool provide that service?
- Why does the AI tool operate as described in the answer to the previous question? (In other words, could the same or a sufficiently similar outcome be achieved by a viable alternative?)
- What are the risks associated with the use of the AI tool? (Are there threats to fundamental democratic rights, such as privacy, safety or freedom of speech? Are there environmental concerns, e.g., heightened energy consumption?)
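Purely to illustrate how such a specification might be captured in a consistent, auditable form, the Python sketch below records answers to the four questions as fields of a simple data structure; the field names and example content are hypothetical and are not prescribed by [2].

    from dataclasses import dataclass, field

    @dataclass
    class AISpecification:
        """A plain-language disclosure answering the four questions above."""
        service_provided: str                      # what service the tool is meant to provide
        how_it_works: str                          # how the tool provides that service
        why_this_approach: str                     # why this approach rather than an alternative
        risks: list = field(default_factory=list)  # threats to rights, environmental concerns, etc.

    # Hypothetical example for a traffic-flow prediction tool.
    spec = AISpecification(
        service_provided="Predict vehicle flow to inform the layout of new roads.",
        how_it_works="A statistical model trained on anonymised, aggregated road-sensor data.",
        why_this_approach="A simpler rule-based simulation was trialled but under-predicted peak congestion.",
        risks=["Energy consumed in training and operation",
               "Re-identification risk if the sensor data are not properly aggregated"],
    )
    print(spec)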
In my view, requiring AI creators, distributors and users to provide answers to the above questions, in a form that everyone can comprehend, should be mandated by law. This will prompt all relevant actors to put ethics ahead of commercial interest and to develop, sell and deploy AI solutions that genuinely improve people’s lives. Those organisations and individuals who consistently demonstrate excellent practice and establish themselves as democratic AI champions may then collaborate with policy makers to produce guidance based on their failures and success stories, thus assisting newcomers to the AI landscape in designing their products in compliance with the principles embodied in the questions above.
Before a public or private sector organisation adopts an AI system to inform their decision making, they should review the accompanying specification, i.e., the answers to the questions mentioned in the previous section. This will afford the insight necessary to establish the trade-off between the benefits and risks associated with the AI in question. A local traffic authority may be interested in using AI to predict future vehicle flow dynamics, thus informing the layout of new roads so as to avoid congestion and reduce pollution. However, that potential benefit needs to be weighed against the amount of energy necessary to run the AI algorithm in the first place; should the environmental impact be comparable to or greater than the one offset by the new road layouts, then the adoption of such a system becomes superfluous. Ideally, this trade-off analysis should not be left entirely to the organisations looking to embed AI in their decision making processes; it should be overseen by an independent, government-appointed arbitrator.
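As a purely hypothetical, back-of-envelope illustration of that trade-off (all figures below are invented for the sake of the example), the comparison such an arbitrator would oversee might look as follows:

    # Hypothetical figures, in tonnes of CO2-equivalent per year.
    emissions_saved_by_new_layout = 1200.0  # congestion and idling avoided thanks to AI-informed roads
    emissions_from_training_ai = 300.0      # one-off training cost, amortised over a 5-year lifetime
    emissions_from_running_ai = 150.0       # annual inference and data-pipeline cost

    annual_ai_cost = emissions_from_training_ai / 5 + emissions_from_running_ai
    net_benefit = emissions_saved_by_new_layout - annual_ai_cost
    print(f"Net annual benefit: {net_benefit:.0f} tCO2e")

    # A net benefit close to zero or negative would make adopting the system superfluous.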
Regulations should focus on:
- Making sure that AI supports decision making, without fully automating it. Human agency and oversight are essential to avoid unintended negative consequences. This is in keeping with guidance issued by the Information Commissioner’s Office on the appropriate use of AI [4].
- Restricting the introduction of AI meant to automate labour where worker displacement cannot be managed without harming the workforce. AI can still be used, in an ethical way, to automate repetitive tasks, thus cutting operational costs and freeing employees from menial jobs, consequently allowing them to pursue more rewarding work. In order for that to happen, the employees in question need to possess the relevant skills or be supported to develop them before their existing roles are automated. The UK already has a scheme in place to facilitate the development of such skills, namely apprenticeships. To secure the ethical treatment of people being professionally displaced by AI, it is imperative that AI law include a provision mandating that employers looking to automate parts of their business enable the upskilling of the affected employees via a recognised scheme, such as apprenticeships.
- Ensuring that the operation of AI systems causes no environmental harm. The energy costs of running AI algorithms must not outweigh the benefits of the solutions produced by said algorithms. The implications of AI use in the context of environmental sustainability are systematically investigated in [5].
- Enabling transparency. Black box approaches, such as deep learning, are not inherently explainable: it is typically difficult for outside observers to understand the path followed by such algorithms in order to produce an outcome. Absent this understanding, AI tools falling under the black box category must not be allowed to inform decision making processes potentially affecting humans. There exist AI approaches [6] that offer a viable alternative, in the sense that the logical path from problem representation through to the presentation of the end solution is completely traceable.
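To make the contrast drawn in the last bullet concrete, the sketch below uses a decision tree, one simple example of an inherently traceable model (chosen here purely for illustration; it is not the specific approach put forward in [6]). Its complete decision logic can be printed as a sequence of human-readable threshold tests, which is not meaningfully possible for a deep neural network whose behaviour is encoded in millions of opaque numerical weights. The example assumes the widely used scikit-learn library is installed.

    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Train a small decision tree: a model whose reasoning can be inspected in full.
    data = load_iris()
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

    # Every prediction follows an explicit path of threshold tests on named features,
    # so the route from input to outcome is completely traceable.
    print(export_text(tree, feature_names=list(data.feature_names)))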
In its white paper on AI [1], the European Commission proposes a strategy that rests on two pillars: ecosystems of excellence (a policy to incentivise research and development, with a focus on SMEs where innovation is more likely to thrive than in big corporations that are typically driven by financial gain) and ecosystems of trust (a policy that regulates the use of business and consumer data to prevent abuse). If adopted by the UK, this two-pillar system provides the ideal environment for developing legislation that encourages local AI innovation without the danger of fragmentation (i.e., multiple independent systems incompatible with each other). This boils down to providing a scaffolding to guide the holistic development and merging of individual AI solutions into a coherent national platform that can significantly improve society. The Commission suggests a practical way of putting that scaffolding in place: facilitating a structured engagement between Government and the public sector. Transport, healthcare and education officials are thus provided with a means to embed AI in their systems in a principled way rather than the ad-hoc approach many are adopting at present. To that end, the UK can learn valuable lessons by analysing the Commission’s ‘Adopt AI programme’ proposal intended to standardise and enhance the public procurement of AI.
This proposal is meant to convey the perspective of a software engineer, well-versed in AI theory and practice, with regard to the fundamental requirements that AI governance should meet in order to enable this technology to fulfil its transformative potential. Based on my experience to date, it is my firm belief that the effective regulation of AI entails establishing it and the applications it powers as trustworthy, which is the ultimate precursor to successfully integrating AI into society, where it can address humanity’s great challenges safely, ethically and effectively.
[1] European Commission, "White Paper on Artificial Intelligence: A European approach to excellence and trust," 2020. [Online]. Available: https://ec.europa.eu/info/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en. [Accessed November 2022].
[2] P. R. Lewis, S. Marsh and J. Pitt, "AI vs “AI”: Synthetic Minds or Speech Acts," IEEE Technology and Society Magazine, vol. 40, no. 2, pp. 6-13, 2021.
[3] "Differential Privacy: The Pursuit of Protections by Default," Communications of the ACM, pp. 36-43, February 2021.
[4] Information Commissioner's Office, [Online]. Available: https://ico.org.uk/media/for-organisations/documents/4022261/how-to-use-ai-and-personal-data.pdf. [Accessed November 2022].
[5] A. Van Wynsberghe, "Sustainable AI: AI for sustainability and the sustainability of AI," AI and Ethics, vol. 1, no. 3, pp. 213-218, 2021.
[6] P. P. Angelov, E. A. Soares, R. Jiang, N. I. Arnold and P. M. Atkinson, "Explainable artificial intelligence: an analytical review," Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, vol. 11, no. 5, 2021.
[7] P. R. Lewis and S. Marsh, "What is it like to trust a rock? A functionalist perspective on trust and trustworthiness in artificial intelligence," Cognitive Systems Research, vol. 72, pp. 33-49, 2022.
(NOVEMBER 2022)