Written Evidence Submitted by Thorney Isle Research

(GAI0016)

Evidence for the Inquiry into the Governance of Artificial Intelligence (AI)

1. Thorney Isle Research is a research and advisory body specialising in the interactions between technology, public policy, governance and public administration. Our recent focus has been on the regulation of algorithms and AI, with an emphasis on analysing the EU AI Act in communication with European Commission officials, MEPs, and their expert advisers. We have criticised aspects of its approach [1] and have been advocating the sectoral, context- and application-specific model proposed in the UK Government’s Regulating AI policy statement published in July 2022, to which we have responded. Our material is published at www.thorneyisle.co.uk.
2. One of the challenges in all efforts so far to address the governance and regulation of “AI” is determining exactly what the focus of attention should be: the unresolved issue of defining “AI”. We discuss this first, before turning to the Committee’s specific questions.

The Scope for the Governance of “AI”

3. A combination of mathematics, statistics, and advances in data processing and in software and hardware engineering has created the capability to have computers perform tasks that could not previously be done, in fields such as prediction, classification, cluster analysis, image and language processing, and game playing. “AI” has become a marketing label[1] that implies some such novelty in a product or computer program, and the continuing debates on definitions (e.g. for the EU AI Act) illustrate that this is a problem for scoping a governance system or regulation [2]. We might speculate that if “AI” becomes regulated, its popular use may diminish and more mundane (unregulated) labels replace it. Additionally, many of the computational procedures used for machine learning or “AI” are openly available in the statistical package R[2], in programming languages like Python (e.g. the scikit-learn library[3]), or even in Microsoft Excel, so anyone with sufficient knowledge could build a system from these that had “AI” features. Consequently, thinking in terms of “AI systems” and “suppliers” is an insufficient basis for establishing a governance system.
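By way of illustration only (this sketch is ours, and the data and scenario are invented): with the freely available scikit-learn library, a handful of lines of Python is enough to fit a model that classifies people and attaches probabilities to its predictions.

```python
# A minimal sketch, not part of any cited product: scikit-learn fitting a
# logistic regression on invented data about people, then classifying a
# new case and attaching a probability to the result.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented training data: two numeric features per person
# (say, years of experience and a test score) and a past yes/no decision.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X, y)

# The fitted model now "predicts" an outcome for a new person,
# with a probability attached to each classification.
print(model.predict([[1.0, -0.2]]))        # e.g. [1]
print(model.predict_proba([[1.0, -0.2]]))  # e.g. [[0.18, 0.82]]
```

Nothing in this fragment declares itself to be “AI”, yet it performs exactly the kind of prediction and classification that the label is used to market.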
4. Computer programs that are not labelled “AI” can cause harm, and not all “AI” programs are harmful. We have found that it is certain practices that cause risk and harm, and these may use any of a wide range of computational methods [3]. Regulation therefore needs to focus on those practices, that is, on the use rather than the thing used or its creator. Our suggested scope, which need not be exclusive but addresses the areas of highest risk, is as follows.

A governance system or regulator should assess the need for rules to apply

when any actor proposes to use a statistical, mathematical or other algorithmic process

to make any calculation, estimate, classification, profile or prediction of,

or to influence,

any present or future circumstances, activities, opinions, behaviour, or other characteristics (including identity) of legal persons, individually or collectively.

5. A rationale for this scope is given at [4]; there it is set in a public sector context, but the argument generalises to all sectors. We believe it is well enough defined that governance and regulation can identify what they need to focus on, without being drawn into the black hole of finding a technical definition that works, something that, as the Government’s Regulating AI policy statement notes, has not so far been achieved. It is consistent with the DRCF consideration of methods for auditing algorithms [5] and with the scope of the CDDO Algorithmic Transparency Standard[4]. The corollary is that any policy statement or legislation should not carry “AI” in the title. The UK Government’s material often uses the general term “algorithm” or “algorithmic system”, so we suggest instead something like “governance of the use of algorithmic methods”.
6. An illustration of how this approach works in practice to support useful guidance (and potentially rules) is the guide to professional practice for using algorithms in recruitment, created by the Recruitment and Employment Confederation with the CDEI [6]. Its structure could provide a reusable template for other domains of application.
7. This approach may also help resolve the issues identified in the Challenges section of the Government’s Regulating AI policy statement about overlapping or inconsistent regulatory regimes for multi-sector or cross-jurisdictional products. Under this approach, regulators assess the risks of an activity, application or function in their own context, not just the technology (though knowledge of technical aspects may be necessary to assess risk or rule compliance). This makes it a pro-innovation approach, since regulation focuses on the point of use rather than on research and development.

How effective is current governance of AI in the UK? What are the current strengths and weaknesses of current arrangements, including for research? To what extent is the legal framework for the use of AI, especially in making decisions, fit for purpose?

8. A number of relevant governance instruments exist, but recognition of their relevance, and compliance with them, is patchy. It is generally recognised that data protection law applies; in many cases, however, human rights, equality, and other sectoral legislation and regulation are likely to apply too. In the UK the general laws are the Data Protection Act 2018 (which has specific provisions on profiling and automated decision-making), the Human Rights Act 1998, and the Equality Act 2010, including its Public Sector Equality Duty (s.149). For the public sector there are also the Public Records Act 1958 and the administrative law that determines what a public sector body is and how it must operate. Sectors such as finance, employment and health have strong governance systems that will cover many uses of algorithmic methods. The weakness lies in poor understanding among suppliers, developers and users, and in inadequate enforcement, which leads to the rules being gamed or ignored. We are aware of public sector cases that were apparently unlawful on one or more counts.
9. The Government’s Regulating AI policy statement rightly says that, “subject to considerations of context and proportionality, the functioning, resilience and security of a system should be tested and proven”. In many cases studied, it has become clear that some systems marketed or used simply do not work as claimed. We and other researchers have identified a taxonomy of types of functionality failure and illustrated where they have occurred [8, 9]. There may be gaps in existing requirements that a product safely “does what it says on the tin”, and in the clarity of liability. While in practice it is not necessarily easy to determine what will work when deploying an algorithm in a particular setting, the obligation to do so before using it must be enforceable.
10. Given our proposed scoping by use, research and innovation remain governed by existing ethical, legal, safety and technical requirements, not by new rules based on technological definitions. The key moment for regulatory intervention is when something moves through the life cycle from research and development towards deployment [7].

What measures could make the use of AI more transparent and explainable to the public?

11. For some types of complex algorithms there are no established solutions to key challenges such as explainability, bias, robustness, and proof of safety, and there is much debate about what “transparency” and “explainability” mean in practice. Regardless, we suggest that in any situation where a decision is made or informed algorithmically, anyone affected should be proactively told that this is the case, what method is used, how the output is used, and how to challenge the decision or obtain an explanation of it. Evidence that the system works properly and a means to give a satisfactory explanation of a decision are prerequisites. Where a system produces results that have probabilities attached, this must be made clear and an explanation given; a sketch of what such disclosure might look like follows. Any governance system should put these requirements in place, as the UK Parliamentary and Health Service Ombudsman’s principles of good public administration do for the public sector [14].
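By way of illustration only (the function name, threshold and wording below are invented for this sketch, and the model is assumed to be a fitted scikit-learn classifier such as the one sketched earlier), disclosure of a probabilistic, algorithmically informed decision might look like the following.

```python
# A minimal sketch, not a recommended implementation: communicating the
# fact of algorithmic involvement, the attached probability, and the
# route to challenge, proactively, to the person affected.
# The function name, threshold and wording are invented for illustration.
from sklearn.linear_model import LogisticRegression

def notify_decision(model: LogisticRegression, features: list[float],
                    threshold: float = 0.5) -> str:
    # Probability the model assigns to a positive outcome for this person.
    proba = model.predict_proba([features])[0][1]
    decision = "approved" if proba >= threshold else "referred for human review"
    return (
        f"Your application was {decision}. A statistical model informed this "
        f"decision and estimated a {proba:.0%} likelihood of a positive "
        f"outcome. You may request an explanation or a review by a person."
    )

# Using the model from the earlier sketch:
# print(notify_decision(model, [1.0, -0.2]))
```

The specific wording matters less than the requirement it illustrates: the algorithmic involvement, the probability, and the means of challenge are all stated proactively rather than on request.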

How should decisions involving AI be reviewed and scrutinised in both public and private sectors? How should the use of AI be regulated, and which body or bodies should provide regulatory oversight? Is more legislation or better guidance required?

12. The proposed approach of the Government’s Regulating AI policy statement is to assign responsibilities to existing sectoral regulators rather than have a single across-the-board regulation, and to set cross-sectoral principles for the regulators to adopt when regulating context-specific and application-specific uses of the technology. Given our views above on the high context dependency of the use of algorithmic methods, this makes sense.
13. However, this leaves gaps in sectors and domains of application where there is no appropriate regulator or where the government itself performs that function, for example employment. Some overarching “default” oversight rules, statutory codes, guidance, and roles may therefore be required, perhaps through extending the role of the Information Commissioner’s Office.
14. The basic principles and processes could perhaps transfer from the public sector for this purpose. A public body working in a system governed by the rule of law has to be transparent and fully accountable, making consistent, equitable and predictable decisions that it can explain (both the process and each individual decision). The Ombudsman’s principles of good public administration [14] set out these criteria. One of the best guides on their application to algorithms is published by the New South Wales Ombudsman in Australia[5].
15. The public sector is itself a highly sensitive domain, in which algorithms have been employed in public administration with damaging consequences in the UK and abroad (e.g. in welfare, social services, criminal justice, and policing). Government must lead, and be seen to lead, on safe, lawful, accurate, and effective use, complying with the principles of good public administration at a minimum [9, 13] (concealed or non-explainable “black box” systems, for example, immediately contravene the principles). Whether by making the principles statutory or by setting mandatory operational rules, there is a need to spell out how this will be achieved, and to place the public sector Algorithmic Transparency Standard within this broader framework. The potential need for changes to existing laws (e.g. on education, employment, policing, or social care) would have to be examined.

Are current options for challenging the use of AI adequate and, if not, how can they be improved?

16. The narratives around the use of “AI” have routinely started from the position that the technology will solve a problem, and that the issue is how to do it (e.g. “ethically”). Seldom do they ask whether technology should be used in this way in this situation at all. Research has concluded that some proposed applications are impossible in practice [8, 10]. Some products being sold depend on methods (e.g. recognising emotions) that have no scientific basis or credibility [11]. In other cases, the ethical, human rights, social, political or environmental implications raise risks that massively outweigh any possible benefits [11]. Sometimes there is simply a much easier and cheaper way to address the problem. We recommend that regulators be advised to consider this question before anything else. This is challenging, as it will require them to understand the deep implications of, for example, particular machine-learning techniques that use statistical methods and optimisation to classify human beings for commercial, social or political purposes [12]. It will take time and investment to build the capability to challenge uses effectively, but this is unavoidable if the public is to be protected.

What lessons, if any, can the UK learn from other countries on AI governance?

17. The suggestions we make here are quite different from the EU AI Act’s pan-sectoral, product-orientated approach, which attempts to regulate “AI systems” as a whole rather than sectoral applications. The focus on products leads to an implicit assumption that “AI” is a thing, or an attribute of things, rather than a field of research (for which the term was originally coined). This requires legislators to define the scope of an “AI” regulatory instrument in terms of the nature of the artefacts being regulated, which is the intractable problem discussed above.
18. The main governance lesson to take from the evolution of the AI Act will be its debate on which applications should be banned entirely and which should be subject to a high level of scrutiny, such as live facial recognition. The way it tries to achieve coherence among the Act, the GDPR, the Digital Services Act, the AI Liability Directive, and existing sectoral regulations will also be informative for developing any UK governance scheme.
19. It has also raised the issue of how to deal with “General Purpose AI”, which effectively means pretrained generators or classifiers. “Generative systems” may be a better term for those that output new text, images, or speech from “seed” inputs such as a question or a prompt (to produce “AI art”, for example). Many problems remain with these systems, such as bias caused by training them on internet content, and they are scarcely fit for public use[6]. They are “general purpose” in the sense that their creators (mainly resource-rich big-tech companies) give access to developers of other applications, which plug in to them and use their capability in the background, possibly invisibly, propagating any problems in the process. Hence their use, and the use of applications built on them, for example to produce fake news and images, requires lawmakers’ attention.
20. Despite the use of “AI” in its title, the U.S. White House Blueprint for an AI Bill of Rights speaks throughout of algorithms and automated systems, explaining:

While many of the concerns addressed in this framework derive from the use of AI, the technical capabilities and specific definitions of such systems change with the speed of innovation, and the potential harms of their use occur even with less technologically sophisticated tools. Thus, this framework uses a two-part test to determine what systems are in scope. This framework applies to (1) automated systems that (2) have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services.[7]

21. The first part of this aligns with our argument at the start of this submission. However, criterion (2) may not be sufficiently well defined. While “rights, opportunities, or access to critical resources or services” is expanded upon and examples are given, it may be hard to tell whether that is the case for a particular system until its use in context is thoroughly examined: “have the potential” is speculative and could apply to almost anything. Otherwise, the Blueprint sets out five sound principles for such systems, with practical steps to comply with them, and is worth reviewing[8].
22. Canada is proposing an Artificial Intelligence and Data Act, and that too is raising a debate about its scope. One significant discussant group[9] recommends, as we have done in this note, that “[a] potential pathway for regulation is to define algorithmic systems based on their applications instead of focusing on the various techniques associated with machine learning and AI”, to achieve legal certainty and avoid conflicts with other legislation.

 

Conclusion

23. Our overall conclusion is that the assessment of risk to individuals, and to groups collectively, arising from decision-making systems based on algorithms, and the setting and enforcement of rules, needs to be done within the specific context and existing governance at sectoral level. The range of governance instruments available to mitigate those risks is large, and choices will need to be adapted to each context [15]. National (or higher) level legislation may set principles or default rules and responsibilities, but the international efforts so far suggest that an across-the-board approach to ill-defined “AI” is probably insufficient, even distracting, and will be hard to enforce.

 

References

[1] Thorney Isle Research, 2022. The EU AI Act was doomed the moment its title was written. Blog post at http://thorneyisle.co.uk/blog/the-eu-ai-act-was-doomed-the-moment-its-title-was-written.

[2] Waller, P., 2022. Defining “artificial intelligence” for regulation: why it is a pointless distraction. Medium, 7 June 2022. Available at https://medium.com/@pwaller99/defining-artificial-intelligence-for-regulation-36572d06f5fc.

[3] Waller, M. and Waller, P., 2020. Why Predictive Algorithms are So Risky for Public Sector Bodies. SSRN Electronic Journal. Available at https://ssrn.com/abstract=3716166.

[4] Thorney Isle Research, 2022. Scoping a policy on using algorithms in the public sector. Available at https://thorneyisle.weebly.com/uploads/1/3/5/4/135464803/scoping_a_policy_on_using_algorithms_in_the_public_sector.pdf.

[5] Digital Regulation Cooperation Forum, 2022. Calls for views on auditing algorithms, 28 April 2022. Available at https://www.gov.uk/government/news/uk-s-digital-watchdogs-take-a-closer-look-at-algorithms-as-plans-set-out-for-year-ahead.

[6] Recruitment and Employment Confederation, 2021. Data-driven tools in recruitment guidance. Available at https://www.rec.uk.com/our-view/research/practical-guides/data-driven-tools-recruitment-guidance.

[7] Lavin, A., Gilligan-Lee, C.M., Visnjic, A. et al., 2022. Technology readiness levels for machine learning systems. Nature Communications 13, 6039. Available at https://doi.org/10.1038/s41467-022-33128-9.

[8] Raji, I. D., Kumar, I. E., Horowitz, A. and Selbst, A. D., 2022. The Fallacy of AI Functionality. In 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’22), June 21–24, 2022, Seoul, Republic of Korea. ACM, New York, NY, USA. Available at https://doi.org/10.1145/3531146.3533158.

[9] Thorney Isle Research, 2022. Predictive Analytics in Public Services Explained. Available at http://thorneyisle.co.uk/uploads/1/3/5/4/135464803/explainer_-_predictive_analytics_in_public_services.pdf.

[10] What Works Centre for Children’s Social Care, 2020. Machine Learning in Children’s Services: Does it work? Available at https://whatworks-csc.org.uk/research-report/machine-learning-in-childrens-services-does-it-work/.

[11] Crawford, K., 2021. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press, USA. ISBN 978-0300209570.

[12] McQuillan, D., 2022. Resisting AI. Bristol University Press. ISBN 978-1529213508.

[13] Waller, P., 2022. Algorithms and AI in the public sector: the rules. Medium, 17 June 2022. Available at https://medium.com/@pwaller99/algorithms-and-ai-in-the-public-sector-the-rules-3d912720f82.

[14] UK Parliamentary and Health Service Ombudsman. Principles of Good Administration. Available at https://www.ombudsman.org.uk/about-us/our-principles/principles-good-administration.

[15] Lévesque, M., 2021. Scoping AI Governance: A Smarter Tool Kit for Beneficial Applications. Centre for International Governance Innovation, paper 260. Available at https://www.cigionline.org/publications/scoping-ai-governance-a-smarter-tool-kit-for-beneficial-applications/.

Footnotes


[1] https://www.aimyths.org/the-term-ai-has-a-clear-meaning

[2] https://www.r-project.org/about.html

[3] https://scikit-learn.org/stable/index.html#

[4] https://www.gov.uk/government/collections/algorithmic-transparency-standard

[5] https://www.ombo.nsw.gov.au/news-and-publications/publications/reports/state-and-local-government/the-new-machinery-of-government-using-machine-technology-in-administrative-decision-making

[6] https://mailchi.mp/technologyreview.com/where-will-ai-go-next

[7] https://www.whitehouse.gov/ostp/ai-bill-of-rights/#applying

[8] Useful summary at https://www.arnoldporter.com/en/perspectives/advisories/2022/10/the-blueprint-for-an-ai-bill-of-rights

[9] https://www.cybersecurepolicy.ca/aida#:~:text=Accountability%20and%20Protecting,and%20Data%20Act

 

 

(November 2022)