Written Evidence Submitted by Oxford Internet Institute
(GAI0058)
This written evidence has been prepared for the UK Parliament Science and Technology Committee's inquiry into the Governance of Artificial Intelligence. It has been composed by Professor Brent Mittelstadt, Professor Sandra Wachter, Dr Keegan McBride, Dr Ana Valdivia, and Mr Rory Gillis on behalf of the Oxford Internet Institute (OII). The OII is a multidisciplinary research and teaching department at the University of Oxford, dedicated to the social science of the Internet.
We are responding to the committee's call for evidence due to the relevance of our research to its current inquiry into the Governance of Artificial Intelligence (AI). The responses in this report have been compiled from research conducted by multiple faculty members, but do not necessarily represent an overarching position of the OII. We thank the committee for inviting submissions on this topic and are available for any further enquiries.
Though there is no widely agreed-upon understanding of what the 'effective' governance of AI involves, most definitions hold that effective approaches are those that enable innovation whilst simultaneously limiting any potential harms that may result.
There are two areas where the current governance of AI in the UK is particularly effective. Firstly, the UK has led in the development of procurement guidelines and strategies for public sector AI projects. This is demonstrated by the UK's involvement in the 'AI Procurement in a Box' guidelines, released by the World Economic Forum in 2020.[1] Secondly, the UK has looked to empower existing sectoral regulators to address issues raised by AI. This allows problems arising in a particular area to be handled by domain experts through targeted regulation, rather than by a central regulator. This is promising because the problems raised by AI vary across sectors. For example, Professor Mittelstadt has outlined challenges that are emerging due to the growing adoption of AI in the healthcare sector. The increased involvement of the private sector has led to new tensions between AI performance metrics and medical norms.[2]
A weakness of the current arrangements is their inability to adequately address some of the unique harms of AI. For example, as Professor Sandra Wachter has shown, non-discrimination law does not adequately protect the groups that AI uses in decision-making.[3] Unless they correlate with an existing protected group, algorithmic groups are not protected under the law. This is because, in contrast to protected groups, algorithmic groups are often not based on immutable characteristics, their attribution is often not completely arbitrary, members are not always victims of historical oppression, and these groups are not always socially salient. This can lead to cases of morally wrong discrimination that are not currently covered by the law.
If an AI system is not transparent (or explainable), that is a deliberate design choice: AI, including its use in the public sector, can be made more transparent. Whilst researching the transparency of AI systems in the United States, Robert Brauneis and Ellen Goodman highlighted that missing documentation, trade secrecy, and public sector concerns are often important obstacles.[4] To address such issues, clear requirements for documentation and record keeping throughout the AI development process should be mandated. This is standard practice across industries, especially those with the potential for large negative impacts on society. Importantly, if the UK wants to develop AI that is more transparent and explainable, it must develop binding guidance on how such documentation is to be produced, collected, and audited.
In the absence of strong regulations, the public sector may use strategic procurement to promote equitable and transparent AI. In previous research, Dr McBride has detailed concrete steps to improve AI procurement.[5] These include mandating various criteria in procurement announcements and specifying design requirements, including requirements for explainability and interpretability. In addition, clear documentation of a proposed AI system's function, the data it uses, and an explanation of how it works can help. Beyond this, an approved vendor list for public sector AI procurement is useful, to which vendors that agree to meet the defined transparency and explainability requirements may be added.
Other ways to improve the transparency and explainability of AI include AI 'model cards' and algorithmic registers. Several private sector organisations have begun to experiment with creating AI model cards that document how a model is used, any potential bias, training information, and evaluation results.[6] Such cards have not yet seen widespread use in the public sector but are likely to become increasingly common. Algorithmic registers, which aim to catalogue all AI systems in use along with important background and contextual information, have been developed in Amsterdam and Helsinki.
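To illustrate the kind of information a model card records, the following is a minimal sketch in Python. The field names, values, and system are hypothetical and do not follow any particular published model card schema.

```python
# Illustrative sketch only: the fields and values below are hypothetical and
# do not follow any specific model card standard.
model_card = {
    "model_name": "benefit-claim-triage-model",  # hypothetical system
    "intended_use": "Decision support for caseworkers; not fully automated decisions.",
    "training_data": "Historical claims, 2015-2021; see accompanying data sheet.",
    "evaluation": {
        "accuracy": 0.87,
        "false_positive_rate_by_group": {"group_a": 0.12, "group_b": 0.19},
    },
    "known_limitations": "Performance degrades for under-represented groups; "
                         "not validated outside the original jurisdiction.",
    "contact": "responsible-ai@example.org",
}

# Print the card as a simple, human-readable record.
for field, value in model_card.items():
    print(f"{field}: {value}")
```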
Research in the Governance of Emerging Technologies programme has also identified measures that can make the use of AI more transparent to the public. Professor Mittelstadt, Dr Chris Russell and Professor Wachter have shown that people prefer good 'everyday explanations' of AI decisions to technical explanations of the underlying code.[7] For example, a good everyday explanation of a decision made by an AI system could be that 'you were denied parole because you had four prior arrests. If you had two prior arrests, you would have been granted parole'.
Counterfactual explanations are a technical approach to explaining why an AI system has made a particular decision, and they can serve as good everyday explanations.[8] They can be useful in showing how an individual can change their behaviour. Also, because they require the release of only a small amount of information, they are easy to understand and less likely to infringe on trade secrets and IP rights. In addition, a biased counterfactual explanation can reveal the need to rectify the algorithm in question. Though useful, counterfactual explanations are not a substitute for a completely transparent AI system.
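As a minimal illustration of how such an explanation can be generated, the sketch below assumes a simple, hypothetical linear scoring rule for parole decisions and searches for the smallest reduction in prior arrests that would flip the outcome. The scoring rule, threshold, and figures are invented for illustration and do not come from any real system or from the cited paper.

```python
# Minimal sketch of a counterfactual explanation, assuming a hypothetical
# linear scoring rule; not the method of any specific deployed system.

def parole_score(prior_arrests: int, years_served: int) -> float:
    # Hypothetical scoring rule: higher scores count against the applicant.
    return 2.0 * prior_arrests - 0.5 * years_served

THRESHOLD = 4.0  # scores at or above this lead to denial (illustrative value)

def counterfactual_prior_arrests(prior_arrests: int, years_served: int):
    """Find the nearest lower number of prior arrests that flips the decision."""
    if parole_score(prior_arrests, years_served) < THRESHOLD:
        return None  # already granted, no counterfactual needed
    for fewer in range(prior_arrests - 1, -1, -1):
        if parole_score(fewer, years_served) < THRESHOLD:
            return fewer
    return None

arrests, served = 4, 3
flip_at = counterfactual_prior_arrests(arrests, served)
if flip_at is not None:
    print(f"Denied with {arrests} prior arrests; with {flip_at} prior arrests "
          f"parole would have been granted.")
```

The explanation released to the individual is only the final sentence, which is easy to understand and reveals little about the underlying model.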
Partly owing to industry marketing and vested interests, AI models are often presented as black boxes. This attempt to use perceived complexity to escape regulation is dangerous, and scholars have developed explainability and interpretability methods to help understand AI decision-making processes. These methods can be integrated into the review of decision-making in both the public and private sectors.
AI decision-making can be understood at the level of individual predictions or of the model as a whole. The former involves scrutinising a single data point to see how the system reaches a decision about it, for example examining why a hiring algorithm rejected a particular candidate. The latter involves using statistics and visualisations to analyse overall model behaviour, for example analysing which variables a hiring algorithm relies on.
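The sketch below illustrates the two levels of analysis on synthetic data, assuming scikit-learn is available: per-instance feature contributions from a simple linear model, and permutation importance as a view of overall model behaviour. The feature names and data are invented for illustration.

```python
# Minimal sketch contrasting instance-level and model-level analysis,
# using synthetic data; feature names and the model are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["years_experience", "test_score", "gap_in_cv"]
X = rng.normal(size=(500, 3))
# Synthetic hiring outcome driven mainly by the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Individual prediction: contribution of each feature to one candidate's score.
candidate = X[0]
contributions = model.coef_[0] * candidate
for name, c in zip(features, contributions):
    print(f"local  {name}: {c:+.2f}")

# General model behaviour: which features matter on average across the data.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(features, result.importances_mean):
    print(f"global {name}: {imp:.3f}")
```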
AI models can also reproduce social discrimination and demographic bias. If algorithmic bias is not scrutinised or mitigated, it can harm vulnerable communities. Fairness metrics can help to avoid this. Research by Dr Ana Valdivia et al. has proposed a technical framework for analysing the trade-off between accuracy and fairness metrics, which can be implemented to scrutinise AI decision-making.[9]
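As a minimal illustration of the kind of scrutiny involved, the sketch below computes accuracy alongside one simple group-fairness metric (the demographic parity difference) on synthetic decisions. It is illustrative only and is not the specific framework proposed in the cited paper.

```python
# Minimal sketch: computing accuracy alongside a simple group-fairness metric
# (demographic parity difference). Illustrative only; not the cited framework.
import numpy as np

def accuracy(y_true, y_pred):
    return float(np.mean(y_true == y_pred))

def demographic_parity_difference(y_pred, group):
    """Difference in positive-outcome rates between two groups (0 and 1)."""
    rate_0 = y_pred[group == 0].mean()
    rate_1 = y_pred[group == 1].mean()
    return float(abs(rate_0 - rate_1))

# Synthetic decisions for 8 applicants from two demographic groups.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print("accuracy:", accuracy(y_true, y_pred))
print("demographic parity difference:", demographic_parity_difference(y_pred, group))
```

Computing both quantities side by side makes explicit the trade-off that may exist between overall predictive performance and equal treatment of groups.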
AI decision-making should also be assessed through civic participation and by independent organisations committed to democratic standards. Wherever possible, citizens’ councils and digital rights organisations should be invited into the review process. The public can deliberate about whether a decision taken by an algorithm is fair, transparent, and explainable.
Based on existing research, it is possible to highlight certain important components of a strong AI regulatory regime. Given space constraints, this answer cannot be exhaustive and neglects important issues around compliance.[10]
Firstly, it may not always be sensible to rely solely on a single (or unitary) body to regulate AI. Rather, it is important to develop government capacity more broadly. This likely requires new educational and training opportunities for public servants, as well as the development of communication channels through which relevant actors can share their experiences.
Relatedly, it is important to develop an understanding of the areas in which existing legislation can be expanded to cover AI. As Dr McBride has noted, there are certain parallels here between AI and finance.[11] A key question is how to optimise and improve AI regulation within existing bodies. This work must be complemented by cross-departmental collaboration aimed specifically at engaging the wider stakeholder community. It will be especially important to ensure sufficient discussion with, and involvement by, non-governmental stakeholders in academia, the private sector, and civil society.
For public sector AI projects, two specific strategies could be adopted. The first, as previously discussed, is the use of strategic procurement. This approach uses government funding to drive change in how AI is built and implemented, which can lead to positive spill-over effects in the industry. Alternatively, the UK could aim to recreate Canada's architectural review board for new digital government services, but with a specific focus on new AI systems being implemented in the public sector.[12] It may be possible to develop a new review board based within an existing Government department and coordinated by, for example, a Chief Information Officer. The goal of such a board would be to check whether the necessary precautions have been taken and whether legal or regulatory guidelines have been complied with before any new system is launched or piloted.
The regulatory framework currently governing the use of AI in decision-making has several gaps. Research in the Governance of Emerging Technologies programme has focussed on measures to reduce biases involved in AI decision-making. Most existing technical measures of AI fairness have been developed in the United States and do not live up to the aims of UK non-discrimination law.[13] This is because UK non-discrimination law aims to reduce discrimination by helping to 'level the playing field' to achieve substantive (rather than merely formal) equality.
In their paper 'Bias Preservation in Machine Learning', Professor Wachter, Professor Mittelstadt and Dr Russell proposed a classification scheme for fairness metrics that reflects the distinction between formal and substantive equality. Metrics that are 'bias preserving' treat the status quo as a neutral starting point for measuring inequality. In effect, these metrics take the acceptability of existing inequalities for granted. This is a problem if we want to use AI not simply to uphold the status quo, but to actively make society fairer by rectifying existing social, economic, and other inequalities. It also clashes with the aim of non-discrimination law to achieve substantive equality. In contrast, 'bias transforming' metrics do not take the status quo for granted, but actively question which existing inequalities and biases are appropriate to teach a model or AI system. To help achieve system-level transparency and fairness in practice, we recommend requiring organisations that use AI to make important decisions to use bias-transforming metrics to measure fairness and to make fair decisions, and, ideally, to publish summary statistics.
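The sketch below illustrates the distinction at a high level, assuming a binary decision and a binary protected attribute: an error-based metric that conditions on historical labels (the gap in true positive rates, often associated with bias-preserving approaches) alongside an outcome-based metric that does not (the demographic parity difference, closer in spirit to bias-transforming summary statistics). The data are synthetic, and the mapping of these particular metrics onto the paper's categories is illustrative rather than definitive.

```python
# Minimal, illustrative sketch: a metric that conditions on historical labels
# versus one that ignores them. Not code from the cited paper.
import numpy as np

def true_positive_rate(y_true, y_pred):
    positives = y_true == 1
    return float(y_pred[positives].mean()) if positives.any() else 0.0

def equal_opportunity_difference(y_true, y_pred, group):
    """Gap in true positive rates: relies on historical labels y_true."""
    tpr_0 = true_positive_rate(y_true[group == 0], y_pred[group == 0])
    tpr_1 = true_positive_rate(y_true[group == 1], y_pred[group == 1])
    return abs(tpr_0 - tpr_1)

def demographic_parity_difference(y_pred, group):
    """Gap in positive decision rates: ignores historical labels entirely."""
    return float(abs(y_pred[group == 0].mean() - y_pred[group == 1].mean()))

y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])   # historical outcomes (may encode bias)
y_pred = np.array([1, 1, 0, 0, 1, 0, 0, 0])   # model decisions
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # protected attribute

print("equal opportunity difference:", equal_opportunity_difference(y_true, y_pred, group))
print("demographic parity difference:", demographic_parity_difference(y_pred, group))
```

Because the first metric treats the historical labels as ground truth, it inherits whatever bias those labels encode; the second asks directly whether the two groups receive positive decisions at similar rates.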
There has been a large amount of international interest in the regulation of AI. The OECD has identified over 1,000 policy instruments related to AI policy and strategy, and 299 that relate to guidance and regulation. UK AI governance must move beyond recommendations and best practices. Many countries have developed AI bias checklists, tools, audits, and so on, but these are rarely binding. The Council of Europe has collected 450 different initiatives on AI and its governance, of which just six are marked as binding.[14] Thus, when developing new governance tools, it is essential that they are concrete and that there are consequences for non-compliance.
The European Union is clearly at the forefront of developing concrete AI regulation, and the UK could learn from the debates surrounding its AI Act. In particular, two lessons can be drawn from the Act. Firstly, it illustrates the difficulty of balancing AI innovation with respect for fundamental rights.[15] The goal of the Act is to create a legal basis for AI and to prevent societal risks and broader negative consequences. Its legal framework sets rules to regularise the use of socio-technical systems in both the private and public sectors. For instance, it proposes to categorise AI-based products as high-risk or non-high-risk, depending on factors such as the context in which these systems are implemented. High-risk systems will need to meet strict requirements to mitigate harms.
Secondly, the Act shows the difficulty of explicitly regulating the technology itself, rather than relying on related legislation. In many cases, legislation already exists that is relevant to AI and its governance, or that could at the very least be slightly amended to apply to AI. Relevant laws often relate to data privacy, discrimination, and harm. Related regulatory bodies can also provide inspiration, should the UK choose to regulate the technology itself. Spain is creating the first supervisory agency for AI in the EU: a public organisation that will oversee compliance with the AI Act.[16] Although the agency has not yet been set up, it is expected to mirror, for AI, the powers that the Spanish Data Protection Agency already has in relation to data protection and privacy.[17]
Beyond the AI Act, there are also recurrent themes in debates around international AI governance that the UK could draw from in developing its own regulatory regime. One common debate surrounds the necessity of developing cross-departmental task forces related to AI that help to develop and coordinate strategy, governance, and regulation.[18] Such groups often involve consultations with the private sector, civil society, and other stakeholders to then develop and propose clear guidance related to AI. Consultation, involvement, discussion, and the transparency of regulatory development processes are key for developing better regulations for AI.
(November 2022)
[1] See World Economic Forum (2020) ‘AI Procurement in a Box’, at https://www3.weforum.org/docs/WEF_AI_Procurement_in_a_Box_AI_Government_Procurement_Guidelines_2020.pdf
[2] For more on this topic, see Brent Mittelstadt (2017) ‘“The Doctor Will Not See You Now”: The Algorithmic Displacement of Virtuous Medicine’. Available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3298923
[3] Sandra Wachter (2022) ‘The Theory of Artificial Immutability: Protecting Algorithmic Groups under Anti-Discrimination Law’. Available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4099100
[4] See Robert Brauneis and Ellen Goodman (2018) ‘Algorithmic transparency for the smart city’. Available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3012499
[5] See Keegan McBride et al. (2021) 'Towards a Systematic Understanding on the Challenges of Procuring Artificial Intelligence in the Public Sector'. Available at https://www.researchgate.net/publication/354507139_Towards_a_Systematic_Understanding_on_the_Challenges_of_Procuring_Artificial_Intelligence_in_the_Public_Sector
[6] See https://huggingface.co/docs/hub/model-cards
[7] See Brent Mittelstadt, Chris Russell, and Sandra Wachter (2018) ‘Explaining Explanations of AI’. Available at https://arxiv.org/abs/1811.01439. In addition, see Tim Miller (2018) ‘Explanation in artificial intelligence: Insights from the social sciences’. Available at https://www.sciencedirect.com/science/article/abs/pii/S0004370218305988
[8] See Sandra Wachter, Brent Mittelstadt and Chris Russell (2018) 'Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR'. Available at https://jolt.law.harvard.edu/assets/articlePDFs/v31/Counterfactual-Explanations-without-Opening-the-Black-Box-Sandra-Wachter-et-al.pdf
[9] See Ana Valdivia, Javier Sánchez‐Monedero, and Jorge Casillas (2021) ‘How fair can we go in machine learning? Assessing the boundaries of accuracy and fairness’. Available at https://onlinelibrary.wiley.com/doi/10.1002/int.22354
[10] On this topic, see Christian Djeffal, Markus Siewert and Stefan Wurster (2022) ‘Role of the state and responsibility in governing artificial intelligence: a comparative analysis of AI strategies’. Available at https://www.tandfonline.com/doi/abs/10.1080/13501763.2022.2094987
[11] See Mark Dempsey et al. (2022) 'Transnational Digital Governance and Its Impact on Artificial Intelligence', in The Oxford Handbook of AI Governance. Available at https://academic.oup.com/edited-volume/41989/chapter-abstract/355437944?redirectedFrom=fulltext
[12] See https://wiki.gccollab.ca/GC_Enterprise_Architecture/Board
[13] See also Sandra Wachter, Brent Mittelstadt and Chris Russell (2021) ‘Bias Preservation in Machine Learning: The Legality of Fairness Metrics Under EU Non-Discrimination Law’. Available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3792772
[14] See https://www.coe.int/en/web/artificial-intelligence/national-initiatives
[15] Achieving this balance has created debates with civil society groups. For example, see https://edri.org/our-work/the-eus-artificial-intelligence-act-civil-society-amendments/ and Michael Veale and Frederik Zuiderveen Borgesius (2021) 'Demystifying the Draft EU Artificial Intelligence Act: Analysing the good, the bad, and the unclear elements of the proposed approach'. Available at https://www.degruyter.com/document/doi/10.9785/cri-2021-220402/html?lang=en
[16] See https://portal.mineco.gob.es/es-es/comunicacion/Paginas/agencia-espa%C3%B1ola-de-supervisi%C3%B3n-de-inteligencia-artificial.aspx