Written Evidence Submitted by the University of Cambridge Minderoo Centre for Technology and Democracy
(GAI0032)
The University of Cambridge Minderoo Centre for Technology and Democracy is an academic research centre with world-leading expertise covering the regulation and governance of emerging technologies. Working with key stakeholders from academia, the private and public sectors, and civil society, the Centre deploys its expertise to hold tech to account. This submission was prepared by Dr Ann Kristin Glenster, Senior Policy Advisor on Technology Governance and Law.
1.1. Over the last few years, so-called artificial intelligence (AI)[1] technologies have emerged in most sectors, both public and private. While these technologies have brought, and continue to bring, phenomenal benefits across society, they also pose new risks to privacy, civil liberties, and democratic values, including equitable justice.
1.2. The University of Cambridge Minderoo Centre for Technology and Democracy therefore recommends the adoption of a comprehensive, binding statutory governance framework to ensure that the interests of all stakeholders are balanced in the use of AI technologies in the UK, and that these technologies are used in a proportionate and responsible way. We recommend that the framework be based on, or incorporate, the recommendations on AI of the Organisation for Economic Cooperation and Development (OECD), published in 2019.[2]
1.3. In this context, we further support the initiative taken by the Department for Digital, Culture, Media and Sport (DCMS) for a framework for the governance of AI that covers all sectors.[3] In lieu of a universal definition of AI applications, the DCMS proposes ‘a set of core characteristics and capabilities of AI to guide regulators.’[4] Specifically, the DCMS proposes that each sector-specific regulator be tasked with identifying the risks, and the mitigating measures, that apply to the use of AI technologies falling within its remit.
1.4. We support the view that a statutory framework must not use a closed definition of the term AI, as this is likely to create loopholes that would allow users of new technologies to escape regulatory oversight. We believe that, in any statutory governance framework, the definition of AI technologies should therefore be broad and technology-neutral.
1.5. We are nevertheless concerned that the lack of a definition would give sector-specific regulators too much leeway to develop governance frameworks that lack force and are too vague to offer meaningful regulation on the ground. We therefore recommend that Parliament adopt a binding statutory framework, based on overarching principles, to govern the use of AI technologies, setting out the specific obligations and powers of sector-specific regulators to oversee and enforce the implementation of the framework by all users of AI technologies.
The recommendations by the University of Cambridge Minderoo Centre for Technology and Democracy to the Science and Technology Committee for the governance of artificial intelligence (AI) are:
To adopt a binding statutory framework to govern the use of AI technologies.
To adopt mandatory transparency rules obliging users of AI technologies to explain to regulators how AI uses data and the effects that such use may have on individuals, including the impact on civil liberties and privacy.
To adopt mandatory transparency rules obliging users of AI technologies to explain to individuals how their personal data is used and the effects AI technologies may have on them.
To adopt mandatory data minimisation rules, including requirements to anonymise or pseudonymise data whenever possible.
To adopt a mandatory complaint procedure overseen by an independent ombudsman with powers to fine users of AI technologies and to award individuals compensation.
To adopt a statutory Code of Conduct securing researchers’ access to data on the use of AI technologies for research purposes.
2.1. Transparency is crucial for the public and regulators to understand and trust the use of AI technologies. Transparency must be meaningful, which means that the regulatory framework must require users of AI technologies to explain what data are used, how those data are used, and what consequences the use of the technologies may have, especially for individuals and marginalised groups. We therefore support the view that where a user of AI technologies is not able to forecast the risks, and the data used are sensitive and/or the decisions taken by the system are significant, that lack of foresight should prohibit the use of the AI technologies until risk assessments can be produced to a satisfactory standard set by the regulator. We therefore recommend that the governance framework include a prohibition on the use of AI technologies to experiment on people without robust risk assessments and mitigation guardrails in place.[5]
2.2. The obligation to provide transparency must encompass a duty on users of AI technologies to explain to regulators how the AI technologies are designed for safety, and how risks are assessed and mitigated, particularly risks to civil liberties, including the right to privacy. The governance framework must make it possible for regulators to examine AI technologies without limitations, including limitations imposed by intellectual property and trade secret protections.
2.3. To ensure that both privacy rights and proprietary rights in data and systems are protected, we recommend that the governance framework include an independent oversight body with powers to request access to all data and AI systems.[6]
2.4. Transparency also means that users of AI technologies must explain to individuals how these technologies use personal data and how they may affect them. Drawing on the intense debate on a ‘right to explanation’ in EU law,[7] we do not consider it useful to explain to individuals all the technical details of an AI system. Instead, we advocate an approach of meaningful and accessible information. Such information should include how long the data is stored, how it is combined with other data, and whether it is shared with third parties.
2.5. Our recommendations are in line with the OECD’s recommendation 3.1 on transparency and explainability, which states that:
‘AI Actors should commit to transparency and responsible disclosure regarding AI systems. To this end, they should provide meaningful information, appropriate to the context, and consistent with the state of art: (i) to foster a general understanding of AI systems; (ii) to make stakeholders aware of their interactions with AI systems, including in the workplace; (iii) to enable those affected by an AI system to understand the outcome; and (iv) to enable those adversely affected by an AI system to challenge its outcome based on plain and easy-to-understand information on the factors, and the logic that served as a basis for the prediction, recommendation or decision.’[8]
2.6. Instead of a transparency obligation prescribing the release of comprehensive technical detail of AI technologies, we recommend the adoption of mandatory transparency obligations that highlight what personal data is used and the possible implications or consequences that processing of data by an AI system may have for the individual. This must include a right for the individual to correct their personal data and, where possible, a right to consent, but only in instances where the consent would be real, meaning there must be an actual choice.
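By way of illustration only, the sketch below (in Python) shows how such a disclosure might be captured in a simple, structured form. The structure, field names, and example values are our own assumptions, mirroring the information items listed in paragraphs 2.4 and 2.6; they are not drawn from any existing statutory schema or standard.

from dataclasses import dataclass, asdict
from typing import List
import json

@dataclass
class TransparencyDisclosure:
    """Illustrative, plain-language disclosure for an individual affected by an AI system.

    Field names are hypothetical; they mirror the information items recommended in
    paragraphs 2.4 and 2.6 (retention, combination with other data, sharing with third
    parties, and possible consequences), not any existing statutory schema.
    """
    system_name: str                      # name of the AI system, in plain language
    personal_data_used: List[str]         # categories of personal data processed
    retention_period: str                 # how long the data is stored
    combined_with: List[str]              # other data sources the data is combined with
    shared_with_third_parties: bool       # whether the data is shared with third parties
    possible_consequences: List[str]      # plain-language effects the system may have
    correction_contact: str               # how the individual can ask for data to be corrected

    def to_readable_summary(self) -> str:
        """Render the disclosure as a readable, structured summary."""
        return json.dumps(asdict(self), indent=2)

# Example usage (all values hypothetical):
disclosure = TransparencyDisclosure(
    system_name="Benefits eligibility triage tool",
    personal_data_used=["name", "address", "income history"],
    retention_period="24 months after the final decision",
    combined_with=["housing records"],
    shared_with_third_parties=False,
    possible_consequences=["Your application may be prioritised or deprioritised for review."],
    correction_contact="data-correction@example.gov.uk",
)
print(disclosure.to_readable_summary())

The point of the sketch is that the disclosure centres on the personal data used and its possible consequences for the individual, rather than on technical detail of the underlying system.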
2.7. We further recommend the introduction of mandatory data minimisation rules, whereby an AI system is only allowed to process personal data necessary for the specific outcome the system has been designed to achieve. All AI systems should anonymise or pseudonymise personal data as far as possible in all instances, even when the personal data is not considered to be sensitive.
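As a minimal sketch of what pseudonymisation might look like in practice, the example below (in Python) replaces a direct identifier with a keyed hash before a record is passed to an AI system, and drops fields not needed for the system’s purpose. The values and field names are hypothetical, and keyed hashing is only one of several possible pseudonymisation techniques; it is not a complete data-protection solution.

import hmac
import hashlib

def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The mapping cannot be reversed without the secret key, which should be held
    separately from the dataset passed to the AI system. This is an illustrative
    sketch of one pseudonymisation technique only.
    """
    return hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Example usage (values hypothetical): only the pseudonym and the fields strictly
# needed for the system's purpose are passed on, in line with data minimisation.
key = b"key-held-by-the-data-controller-not-the-ai-pipeline"
record = {"national_insurance_no": "QQ123456C", "age_band": "35-44"}
minimised_record = {
    "subject_pseudonym": pseudonymise(record["national_insurance_no"], key),
    "age_band": record["age_band"],
}
print(minimised_record)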
2.8. The transparency obligations should also include an obligation to provide researchers with access to data, which is discussed below in Section 4.
3.1. There is currently no comprehensive legal regime in the UK under which individuals or organisations can challenge the use of AI technologies. This is a major concern and a gap that must be filled to gain the public’s trust.
3.2. We recommend the adoption of a statutory governance framework which would make it mandatory for users of AI technologies to afford individuals an effective and easy-to-use right to complain in cases where their personal data is used in AI technologies and/or where the outcome of these technologies produces a significant effect on the individual concerned.
3.3. The right to complain must include a right to challenge outcomes that were based on incorrect data, or on data to which the user of AI technologies should not have had access. It should also include a right to complain about shortcomings in the provision of information and transparency, and a right to demand human oversight of decisions that may have significant implications for specific individuals. The right to complain should be overseen by an independent ombudsman with powers to fine users of AI technologies for breaches and to award individuals direct compensation.
4.1. As there are no current arrangements, we cannot comment on their strengths or weaknesses, other than to say that it is imperative that Parliament put in place mandatory rules securing researchers’ access to data used in AI technologies. In that regard, there is already a statutory framework for researchers’ access to data held by public authorities under Chapter 5 of Part 5 of the Digital Economy Act 2017 (DEA), and provision for access to statistical data under Section 39(4)(1) of the Statistics and Registration Service Act 2007, which can be used as a template.
4.2. According to the European Digital Media Observatory (EDMO), ‘Academic and civil society researchers have been calling for greater access to digital platform data – data that would allow greater insights into the impacts these platforms have on individuals, social groups, and our societies as a whole – for some time.’[9] The need for researchers’ access to data on AI technologies is becoming increasingly pressing, as these technologies will have profound ethical implications, including their impact on privacy and their potential surveillance capabilities.[10] Granting independent researchers access to these technologies is imperative to understand their effects, to ensure they are built safely, and to ensure that they are optimised for the benefit of society. The EDMO explains that,
‘Scientific research is intended to help us understand the world around us. It aims to identify trends, to monitor change over time, to build understanding of what is happening and why, and to develop and trial innovations that can improve society. As the communication, media, and information ecosystems have changed and evolved, so too has the nature of scientific inquiry. And yet, researchers’ ability to study a growing and increasingly important element of the human experience is curtailed because of the limited availability of platform data. In particular, individual-level data are needed to build a better understanding of why observed phenomena are happening, who is affected, and what the effects entail.’[11]
4.3. Recently, the EDMO proposed a Code of Conduct for researchers’ access to data to ensure digital platforms conform with European law.[12] The Code of Conduct specifies which research would qualify for a right of access, based on the purpose of the research and its aim of contributing to ‘society’s collective knowledge’; such research may only be conducted by an institution that carries out not-for-profit research as one of its principal activities.[13] The research must comport with accepted ethical and methodological standards, including the Ethical Guidelines for Internet Research of the Association of Internet Researchers.[14]
4.4. We recommend the adoption of a similar Code of Conduct for researchers’ access to data on the use of AI technologies. The implementation of the Code must be overseen by an independent body, possibly under UK Research and Innovation, which would be tasked with weighing up different considerations, including privacy rights and proprietary rights in data. We recommend that the Code of Conduct include a provision, lifted from the DEA, concerning the need to de-identify personal data whenever possible. We recommend that the governing body be in charge of managing an accreditation scheme similar to that set out in the DEA, based on the DEA’s seven principles of: (i) confidentiality; (ii) transparency; (iii) ethics and law; (iv) public interest; (v) proportionality; (vi) accreditation; and (vii) retention and onward disclosure.
4.5. There are numerous initiatives in which concerns regarding the potential for misuse of data – such as inadvertent loss of trade secrecy protection – have been addressed by setting up safe spaces or methods by which researchers can access data without compromising or copying it.[15] The independent body – which could be UK Research and Innovation or a regulator such as Ofcom – should be tasked with overseeing the use of such spaces or methods to ensure that researchers’ access to data is fulfilled in a timely and complete fashion, and that users of AI technologies cannot withhold access because of commercial concerns. We therefore recommend adopting stipulations similar to those already enacted in Chapter 5 of Part 5 of the Digital Economy Act 2017.
4.6. We also recommend that the overseeing regulatory body, whether it be UK Research and Innovation or another public body, be tasked with reporting on researchers’ access to information in a similar manner to Ofcom’s duty under clause 137 of the proposed Online Safety Bill. The annual report must (a) describe how, and to what extent, researchers carrying out independent research into the development and use of AI technologies are able to obtain information from users of AI technologies for research purposes, and (b) propose tangible plans to remove any legal or other obstacles to researchers’ access to data. The body should report to the Secretary of State and to the Digital Regulation Cooperation Forum.
Bibliography
Maja Brkan and Grégory Bonnet, ‘Legal and Technical Feasibility of the GDPR’s Quest for Explanation of Algorithmic Decisions: of Black Boxes, White Boxes and Fata Morganas’ (2019) 11 EJRR 18
Lilian Edwards and Michael Veale, ‘Slave to the Algorithm: Why a Right to an Explanation Is Probably Not the Remedy You Are Looking For’ (2018) 16 DLTR 18
European Digital Media Observatory and the George Washington University’s Institute for Data, Democracy & Policy, Report of the European Digital Media Observatory’s Working Group on Platform-to-Researcher Data Access (31 May 2022)
Department for Digital, Culture, Media and Sport (DCMS), Establishing a pro-innovation approach to regulating AI: An Overview of the UK’s emerging approach (CP 728, 18 July 2022)
Jakko Kemper and Daan Kolkman, ‘Transparent to Whom? No Algorithmic Accountability Without a Critical Audience’ (2019) 22 Information, Communication & Society 2081
Joshua A. Kroll et al, ‘Accountable Algorithms’ (2017) 165 U of Penn L Rev 633
Gianclaudio Malgieri and Giovanni Comandé, ‘Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation’ (2017) 7 IDPL 243
Organisation for Economic Cooperation and Development (OECD), Recommendation of the Council on Artificial Intelligence (OECD/LEGAL/0449 2019)
Andrew D. Selbst and Julia Powles, ‘Meaningful Information and the Right to An Explanation’ (2017) 7 IDPL 233
Katarina Foss-Solbrekk and Ann Kristin Glenster, ‘The intersection of data protection rights and trade secret privileges in “algorithmic transparency”’ in Research Handbook on EU Data Protection Law (Eleni Kosta, Ronald Leenes and Irene Kamara, eds., Edward Elgar Publishing 2022)
Sandra Wachter, Brent Mittelstadt and Chris Russell, ‘Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR’ (2018) 31 Harv J L & Tech 841
Sandra Wachter, Brent Mittelstadt and Luciano Floridi, ‘Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation’ (2017) 7 IDPL 76
(November 2022)
[1] We do not attempt to define AI in this submission, but acknowledge that AI is a contested term. Our focus is not on the technical limitations of AI technologies, but on their governance to ensure that their benefits are balanced with the risks they may pose to society.
[2] Organisation for Economic Cooperation and Development (OECD), Recommendation of the Council on Artificial Intelligence (OECD/LEGAL/0449 2019).
[3] Department for Digital, Culture, Media and Sport (DCMS), Establishing a pro-innovation approach to regulating AI: An Overview of the UK’s emerging approach (CP 728, 18 July 2022)
[4] ibid p. 9.
[5] ibid p. 14.
[6] Katarina Foss-Solbrekk and Ann Kristin Glenster, ‘The intersection of data protection rights and trade secret privileges in “algorithmic transparency”’ in Research Handbook on EU Data Protection Law (Eleni Kosta, Ronald Leenes and Irene Kamara, eds., Edward Elgar Publishing 2022)
[7] See inter alia Lilian Edwards and Michael Veale, ‘Slave to the Algorithm: Why a Right to an Explanation Is Probably Not the Remedy You Are Looking For’ (2018) 16 DLTR 18; Gianclaudio Malgieri and Giovanni Comandé, ‘Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation’ (2017) 7 IDPL 243; Joshua A. Kroll et al, ‘Accountable Algorithms’ (2017) 165 U of Penn L Rev 633; Sandra Wachter, Brent Mittelstadt and Chris Russell, ‘Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR’ (2018) 31 Harv J L & Tech 841; Maja Brkan and Grégory Bonnet, ‘Legal and Technical Feasibility of the GDPR’s Quest for Explanation of Algorithmic Decisions: of Black Boxes, White Boxes and Fata Morganas’ (2019) 11 EJRR 18; Sandra Wachter, Brent Mittelstadt and Luciano Floridi, ‘Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation’ (2017) 7 IDPL 76; Andrew D. Selbst and Julia Powles, ‘Meaningful Information and the Right to An Explanation’ (2017) 7 IDPL 233; and Jakko Kemper and Daan Kolkman, ‘Transparent to Whom? No Algorithmic Accountability Without a Critical Audience’ (2019) 22 Information, Communication & Society 2081.
[8] OECD supra note 2.
[9] European Digital Media Observatory and the George Washington University’s Institute for Data, Democracy & Policy, Report of the European Digital Media Observatory’s Working Group on Platform-to-Researcher Data Access (31 May 2022), p. 1.
[10] ibid p. 1.
[11] ibid p. 4.
[12] ibid.
[13] ibid pp. 3-4.
[14] Association of Internet Researchers, https://aoir.org/ethics/ (accessed 24 November 2022).
[15] See for example the US HathiTrust Research Centre, https://www.hathitrust.org/htrc (accessed 24 November 2022).