Written Evidence Submitted by The University of Glasgow
(GAI0057)
Artificial intelligence (AI) systems are now being integrated into everyday applications that make decisions affecting the public, often in high-risk domains such as medicine, jurisprudence, education and hiring. The UK currently lacks governance regulating how high-risk AI systems should be developed and deployed. Recent efforts by the Information Commissioner’s Office (ICO) to provide toolkits and guidance on AI data protection and transparency, in line with the GDPR, the Data Protection Act 2018 and the Equality Act 2010, are to be applauded but are clearly not enough. While the European Commission’s High-Level Expert Group and the International Organization for Standardization (ISO) are working to provide standards and guidelines for developing trustworthy AI, and with new EU AI regulation on the horizon, the UK has fallen behind in ensuring that AI systems are developed responsibly.
Academic researchers as well as industry have increasingly called for human-centric AI systems that are fair, accountable and transparent. Research efforts are underway to ensure systems are fair; however, much of the power to execute these measures rests with AI experts employing quantitative fairness measures rather than reflecting human values. In a governance vacuum, many organisations have drawn up their own development guidelines that aim to help designers and developers build responsible and accountable AI systems. Research into Explainable AI (XAI) has made great strides towards transparent AI systems; however, little of this work has been taken up in deployed AI systems.
We believe that this matter needs to be tackled through a three-pronged approach:
AI systems that make decisions affecting the public should be reviewed and scrutinised, whether they are developed in the private or public sector, especially in high-risk domains. A growing concern is that the AI development lifecycle encompasses many stages that are currently not the focus of any form of governance, nor is it possible to challenge decisions made around AI systems. For example, the decision whether to develop an AI system in the first place, taken at the business case stage, is currently not subject to scrutiny. In addition, forms of scrutiny and feedback once a system is deployed are also very rare. While there is increasing focus on scrutinising the model building and evaluation steps, data collection has not received equal attention even though it is the driving factor of much of AI. For example, lay users might not be aware that their data, possibly their DNA, is being used to build an AI system. There are also challenging areas ahead with regard to ownership of the data and the resulting AI systems. For example, who does an artwork created by DALL-E belong to? Is it the artist who created the original artwork used in training DALL-E, the creators of DALL-E, or the user who requested the artwork to be created? What data should text or code generation models, such as GPT-3 and GitHub Copilot, be permitted to use for training? Who bears the legal responsibility when such models reproduce copyrighted content?
Governance will necessarily need to involve those who are affected by the decisions made by the AI system, i.e. lay users, as well as the domain experts and AI experts who develop these solutions. Possible approaches such as human-in-the-loop AI, participatory design approaches or representation on organisational committees need to be integrated into governance practices appropriate to each step of the AI development lifecycle.
High-stakes AI systems will need regulation at the national level. The ICO’s role could possibly be extended to provide this regulatory oversight.
See The Guild’s response to the AI Act.
We believe that valuable lessons could be learned from the European Commission’s proposed regulation on AI, which classifies AI applications by level of risk. Indeed, the UK should embrace the proposed AI Act or adopt an equivalent framework. It is also important that not too many restrictions are placed on AI development for low-risk applications that can still bring many benefits to users. In this context, The Guild’s response to the AI Act, which calls on the EU to involve researchers and universities in discussions around the AI Act and its future refinements, should be supported. The idea is to recognise that while there needs to be a legal framework around the use of AI, such a framework should not put an unnecessary burden on researchers or on research progress in AI[1].
(November 2022)
[1] https://www.the-guild.eu/news/2021/the-guild-sets-recommendations-for-the-ai-act.html