Written Evidence Submitted by Queen Mary University of London

(GAI0073)

This response includes input from colleagues across the University, summarised to address the questions posed.

Queen Mary University of London is a leading research-intensive university. Throughout our history, we have fostered social justice and improved lives through academic excellence, and we continue to live and breathe this spirit today, not simply because it is ‘the right thing to do’ but for what it helps us achieve and the intellectual brilliance it delivers.

Our reformer heritage informs our conviction that great ideas can and should come from anywhere. It’s an approach that has brought results across the globe, from the communities of east London to the favelas of Rio de Janeiro. We continue to embrace diversity of thought and opinion in everything we do, in the belief that when views collide, disciplines interact, and perspectives intersect, truly original thought takes form.

We have established research highways to focus our research on tackling some of society’s biggest challenges, to grasp new opportunities and to transform lives globally through interdisciplinary research. We currently have five research highways focused on key global challenges, one of which, Digital, Information, Data, is charged with shaping a healthier, more prosperous and more equitable future for our digital society. As part of our 2030 strategy, we have also invested in the creation of a new research-intensive Institute at Queen Mary focused on the Digital Environment. The Institute is tasked with bringing together researchers from across all disciplines to use interdisciplinary approaches to address emerging global issues.

We are well placed to respond to this inquiry given the critical mass of researchers we have working in this space, and we believe that adequate governance of AI is critical to the safe, reliable, and equitable use of AI and its applications.

How effective is current governance of AI in the UK?

Implementing effective governance of AI in the UK is challenging as the technology develops and as understanding and uses of AI continue to evolve. The current legal and regulatory framework does not provide detailed rules addressing specific technologies’ use in specific contexts, and the lack of comprehensive legislation is hindering both development and the protection of individual interests and security.

Large, multinational companies predominantly serve their own self-interest, and if the UK takes a pro-innovation approach without the necessary guarantees to protect individuals' rights and freedoms, social justice and cohesion, and our democratic processes, there is a danger that this unrestrained pursuit of profit will continue. Wider conversations on the use of technology, what it can or cannot do for us, and how we as a society see its place in the future would help to formulate a picture of the required level of governance for AI in the UK.

What are the current strengths and weaknesses of current arrangements, including for research?

Strengths: While there are some high-level ethical principles and guidelines on the development and deployment of AI, these are usually not translated into specific rules within a specific context. This means that tech companies, developers, and investors have considerable flexibility. For research this may be an advantage, in that innovative new AI technology can be developed and trialled more easily.

Weaknesses: While the flexibility within the current frameworks and guidance offers opportunities, it also presents disadvantages for the governance of AI. Because there are rarely specific rules for specific contexts, companies, investors and developers have greater freedom in how they apply AI. This lack of attention to context creates accountability problems in determining what technology is developed and how it is used. Greater scrutiny could be placed on how industry sets its directives, whom it serves and the potential for exploitation, alongside greater emphasis on training students in the ethics of AI, empowering them to demand more accountability from tech companies.

 

What measures could make the use of AI more transparent and explainable to the public?

AI is often overhyped, either positively or negatively, with unhelpful consequences. As the technology is advancing quickly, the public may lack understanding of what AI is, how it works, and its strengths and limitations. There is therefore uncertainty among the general population about when AI is being used, and greater efforts could be invested in educating the public on what AI is, how it works and its applications. Open consultations with the public could be held to address this; however, care should be taken given the way in which AI is often reported on and presented in the media. How the media engage with AI, and how they choose to report on it, should be considered, along with the development of methods to educate the media in this space.

More research is required to make AI itself more transparent and explainable. This is particularly true as methods become more complicated, especially with modern deep learning systems. Methods such as LIME and SHAP are steps in the right direction, although they remain a long way from a human-like explanation. Indeed, requiring human-like explanations of complex AI systems is beyond the current state of the art and may not be feasible in the near future. It may be reasonable to require higher-risk AI systems to produce additional outputs providing some level of diagnostic information, such as heatmaps or local feature importance for predictions. Given the heterogeneity of AI systems, however, it may be difficult to standardise the format of such diagnostic information.
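By way of illustration, the short sketch below (assuming a Python environment with the open-source ‘shap’ and ‘scikit-learn’ packages, which are examples chosen for illustration rather than anything specific to this submission) shows the kind of local feature-importance output referred to above: per-prediction diagnostic information, not a human-like explanation.

    # Minimal sketch (assumed setup: Python with 'shap' and 'scikit-learn'
    # installed); illustrates local feature importance as diagnostic output.
    import shap
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    # Train a simple classifier on a public dataset.
    data = load_breast_cancer()
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(data.data, data.target)

    # Compute per-feature contributions (SHAP values) for a few predictions.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(data.data[:5])

    # Each entry attributes one prediction to the input features -- the sort of
    # diagnostic information a regulator might require for higher-risk systems.
    print(shap_values)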

 

How should decisions involving AI be reviewed and scrutinised in both public and private sectors?

Transparency should be brought to decision-making involving AI wherever possible, and a widely accepted agreement should be reached on what decision-making could, or should, be made via AI and what should remain in the human domain. Agreed mechanisms could be established by which decisions involving AI can be openly shared or reviewed, especially where the application of AI could be considered high-risk or involves automated processing. Greater scrutiny could also be given to the sustainability of AI, both in terms of its applications and in regard to its environmental impacts.

Research has shown that public officials are generally hesitant to delegate their decision-making discretion to algorithmic systems. A study by Queen Mary researchers showed that cognitive biases provide a better explanation of why public officials resist algorithmic decision-making than fear of losing traditional discretion. These biases relate to the opaque nature of algorithms and AI systems, which results in performance uncertainties. The recommendation is that discretion exercised via algorithms needs to be embedded into the professional development of public sector officials. Accordingly, applications of algorithms and AI in public policy should embrace discretion in decision-making and enable public officials to perform data-driven decision-making in ways more compatible with current professional practices, rather than simply automating tasks.

Are current options for challenging the use of AI adequate and, if not, how can they be improved?

The use of AI is not consistently disclosed, which makes challenging it difficult. Mechanisms to challenge the use of AI under GDPR legislation are perhaps not as widely known as they could be, and because there is no general requirement to declare when AI is being used, it is increasingly difficult to know when, or what, to challenge.

There is equally a vested interest from large corporations in maintaining the status quo, and consideration should be given to how impactful change could be achieved.

 

How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?

Any AI regulation should be risk-based. Many deployments of AI are no or low risk and therefore require little, if any, regulatory oversight. Higher-risk uses of AI, however, require greater scrutiny. Some uses of AI may pose an unacceptable threat to the safety, livelihoods and rights of people and are best prohibited. The EU framework for the regulation of AI offers such an approach.

For higher-risk AI, introducing increased regulator checks and/or overarching Government regulation could be of benefit, in addition to seeking inspiration from currently regulated substances, both nationally and internationally. Models exist for the monitoring of other controlled materials, for instance drug production, where an FDA-style approach could be used to filter out harmful AI applications. The US Food and Drug Administration requires controlled testing on animals to establish safety, followed by further testing on small populations to establish efficacy. A comparable approach would allow the impact of an algorithm or intelligent system to be tested on, for instance, user preferences, cognitive autonomy or the amplification of harmful content.

Researchers from Queen Mary have conducted a series of case studies on algorithmic governance and the data-driven transformation of regulation in the UK. They found that regulators are upskilling their organisations with the development of data infrastructures, intelligence hubs and training while experimenting with novel applications of data science and AI. Some of these initiatives have yielded novel insights into the practice of risk-based regulation and better use of enforcement resources. Initiatives around smart regulation are beginning to enable regulators in the UK to be more responsive, although significant barriers to organisational readiness, skills and capabilities persist. Regulatory innovation via sandboxes and similar initiatives is driving changes to traditional approaches and promises to establish the UK’s global position as a leader in regulatory standards in the post-Brexit world.

 

To what extent is the legal framework for the use of AI, especially in making decisions, fit for purpose?

Current guidance on the explainability of AI emphasises the development of public trust and the practical translation of case studies and resources. While the emphasis is rightly on compliance and public assurance, more attention is needed on harnessing the innovative potential of AI through supporting initiatives translated into an enabling legal framework.

The current legal framework is not fit for purpose in managing the use of AI, especially in decision-making. The current offering is very limited, high-level and not context-specific, and greater emphasis needs to be given to developing and implementing appropriate data governance and management practices for high quality training, validation and testing data sets.

Is more legislation or better guidance required?

The combined development of enhanced legislation and better guidance is required to support improved governance of AI in the UK.

 

What lessons, if any, can the UK learn from other countries on AI governance?

Examples of what may currently be considered the gold standard for the responsible use of AI are the approaches taken by the EU, Estonia, and the Scandinavian countries more widely. It is important that any guidance or legislation produced by the UK is not in conflict with these approaches, as conflict would impact the transferability of technology and innovation.

 

(November 2022)