Written Evidence Submitted by the University of Surrey Institute for People-Centred Artificial Intelligence
(GAI0075)
About the Surrey Institute for People-Centred Artificial Intelligence
The University of Surrey’s Institute for People-Centred Artificial Intelligence offers a unique environment where visionary academics are shaping the future of AI as part of a multi-disciplinary community of co-creators. We pioneer the future of trustworthy, responsible, and inclusive AI for education, healthcare, wellbeing, social interaction, entertainment, work, and sustainability. We are working with the next generation of inspiring research leaders with a passion for shaping AI to create a brighter future for everyone.
Our Institute brings together Surrey’s UK-leading AI-related expertise in vision, speech and signal processing, computer science, and mathematics, with its domain expertise across engineering and the physical sciences, human and animal health, law and regulation, business, finance and the arts and social sciences.
The pan-University institute is spearheaded by a group of academics with a passion for collaboration and co-creation, innovation in how AI is taught, and a strong vision for people-centred AI. With this distinctive approach, the academic team builds on Surrey’s track record of collaboration with industry, the public sector, government, and other relevant institutions to develop imaginative solutions to real-world challenges.
Introduction
As recognised in the National AI Strategy, AI has the power to enhance resilience, productivity, growth, and innovation across all aspects of the UK economy. However, AI cannot be introduced ungoverned, so effective regulation will be essential to harnessing the opportunities that AI creates.
The approach should be people-centred, ensuring that the advancement of AI delivers in the interests of the entire population, not just the companies that stand to profit, and developing trust and understanding in a technology that will radically reshape how we live and enhance our lives.
That the Government and the Science and Technology Committee are looking at the regulation of AI is both timely and critical to the future development of the technology and the UK’s long-term leadership and influence.
We are pleased to submit our response to the Science and Technology Committee’s inquiry into the Governance of artificial intelligence (AI). We look forward to supporting the Committee’s work and will be happy to provide oral evidence when the evidence sessions begin.
For further information about the Surrey Institute for People-Centred Artificial Intelligence and our research please visit our website.
How effective is current governance of AI in the UK?
The approach to AI governance in the UK has evolved over time and varies across sectors operating under different regulatory bodies. The strongest governance of AI currently stems from the control of data through legislation such as the Data Protection Act, the Human Rights Act and the Equality Act, with AI featuring more commonly in recent legislation such as the National Security and Investment Act. Other controls on AI may be inferred from other, typically more sector-specific, legislation and regulation, such as the Financial Conduct Authority’s Principles for Businesses and the Prudential Regulation Authority’s Fundamental Rules, both of which apply to the financial services sector.
Advances in AI have translated into radically different applications with diverse implications for individual consumers and wider society. Fragmenting AI regulation across multiple sector-specific regimes not only creates confusion but also presents risks around accountability where legitimate applications fall foul of expected standards, and may incentivise innovation that exploits, rather than works within, a firm set of widely accepted principles.
This is likely to be compounded by the fact that many AI systems are not operated in a sector-specific fashion, with the same AI providing decisions and analysis across multiple sectors; a simple example is the Google search engine. In such cases, a sectoral approach to regulation means that the AI will have to be developed to comply with multiple sets of regulation, a costly and time-consuming process that will leave the UK uncompetitive in the fast-moving world of AI innovation.
The current approach relies on existing regulatory bodies adding the challenges of AI to their regulatory portfolios. Because the field of AI regulation is relatively new and expertise is in relatively short supply, in many cases these bodies will lack the skills, expertise and scale to regulate AI effectively. While sectors such as finance, medicine, security and defence, and energy have cultures of strong digital regulation and may be expected to adapt successfully, many other sectors are lacking in this regard.
At present, there is no international agreement or regulatory consistency across jurisdictions. While the UK has an advanced legal and regulatory system, there is little or no common language with other nations of a similar status when it comes to AI. The EU’s ‘International outreach for human-centric artificial intelligence’ initiative is one attempt to address this, but it is symptomatic of the need to develop AI regulation in an international context if the UK is to avoid falling behind. This is particularly important if we are to establish sound principles that underpin the adoption of AI and, more specifically, to ensure a people-centred approach to future regulatory frameworks for AI.
Nevertheless, there is an opportunity for the UK to play a leading role. The proposed EU AI Act, for instance, offers prospects for convergence and divergence, where appropriate.
Divergence needs to be strategically directed at areas where UK interests are unique and best served by a different approach. However, caution needs to be maintained in adopting such approaches, as unintended consequences can easily damage the UK’s future primacy in AI, with social and economic consequences to follow. Related challenges are evident in the international treatment of data protection: the divergence between EU and US data protection law posed very real problems for companies and institutions exchanging data on a global basis.
Transparency and trust are essential to the successful growth of AI adoption. A system of governance that is unbalanced and lacks transparency risks undermining public trust in a technology that can otherwise radically enhance our lives. The Government’s “pro-innovation” approach to the regulation of AI does not adequately address these issues, and this could ultimately undermine the UK’s ability to fully realise the opportunities that AI can bring.
What are the current strengths and weaknesses of current arrangements, including for research?
Current arrangements have helped sectors that are advanced in the use and application of AI to innovate and to establish policies and guidelines that are relevant and tailored to particular industries’ needs.
However, as the use of AI starts to extend beyond sectors which have mature regulatory and ethical frameworks, there is cause for concern. It is self-evident that the UK should avoid a multi-speed approach to AI regulation, with different sectors treating AI in different ways, reflecting different levels of maturity.
AI also has an increasingly important role to play in academic research. AI, as a new paradigm in computing, is finding its way into areas of research as diverse as social sciences, engineering, medicine and performing arts. While ethical frameworks exist for most spheres of research, the use of AI in such areas is less well developed and needs to be addressed urgently.
The importance of data protection, subject rights and privacy is broadly understood, but becomes less clear as AI development accelerates: research often utilises AI algorithms developed in other parts of the world and data sets of mixed provenance, shares insights and ideas on an international stage with no control over how those ideas may be used elsewhere, and fails to address questions such as whether a technology should be developed just because it can be.
What measures could make the use of AI more transparent and explainable to the public?
AI is a powerful tool that can aid the process and speed of decision making. Its applications across society are broad, with examples seen in areas as diverse as recruitment, healthcare, transport, energy and financial services products.
With AI’s increasing prevalence in our day-to-day lives, transparency and explainability are crucial to building public understanding of, and trust in, how and when AI can legitimately be applied, as is creating appropriate mechanisms by which AI decisions can be explained and challenged.
The key to this is accountability. In situations where an individual, or even a group of individuals, is responsible for a decision, it is clear where accountability lies, and the recipient of the outcome of that decision is able to question or even challenge it.
However, where AI is used in decision making this is far less clear, as the AI itself is not accountable. It should therefore be incumbent on organisations to be transparent about the role of AI in decision-making processes, to support the ability to make such decisions explainable, and to ensure that named persons remain accountable for the AI system’s outputs and outcomes.
For example, in the case of a recruitment or medical decision, the accountable person might be a qualified HR professional or clinician. There may also be related accountability for the organisation’s data protection officer or information officer. The potential complexity of such accountabilities needs to be formalised and clarified if the UK is to avoid a potential ‘wild west’ approach to the adoption of AI.
The Information Commissioner’s Office and The Alan Turing Institute are among many organisations that have produced guidance for organisations using AI in decision making. However, stronger emphasis needs to be placed on organisations explaining to stakeholders when and how AI is used in decision making, so that power is shared more equally between the organisation and those affected. The AI systems themselves need to be ‘intelligible’, in that the decision-making process can be straightforwardly understood by a human, ideally a non-expert.
There are good examples of frameworks developed by industry that can be used to embed such processes in particular applications. IBM’s AI Explainability 360, for instance, has been made open source to help create a community of best practice.
While good examples exist, greater awareness is needed among the public, who are affected by decisions in which AI plays a role and are in many cases not aware of this. To build trust and understanding, an information campaign would help ensure that people are informed participants in the decisions made about them. A relatively straightforward approach might be to notify the user of any system that an AI is involved in the processing of their data, as proposed in the draft EU AI Act.
How should decisions involving AI be reviewed and scrutinised in both public and private sectors?
As outlined in our response to the previous question, the key criterion for AI explainability, and for the review and scrutiny of a decision involving AI, is that a person remains responsible for the use and application of AI in any given scenario. What this means in practice is complex: for example, how a human remains responsible for an autonomous car using AI to navigate the roads may be substantially different from the responsibilities that accrue to the human oversight of a mortgage application system that uses AI.
There should be overall responsibility at board level within organisations to ensure that best and latest practice is being followed. Responsibility can then be delegated to individuals within the teams responsible for the specific processes where AI-informed decision making occurs, who should be contactable and accountable to individuals who may want to challenge these decisions.
Are current options for challenging the use of AI adequate and, if not, how can they be improved?
As acknowledged, there is an issue around legal responsibility and accountability for the use and misuse of AI. Without AI, if a product or service falls below an expected standard, there is clear accountability, whether that lies with a person, company, or organisation. However, where AI is applied and is “responsible” for providing an outcome, accountability is far less clear.
Without an adequate framework in place that recognises the role of AI, public trust in the technology is potentially harmed and innovation risks being stifled, as companies are unclear about their rights and responsibilities.
Professor Ryan Abbott, an AI Fellow at Surrey’s School of Law, has been leading the international discourse on the role of the legal, regulatory, and industrial sectors in responding to the challenges brought about by the rapid expansion of AI systems. AI patents can facilitate innovation in key areas (such as pharmaceutical drugs) whilst protecting the integrity of human ingenuity and easing the transition towards commercial use of AI technology.
Partly in response to cases of AI patents being filed as a direct result of Professor Abbott’s research, the UK Intellectual Property Office has conducted two legislative consultations on changes to the current law. Adopting a principle of ‘legal neutrality’[1], in which AI and humans are subject to the same laws, would help avert unintended negative consequences.
Professor Abbott’s work illustrates the wider need to examine the implications of AI for current legal frameworks in many different areas, not just the protection of intellectual property.
How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?
The UK Government is currently consulting on AI regulation and has published a policy paper, “Establishing a pro-innovation approach to regulating AI”, outlining an approach based on “cross-sectoral principles”.
The approach outlined in the Government’s policy paper implies a retrospective approach to regulation that will need to be agile in reacting to the rapid pace of change in the way that AI affects our society. We suggest instead a forward-looking approach: establishing a regulatory framework and a cross-sectoral set of principles that seek to anticipate, and future-proof against, foreseeable impacts of AI on society.
While we cannot, at this stage, plan for every innovation and application of AI, a firm principles-based approach can cater for a broad set of outcomes and allow innovation that is directed and incentivised by a positive set of objectives.
The principle of a “pro-innovation” approach to regulation is welcome; however, it should be centred on encouraging “innovation for good”, in the same way that economic growth is not an end in itself when compared with inclusive “good growth”. A pro-innovation approach balanced with a firm set of cross-sectoral principles, underpinned by a legal framework, will help ensure that the development of AI strengthens our society and drives innovation and economic growth. It is equally ‘pro-innovation’ to ensure that a set of commonly understood principles, with clear accountabilities, is expected of the development and application of AI in the future.
In the executive summary of the Government’s proposals, the approach to regulating AI in the UK is defined as one that "will drive business confidence, promote investment, boost public trust and ultimately drive productivity across the economy.” This ambition is clearly business-driven and innovation-focused, which is most beneficial when social impact and ‘people’ are taken into consideration. This business-centred approach refers to the public only in relation to ‘boosting their trust’, as opposed to serving them and making their lives better through AI-driven innovation and technologies. The goal of technology should be to serve and benefit people; only then can their trust be gained.
The Government’s approach as it stands does not mention the importance of skills and awareness building – teaching graduates about ethical frameworks and regulatory principles is an important mechanism for introducing responsible technology into the developing world of AI. Additionally, by creating wider public awareness of AI technologies, people will be better equipped to recognise and challenge ‘bad innovation’ and to distinguish it from beneficial, ‘good innovation’.
Finally, we need a recognised narrative around AI, built through education and industry engagement. AI, like many new and emerging technologies, provides vast opportunities for the betterment of society. But unchecked, the evolution of the technology and its applications can be turned to a variety of purposes – some good, some bad. In such an environment the narrative can swing between utopian and dystopian futures. Effective regulation and a reasoned narrative are the best ways to mitigate this and to ensure a populace that is informed of the risks and benefits of AI and the realities of what AI can and cannot be expected to do, and in which individual responsibility is encouraged.
The Surrey Institute for People-Centred AI is a proponent of this narrative and a collaborative partner working with industry and policymakers to leverage the benefits of an AI-led future, putting the needs of individuals and society, rather than the technology, first.
A cross-sector approach built on shared principles and guidance will help in cases where there is equivalence in regulatory maturity and consistency in AI application. However, it is unlikely to solve the problem of different definitions of AI being applied across different sectors, giving rise to mismatches in regulatory intervention and increased regulatory cost. Therefore, it is the Institute’s view that a harmonised legal framework would instil greater confidence and efficiency, and help to advance the UK’s leadership globally.
To what extent is the legal framework for the use of AI, especially in making decisions, fit for purpose?
As already outlined in our response, there is currently no specific legal framework for the use of AI in the UK, although there are particular laws which give AI specific or implied attention.
This patchwork regulatory environment has developed over time as advances in AI have taken place at a different pace across sectors. In industries such as healthcare and financial services, which have a mature regulatory environment, the framework is therefore more robust.
Nevertheless, the increasingly prevalent use of AI across a range of sectors and in a range of circumstances means a more co-ordinated and robust legal framework is needed.
Is more legislation or better guidance required?
Currently, there are dozens, if not hundreds, of guidance frameworks on AI, originating from organisations as diverse as the Ministry of Defence, Google and the Vatican. Although this proliferation is recognised, an approach to resolving it has not been identified. Too many sets of guidance end with multinational companies doing their own thing, for better or worse.
An approach that embeds cross-border co-operation with an internationally recognised set of principles will help to ensure equivalence and a regulatory regime that can be effective in raising standards in countries with different legal or political systems.
What lessons, if any, can the UK learn from other countries on AI governance?
The EU published its draft AI Act in 2021, taking a risk-based approach to the regulation of AI, with some specific ideas and definitions, including certain types of AI that should be prohibited and a reasonably rigorous definition of what constitutes an AI system. AI systems classified as ‘high risk’ would have a number of obligations placed upon them. The importance of transparency is highlighted, as is the need to retain human oversight. The penalties for misuse of AI are significant, exceeding those in force for data protection.
Similarly, the US Office of Science and Technology Policy recently published its Blueprint for an AI Bill of Rights. The Blueprint is the result of a year-long process of seeking insights from people across the US, exploring how AI will affect communities, industry, technology developers and other stakeholders. Although less detailed, the Blueprint shows good alignment with the EU AI Act and joins a wealth of guidance and principles published in the last few years on the development and deployment of AI.
There is much to applaud in both frameworks, and much to explore and discuss. It is encouraging to see increasing alignment on the international stage regarding AI, and this highlights the importance of the UK playing a role in these discussions. Some areas of divergence remain, potentially including the UK’s ‘pro-innovation’ approach to AI regulation, which might seek to steer a different course towards more permissive control of AI but faces the challenge that AI is a truly global industry.
(November 2022)
[1] Abbott, R. (2020). The Reasonable Robot: Artificial Intelligence and the Law. Cambridge: Cambridge University Press.