Context
UK Research and Innovation (UKRI) invests in a significant portfolio of AI research. In 2021 this totalled an investment of around £1Bn across over 700 grants to research organisations and additional funding to businesses via Innovate UK. This spans fundamental mathematics and computer science, via the creation of a responsible and ethical ecosystem, through to its development into other domains, disciplines and industries. It is equally split between fundamental AI development and its application, with around £332M of funding leveraged from industry and government partners.
AI is a strategic priority technology for the UK, and research and innovation will be key to delivering on the huge promise of AI to transform society, the economy, and the environment. Through the UKRI AI Review1 (Transforming our world with AI) we have sought to understand the complex UK AI Research and Innovation (R&I) landscape, and to set out a clear vision and ambition for the role of the research and innovation community and our plans to support AI research. UKRI’s vision is truly interdisciplinary: it sets out a highly integrated view of what is needed to support closely connected basic AI research and the responsible development of AI to address the challenges facing society, whilst also creating an environment for AI R&I that will help it to bring benefits to society. This includes ensuring the right skills mix and infrastructures are available.
This written submission builds on our interactions with stakeholders across the Research and Innovation landscape, in universities, institutes, businesses and with other stakeholders.
1 https://www.ukri.org/about-us/strategy-plans-and-data/strategies-and-reviews/ai-review-transforming-our-world-with-ai/
development, is key to ensuring that appropriate assurance does not impede progress in capturing the benefits of AI.
This means:
AI is currently very lightly regulated in the UK. Governance relies on sectoral regulation, export controls and data protection laws. It is not always clear how these different types of regulation interact with each other, which can lead to a complicated picture that could be simplified. We would note the outputs of the House of Lords Select Committee on AI: AI in the UK: Ready, Willing and Able. The majority of academic and innovation-based respondents noted that data laws were robust and that legislation, including liability legislation, exists for the use and misuse of products and services into which AI is embedded. A technology-neutral approach, independent of changing technology development and new innovations in the marketplace, is likely to be more robust than regulating the technology itself.
This is not to say that standards are absent from research, innovation and the broader market. Within research environments, an extensive network of due diligence is in place when AI is researched in partnership with third-party stakeholders. In businesses, an increasing focus on the responsible use of AI is a prerequisite for developing consumer trust. While implementation of standards is relatively light touch, the benefits of transparent, interoperable models and of deploying AI responsibly are driving best practice in research and innovation. This low level of regulation has likely contributed to the swift development of AI technologies in the UK.
Recent legislation plays a key role in how AI is governed in the UK. For example, AI itself is inextricably linked to the Data Protection Act 2018, which has been swiftly adopted across the sector and incorporated as best practice in research and innovation environments. Many of the issues around future governance will come down to how data is integrated and shared, and how people might react to initiatives that use personal data. This is because it may not be clear how data are generated, how they are used, and whether permissions have been given. This is not currently a significant issue, but that position could evolve as new methodologies emerge across AI, or as data sharing best practice responds to new drivers within the economy.
The most recent and significant alteration in this environment has been the introduction and implementation of the National Security and Investment (NSI) Act. This brought additional powers to the Department for Business, Energy and Industrial Strategy (BEIS) to scrutinise, and to block or apply conditions to, certain financial transactions. AI is one of the 17 sensitive areas covered under this Act. UKRI has been working closely with colleagues in BEIS on the operationalisation of the Act and embedding new operating principles within UKRI to ensure sight of potential risks within funded programmes.
Due to the extent of the NSI Act, which covers acquisitions of intellectual property and intangible assets as well as company acquisitions, we are aware of a high number of voluntary notifications across the research and innovation sector. In AI this is exacerbated by a broad definition of the technology. There is ongoing uncertainty across the research sector as to how the Act will be interpreted in practice. Clarity and additional guidance, and further engagement with the research and innovation community would be beneficial to ensure that uncertainty does not have a detrimental effect on AI development within the UK. Ensuring there is cross departmental collaboration to align the outcomes of the Act with data, technology and economic strategies will be important to ensure a holistic approach to regulation. This requires ongoing engagement as the uses for AI will evolve over time and what was out of scope yesterday might be in scope tomorrow.
The UK is regarded as world leading in aspects of AI, is considered under most metrics to be third in the world in terms of AI research and innovation, and it is likely that the relatively light regulatory environment has contributed to this. AI is a technology where the point at which theory moves into practice and application is unclear, which can make it difficult to implement regulation effectively.
While it is right and proper that new due diligence is introduced to ensure full compliance with the NSI Act, the introduction of the Act has exposed previous collaborations across both research and innovation which may no longer be admissible under the new legislation. This has the potential to discourage collaborations in specific technologies and with specific partners with ownership or interests in specific countries. While this will reduce risk, it is also likely to decrease inward investment from some sources. There may be a period of adjustment for the sector with reduction in some investment areas, though this may increase levels of opportunity for UK based venture capital investment. We are keen to work with BEIS colleagues so as not to create unintended outcomes from the legislation.
Data regulation can become a limiting factor for research in the UK. AI development is dependent on access to extensive and accessible datasets for training models. Progress has been made on the creation of synthetic data for this purpose, but more research is required to enable a full shift to training AI in this way. Sharing of best practice on how data should be handled could ensure this does not become an issue.
The current funding landscape for AI remains fragmented. UKRI is addressing this through our current strategy in AI, which aims to enable an end-to-end ecosystem for the UK that draws on strength in every region, sector and AI application. Harnessing the expertise and convening power of bodies such as the Alan Turing Institute and the Catapult Network will enable facilitation of this ecosystem and the spread of ideas, people, and best practice in responsible AI development. Clear linkages with the necessary compute, data needs and expertise of national and local resources will be vital, including the Hartree Centre, STFC’s Scientific Computing Department, ARCHER2, JASMIN, etc.
AI will impact all of society in some way. AI is creating new opportunities in sectors but also raises concerns. There is a pressing need to understand the impacts on society of the application of AI-enabled technologies. Additionally, we need to understand how people might perceive these impacts. In the AI Review: Transforming our world with AI, UKRI set out the priorities and potential for AI in the UK. Seven recommendations were made for creating a world class ecosystem for AI research and Innovation. Engaging the Public is one of those key recommendations.
UKRI is working to enable direct interaction between publics and those who develop and study AI and its implications. AI researchers need to engage the public directly on AI, from public dialogues that directly explore public opinions and expectations of AI, to communicating the benefits and impacts of research and innovation to people and society. This engagement needs to be embedded into the AI research and innovation agenda and the research process itself, ideally in the design of goods and services, so that their explicit aim is to deliver social and environmental benefits. Direct research is required on AI applications and how to enable social innovation. An interdisciplinary approach is crucial for tackling this challenge. It requires collaboration between technology experts, social and behavioural experts, and experts from humanities disciplines such as anthropology, law and ethics. UKRI is particularly well placed to fund interdisciplinary research that brings together disciplines to address these kinds of societal challenges.
Interdisciplinary research that examines the real and potential social and economic impacts of technologies, including AI, is vital. The Economic and Social Research Council (ESRC) currently funds the Digital Good Network, which aims to define the 'digital good' and ensure that people benefit from, and are supported rather than excluded by, digital technologies, including AI. One of the key outputs will be the Digital Good Index, a tool that will help policymakers and companies developing digital technologies to assess these technologies and their effects on society. The ESRC Digital Futures at Work Research Centre looks at how digital technologies are changing work and the implications for employers, workers, job seekers and governments. Similar investments have previously been made in the Discribe hub, which looks at the impacts and uptake of digitally secure technologies, and the ESRC Centre for Sociodigital Futures, which looks at how futures are imagined and how they are brought about across private and public actors.
In addition, AHRC’s £8.5m investment in Enabling a responsible AI ecosystem is designed to build public confidence by fostering the growth of a responsible and innovation-friendly ethical AI ecosystem informed by world-class, trusted research. The programme will enable progress in how responsible and ethical approaches to AI technologies are applied to positively transform commercial, business-led and public-facing endeavours. It will move beyond AI ethics frameworks, creating recommendations and use cases that can be put into practice for a range of AI applications, including biometrics and facial recognition, big data analytics in the financial sector, and diagnostics in healthcare.
As well as funding research into social impacts, people should be brought directly into the development of new technologies via social innovation. UKRI is working to enable UK researchers and innovators to undertake public engagement as part of their research and innovation activities, so that they can request the training, tools and time to ensure their research is informed by public opinion. To be successful, people (and social and economic aims) should form part of the development process for new technologies. Feedback can iteratively inform safeguards and mitigate perceived risks. People need to input directly at the earliest phases of development. This will help both to socialise the technology and to mitigate risks of bias and other errors when AI-enabled technologies are adopted and used.
UKRI funds Sciencewise2 which supports high quality deliberative public dialogue to help policy makers and UKRI’s 9 councils include the public voice in their decision making. Several recent Sciencewise public dialogue projects are looking at issues around AI. For example:
Drawing government and private stakeholders together with the research and innovation community into a wider public engagement landscape, and exploring complex aspects of AI in public dialogues, will be crucial to ensuring greater trust and public discourse on AI governance. UKRI currently has a major funding opportunity open on this issue, taking applications for a leadership team to play a key role in driving the UK’s responsible and trustworthy AI agenda, building a diverse and inclusive community across disciplines and sectors5.
Co-creation between research and innovation entities who develop new technologies, the public sector through their role in oversight and governance and integration of publics into dialogues is vital to ensure a diverse opinion set in how technologies are developed and deployed.
Technology will affect everybody, and therefore everybody will need to be a stakeholder for trust and understanding to develop as we transition to a digitally enhanced world.
The Alan Turing Institute was set up as a joint venture between an initial set of leading universities and the Engineering and Physical Sciences Research Council (EPSRC), and receives a baseline budget of around £10M per year from UKRI. The Institute's public policy programme partnered with the Office for Artificial Intelligence and the Government Digital Service to produce guidance on the responsible design and implementation of AI systems in the public sector: Understanding Artificial Intelligence Ethics and Safety.
A similar approach to joint working has been taken by the Trustworthy Autonomous Systems programme (TAS), a UKRI cross-council initiative delivered by EPSRC. This £34m programme is delivered through a central hub and six nodes that address trust, verifiability, security, resilience, functionality, and governance and regulation, working to research and spread best practice in the use of trustworthy autonomous systems, a key application of AI alongside robotics.
TAS has engaged directly with policy makers to address emerging policy challenges in a number of ways. This includes engaging with policy makers through the Chief Scientific Advisers network, using the methodology of Policy Labs6 that focus on specific contentious policy questions, and responding to public consultations. Many of these responses were co-authored from a cross-disciplinary perspective across the network of the TAS programme, taking into account a breadth of opinion on issues around policy making.
3 Ipsos report (sciencewise.org.uk)
4 Social-Intelligence-on-Emerging-Technologies-Report_FullReport.pdf (sciencewise.org.uk)
5 Responsible and trustworthy artificial intelligence – UKRI
This creation and sharing of best practice through white paper research and direct working of stakeholders with researchers has been successful in embedding the principles of responsible and trustworthy AI in the public and research sectors. Business uptake is more difficult to quantify, given the often-proprietary nature of technology development in these instances. Due diligence is in place for those companies in receipt of innovation funding from UKRI. Innovate UK, working with the Office for AI, has been developing the Bridge AI programme, which will encourage the uptake and adoption of AI in high-priority sectors in the UK. Part of this investment will also involve upskilling and spreading best practice around AI use. Linkages between programmes of this type, research endeavours and policy makers will be essential to ensure policy can reflect the operational delivery, by Innovate UK, of their strategic delivery plan.
We have commented on the current approach to AI governance in the above text.
Specifically with respect to UKRI support for AI research and innovation, we believe that the level of assurance which is carried out within universities and businesses in receipt of UKRI funding for AI is appropriate, including risk assessment.
The Trusted Research protocols provide a framework to allow a risk-based assessment of individual programmes, grants and activities. Assessment is based around the type and nature of the technology development, the partnerships and actors associated with the activity, and the potential for dual use. Where higher risk is identified within projects, our approach is to engage in dialogue with the relevant enterprises (typically universities) to gain additional information on due diligence processes, or to develop additional conditions of funding to introduce controls or mitigations (e.g., implementation of data standards or ensuring engagement with the Centre for the Protection of National Infrastructure (CPNI)).
UKRI’s Policy on the Governance of Good Research Practice7 sets out expectations for those we fund with regards to good research practice and conduct. While UKRI does not grant ethical clearance, we expect award holders to adhere to the highest level of research ethics throughout the life cycle of their project. Within the terms and conditions of all funding from UKRI is a requirement for award holders to ensure that levels of due diligence, adherence to regulations (including the National Security and Investment Act) and ethics clearances are obtained prior to the start of funding. The exact nature of these varies based on the needs of the specific research or innovation programme, but we expect award holders to implement best practice. UKRI has recently funded the Alan Turing Institute and Ada Lovelace Institute, with the University of Exeter, to publish recommendations on ethics approvals for research institutions and for funders8.
Policy and legislation in this space must be informed by robust scientific evidence and advice. Past experience suggests that blanket regulation of a fast-evolving platform technology is problematic, since it is unlikely to be robust to new applications or technology developments. Regulating applications where necessary is more likely to support rapid use for public benefit while mitigating potential harms.
Whilst we do not suggest a particular body or bodies for oversight, any such body should have independent advice which draws on a breadth of research and innovation disciplines and backgrounds. Embedding active researchers, across all relevant disciplines, with policy makers for development of any future regulations will ensure that they can be agile and applicable.
Lack of clarity over both what legislation is involved in governance, and how that legislation will be enforced, is a barrier to research and innovation. With the introduction of any legislation there will be a period of change and adjustment, but additional guidance and direct engagement between those working in policy and enforcement, and those researchers and innovators developing technologies, will reduce the length and severity of any adaptation period. We recognise that guidance will never be able to cover every potential scenario, but ensuring there are routes to advice and best practice will help significantly in any alterations to current arrangements. Guidance should also outline how AI development and application should take into consideration social and environmental impacts, and how it can drive social and economic benefit.
UKRI works with multiple international funding bodies and funds research which is nationally and internationally collaborative. We note that various models for AI governance are being adopted globally, usually taking either a sectoral or a technological approach. All models have benefits and weaknesses associated with them and are largely reflective of the regime by which they are introduced. Ensuring that the UK adopts governance that can be compatible with other international systems and does not impede technological progress in the UK will be important in ensuring our ongoing competitiveness and sovereign capability in AI.
(December 2022)