Written Evidence Submitted by techUK
(GAI0045)
techUK response
With over 900 members across the UK, techUK creates a network for innovation and collaboration across business, government and stakeholders to provide a better future for people, society, the economy and the planet. Our member companies range from leading FTSE 100 companies to new innovative start-ups.
Introduction
AI is an enormous area of growth in the UK, with a continuing expansion of use cases to benefit our everyday lives. The country is among the global leaders in AI,[1] and our world-class industry and research institutions are building and deploying AI technologies that can help us solve some of the most complex social and environmental challenges facing the modern world. In the last few years alone, it has become clear that AI will transform our approach to everything from healthcare to manufacturing to transport and help us live as sustainably as possible.
Efforts to address unintended harms and increase public confidence in the use of AI have resulted in a wealth of responsible AI principles, frameworks, guidelines and, increasingly, concrete and sharable measures such as algorithmic audits. The tech industry has made major progress in governance approaches and is continuing to develop more effective ways to ensure the responsible development and use of AI.
This progress should be celebrated and encouraged. There is, however, also broad agreement that greater consistency in approaches would be helpful, both to ensure high standards across industry and to give people whose lives are affected by AI a chance to understand the safeguards in place to protect them. Such consistency should help build public trust in AI, an important outcome if the full potential of the technology is to be realised. Efforts have been made in this direction through policy and best practice guidance aimed at different sectors and domains, and regulatory initiatives are increasing across the world.
techUK therefore recently welcomed the Government’s ambition to clearly articulate the UK’s approach to establishing a governance regime which “supports scientists, researchers and entrepreneurs while ensuring consumer and citizen confidence in AI technologies”, as laid out in the policy paper Establishing a pro-innovation approach to regulating AI. In our response, we also provided feedback on the areas that will be most important to develop and clarify in the forthcoming Government white paper.[2]
techUK welcomes the House of Commons Science and Technology Committee’s decision to focus on AI governance, and appreciates this opportunity to provide evidence. Our response is based on our members’ position on AI governance and regulation as laid out in our feedback on the Government’s policy paper, and we would be delighted to engage further with the Committee on this topic going forward.
Consultation response
Q1: How effective is current governance of AI in the UK?
What are the current strengths and weaknesses of current arrangements, including for research?
The UK is internationally recognised for the quality of our regulators, and several of them have also successfully adapted to the introduction of AI within their sectors and remits, providing helpful guidance and effective oversight. Challenges remain, however, and techUK agrees with those set out by the Government itself in the AI regulation policy paper referenced above. In summary, these are a lack of clarity about relevant regulatory mechanisms; overlapping regulations and laws; inconsistent approaches across regulators; and gaps in some sectors where AI has not yet been the subject of regulatory scrutiny.
It would be valuable for the Government to provide a more comprehensive review of the legislative, regulatory and policy landscape, both domestic and international. Although the tech industry is aware of the range of policy and regulatory initiatives, there is a lack of clarity on how these initiatives interact, how approaches and interventions between different regulators diverge and which technical standards exist or are being developed.
It would also be helpful for the Government to consider a gap analysis of sectors that currently have little to no oversight of the ways in which AI is deployed. Internationally, it would be useful to provide an analysis of possible interactions with ongoing work on AI regulation and policy in the EU, where an AI Act has been proposed, as well as at the OECD, the Global Partnership on AI and the Council of Europe.
The recently launched AI Standards Hub Pilot is a positive example of enhanced clarity in one of these areas, and the Government may already have carried out a full review of the above as background research for the forthcoming white paper, in which case we urge the Government to publish it. If such research does not yet exist, it would be valuable in informing the approach going forward, enhancing policymakers’ understanding of how any new initiatives would interact with existing ones, which gaps need to be closed and how horizontal and vertical powers can be balanced, and hopefully enabling greater coherence.
Q2: What measures could make the use of AI more transparent and explainable to the public?
Firstly, to increase the transparency and explainability of AI, we would encourage the Government to focus on building a robust assurance regime. AI assurance processes would help to increase public trust and confidence in AI systems by evaluating and communicating reliable evidence about their trustworthiness according to relevant criteria. The Government has published a roadmap to an effective AI assurance ecosystem,[3] and techUK is working with the Centre for Data Ethics and Innovation to identify assurance practices already taking place within industry, to understand what is already working well and where further support to develop a well-functioning market may be required.
The role of international industry-driven standards should be considered as the AI assurance ecosystem is developing. As AI is a complex, global and evolving topic, the ongoing voluntary standardisation work developed by international standardisation organisations will have increasing relevance to AI governance across the world.
There may be lessons to draw from sectors with long-standing risk assessment processes. For example, the cyber security sector has well-established issue-reporting procedures, aiding consistency across stakeholders. It would be beneficial for those involved at any stage of the AI lifecycle to have tools or services to assess potential AI-associated risks, and clear routes to report issues. This could increase their confidence, as well as that of their stakeholders.
Crucially, the approach adopted must be proportionate and risk-based, and regulators should only be able to require any form of documentation in cases where the application of AI is deemed to be high risk.
Q3: How should decisions involving AI be reviewed and scrutinised in both public and private sectors?
Are current options for challenging the use of AI adequate and, if not, how can they be improved?
techUK believes that a risk-based approach to the review and scrutiny of AI applications would be the most effective course of action in establishing an innovation-friendly regime. Blanket regulation for AI across whole sectors would create barriers to transformation and innovation and prevent increased efficiency and quality of products and services. Focusing instead on determining those uses of AI that present risks of genuine harm and assessing how those risks can be countered achieves the best balance between oversight and room for innovation.
In terms of identifying risk, the Government’s policy paper on AI regulation[4] maintains that regulators should focus on “applications of AI that result in real, identifiable, unacceptable levels of risk, rather than seeking to impose controls on uses of AI that pose low or hypothetical risk”. However, this is somewhat ambiguous and, in order to be implemented successfully, individual regulators need access to a common framework which guides them in determining whether the risk arising from an AI application is “real” and “unacceptable”. techUK would suggest the starting point of such a framework should be focused on users of AI technologies or those impacted by its use, especially as it pertains to health, safety, equality and civic freedoms. This framework should also ask regulators to consider any harms caused by not using AI, i.e., the risks of the status quo. The framework should not attempt to comprehensively list every area where AI can pose unacceptable levels of risk, but rather support individual regulators by setting out the kinds of questions they should ask to determine whether the use of AI in a specific context warrants regulatory oversight.
The implementation of a clear framework to identify high-risk AI applications would also help ensure stability. The Government’s policy paper highlights that a context-driven approach allows regulators to respond to new and emerging risks. If this is done on an ad hoc basis it could create uncertainty in the market, with providers not knowing whether their products and services may be subject to additional ‘high-risk’ requirements at any given time. Changes to the framework should therefore only be made at pre-planned evaluation intervals, leaving ample time for both providers and deployers to prepare before any additional applications are brought within scope.
An example of providing the opportunity for scrutiny of AI in the public sector is the Algorithmic Transparency Standard, which is being piloted by the Centre for Data Ethics and Innovation (CDEI) and the Central Digital and Data Office (CDDO). The aim of the Standard is to establish a standardised way for public bodies to report transparency information about the way they are using algorithmic tools in decision-making, thereby giving members of the public and independent experts the opportunity to investigate such uses. techUK members provided feedback on the Standard before its launch, with wide agreement that this is a useful voluntary tool to increase public trust, as long as responsibilities are clearly divided and no sensitive information is shared.
Q4: How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?
Although there are no laws explicitly intended to regulate AI, it is partially regulated through a patchwork of legal and regulatory requirements built for other purposes which now encompass AI. techUK nevertheless welcomes the actions that UK regulators are taking to support the growth of responsible AI, for example the AI Standards Hub and the Algorithmic Transparency Standard, and would recommend that the Government seek consensus among regulators on key matters of enforcement. The Government is right to emphasise the need to lean on the domain-specific expertise within each individual regulator to determine how to mitigate risks brought about by AI. These regulators understand the contexts in which AI will be deployed, the kinds of harm that can occur within the sector, the situations where such harms are at highest risk of occurring and the existing rules and requirements that could apply to the use of AI. Compared to tasking one single regulatory body with AI oversight, this approach is more likely to see applications segmented accurately, creating the conditions for a regime which enables low-risk AI to flourish unobstructed and safeguards the public against harm in high-risk use cases. Putting this approach into practice will, however, not come without its challenges. Below, we set out some key considerations.
Balancing consistency and flexibility
techUK supports the Government’s assessment that a context-driven approach is preferable, even if it comes at the cost of full uniformity. Different approaches will suit different circumstances, which speaks to the Government’s preference to leave the choice of requirements fully to regulators. But this also creates the risk of a confusing regulatory environment with somewhat unnecessary differences. An organisation deploying what has been designated as high-risk AI in one sector may have significantly more burdensome and entirely different obligations than an organisation deploying AI in a different sector, even if the risk of harm is comparable in both severity and likelihood.
Implications for AI providers working across sectors and purposes
Providers operating in several different spheres may be expected to either directly comply with regulators’ guidance and requirements or to provide information to clients in those sectors. High levels of divergence will therefore significantly impact providers whose products are deployed in contexts categorised as high-risk in more than one domain or sector. This is likely to affect providers of general purpose AI (GPAI). GPAI refers to a large category of purpose-neutral software that can be deployed for a range of functions, such as detecting patterns, organising data and identifying or translating text or voice recordings in different languages. It is often developed by companies or individuals to be freely accessed, used, modified and redistributed, notably under open-source licences. It is widely agreed that GPAI plays a crucial role in the AI ecosystem, and it would be detrimental to stifle the development of such technologies.
Due to the nature of GPAI, it is often deployed and tailored to many different settings and contexts unknown to the original provider. While AI developers may assess risk when designing their GPAI system, they have little to no oversight over how their products and services are being used and are therefore not able to assess when high-risk safeguards need to be in place.
To solve this challenge, a well-designed framework needs to fairly distribute responsibilities along the AI value chain. While deployers of GPAI will be the ones who can assess the specific context and its associated risks, some regulatory mechanisms may also require specific information which can only be supplied by the developer. There must be a duty on regulators to explain which responsibilities fall to different contributors to the value chain, and techUK strongly recommends that this split of responsibilities be determined in collaboration with stakeholder groups, including industry.
Cross-regulator working groups
There may also be specific AI technologies that would benefit from smaller working groups of regulators to secure even greater levels of alignment (similar to the current Digital Regulation Cooperation Forum’s (DRCF) model for engagement). This could, for example, apply to certain biometric technologies, such as live facial recognition. Here, agreement between bodies such as the Information Commissioner’s Office (ICO), the Biometrics and Surveillance Camera Commissioner and the Equality and Human Rights Commission on what should be designated as high-risk applications, and which safeguards should be in place for their adoption and deployment, would be incredibly useful. To reach agreement, the working groups should conduct multi-stakeholder engagement, consulting with civil society, industry, academics and other experts. Not only would this help potential adopters understand how to use such technologies responsibly, it would also create clarity across industry on how to make technologies fit for high-risk use cases.
Regulatory AI skills
The National AI Strategy published by the UK Government in 2021 rightly pointed out the need for greater regulatory capacity to strengthen the UK’s ability to govern AI. Such capacity will be crucial for a successful implementation of the Government’s approach, as most decisions and responsibilities are devolved to the regulators. Given the breadth of AI applications, most regulatory agencies will require AI expertise. They will need to understand AI’s primary capabilities and limitations and how and where it is being deployed, guide anyone operating within their fields in assessing levels of risk, and advise on assurance mechanisms, especially those that are either encouraged in official guidance or required.
While some regulators have already started building this capacity, others still have a long road ahead. Investment must therefore be made to hire AI and data science practitioners, as well as to provide professional development specifically focused on AI and other data-driven technologies.
Q5: To what extent is the legal framework for the use of AI, especially in making decisions, fit for purpose?
Is more legislation or better guidance required?
As mentioned in our response to Question 1, the current legislative framework for AI can be confusing. In many cases where legislation does apply to AI, it is not clear exactly how it is enforced, as it was not created with AI in mind. The Government’s plans to clearly spell out regulators’ responsibilities with regard to the use of AI are therefore welcome, and we urge the Government to do so with the recommendations outlined above in mind.
Q6: What lessons, if any, can the UK learn from other countries on AI governance?
techUK believes the UK is right to pave its own way on AI governance, and that this provides a real opportunity to strike a balance between enabling a freely flourishing AI ecosystem and protecting the public from potential harms. However, it is also important that the UK’s approach is interoperable with other jurisdictions.
A recent international initiative of interest is the Singaporean AI Verify Testing Framework and Toolkit, which aims to help companies demonstrate responsible use of AI and increase transparency between AI developers and all of their stakeholders.[5] This type of initiative could both help AI companies test and refine their products and provide a standardised way of showing customers how an AI system performs across a range of verifiable parameters. In general, real-life industry case studies can help to fully conceptualise how AI governance can work in practice.
(November 2022)
[1] OECD and Stanford’s Institute for Human Centered Artificial Intelligence, Artificial Intelligence Index Report 2021
[2] techUK response to Government paper on AI regulation, 2022
[3] CDEI, The roadmap to an effective AI assurance ecosystem, 2021
[4] Office for AI, Establishing a pro-innovation approach to regulating AI, 2022
[5] Singapore Infocomm Media Development Authority, Singapore launches world’s first AI testing framework and toolkit to promote transparency; Invites companies to pilot and contribute to international standards development, 2022