AI offers significant opportunities, but twelve governance challenges must be addressed, says Science, Innovation and Technology Committee
31 August 2023
The explosive development of artificial intelligence (AI) has outpaced the development of policies to ensure that its benefits are achieved and its harms avoided.
- Read the Summary (HTML)
- Read the full Report (HTML)
- Read the full Report (PDF) [755KB]
- Inquiry: Governance of artificial intelligence (AI)
- Science, Innovation and Technology Committee
As governments across the world grapple with the question of whether and how AI should be governed, the UK is positioned as a centre of AI research and practice, with a reputation for creativity and international trust in its regulatory policy and institutions. The November Global AI Safety Summit, to be held at Bletchley Park, provides a golden opportunity for Britain to lead world thinking and practice on AI governance.
An interim report published today by the Science, Innovation and Technology Committee sets out the Committee’s findings from its inquiry so far, and the twelve essential challenges that AI governance must meet if public safety and confidence in AI are to be secured.
The twelve challenges of AI governance that must be addressed by policymakers:
- The Bias challenge: AI can introduce or perpetuate biases that society finds unacceptable.
- The Privacy challenge: AI can allow individuals to be identified and personal information about them to be used in ways beyond what the public wants.
- The Misrepresentation challenge: AI can allow the generation of material that deliberately misrepresents someone’s behaviour, opinions or character.
- The Access to Data challenge: The most powerful AI needs very large datasets, which are held by few organisations.
- The Access to Compute challenge: The development of powerful AI requires significant compute power, access to which is limited to a few organisations.
- The Black Box challenge: Some AI models and tools cannot explain why they produce a particular result, which is a challenge to transparency requirements.
- The Open-Source challenge: Requiring code to be openly available may promote transparency and innovation; allowing it to be proprietary may concentrate market power but allow more dependable regulation of harms.
- The Intellectual Property and Copyright challenge: Some AI models and tools make use of other people's content; policy must establish the rights of the originators of this content, and these rights must be enforced.
- The Liability challenge: If AI models and tools are used by third parties to do harm, policy must establish whether developers or providers of the technology bear any liability for harms done.
- The Employment challenge: AI will disrupt the jobs that people do and the jobs that are available to be done. Policymakers must anticipate and manage the disruption.
- The International Coordination challenge: AI is a global technology, and the development of governance frameworks to regulate its uses must be an international undertaking.
- The Existential challenge: Some people think that AI is a major threat to human life. If that is a possibility, governance needs to provide protections for national security.
The interim report urges greater international cooperation to address these twelve challenges. It welcomes the November AI summit at Bletchley Park and calls on the UK Government to invite “as wide a range of countries as possible” to “advance a shared international understanding of the challenges of AI as well as its opportunities”.
In its White Paper of March 2023, the Government said that it anticipated the need to legislate to introduce “a statutory duty on our regulators requiring them to have due regard to the principles” of AI governance. Although the Committee broadly agrees with the Government’s approach of building on the work of existing regulators, the interim report warns that, with a General Election expected in 2024, such legislation needs to be put to Parliament during its next session.
Delay would risk the UK, despite the Government’s good intentions, falling behind other jurisdictions, such as the European Union and the United States, which are pressing ahead with legislation. As was the case with GDPR, once a different approach becomes established elsewhere, it is difficult to deviate from.
Science, Innovation and Technology Committee Chair, Rt Hon Greg Clark MP, said:
“Artificial Intelligence is already transforming the way we live our lives and seems certain to undergo explosive growth in its impact on our society and economy.
“AI is full of opportunities, but also contains many important risks to long-established and cherished rights - ranging from personal privacy to national security - that people will expect policymakers to guard against.
“Our interim report identifies twelve challenges that must be addressed by policymakers if public confidence in AI is to be secured.
“The UK’s depth of technical expertise and reputation for trustworthy regulation stand us in good stead, and our Committee strongly welcomes the AI Safety Summit taking place at Bletchley Park in November. However, if the Government’s ambitions are to be realised and its approach is to go beyond talks, it may well need to move with greater urgency in enacting the legislative powers it says will be needed. We will study the Government’s response to our interim report, and the AI white paper consultation, with interest, and will publish a final set of policy recommendations in due course.”