Call for Evidence

Written submissions

Large language models (LLMs) are a type of generative AI and have attracted significant interest for their ability to produce human-like text, code and translations. There have been several recent advances, notably OpenAI’s[1] GPT-3 and GPT-4 models. Many experts say these developments represent a step change in capability. Smaller and cheaper open-source models are set to proliferate.

Governments, businesses and individuals are all experimenting with this technology’s potential. The opportunities could be extensive. Goldman Sachs has estimated generative AI could add $7 trillion (roughly £5.5 trillion) to the global economy over 10 years. Some degree of economic disruption seems likely: the same report estimated 300 million jobs could be exposed to automation, though many roles could also be created in the process.[2]

The speed of development and the lack of understanding about these models’ capabilities have led some experts to warn of a credible and growing risk of harm. Several industry figures have called for urgent reviews or for new release plans to be paused. Large models can generate contradictory or fictitious answers, meaning their use in some industries could be dangerous without proper safeguards. Training datasets can contain biased or harmful content. Intellectual property rights over the use of training data are uncertain. The ‘black box’ nature of machine learning algorithms makes it difficult to understand why a model follows a course of action, what data were used to generate an output, and what the model might be able to do next, or do without supervision. Some models might develop counterintuitive or perverse ways of achieving aims. And the proliferation of these tools will make undesirable practices, such as spreading disinformation, hacking, fraud and scams, easier.

All of this presents challenges for the safe, ethical and trusted development of large language models, and undermines opportunities to capitalise on the benefits they could provide.

Regulation

There are growing calls to improve safeguards, standards and regulatory approaches that promote innovation whilst managing risks. Many experts say this is increasingly urgent. The UK Government released its AI White Paper in March 2023. It highlights the importance of a “pro-innovation framework designed to give consumers the confidence to use AI products and services, and provide businesses the clarity they need to invest in AI and innovate responsibly”.[3] Regulators are expected to address key issues using existing powers. The Prime Minister’s Office has expressed an interest in the UK becoming a world-leading centre for AI safety.

Inquiry objectives

The Communications and Digital Committee will examine what needs to happen over the next 1–3 years to ensure the UK can respond to the opportunities and risks posed by large language models.[4] This will include evaluating the work of Government and regulators, examining how well this addresses current and future technological capabilities, and reviewing the implications of approaches taken elsewhere in the world.

Questions

The Committee is seeking evidence on the following questions (there is no requirement to answer all questions in your submission):

Capabilities and trends

1. How will large language models develop over the next three years?

     a) Given the inherent uncertainty of forecasts in this area, what can be done to improve understanding of and confidence in future trajectories?

2. What are the greatest opportunities and risks over the next three years?

     a) How should we think about risk in this context?

Domestic regulation

3. How adequately does the AI White Paper (alongside other Government policy) deal with large language models? Is a tailored regulatory approach needed?

     a) What are the implications of open-source models proliferating?

4. Do the UK’s regulators have sufficient expertise and resources to respond to large language models?[5] If not, what should be done to address this?

5. What are the non-regulatory and regulatory options to address risks and capitalise on opportunities?

     a) How would such options work in practice and what are the barriers to implementing them?

     b) At what stage of the AI life cycle will interventions be most effective?

     c) How can the risk of unintended consequences be addressed?

International context

6. How does the UK’s approach compare with that of other jurisdictions, notably the EU, US and China?

     a) To what extent does wider strategic international competition affect the way large language models should be regulated?

     b) What is the likelihood of regulatory divergence? What would be its consequences?

The Committee invites written contributions by Tuesday 5 September 2023.


This is a public call for evidence. Please bring it to the attention of other groups and individuals who may not have received a copy directly.

Detailed guidance on giving written evidence to Lords select committees is available here: https://www.parliament.uk/get-involved/committees/how-do-i-submit-evidence/lords-witness-guide/

[1] OpenAI, ‘Creating safe AGI that benefits all of humanity’: https://openai.com/

[2] Goldman Sachs, 'Generative AI could raise global GDP by 7 per cent' (5 April 2023): https://www.goldmansachs.com/intelligence/pages/generative-ai-could-raise-global-gdp-by-7-percent.html

[3] Department for Science, Innovation & Technology and Office for Artificial Intelligence, 'A pro-innovation approach to AI regulation' (22 June 2023): https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper

[4] The main focus of this inquiry will be on large language models. The Committee will also examine wider generative AI capabilities, though in less depth.

[5] The Committee will be focusing in particular on the members of the Digital Regulation Co-operation Forum (Ofcom, the Competition and Markets Authority, the Information Commissioner’s Office and the Financial Conduct Authority).

This call for written evidence has now closed.