
Communications Committee launches inquiry into large language models

7 July 2023

The Communications and Digital Committee is launching an inquiry that will examine large language models and what needs to happen over the next 1–3 years to ensure the UK can respond to their opportunities and risks. This will involve evaluating the work of Government and regulators, examining how well this addresses current and future technological capabilities, and reviewing the implications of approaches taken elsewhere in the world.

The Committee invites written contributions by 5 September 2023.

Large language models (LLMs) are a type of generative AI that has attracted significant interest for its ability to produce human-like text, code and translations. There have been several recent advances, notably OpenAI’s GPT-3 and GPT-4 models. Many experts say these developments represent a step change in capability. Smaller and cheaper open-source models are set to proliferate.

The opportunities could be extensive. Goldman Sachs has estimated generative AI could add $7 trillion (roughly £5.5 trillion) to the global economy over 10 years. Governments, businesses and individuals are all experimenting with this technology’s potential. Some degree of economic disruption seems likely, though many new roles could also be created in the process.

The speed of development and lack of understanding about large language models’ capabilities have led some experts to warn of a wider risk of harm. Large models can generate contradictory or fictitious answers, for example. The training data can contain harmful content. Intellectual property rights remain uncertain. The ‘black box’ nature of machine learning algorithms makes it difficult to understand why a model decides on a course of action and what it might be capable of doing next. AI systems might develop counterintuitive or perverse ways of achieving aims. And the proliferation of these tools will make undesirable practices easier, such as spreading disinformation, hacking, fraud and scams.

There are growing calls to improve safeguards, standards and regulatory approaches that promote innovation whilst managing risks. Many experts say this is increasingly urgent. The UK Government released its AI White Paper in March 2023. It highlights the importance of a “pro-innovation framework designed to give consumers the confidence to use AI products and services, and provide businesses the clarity they need to invest in AI and innovate responsibly”.

The Communications and Digital Committee will examine what needs to happen over the next 1–3 years to ensure the UK can respond to the opportunities and risks posed by large language models.

Chair's comment

Baroness Stowell of Beeston, Chair of the Committee, said:

“The latest large language models present enormous and unprecedented opportunities. Early indications suggest seismic and exciting changes are ahead.

“But we need to be clear-eyed about the challenges. We have to investigate the risks in detail and work out how best to address them – without stifling innovation in the process. We also need to be clear about who wields power as these models develop and become embedded in daily business and personal lives.

“This thinking needs to happen fast, given the breakneck speed of progress. We mustn’t let the most scary of predictions about the potential future power of AI distract us from understanding and tackling the most pressing concerns early on. Equally we must not jump to conclusions amid the hype.

“Our inquiry will therefore take a sober look at the evidence across the UK and around the world, and set out proposals to the Government and regulators to help ensure the UK can be a leading player in AI development and governance.”



Further information


Image: Kohji Asakawa - Pixabay