The UK Artificial Intelligence (AI) ecosystem has made major progress in developing and using trustworthy AI. However, the adoption of a proportionate AI governance framework is necessary to further foster public trust and to enable the UK to secure the full benefits of AI while mitigating its possible risks.

As the number one patent filer in the UK[1], BT Group has been careful to innovate in AI responsibly, through our AI research programme and our dedicated AI and Data Solutions team within BT’s new Digital Unit[2]. Our goal is to be the most trusted connector of people, devices and machines in the world by 2030[3]. We have adopted responsible tech principles as well as internal checks and balances (including training) to ensure we develop, use, buy and sell AI products and services in a way that secures the benefits and minimises the potential harms.

Based on this experience, we understand that building a robust but proportionate AI governance framework involves complex trade-offs. We therefore welcome the opportunity to contribute to the Science and Technology Committee’s inquiry on AI governance and offer below BT Group’s views on how to achieve the best balance between oversight and room for innovation.

If you would like us to provide any further information regarding our response, including our tech principles and internal governance framework related to AI, please contact daniel.4.wilson@bt.com and garance.hadjidj@bt.com             


How effective is current governance of AI in the UK? What are the current strengths and weaknesses of current arrangements, including for research?

Regulators, and particularly the ICO, have risen to the challenge of adapting to Artificial Intelligence (AI) and have provided useful guidance. However, we agree with the challenges identified in the AI policy paper published by the UK Government: a lack of clarity around relevant regulatory mechanisms; overlapping regulations and laws; inconsistent approaches across regulators; gaps in some sectors where AI has not yet been the subject of regulatory scrutiny; and insufficient regulator skills.

Good governance of AI requires a balanced mix of rights, obligations and ethics that will allow innovation to flourish responsibly. We therefore support the proposals of the UK Government in its AI policy paper, which focus on:

Good governance of AI cannot, however, be achieved in isolation. We believe Government should therefore conduct and publish a review of existing policy, guidance, standards and regulation relating to AI governance across sectors and international jurisdictions. This exercise would be particularly useful for companies that may not have the resources to track international developments but that want to scale up internationally.


What measures could make the use of AI more transparent and explainable to the public?

To make the use of AI more transparent and explainable, policymakers need to:

Those measures will contribute to more consistent AI model documentation, increase AI transparency and explainability, and ultimately strengthen public trust in AI. Indeed, a variety of documentation formats is currently available (Google’s Model Cards, IBM’s AI FactSheets, TM Forum’s AI Model Data Sheets), but there is little agreement on what information should be captured and how.
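To illustrate the kind of information such documentation typically captures, the sketch below defines a minimal model record in Python. The field names and example values are illustrative assumptions only, not drawn from Model Cards, AI FactSheets or the TM Forum format; a standardised schema would need to settle exactly this question of which fields are mandatory.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    """Minimal AI model documentation record (illustrative fields only)."""
    model_name: str
    version: str
    intended_use: str
    out_of_scope_uses: list[str] = field(default_factory=list)
    training_data_summary: str = ""
    known_limitations: list[str] = field(default_factory=list)
    contact: str = ""

# Hypothetical example of a completed record
card = ModelCard(
    model_name="churn-predictor",
    version="1.2.0",
    intended_use="Ranking customer accounts for proactive retention offers.",
    out_of_scope_uses=["Automated service termination decisions"],
    training_data_summary="12 months of anonymised billing and usage records.",
    known_limitations=["Not validated on business (B2B) accounts"],
    contact="ai-governance@example.com",
)

# Serialise to a plain dict so the card can be published or audited
print(asdict(card)["model_name"])  # → churn-predictor
```

Agreeing a common set of fields like these, and the level of detail each requires, is precisely the standardisation gap the paragraph above describes.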

At a general level, public mistrust of AI’s development and use stems mainly from a lack of public awareness of AI and a predominantly negative narrative around it, coupled with the recurrent error of presenting AI as sentient or as something that develops itself. Based on BT research on AI careers as well as BT/CogX’s AI Ready Nation event, which took place on 24th October 2022[4], we believe there is significant potential to increase the public’s understanding of Artificial Intelligence, with potential policy interventions including:


How should decisions involving AI be reviewed and scrutinised in both public and private sectors? Are current options for challenging the use of AI adequate and, if not, how can they be improved?

In our view, it is right to adopt a risk-based approach to regulating AI since it will help avoid unnecessary regulatory red tape and wasting of resources on uses of AI that are low risk, thus increasing public trust. To meet this two-fold objective, further clarity will be required as to the meaning of a regime that focuses on “applications of AI that result in real, identifiable, unacceptable levels of risk, rather than seeking to impose controls on uses of AI that pose low or hypothetical risk”. Indeed, considering the risks posed by AI applications solely within particular sectors may lead to gaps in understanding.

We therefore suggest the introduction of a common framework which would explain the parameters for regulators to assess whether the risk arising from an AI application is ‘real’ and ‘unacceptable’. Such a framework would increase transparency and encourage better scrutiny. For example, regulators should be looking at:

The construction of this framework should be:

Finally, for efficient oversight of AI decisions, a holistic upskilling strategy for regulators, businesses and the UK’s wider population is critically needed, since the adoption of responsible AI won’t happen if the UK as a whole isn’t AI ready. This strategy should be a cross-pollination effort between Government, regulators (the CMA’s concrete experience in building its own data science team would, for example, be beneficial for others), industry[5], academia and non-profits, as well demonstrated by the Singapore model[6].

For regulators, especially those who have not previously engaged with emerging technologies, acquiring the right level of technical know-how and keeping up with technology developments will pose significant difficulties. Different routes can be envisaged: secondments of regulators to industry, business/academia-led bootcamps for regulators, creating a shared talent pool, or embedding AI experts into each regulator, as has been done with Chief Scientific Officers. All have merits, and they could be combined as Government and regulators test them as part of an iterative process nurtured by regular contact with the wider ecosystem. Finally, upskilling initiatives for regulators shouldn’t be limited to expanding technology knowledge but should extend to a better understanding of commercial aspects as well as ethics and human-rights norms.

For the wider UK population, the UK should look at scaled-up solutions to make the country AI ready, as Singapore and Finland have done.


How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?

BT supports a light-touch regulatory approach which would empower UK regulators to rely on their expertise to develop fit-for-purpose guidance on ‘what is possible and what should be done’, thus enabling businesses to innovate safely. However, this approach entails some challenges and risks that will require strong coordination between regulators. For this reason, we welcome the Government’s ambition to ‘seek to ensure that organisations do not have to navigate multiple sets of guidance from multiple regulators all addressing the same principle’. Delivery against this will be particularly critical for companies which, like BT, are seeking to innovate by developing AI products and services across sectors or domains. In such cases, divergent approaches between regulators could stifle innovation. To ensure coordination, we would recommend:



To what extent is the legal framework for the use of AI, especially in making decisions, fit for purpose? Is more legislation or better guidance required?

We believe the UK Government’s proposed approach to AI regulation is heading towards striking an appropriate balance between unleashing innovation to accelerate the growth of opportunities associated with low-risk AI and the need to put in place a framework that protects users from the potential dangers that high-risk AI systems may pose.

To make it fully fit for purpose, we would recommend:


What lessons, if any, can the UK learn from other countries on AI governance?

Based on other countries’ experience, BT would recommend the following measures to develop world-leading AI governance that works for all parts of the AI ecosystem:



(November 2022)



[1] ‘Artificial Intelligence: A worldwide overview of AI patents and patenting by the UK AI sector’, Intellectual Property Office

[2] BT establishes digital unit to accelerate next-gen services for customers

[3] Responsible business | BT Plc

[4] Full recordings of the AI Ready Nation event in Bristol (panel ‘Solutions to the AI awareness & image issue’) and London (panel ‘Building the diverse & sustainable AI talent pipeline the UK needs’) are available here and here. A summary video of the event is also available here.

[5] For example, BT is supporting upskilling the nation through different initiatives including our partnership with Avado.

[6] Singapore has introduced numerous AI skills initiatives including The Skills Framework (SFw) jointly developed by SkillsFuture Singapore (SSG), Workforce Singapore (WSG), and the Infocomm Media Development Authority (IMDA), together with industry associations, education institutions, training providers, organisations and unions.