Written Evidence Submitted by AI Governance Limited




AI Governance Limited is a purpose-driven consultancy formed in 2020 to inspire organisations to use AI with wisdom and integrity.  We support leaders of all types of organisations to increase their skills and knowledge so they can seize the opportunities AI offers to make their organisations, and whole economies, more efficient and competitive. 


Founding Director Sue Turner OBE[1] was concerned that too many leaders lack this knowledge, making them more likely to misjudge the opportunities and threats of AI, with the result that the UK will not fully seize the benefits AI offers.  AI Governance will therefore shortly publish the 2022 AI Governance Report[2] (“the Report”), which provides unique insights into how well prepared, or otherwise, organisations are to govern their use of AI, together with case studies to inspire further action.


The Report analyses the responses from 723 leaders who completed the AI Governance survey in July and August 2022 probing how skilled and ready leaders are to control AI use in their organisations and supply chains.  Respondents were from a wide range of sectors.  All respondents anonymously answered four core questions about their organisation’s understanding and control of data and AI risks.  A subset of 52 respondents was asked an additional five questions to elicit more detailed information. 


The majority of responses came from organisations based in the UK with 94% of respondents having their main operation here.


The purpose of this submission to the Science and Technology Select Committee is to highlight relevant key findings from the Report to inform the Committee’s current inquiry. 



How effective is the current governance of AI in the UK?


Three findings from the Report shed light on how effective governance of AI is currently in the UK.


  1. Knowledge about AI potential and risks

The Report found that:




The organisations with some Board expertise have a base for spotting opportunities to grow, innovate or improve efficiency through using AI, as well as some knowledge to assess the related risks.  The majority of organisations, however, lack the knowledge of how to get the best out of AI whilst controlling the potential negative impacts; they are more likely to make mistakes by not spotting potential problems with AI use. 


Furthermore, 10% of respondents reported that AI knowledge was not an issue for their organisation.  The AI Governance team regularly run workshops, seminars and webinars where we begin by asking attendees whether their organisation uses AI, and many say that it does not.  After we explain some of the everyday uses of AI, attendees are surprised to realise that AI is already being used in their organisations. 


It is important that leaders become better informed so they understand that they cannot seal themselves off from AI; without improved knowledge they risk discovering that it affects them only after something goes wrong.


  2. Controls and governance

The Report found:





To control the use of AI effectively – whether the tools are developed in-house, through contracts or in the supply chain – Boards need structures and mechanisms which may go beyond their familiar ways of working such as risk registers.  One size, however, does not fit all; it might suit a public sector organisation to create an AI Ethics Committee, whilst this approach could be anathema to an entrepreneurial business. 


Boards need either to revise and update their existing ways of assessing and controlling risks (such as financial and IT risks) to cover AI appropriately or set up new structures or ways of working.  Whereas the standard approach to risk assessment and mitigation is well understood by Board members, AI brings new areas to consider which likely require new ways of working.  For example, to get as full a picture as possible of AI ethics considerations, it is essential to bring together interdisciplinary teams which may include data scientists, engineers, marketing and legal expertise as well as external stakeholders and advisers in ethics or sociology.  This will be an alien concept to many leaders, particularly in the private sector where Boards are likely to be reluctant to give any meaningful power to groups like this. 


Regulations and corporate governance practice give Directors the responsibility to act in the interests of shareholders, which is often interpreted as meaning they have to retain full control.  Yet, unless Boards create an informed, independent voice for assessing and commenting on the opportunities and risks of AI use, they will struggle to corral enough diverse points of view to foresee pitfalls and to use AI wisely.


Failure to engage with diverse stakeholders and interests when considering using AI increases risks, including increased likelihood of unforeseen or unintended consequences, reputational damage, harm to society and lack of trust in AI because of poor outcomes.


  3. Executive Education

The quality of “AI for business” courses is extremely variable, as are the costs, which can be as high as £29,000 for a six-week course.  The costs and study time requirements can place these courses out of reach of many small and medium-sized enterprises (SMEs), and make them unaffordable for almost all of the not-for-profit sector.


Providing support so that SMEs, charities and social enterprises can gain the skills needed through training they can afford is vitally important.  Without this they will not gain the knowledge they need to understand the opportunities and threats of AI and in turn the UK will not fully seize the advantages AI offers.

The Government has stated its “ambition is to support responsible innovation in AI”[3].  AI responsibility topics such as transparency and explainability are popular amongst AI ethicists and researchers but gain little traction with business leaders who need practical, actionable advice. 


The Government and regulators need to make clear commitments to provide support and guidance to upskill Board members and management.  Government and regulators are well placed to:


Government, regulators and business schools all have a key part to play in supporting and providing affordable courses to reskill the leaders of today and tomorrow.


AI Governance recommends clear action from the Government and regulators to support accessible training. 


Improving transparency about AI use


AI is increasingly being used in organisations, but AI Governance is concerned that too often AI tools are hidden inside other products.  Organisations should be clear when users, and particularly the public, are interacting with AI, whether the technology is in the form of tools developed by the organisation in-house or bought in as products or services from suppliers.


AI Governance talked recently to the Managing Director of a financial services business that had installed a new telephone system, including an inbound customer call recording facility.  They were unaware that the system was using AI until a report appeared warning of inappropriate behaviour from a call-handler.  On investigation, the company found the system used AI-based language modelling and sentiment analysis.  The system had flagged the call as an incident since the call-handler appeared to be angry with the customer.  In fact the customer had significant hearing loss so the call-handler had (rightly) raised their voice in order to help the customer. 


The presence of AI was never mentioned during the process of contracting to purchase the telephone system.   This business has a human-in-the-loop, as well as a strong culture towards training and supporting team members, so there was no danger of automated decision-making penalising the call-handler.  But a different business might automate penalties on call-handlers where the AI tool detected a breach of accepted practices. 
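To illustrate why human review matters in such systems, the following sketch shows how a naive call-monitoring rule of the kind described above might flag a call.  This is a hypothetical toy example, not the vendor’s actual system: commercial products use trained language models and sentiment analysis, but the failure mode is the same, because a raised voice or crude textual signal can be misread as anger.

```python
# Hypothetical sketch of a call-flagging rule. The word list, thresholds
# and function names are invented for illustration; real products use
# trained sentiment models rather than keyword matching.

NEGATIVE_WORDS = {"unacceptable", "ridiculous", "useless"}  # toy lexicon

def flag_call(transcript: str, avg_volume_db: float,
              volume_threshold: float = 70.0) -> bool:
    """Return True if the call would be raised as an incident."""
    words = transcript.lower().split()
    negative_hits = sum(w in NEGATIVE_WORDS for w in words)
    # A loud but polite call is still flagged -- exactly the failure mode
    # in the hearing-loss example above.
    return negative_hits > 0 or avg_volume_db > volume_threshold

# A call-handler speaking loudly to help a customer with hearing loss
# is flagged just as an angry caller would be.
print(flag_call("happy to help let me repeat that for you", avg_volume_db=78.0))
print(flag_call("happy to help", avg_volume_db=55.0))
```

Without a human-in-the-loop, an automated penalty triggered by such a rule would have sanctioned a call-handler who was acting correctly.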


We recommend that suppliers and vendors should be required to declare the presence of AI tools, to avoid their accidental deployment without adequate safeguards.


Enhancing people’s right not to be subject to solely automated decision-making


The General Data Protection Regulation (GDPR) provides people with the right not to be subject to solely automated decision-making or profiling where this will have a legal or similarly significant effect on them, but this right is not well known by the public. 


AI Governance asserts that, with AI enabling a major increase in automated decision-making and profiling, the GDPR should be strengthened to mandate clearer information for people about when this right applies, coupled with simple mechanisms for exercising the right to human involvement in decision-making.


The Report and AI Governance’s consultancy work find that too many leaders are unaware that AI is already all around them.  For this proposed enhanced GDPR right to be implementable, organisation leaders will need to be much more aware of how and when their organisations are using automated decision-making that has significant effects on people.


A recent AI Governance workshop for HR leaders found that most did not realise the extent of their use of AI-powered tools.  For example, when an HR Director challenges their team to reduce hiring costs, the probability increases that they will bring AI into their organisation to achieve this goal, potentially without realising that AI is involved.  This may lead to automated decision-making with no information being provided about how to request human intervention. 


Take the case of a tool that uses Natural Language Processing and graphing to give a numerical representation of how closely an applicant’s skills match the job requirements.  This could be sold to the HR team as a job applicant scoring system, without reference to the AI techniques sitting inside the service.  If the HR team rely on this numerical score to decide whether or not to interview the applicant, they are subjecting the person to automated decision-making.  AI Governance asserts that the applicant should be informed in advance that this is the process and be given the opportunity to opt for a human review of their suitability for interview rather than solely automated assessment.  Furthermore, opting out of the automated process should not be detrimental to their application.
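A minimal sketch makes the mechanism concrete.  The code below is a hypothetical illustration of such a scoring service, assuming a simple bag-of-words cosine similarity between the applicant’s stated skills and the job requirements; the function names and threshold are invented, and commercial tools would typically use learned embeddings rather than word counts.

```python
# Hypothetical applicant-scoring sketch: cosine similarity between
# word-count vectors. All names and the 0.5 threshold are illustrative.
from collections import Counter
from math import sqrt

def similarity(applicant_skills: str, job_requirements: str) -> float:
    """Cosine similarity between two bag-of-words vectors (0.0 to 1.0)."""
    a = Counter(applicant_skills.lower().split())
    b = Counter(job_requirements.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def shortlist_for_interview(score: float, threshold: float = 0.5) -> bool:
    # If this comparison alone decides who is interviewed, the applicant
    # is being subjected to solely automated decision-making.
    return score >= threshold

score = similarity("python sql data analysis", "sql data analysis reporting")
print(round(score, 2), shortlist_for_interview(score))
```

The point is that the interview decision reduces to a single numerical comparison; unless the applicant is told this and offered human review, the right under the GDPR is effectively unusable.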


No organisation can build a wall to keep AI out.  In our experience, most leaders have reasonable levels of understanding of their GDPR obligations and take them seriously.  Enhancing the GDPR provisions on automated decision-making would similarly spur leaders to gain a better understanding of where AI sits in their systems and what their obligations are to those affected by it, so that people can be given clear information about how AI-based decisions may affect them.


Regulation of AI


Despite the flurry of recent regulatory activity in different jurisdictions with which experts in the field are familiar, the Report found that 59% of leaders were unaware of what AI-related regulations were in the pipeline.




The Report also found that 26% of leaders would welcome regulations that provide a level playing field.  Without guidance from law makers and regulators, however, leaders may be slow to adopt AI tools or may use them in harmful ways. 


In the UK, the proposed light touch regulatory regime relies on regulators to co-operate to achieve consistency.  It is a concern that the Government’s current proposal for a multiplicity of regulators forming their own regulations will be slow to reach decisions, fail to achieve consistency and fail to convey the different approaches in a meaningful way to organisations. 


To reduce the confusion for businesses of having to apply different regulatory regimes to essentially the same technology, AI Governance asserts that it will be more effective for the Government to enact overarching legislation that brings together common threads from regulators under a unified approach.  This would ease the compliance burden too.


Developing new laws and guidance takes time so in the interim regulators need to provide clarity about their direction of travel. 


These two questions illustrate the point:




Summary of recommendations


  1. The Government and regulators should support affordable education programmes for leaders to understand the opportunities and risks of using AI, as well as inspiring them to consult and involve diverse communities and stakeholders in discussions about proposed AI use. 


  2. It should be mandatory for the presence of AI tools in systems and services to be declared by suppliers and vendors to avoid their accidental deployment without adequate safeguards. 


  3. The public should be made aware when automated decision-making is affecting them with enhanced opportunities to opt out and request human involvement.


  4. Regulators should give early indications of the direction of travel of their regulatory regimes and the Government should enact overarching legislation to reduce the compliance burden.

Appendix 1

Biographical Details
Sue Turner OBE



Sue Turner is dedicated to using her expertise in AI and data governance and ethics to support organisations to use AI with wisdom and integrity.  One of the first 14 people globally to be accredited in the Foundations of Independent Audit of AI systems, and with an MSc in Artificial Intelligence and Data Science, she established AI Governance Limited to advise businesses and policy makers on pragmatic AI, data ethics and governance issues and to make a positive societal impact. 


She is Chair of the Faculty of Clinical Informatics and a Non-Executive Director for a financial services mutual and for a waste management company.  Her career spans entrepreneurial private businesses and not-for-profit organisations, including ten years with the Confederation of British Industry. 


She has led significant organisational growth, raised £27 million for charity and collaborated to shift power to help people improve their prospects. 


She was awarded the OBE in 2021 for Services to Social Justice. 


(November 2022)

[1] See biography at Appendix 1

[2] When published, the report will be available at https://aigovernance.co.uk/services/research/


[3] “Establishing a pro-innovation approach to regulating AI”, DCMS, July 2022.