Written Evidence Submitted by Committee on Standards in Public Life
(GAI0110)
Background on Committee
1. The Committee on Standards in Public Life is an independent, non-departmental public body that advises the Prime Minister on the arrangements for upholding standards of conduct across public life in England. The Committee does not consider individual cases or have investigative powers. Annex A sets out the Committee’s remit.
2. The Committee articulated the Seven Principles of Public Life – commonly referred to as the Nolan Principles – in its first report in 1995: selflessness; integrity; objectivity; accountability; openness; honesty; and leadership. These principles apply to all public office holders, including those who are elected or appointed, and to private providers of public services.
3. The Committee is pleased to give evidence to this timely House of Commons Science and Technology Committee inquiry into the Governance of Artificial Intelligence. Our evidence is based on the findings and recommendations of our 2020 report, Artificial Intelligence and Public Standards, which is summarised below.
4. In 2020, the Committee published a report, Artificial Intelligence and Public Standards, which looked at the risks and opportunities for public standards posed by AI, and examined the existing regulatory and governance landscape to assess whether it was fit for purpose.1
5. The report found that while the Nolan Principles remain strong and relevant, and do not need reformulating for AI, AI posed a particular challenge to three principles: openness, accountability and objectivity. On openness, we found there was a lack of information about the government’s use of AI; on accountability, we found AI could make it difficult to hold individuals and organisations accountable for their decisions, and for public officials to provide explanations for decisions made by AI; and on objectivity, we found that the prevalence of data bias in AI risks embedding and amplifying discrimination in everyday public sector practice.
6. It was clear from the evidence we received that there is nothing inherently new about the governance needed for AI, and that public standards can be upheld with a traditional risk management approach (see paragraph 13). However, it was also clear that public bodies need more comprehensive and accessible guidance, based on sound ethical principles, on how to adapt their governance arrangements for AI (see paragraph 12).
1 https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/868284/Web_Version_AI_and_Public_Standards.PDF
7. We found that a robust and coherent legal and regulatory framework for AI in the public sector is still a work in progress. For instance, although AI is subject to the provisions of the GDPR2, the Equality Act and sections of administrative law, there is uncertainty about how the law applies to automated decision-making in practice (see paragraph 29).
8. The Committee did not recommend the creation of a new AI regulator but that all regulators should consider and respond to the regulatory requirements and impact of AI in the fields for which they have responsibility. However, given the complexity of AI, we felt government and regulators would require guidance from a central body about issues associated with AI. We recommended, in line with the government’s published intention, that the Centre for Data Ethics and Innovation take on this responsibility (see paragraphs 21-23).3
9. The report’s recommendations are set out in full at Annex B for information.
10. The Committee continues to maintain a watching brief on AI in the public sector.
How effective is current governance of AI in the UK?
11. In 2020, we found that governance arrangements for AI were underway but remained a work in progress. For example, there has been significant progress around the establishment of expert bodies and government departments, including the Office for AI; the AI Council; and the Centre for Data Ethics and Innovation. However, the specific functions of some of those bodies remain unclear (see paragraph 24).
12. Similarly, there has been good progress in establishing ethical principles and guidance for data and AI. For example, the Office for AI, the Government Digital Service, and the Alan Turing Institute have published a comprehensive guide to using AI in the public sector; the Department for Digital, Culture, Media and Sport has updated the Data Ethics Framework; and the Office for AI has published guidelines for AI procurement. However, there are too many sets of ethical principles for AI, which can be confusing. We also found that some of the guidance assumes an unreasonably high level of technical awareness, which undermines its practicality.
2 At the time of writing, the EU GDPR had direct application in UK law through the Data Protection Act 2018. The provisions of the EU GDPR have since been incorporated directly into UK law as the UK GDPR. According to the ICO, “there is little change to the core data protection principles, rights and obligations”.
3 https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/757509/Centre_for_Data_Ethics_and_Innovation_-_Government_Response_to_Consultation.pdf
13. Because decisions on implementing AI across the public sector lie with government departments and individual public bodies, each public organisation should establish appropriate governance mechanisms to manage ethical risks and regulatory compliance. In 2020, we found that while some organisations in healthcare and policing had thought extensively about AI governance, most public bodies were just beginning to consider these issues. As such, we recommended a number of specific risk management mechanisms that public bodies should put in place before deploying AI and while using it: using AI in ways that are legal and legitimate; assessing the impact of AI on standards at the design stage; maximising diversity at each stage of the AI lifecycle; allocating meaningful responsibility for AI; monitoring AI systems to ensure they operate as intended; setting appropriate oversight mechanisms for AI; and providing a right of appeal against AI decisions.
What measures could make the use of AI more transparent and explainable to the public?
14. Evidence we received in 2020 suggests that the government and public bodies are not sufficiently transparent about their use of AI, with most information resulting from Freedom of Information requests and procurement data.4 We also heard that transparency is further complicated by the involvement of private sector commercial organisations in the development and provision of AI in the public sector, which may use commercial confidentiality arrangements to avoid certain forms of disclosure. This poses a clear risk to public standards.
15. In our report we considered whether the requirements for proactive disclosure under the Freedom of Information Act 2000 are sufficient to increase transparency around the use of AI in the public sector. The ICO told us that the obligation on public bodies to publish information that is in the public interest does not sufficiently enforce transparency, because assessing compliance across the public sector is difficult, and said that more could be done to encourage proactive disclosure through openness by design.5 We said that an expectation on public bodies to think about openness is not enough to change behaviour, and recommended that the government establish guidelines for public bodies about what information to disclose about AI. We also recommended, as we have done previously, that the government consult on extending the application of the FOI Act to private providers of public services.6
16. In its 2019 report on Algorithms in the Criminal Justice System, the Law Society recommended that a national register of algorithmic systems in use in criminal justice be established.7 We ruled out the establishment of a national AI register as we heard that this would likely be an extensive and overwhelming bureaucratic challenge, with no guarantee that it would be accessible to the public.
4 https://s3.documentcloud.org/documents/5993565/2019-05-08-TBIJ-Government-Data-Systems-Published.pdf
5 https://ico.org.uk/media/2615190/openness_by_-design_strategy_201906.pdf
6 https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/705884/20180510_PSP2_Final_PDF.pdf
7 https://www.lawsociety.org.uk/topics/research/algorithm-use-in-the-criminal-justice-system-report
17. We also heard that some more complex forms of AI are not explainable (that is, it is not possible to see how decisions are made). AI systems that are opaque in this way are often referred to as “black boxes”. Contributors to the review told us that most AI used in the public sector will be processing simple data, meaning that less complex and more explainable AI systems can and should be used. Where less explainable AI systems are used, public bodies should justify why those trade-offs have been made. Given that we heard that the technical obstacles to providing explanations for automated decisions are small, it should be possible for public bodies to provide meaningful explanations of AI decisions. To achieve this, public bodies will need to consider explainability in the early stages of AI design and development, and during the procurement process, where requirements for transparency could be stipulated in tenders and contracts.
How should decisions involving AI be reviewed and scrutinised?
18. Human oversight of an AI system, its decision-making process and its outcomes is a standards imperative. Responsibility for AI will likely be shared by individuals across an organisation, and should be clearly allocated.8 Senior leadership should have oversight of the whole AI process, from making decisions about procuring AI systems to reviewing the impact of automated decisions. In high-risk policy areas, such as health or policing, independent oversight bodies, such as ethics committees, are useful tools for ensuring that ethical challenges relating to AI are given proper consideration, and for providing independent scrutiny. Public bodies using AI should also establish processes to monitor and evaluate issues relating to the performance of the technology.
19. We are of the view that existing appeals processes can be used for appeals against automated decisions.9 Public bodies should continue to make available fair and transparent avenues of redress for decisions and ensure that mechanisms for redress are proportionate and lead to lawful and timely outcomes, whether AI is used or not. AI systems will need to be transparent enough to trace the way an automated decision was made, so decisions made by AI can be explained and justified at each point of the appeals process.
How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?
20. As noted above, our 2020 report found that a coherent regulatory framework for AI across the UK public sector was still a work in progress. For example, healthcare practitioners told the Committee they were confident that AI could be implemented safely and ethically in medicine because it operates within a well-regulated system, where there are professional standards in place for testing and implementing new technologies, and for reporting and research. In contrast, the same established regulatory framework does not exist in policing, which has recently led to the “unlawful and unethical” use of facial recognition technology by the police, according to researchers at the University of Cambridge.10 Evidence submitted to the 2020 review suggests that the use of AI in policing is more representative of the overall use of AI in the public sector, pointing to the need for better scrutiny and oversight in some areas.
8 See table on distribution of responsibility on page 60, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/868284/Web_Version_AI_and_Public_Standards.PDF
9 Most public bodies have complaints procedures in place. Where complaints cannot be resolved, individuals usually have access to independent advice through an ombudsman scheme, and almost all decisions made by public bodies that have an impact on citizens carry a statutory right of appeal. Individuals may also be able to challenge decisions through judicial review.
21. Some contributors to the 2020 review suggested that a new system of regulation was necessary for the use of AI in the public sector. For example, the Committee heard that a statutory arm’s-length public body, similar to the Human Fertilisation and Embryology Authority, could play a role in licensing technology and leading on standards.11 The Rt Hon Lord Sales, Justice of the Supreme Court, also called for an independent AI regulator, arguing that the government lacks the technical capacity to safeguard against the legal and ethical challenges posed by AI.12
22. We agree with the rationale for independent scrutiny and advice on issues associated with AI. However, most contributors to the review argued that an AI regulator was impractical.13 Any system of ethical regulation for AI in the public sector will require sector-based oversight to account for the specific risks and challenges of automated decision-making across sectors. A new AI regulator would inevitably overlap with existing regulatory bodies, which already have to regulate AI within their sectors and remits. As such, we are of the view that the UK does not need a new AI regulator. Instead, we recommend that existing regulators consider and respond to the regulatory requirements and impact of AI in the fields for which they have responsibility. We encourage existing regulators to be explicit and transparent about the work they are doing to assess the impact of AI on the bodies they regulate.
23. However, government and regulators will need guidance from a central body about issues associated with AI because AI will likely create unforeseen issues for regulation where technical expertise is necessary. In 2020, we recommended that the Centre for Data Ethics and Innovation take on this responsibility, in line with the government’s published intention that the CDEI oversee and anticipate gaps in AI governance and regulation, set best practice, and advise government on AI policy and regulation.14 We also supported the government’s intention to establish the Centre on a statutory footing to safeguard its independence, advising government to “act swiftly to clarify the overall purpose of the Centre for Data Ethics and Innovation” before doing so.15
10 https://www.cam.ac.uk/research/news/uk-police-fail-to-meet-legal-and-ethical-standards-in-use-of-facial-recognition
11 Written evidence 12, Dr Emma Carmel, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1039117/Artificial_Intelligence_and_Public_Standards_-_Written_Evidence.pdf
12 https://www.supremecourt.uk/docs/speech-191112.pdf
13 https://www.gov.uk/government/publications/artificial-intelligence-and-public-standards-roundtable-transcripts
14 https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/757509/Centre_for_Data_Ethics_and_Innovation_-_Government_Response_to_Consultation.pdf
24. We are glad to see that the functions of the Centre for Data Ethics and Innovation were consulted on as part of the National Data Strategy in late 2020.16 The government told us in 2021 that advice would go to Ministers on the future functions and governance of the Centre in April 2021.17 However, the specific functions of the Centre remain unclear. At present, it is described as “a government expert body enabling the trustworthy use of data and AI”18, a description that makes no mention of its intended role in identifying and addressing “areas where clearer guidelines or regulation” are needed.19 It is also not clear when, or if, the CDEI will be placed on a statutory footing. This means there may still be a significant gap in the regulatory landscape for AI.
25. Procurement processes can act as a form of soft regulation. Contributors to the review also emphasised the importance of “ethics by design”. For example, to ensure that an AI system is accountable, public bodies may need to “build in” the capacity for it to produce explanations for its decisions.
26. Government should use its purchasing power in the market to set procurement requirements that ensure private companies developing AI for the public sector address public standards. For example, procurement processes should be designed so products and services that facilitate high standards are preferred and companies that prioritise ethical practices are rewarded. As part of the commissioning process, the government should set out the ethical principles expected of companies providing AI services to the public sector. Adherence to ethical standards should be given an appropriate weighting as part of the evaluation process, and companies that show a commitment to them should be scored more highly than those that do not.
To what extent is the legal framework for the use of AI fit for purpose?
27. As noted above, we believe efforts to establish a strong legal framework for AI in the public sector remain a work in progress. We do not recommend new legislation, but feel there is a pressing need for guidance that translates existing legislation into practical standards and policy for public bodies using AI. Public bodies should be required to publish a statement on how their use of AI complies with relevant laws and regulations before AI is deployed in public service delivery, to prevent public bodies relying on tenuous and piecemeal legal bases to legitimise the use of AI.20
15 https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/868284/Web_Version_AI_and_Public_Standards.PDF
16 https://www.gov.uk/government/publications/uk-national-data-strategy/national-data-strategy#ministerial-foreword
17 https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/988185/Government_Response_to_the_Committee_on_Standards_in_Public_Life_s_2020_Report_AI_and_Public_Standards Accessible_version_.pdf
18 https://www.gov.uk/government/organisations/centre-for-data-ethics-and-innovation/about
19 https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/757509/Centre_for_Data_Ethics_and_Innovation_-_Government_Response_to_Consultation.pdf
28. In 2020, we found that the EU GDPR, which at the time of writing had direct application in UK law through the Data Protection Act 2018, creates an extensive legal framework for any organisation processing personal data with AI. We found that insofar as automated decision-making involves the processing of personal data, all of the provisions of the GDPR, including lawfulness, fairness and transparency; purpose limitation; accuracy; and accountability, apply. Although data protection law is technology neutral, several provisions protect against “solely automated decision-making” and profiling21, and arguably provide a legal “right to explanation” for decisions made by automated systems.22 However, we have not taken evidence on the extent to which the UK GDPR, which has since replaced the EU GDPR, continues to provide this legal framework.23
29. The evidence we received in 2020 suggested that data bias could cause AI to produce decisions and policy outcomes that are discriminatory, which may breach the Equality Act 2010.24 Contributors to the review told us that the Public Sector Equality Duty, established under the Equality Act 2010 and in force since 2011, is the “single best tool available” for dealing with data bias if used correctly.25 For example, many public bodies already undertake Equality Impact Assessments to consider the potential impact of policy decisions on protected characteristics, and the same could be done for automated decisions. However, contributors also told us there was uncertainty about how the Equality Act 2010 applies to automated decision-making in practice. There is currently no guidance for public bodies using AI on how to comply with anti-discrimination law. We feel strongly that public bodies need to know how the Equality Act 2010 applies to discriminatory outcomes enabled by AI. We recommended that the Equality and Human Rights Commission develop guidance on how public bodies using AI should comply with the Equality Act 2010. We are glad to see that the EHRC has committed to doing so.26
What lessons can the UK learn from other countries on AI governance?
20 Written evidence 20, Professor Karen Yeung, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1039117/Artificial_Intelligence_and_Public_Standards_-_Written_Evidence.pdf
21 https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/automated-decision-making-and-profiling/what-does-the-uk-gdpr-say-about-automated-decision-making-and-profiling/
22 https://ico.org.uk/media/about-the-ico/consultations/2616434/explaining-ai-decisions-part-1.pdf
23 At the time of writing, the EU GDPR had direct application in UK law through the Data Protection Act 2018. The provisions of the EU GDPR have since been incorporated directly into UK law as the UK GDPR. According to the ICO, “there is little change to the core data protection principles, rights and obligations”.
24 https://www.libertyhumanrights.org.uk/issue/policing-by-machine
25 https://www.equalityhumanrights.com/en/advice-and-guidance/public-sector-equality-duty
26 https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/988185/Government_Response_to_the_Committee_on_Standards_in_Public_Life_s_2020_Report_AI_and_Public_Standards Accessible_version_.pdf
30. In 2019, Singapore’s Personal Data Protection Commission published a model framework for AI governance27, which we highlighted in our 2020 report as “a useful starting point for thinking about the kinds of mechanisms that public sector organisations in the UK should adopt when using AI technology”.28 It states that the risks associated with AI can be managed by adapting existing governance structures to incorporate values, risks and responsibilities relevant to AI decision-making. As noted above, we agree that the effective governance of AI does not require an overhaul of traditional risk management, and think the UK could learn from this approach.
31. Our 2020 report found that the absence of a compulsory standards risk management tool is a gap in the UK’s current AI governance framework. Most contributors to the review argued that a mandatory AI impact assessment would fill this gap. Some contributors spoke favourably about the Canadian Algorithmic Impact Assessment, which covers the social, environmental and human rights impact of an AI system.29 There may be lessons to be learned from Canada when considering how an AI impact assessment requirement could be integrated into existing processes to evaluate the potential effects of AI on public standards in the UK.30
27 Since updated: https://www.pdpc.gov.sg/-/media/files/pdpc/pdf-files/resource-for-organisation/ai/sgmodelaigovframework2.ashx
28 https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/868284/Web_Version_AI_and_Public_Standards.PDF
29 https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/algorithmic-impact-assessment.html
30 We recommended in our 2020 report that “government should consider how an AI impact assessment requirement could be integrated into existing processes to evaluate the potential effects of AI on public standards. Such assessments should be mandatory and should be published.”
Annex A
Committee on Standards in Public Life
The Committee on Standards in Public Life is an independent, advisory Non-Departmental Public Body (NDPB). The Committee was established in October 1994 by the then Prime Minister, with the following terms of reference:
To examine current concerns about standards of conduct of all holders of public office, including arrangements relating to financial and commercial activities, and make recommendations as to any changes in present arrangements which might be required to ensure the highest standards of propriety in public life.
The Principles of Selflessness, Integrity, Objectivity, Accountability, Openness, Honesty and Leadership remain the basis of the ethical standards expected of public office holders and continue as key criteria for assessing the quality of public life.
The remit of the Committee excludes investigation of individual allegations of misconduct.
On 12 November 1997, the terms of reference were extended by the then Prime Minister:
To review issues in relation to the funding of political parties, and to make recommendations as to any changes in present arrangements.
The Committee’s terms of reference were further clarified following the Triennial Review of the Committee in 2013. The then Minister of the Cabinet Office confirmed that the Committee:
Should not inquire into matters relating to the devolved legislatures and Governments except with the agreement of those bodies. Secondly, the Government understands the Committee’s remit to examine “standards of conduct of all holders of public office” as encompassing all those involved in the delivery of public services, not solely those appointed or elected to public office.
Committee membership:
● Lord Evans of Weardale KCB DL, Chair
● Rt Hon Dame Margaret Beckett DBE MP
● Ewen Fergusson
● Baroness Simone Finn
● Professor Dame Shirley Pearce DBE
● Professor Gillian Peele
● Rt Hon Lord Stunell OBE (term of appointment ends on 30 November 2022)
Annex B
2020 report on Artificial Intelligence and Public Standards: recommendations
Recommendation 1: Ethical principles and guidance
There are currently three different sets of ethical principles intended to guide the use of AI in the public sector – the FAST SUM Principles, the OECD AI Principles, and the Data Ethics Framework. It is unclear how these work together and public bodies may be uncertain over which principles to follow.
a. The public should know which high-level ethical principles govern the use of AI in the public sector. The government should identify, endorse and promote these principles and outline the purpose, scope of application and respective standing of each of the three sets currently in use.
b. The guidance by the Office for AI, the Government Digital Service and the Alan Turing Institute on using AI in the public sector should be made easier to use and understand, and promoted extensively.
Recommendation 2: Articulating a clear legal basis for AI
All public sector organisations should publish a statement on how their use of AI complies with relevant laws and regulations before such systems are deployed in public service delivery.
Recommendation 3: Data bias and anti-discrimination law
The Equality and Human Rights Commission should develop guidance in partnership with both the Alan Turing Institute and the CDEI on how public bodies should best comply with the Equality Act 2010.
Recommendation 4: Regulatory assurance body
Given the speed of development and implementation of AI, we recommend that there is a regulatory assurance body, which identifies gaps in the regulatory landscape and provides advice to individual regulators and government on the issues associated with AI.
We do not recommend the creation of a specific AI regulator, and recommend that all existing regulators should consider and respond to the regulatory requirements and impact of the growing use of AI in the fields for which they have responsibility.
The Committee endorses the government’s intention for CDEI to perform a regulatory assurance role. The government should act swiftly to clarify the overall purpose of CDEI before setting it on an independent statutory footing.
Recommendation 5: Procurement rules and processes
Government should use its purchasing power in the market to set procurement requirements that ensure that private companies developing AI solutions for the public sector appropriately address public standards.
This should be achieved by ensuring provisions for ethical standards are considered early in the procurement process and explicitly written into tenders and contractual arrangements.
Recommendation 6: The Crown Commercial Service’s Digital Marketplace
The Crown Commercial Service should introduce practical tools as part of its new AI framework that help public bodies, and those delivering services to the public, find AI products and services that meet their ethical requirements.
Recommendation 7: Impact assessment
Government should consider how an AI impact assessment requirement could be integrated into existing processes to evaluate the potential effects of AI on public standards. Such assessments should be mandatory and should be published.
Recommendation 8: Transparency and disclosure
Government should establish guidelines for public bodies about the declaration and disclosure of their AI systems.
Recommendation 9: Evaluating risks to public standards
Providers of public services, both public and private, should assess the potential impact of a proposed AI system on public standards at the project design stage, and ensure that the design of the system mitigates any standards risks identified. A standards review will need to occur every time a substantial change is made to the design of an AI system.
Recommendation 10: Diversity
Providers of public services, both public and private, must consciously tackle issues of bias and discrimination by ensuring they have taken into account a diverse range of behaviours, backgrounds and points of view. They must take into account the full range of diversity of the population and provide a fair and effective service.
Recommendation 11: Upholding responsibility
Providers of public services, both public and private, should ensure that responsibility for AI systems is clearly allocated and documented, and that operators of AI systems are able to exercise their responsibility in a meaningful way.
Recommendation 12: Monitoring and evaluation
Providers of public services, both public and private, should monitor and evaluate their AI systems to ensure they always operate as intended.
Recommendation 13: Establishing oversight
Providers of public services, both public and private, should set oversight mechanisms that allow for their AI systems to be properly scrutinised.
Recommendation 14: Appeal and redress
Providers of public services, both public and private, must always inform citizens of their right and method of appeal against automated and AI-assisted decisions.
Recommendation 15: Training and education
Providers of public services, both public and private, should ensure their employees working with AI systems undergo continuous training and education.
(December 2022)