AIG0009

Written evidence submitted by Big Brother Watch

Big Brother Watch

Big Brother Watch is a civil liberties and privacy campaigning organisation, fighting for a free future. We’re determined to reclaim our privacy and defend freedoms at this time of enormous technological change.

We’re a fiercely independent, non-partisan and non-profit group who work to roll back the surveillance state and protect rights in parliament, the media or the courts if we have to. We publish unique investigations and pursue powerful public campaigns. We work relentlessly to inform, amplify and empower the public voice so we can collectively reclaim our privacy, defend our civil liberties and protect freedoms for the future.



Introduction

1.      We welcome the opportunity to provide evidence to the Public Accounts Committee’s inquiry into the use of artificial intelligence (AI) in government.

2.      Big Brother Watch works to defend civil liberties in the context of new and emerging technologies. We research and publish ground-breaking reports on the use of AI in government, including automation in the welfare system,[1] facial recognition in electronic surveillance, law enforcement and the immigration system,[2] and government units monitoring online speech.[3] As such, our response will focus on the risks of AI adoption in government, demonstrating how government use of AI impacts individuals, their data and their human rights.

3.      To reflect the ways in which this terminology is commonly used, we have adopted a broad interpretation of AI, including machine learning: the imitation of human intelligence by computer programs, systems or algorithms. This technology can be used to analyse data and make decisions.

4.      We are responding to the Committee’s request for views on the government’s progress on strategy development and governance arrangements and on the risks and opportunities of AI adoption in government.

Progress on strategy development and governance arrangements

5.      The March 2024 report by the National Audit Office was critical of the current approach to the adoption of AI in government:

“Government action to support adoption of AI in the public sector following the publication of the 2021 National AI Strategy lacked a coherent strategy or supporting governance arrangements. Activities referenced in the strategy to support its aim for the public sector to become an exemplar of safe and ethical deployment of AI sit across many bodies and have not been underpinned by robust oversight structures with clear accountabilities, an implementation plan, or performance measures to track progress.”[4]

6.      The report stressed the failure within government to establish clear responsibility and accountability for AI, with departments taking their own approach to the deployment of the technology. This lack of transparency, oversight and regulation of AI in the public sector has already led to harmful outcomes in education,[5] healthcare,[6] policing,[7] welfare[8] and beyond.

7.      The current legal framework governing AI is inadequate. The government acknowledged in its AI White Paper that “intervention is needed to improve the regulatory landscape.”[9] However, the government did not propose any new legislation to oversee the development and use of AI systems. Instead, the White Paper set out “a principles-based framework for regulators to interpret and apply to AI within their remits”.[10] This principles-based ‘regulation’ is part of the government’s pro-innovation approach to AI, which seeks to diminish legal obligations and instead asks regulators to “consider lighter touch options, such as guidance or voluntary measures.”[11] Far from being “pro-innovation”, this approach will create an uncertain environment that is out of step with the clearer legislative approach being taken in the rest of Europe.

8.      Further, the Government is seeking to pass the Data Protection and Digital Information Bill (‘DPDI’), which threatens to further complicate the regulatory landscape for the use of AI and diminish existing safeguards for individuals. Data protection is fundamental to regulating AI but, instead of strengthening existing standards, the DPDI Bill threatens to drastically dilute them. The Equality and Human Rights Commission said that:

“We are particularly concerned about the proposals in the [DPDI] Bill as they relate to Artificial Intelligence (AI). The Bill must be read alongside the Government’s approach to regulating AI, set out in the White Paper A pro-innovation approach to regulating AI […] In the White Paper, Government set out five principles for regulating AI, including transparency and explainability, fairness, and contestability and redress. Proposals in the Bill undermine these principles.”[12]

9.      One of the key ways the Bill undermines such principles is through weakening safeguards around the use of automated decision-making (ADM). In fact, it inverts existing protections: where ADM is currently broadly prohibited with specific exceptions, the Bill would widely permit its use and only restrict it in very limited circumstances.[13] This reversal further erodes the already inadequate legal safeguards that should protect individuals from discrimination or disadvantage by AI systems. Further, the Bill weakens existing safeguards regarding the requirement for data controllers to notify data subjects that a decision has been made using ADM. These reforms will make it easier to make automated decisions about people without their knowledge and without seeking their consent, which risks exacerbating power imbalances by hiding an individual’s own rights from them. Through these changes, amongst others, the Bill will downgrade existing safeguards around AI and machine learning use, directly impacting the people who it is used to make decisions about.

10.  Governance arrangements are currently inadequate, leaving government departments without clear guidelines within which to operate. Harmful uses of AI by public bodies in the UK and internationally have led to unfair outcomes and have damaged public trust. In one case, the irresponsible use of AI in the Netherlands’ welfare system resulted in families being pushed into poverty due to false debts, children being wrongly removed from their families and ultimately led to the resignation of the Dutch government.[14] Where government uses of AI involve personal data, individuals should feel confident that their data is not being misused, and that decisions made about them are transparent, fair and open to challenge. Without proper legal safeguards, government departments using novel forms of AI to make decisions about members of the public could have unfair and damaging outcomes.

Risks and opportunities of AI adoption in government

11.  ‘AI’ encompasses a huge number of technologies, many of which could be useful tools for the public sector. The Deputy Prime Minister announced that AI would be used to “transform” government efficiency, “from correspondence to call handling, from health care to welfare”.[15] However, as the Science, Innovation and Technology Committee warned:

“[AI] should not be viewed as a form of magic or as something that creates sentient machines capable of self-improvement and independent decisions. It is akin to other technologies: humans instruct a model or tool and use the outputs to inform, assist or augment a range of activities.”[16]

12.  There is ample evidence of the current and future risks posed by AI, and in the UK harms arising from irresponsible, unethical and unlawful uses of the technology are already occurring.

Department for Work and Pensions: Bank spying

13.  At the time of writing, the Government is legislating to permit suspicionless monitoring of bank accounts through algorithmic surveillance. Clause 128 and Schedule 11 of the DPDI Bill would permit the Department for Work and Pensions (DWP) to compel banks and other third parties to trawl all customer accounts in search of ‘indicators’ of fraud and error in the welfare system.[17] Third parties will have to pass information relating to flagged accounts back to DWP, as well as information on any linked account holders, including landlords, charity trustees, or potentially even business account holders. These accounts, as well as the holder’s other personal accounts, will be eligible for DWP investigation.

14.  This process will inevitably use algorithms.[18] Third parties will have to process the data of all bank account holders and run automated surveillance scanning according to secret criteria supplied by DWP, meaning huge swathes of people’s personal information will be trawled as part of the surveillance. Furthermore, there is no oversight of the secret scanning criteria. Such a covert use of algorithms fails to satisfy any of the five principles for regulating AI set out in the Government’s White Paper: transparency, explainability, fairness, contestability and redress.[19] Most pressingly, it makes it virtually impossible to hold the responsible party to account – be it DWP, banks or other third parties – when things go wrong. This raises serious questions around transparency, as well as people’s access to remedy.

15.  The context in which these powers would be used is vital. Although the automated surveillance would impact all customer accounts, it is directly targeted at people who receive social security payments. Using algorithms to scan millions of accounts is highly likely to result in mistakes. It is in the very nature of social security that surveillance targeted at claimants will engage protected characteristics: sickness and disability benefits engage disability; pensions engage age; benefits relating to children may engage age and indirectly engage sex; and so on. The National Audit Office has warned that machine learning risks bias towards certain vulnerable groups and people with protected characteristics.[20] Algorithmic error in such a high-risk context could lead to wrongful benefits investigations. These can result in demanding documentation requirements, and failure to comply accurately and on time can lead to benefits being incorrectly withheld, potentially leaving innocent people unable to afford basic necessities such as food, medicine or heating. The Horizon scandal laid bare the risks of relying on automated systems in decision-making contexts.[21] We are concerned that the use of automated surveillance in pursuit of DWP policy aims risks replicating this tragedy in a context of nearly 23 million people, many of whom are vulnerable.

Department for Science, Innovation and Technology: Use of AI to detect disinformation

16.  Big Brother Watch’s investigation into opaque counter-disinformation units operating within government revealed that they were working with AI companies to detect and log disinformation online.[22] One unit, the Counter Disinformation Unit, now the National Security Online Information Team, is run out of the Department for Science, Innovation and Technology and continues to monitor and flag online speech which it believes breaches platforms’ terms and conditions. We are concerned that this government interference with lawful speech represents extra-judicial censorship.

17.  Given the scale of online content, the Department for Science, Innovation and Technology has contracted companies which use AI to identify harmful content. One such company is Logically, an internet monitoring firm which employs AI, open source techniques and fact-checkers that allow governments to “identify and mitigate harmful content”.[23] In reality, this led to Logically conducting trawls of online political speech, flagging content with the Counter Disinformation Unit which could not reasonably be described as ‘disinformation’. Instead, criticism of the government, its policies and specific Ministers was flagged and contained within ‘disinformation reports’, which were circulated within government. The then Department for Digital, Culture, Media and Sport spent over £1 million on contracts with Logically for these services.[24]

18.  Relying on AI to undertake complex analysis of political and social debate online is deeply troubling, particularly when outcomes can result in the censorship of content. AI is a blunt tool for content moderation, which deals with nuanced areas of speech, law and the adjudication of individuals’ rights.[25] The technology often cannot account for context or nuance, leading to mistakes. With live topics and fast-moving debates, it is incumbent on the government to refrain from overzealously labelling content as harmful or disinformation, given the risk that legitimate minority views could be mistakenly labelled and consequently removed. Given these concerns, the Culture, Media and Sport Committee recently recommended “that the Government commission and lay before Parliament an independent review of the activities and strategy of Counter Disinformation Unit within the next 12 months.”[26]

Home Office: Biometric surveillance in immigration and policing

19.  The Home Office’s approach to the use of AI-powered surveillance technology, particularly facial recognition technology, poses a serious risk to the privacy and freedom of expression of the public. Despite a lack of legislative oversight, and concerns over algorithmic bias, inaccuracy and risks to human rights, police forces have been encouraged to invest in this technology by the Home Office.[27] There are reportedly plans to establish constant AI-powered facial recognition surveillance in transport hubs, a significant expansion in the use of this technology.[28] A Home Office Biometrics (HOB) Matcher, sometimes called the Strategic Matcher, is also being built: a technology platform that will offer biometric search, identification and verification across fingerprints and facial scans for both law enforcement and immigration purposes.[29] This investment in AI-powered biometric surveillance without an explicit legal framework is deeply concerning, and takes the opposite approach to our allies in the European Union, where the Artificial Intelligence Act singles out remote biometric identification technologies for prohibitions and restrictions on their use.

20.  To date, whilst parliamentary committees have provided scrutiny and made important recommendations – such as the Science and Technology Committee’s 2019 recommendation that “the Government [should] issue a moratorium on the current use of facial recognition technology and no further trials should take place until a legislative framework has been introduced”[30] – these have not been taken forward by the Home Office. On the contrary, facial recognition technologies have been procured with public money and operationally deployed at pace in the “regulatory lacuna” the Science and Technology Committee warned of in 2019, resulting in legal challenges.[31]

21.  The embedding of AI-powered surveillance in policing and immigration is taking place with minimal safeguards and democratic oversight, and is a key example of this government’s failure to consider the rights of individuals when investing in AI.

22.  The government’s AI proposals do little to address current, well-known and well-documented risks.[32] It should be ahead rather than behind the curve, seeking to take an ambitious regulatory approach that prohibits the most dangerous uses of AI and upholds citizens’ rights. The current approach to governance, oversight and accountability puts the rights of the public at risk, and should be reconsidered.

Recommendations:

         The legal framework governing AI is insufficient. The government must establish appropriate legislation and governance frameworks that enable transparency, oversight and regulation of AI before considering whether to adopt it within the public sector. This legislation should prioritise transparency, fairness and accountability, aligning with European standards and upholding citizens’ rights.

         Personal data must not be misused or subjected to invasive surveillance practices. Individuals should have confidence that their data is being handled ethically and transparently, with mechanisms in place for redress in case of misuse or error.

         Some uses of AI are too intrusive, inaccurate or dangerous to be used in government. These uses, such as live facial recognition surveillance, should be banned.

May 2024


[1] Poverty Panopticon – Big Brother Watch, July 2021: https://bigbrotherwatch.org.uk/wp-content/uploads/2021/07/Poverty-Panopticon.pdf

[2] Biometric Britain – Big Brother Watch, 23 May 2023: https://bigbrotherwatch.org.uk/wp-content/uploads/2023/05/Biometric-Britain.pdf

[3] Ministry of Truth – Big Brother Watch, January 2023: https://bigbrotherwatch.org.uk/wp-content/uploads/2023/01/Ministry-of-Truth-Big-Brother-Watch-290123.pdf

[4] Use of artificial intelligence in government – National Audit Office, Session 2023-24, HC 612, 15 March 2024: https://www.nao.org.uk/wp-content/uploads/2024/03/use-of-artificial-intelligence-in-government.pdf

[5] Ofqual's A-level algorithm: why did it fail to make the grade? – Alex Hern, the Guardian, 21 August 2020: https://www.theguardian.com/education/2020/aug/21/ofqual-exams-algorithm-why-did-it-fail-make-grade-a-levels

[6] DeepMind faces legal action over NHS data use – BBC News, 1 October 2021: https://www.bbc.co.uk/news/technology-58761324

[7] Facial recognition use by South Wales Police ruled unlawful – BBC News, 11 August 2020: https://www.bbc.co.uk/news/uk-wales-53734716

[8] Calls for legal review of UK welfare screening system which factors in age – Robert Booth, the Guardian, 18 July 2021: https://www.theguardian.com/society/2021/jul/18/calls-for-legal-review-of-uk-welfare-screening-system-that-factors-in-age

[9] A pro-innovation approach to AI regulation – Department for Science, Innovation and Technology, GOV.UK, 29 March 2023, para 30: https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper

[10] A pro-innovation approach to AI regulation – Department for Science, Innovation and Technology, GOV.UK, 29 March 2023, para 36: https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper

[11] Establishing a pro-innovation approach to regulating AI: an overview of the UK's emerging approach – Department for Digital, Culture, Media and Sport, 18 July 2022, p. 2: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1092630/_CP_728__-_Establishing_a_pro-innovation_approach_to_regulating_AI.pdf

[12] Data Protection and Digital Information Bill, House of Lords Committee Stage – Equality and Human Rights Commission, 20 March 2024: https://www.equalityhumanrights.com/our-work/advising-parliament-and-governments/data-protection-and-digital-information-bill-house-1

[13] Data Protection and Digital Information Bill – DSIT, clause 14: https://bills.parliament.uk/publications/55222/documents/4745

[14] AI Decoded: Dutch childcare benefits scandal sends warning sign to Europe – Melissa Heikkilä, Politico, 30 March 2022: https://www.politico.eu/newsletter/ai-decoded/a-dutch-algorithm-scandal-serves-a-warning-to-europe-the-ai-act-wont-save-us-2/

[15] Seizing the Opportunities of AI in Government: Deputy PM speech to government training conference on programming, AI and data science – Cabinet Office, GOV.UK, 20 December 2023: https://www.gov.uk/government/speeches/seizing-the-opportunities-of-ai-in-government

[16] Governance of artificial intelligence: Interim report – Science, Innovation and Technology Committee, Ninth Report of Session 2022–23, HC 1769, 19 July 2023: https://committees.parliament.uk/publications/41130/documents/205611/default/

[17] Data Protection and Digital Information Bill – DSIT, clause 128/Schedule 11: https://bills.parliament.uk/publications/55222/documents/4745

[18] Third Party Data Gathering Impact Assessment – DWP, September 2023, para 69: https://assets.publishing.service.gov.uk/media/6564bab01524e6000da10168/DWP_third_party_data_impact_assessment_november_2023.pdf

[19] A pro-innovation approach to AI regulation – Department for Science, Innovation and Technology, GOV.UK, 29 March 2023, para 10: https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper#part-3-an-innovative-and-iterative-approach

[20] Report on Accounts, Department for Work & Pensions – National Audit Office, July 2023: https://www.nao.org.uk/wp-content/uploads/2023/07/dwp-report-on-accounts-2022-23.pdf

[21] Post Office scandal explained: What the Horizon saga is all about – BBC News, 10 April: https://www.bbc.co.uk/news/business-56718036

[22] Ministry of Truth – Big Brother Watch, January 2023: https://bigbrotherwatch.org.uk/wp-content/uploads/2023/01/Ministry-of-Truth-Big-Brother-Watch-290123.pdf

[23] Logically Case Studies, accessed 19 August 2022: https://www.logically.ai/use-cases; archived at: https://web.archive.org/web/20220810084637/https://www.logically.ai/use-cases

[24] See Ministry of Truth – Big Brother Watch, January 2023, p. 6: https://bigbrotherwatch.org.uk/wp-content/uploads/2023/01/Ministry-of-Truth-Big-Brother-Watch-290123.pdf

[25] For further analysis and examples see The State of Free Speech Online – Big Brother Watch, September 2021: https://bigbrotherwatch.org.uk/wp-content/uploads/2021/09/The-State-of-Free-Speech-Online-1.pdf

[26] Trusted voices, Sixth Report of Session 2023–24 – Culture, Media and Sport Committee, 12 April 2024, HC 175: https://publications.parliament.uk/pa/cm5804/cmselect/cmcumeds/175/report.html

[27] Police urged to double AI-enabled facial recognition searches – Home Office, GOV.UK, 29 October 2023: https://www.gov.uk/government/news/police-urged-to-double-ai-enabled-facial-recognition-searches

[28] How facial recognition technology has changed policing – Ben Ellery, the Times, 5 April 2024: https://www.thetimes.co.uk/article/facial-recognition-technology-changed-policing-london-met-n5m3vwng2

[29] Home Office Biometrics Programme Briefing Paper – Home Office, 17 July 2019: https://privacyinternational.org/sites/default/files/2020-08/OP1071%20-%2017072019%20Item%208.1%20LEDSHOB%20Open%20Space%20-%20HOB%20Programme%20Briefing_0.pdf

[30] The work of the Biometrics Commissioner and the Forensic Science Regulator – Science and Technology Committee, House of Commons, 18 July 2019, Recommendation 8: https://publications.parliament.uk/pa/cm201719/cmselect/cmsctech/1970/197002.html

[31] R (Bridges) v Chief Constable of South Wales Police [2020] EWCA Civ 1058

[32] A pro-innovation approach to AI regulation – Department for Science, Innovation and Technology, GOV.UK, 29 March 2023, box 3.1: https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper