Written Evidence Submitted by Which?
(GAI0049)
Which? welcomes the opportunity to respond to the Science and Technology Committee’s call for evidence for the governance of artificial intelligence (AI).
AI is already transforming consumers' experiences in a variety of markets, bringing with it positive opportunities including greater choice of products and services and increased personalisation. Which? welcomes the government's ambitious ten-year plan for the UK to remain a global AI superpower as part of the National AI Strategy and recognises the cross-border benefits of a pro-innovation approach to AI.
However, there are risks of consumer harm associated with the development and proliferation of AI. Of particular concern to Which? is where algorithmic bias enters into decision making and where a lack of transparency could lead to consumers feeling their privacy has been violated. As the UK’s consumer champion, Which? wants consumers to benefit from AI’s advance, while ensuring their consumer rights are robustly protected from its current and potential future risks, in which governance plays a key role.
We would like to respond to the following two key themes in the call for evidence:
● Making AI transparent and explainable
● Regulatory oversight of AI
○ Current
○ Future
We recommend:
● Ensuring that appropriate cross-sectoral principles, including transparency and explainability, are enforced on a mandatory, not voluntary basis.
● That regulators make clear for both businesses and consumers where to seek redress and which entities are responsible for providing redress.
● That an advisory group of AI experts is set up to work with regulators to provide expert advice and ensure that regulation is harmonised between sectors.
The Centre for Data Ethics and Innovation (CDEI) found that “when asked to describe their feelings about the future of AI, the dominant responses from consumers are those of caution, confusion, worry, and concern… That said, many also express ambivalence or a lack of knowledge”.[1] Which? would like to highlight that consumer ambivalence can lead to bad business practice going unnoticed or worse, becoming the status quo. Consumer confusion and lack of engagement is not a valid reason for light touch regulation or unclear redress mechanisms. Consumer protections should be at the heart of AI governance in order to increase consumer confidence in the technology.
In January 2021, the AI Council[2] published a roadmap with 16 recommendations for how the UK can develop a revised national AI strategy. Which? agrees with its statement that “The UK will only feel the full benefits of AI if all parts of society have full confidence in the science and the technologies, and in the governance and regulation that enable them.” We urge the government to reflect this ambition in its overall approach to AI by ensuring that it is sufficient to build consumer trust in AI technologies.
Making AI transparent and explainable
Our 2021 research into consumers’ views on automated decision making and cookies found that the lack of transparency of AI systems is a critical issue, and many consumers feel that there are already elements of AI that are very opaque. Some consumers believe there should be a moral obligation to disclose when AI is being used; others want the option to opt out of its use in decisions made about them, or to be given a choice over which data is used in autonomous decisions.[3] This shows that consumers are conscious and wary of the power AI can have over their product and service choices.
AI applications can be considered against a scale of potential harms to consumers. For example, tailored search results could limit the range of choice a consumer has, but an AI decision on an insurance, mortgage or other financial product would have a significant real-life impact. Transparency regulations should reflect this scale, ensuring consumers are clear where AI is being used, solely or in part, to make a decision.
In the same Which? research, participants viewed AI as mainly benefiting companies, such as in efficiency savings and increased productivity. There were prominent fears from participants about what data AI was analysing and they doubted whether it could be trusted to be fair. One participant wrote at length about their concerns in this area:
“Are they identifying people and helping them to save money with their best interests at heart? Or are they hiding behind the anonymity provided by technology in order to exploit these people for profit?... If we are aligned to speaking truthfully and acting and communicating honestly I believe we are headed for great things. If not, we decline. I feel that both are going on and it is sometimes hard to know which organisation is truthful and which organisations again exploit whatever, to try to appeal to more people, or get more money.”
This quote articulates the importance of companies explaining what consumer data is used in AI, and how. This information is vital to support consumers in challenging decisions or actions which they deem unfair. Which? believes that further research is needed to discover whether consumers understand how previously consented data, or data deemed open (such as social media posts), can affect automated decision making.
Which? notes that Canada’s Artificial Intelligence and Data Act (AIDA)[4] includes a set of transparency requirements. Under these, the person who makes a high-impact AI system available for use must publish a plain-language description of the system, including how the system is intended to be used and the types of content that will be used to generate decisions. Which? believes this will increase consumer trust in how their data has been used, as well as allow organisations to better understand the AI they use and assess how AI models should be integrated into their processes. We recommend further research is conducted in this area.
Current regulatory oversight of AI
Our research participants felt the ability to challenge decisions made using AI was a right, not a privilege, and that its removal would be “dehumanising”. They believed that there was “no consumer benefit” to removing the right to challenge a decision made by AI and anticipated a range of risks, such as discrimination, financial impacts, the loss of privacy and potentially the loss of freedom and justice.[5] This is an important consideration for current legislative oversight: without significant support and guidance, consumers will be unsure which regulator takes precedence when they are in conflict with a business over a decision made by AI.
We are aware that AI systems often cross multiple industries and regulators in single user journeys and intended outcomes. Where a consumer harm occurs, the investigation into it may need to span several regulators. The current landscape is complex for consumers, and we note that the CMA’s research paper into algorithms[6] acknowledges the opacity of systems and operations, which makes it hard for consumers to effectively discipline firms and seek redress. Which? questions the ability of regulators to communicate redress routes clearly to businesses, who can then pass them on to consumers, making it easy to report harms.
Which? is also concerned about the dominance of Big Tech in the development of consumer-facing AI, where walled-garden data sources and decision-making code are classed as intellectual property, and about whether regulators apply sufficient governance scrutiny to this on behalf of consumers. The CMA research paper also pointed to algorithmic systems being used by dominant firms to deter competitors from challenging their market position, for example by featuring listings that are more profitable for a particular platform. This has the potential to create a vicious circle of AI dominance, preventing growth and trapping consumers.
Future regulatory oversight of AI
Which? believes that whilst the current legislative protections via the Data Protection Act 2018, UK GDPR, EU equality law and the Consumer Protection from Unfair Trading Regulations 2008 are clear on what consumer rights are regarding AI, the development of the technology has outstripped the expertise and resources of regulators and the ability of businesses to confidently remain compliant.
We urge the government to consider setting up an advisory group of AI experts as a governance and scrutiny support for regulators. In the EU AI Act, a specialised and independent body of technical AI experts is proposed to assist with the technical aspects of an investigation at national or EU level and to issue non-binding opinions about specific cases brought up by the national authorities.
A similar approach in the UK would ensure that AI regulation has a clear level of authority for scrutiny and challenge. Which? is concerned that, via the regulatory route, nuanced challenges could be lost where the consumer does not know which regulator is responsible, or where the AI system spans multiple regulators.
Which? recognises the government’s intention, through the DCMS policy paper published in July 2022,[7] to encourage diversity, growth and innovation with a contextual and light-touch approach to AI regulation. The paper did not describe in detail what “light touch” would mean in practice. However, Which? is concerned that the lack of reputational risk attached to voluntary principles will lead to a high risk of non-compliance that will threaten consumer wellbeing.
Guidance and voluntary measures have previously proved insufficient to generate compliance in certain markets. The government’s Product Security and Telecommunications Infrastructure (PSTI) Bill is an example of how previous voluntary guidelines on the security of smart products failed to have the expected market impact.
A robust regulatory framework for AI governance with mandatory cross-sectoral principles, sufficient powers given to regulatory bodies and a board of experts to provide advice would help to ensure consumers have the right protections in place while allowing for innovation. We note that the EU AI Act does not currently include a collective redress mechanism and the UK government must make this a priority for consumers.
About Which?
Which? is the UK’s consumer champion. As an organisation we’re not for profit - a powerful force for good, here to make life simpler, fairer and safer for everyone. We’re the independent consumer voice that provides impartial advice, investigates, holds businesses to account and works with policymakers to make change happen. We fund our work mainly through member subscriptions. We’re not influenced by third parties – we never take advertising and we buy all the products that we test.
(November 2022)
[1] “Public attitudes to data and AI: Tracker survey (Wave 2)”, Centre for Data Ethics and Innovation, November 2022, https://www.gov.uk/government/publications/public-attitudes-to-data-and-ai-tracker-survey-wave-2/public-attitudes-to-data-and-ai-tracker-survey-wave-2
[2] The AI Council is an independent expert committee made up of representatives from government, academia, the third sector and big tech, https://www.gov.uk/government/groups/ai-council
[3] “The Consumer Voice: Automated Decision Making and Cookie Consents proposed by ‘Data: A new direction’”, Which?, November 2021, https://www.which.co.uk/policy/digital/8426/consumerdatadirection
[4] “An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts”, Parliament of Canada, https://www.parl.ca/legisinfo/en/bill/44-1/c-27
[5] “The Consumer Voice: Automated Decision Making and Cookie Consents proposed by ‘Data: A new direction’”, Which?, November 2021, https://www.which.co.uk/policy/digital/8426/consumerdatadirection
[6] “Algorithms: How they can reduce competition and harm consumers”, CMA, March 2022, https://www.gov.uk/find-digital-market-research/algorithms-how-they-can-reduce-competition-and-harm-consumers-2021-cma
[7] “Establishing a Pro-Innovation Approach to Regulating AI”, DCMS, July 2022, https://www.gov.uk/government/publications/establishing-a-pro-innovation-approach-to-regulating-ai/establishing-a-pro-innovation-approach-to-regulating-ai-policy-statement