Written Evidence Submitted by Trustpilot
(GAI0054)
Introduction
- The increasing use of Artificial Intelligence (AI) brings great potential and opportunity to UK businesses and society in general. However, to ensure that everyone reaps the benefits, it is vital that AI is used in a proper, ethical and fair manner, and with sufficient transparency and accountability. To this end, Trustpilot welcomes the opportunity to respond to the Science and Technology Committee’s inquiry into the governance of AI.
- Trustpilot is a leading online consumer reviews platform that brings businesses and consumers together to foster trust and inspire collaboration. Our vision is to be a universal symbol of trust. We are responding to this consultation as a company that uses AI in a range of ways to benefit both our own business, and the UK consumers and businesses who use our platform.
Machine learning at Trustpilot
- As part of our commitment to fostering trust between consumers and businesses, we use a range of automated tools, including machine learning models and algorithms. For example, to protect consumers against fraud in the form of fake reviews, which a small minority of bad actors seek to place on our platform, we use machine learning to detect fake reviews precisely and at scale. Every single review is run through our automated technology, which analyses over 128,000 reviews every day using multiple data points. Our systems process large amounts of data and analyse behavioural patterns, which we use to identify anomalies and remove fake reviews, sometimes before they even appear on Trustpilot. This allows us to enhance trust for many more people than we could without AI assistance.
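For illustration only, the sketch below shows, in Python, how behavioural signals of the kind described above might be combined into an anomaly score for a single review. The signal names, weights and threshold are hypothetical assumptions for the sketch and do not describe Trustpilot's actual models, which are not public.

```python
# Hypothetical sketch of anomaly-based review flagging. All signals, weights
# and thresholds are illustrative assumptions, not Trustpilot's real system.

from dataclasses import dataclass


@dataclass
class Review:
    reviewer_account_age_days: int    # how long the reviewer's account has existed
    reviews_from_ip_last_hour: int    # burst activity from the same IP address
    text_similarity_to_recent: float  # 0..1 overlap with recently posted reviews


def anomaly_score(r: Review) -> float:
    """Combine simple behavioural signals into a single 0..1 anomaly score."""
    score = 0.0
    if r.reviewer_account_age_days < 1:
        score += 0.3                             # brand-new accounts are higher risk
    if r.reviews_from_ip_last_hour > 5:
        score += 0.4                             # bursts suggest coordinated posting
    score += 0.3 * r.text_similarity_to_recent  # near-duplicate text is suspicious
    return min(score, 1.0)


def should_flag(r: Review, threshold: float = 0.7) -> bool:
    """Flag the review for removal or further review above the threshold."""
    return anomaly_score(r) >= threshold


if __name__ == "__main__":
    suspicious = Review(reviewer_account_age_days=0,
                        reviews_from_ip_last_hour=12,
                        text_similarity_to_recent=0.9)
    print(should_flag(suspicious))  # True: new account + burst + duplicate text
```

In a production system, such weights would typically be learned from labelled data across many more data points rather than hand-tuned as in this sketch.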
- Our approach includes humans in the loop to train, enhance and validate our machine learning models, and we also have appropriate safeguards in place, including human oversight and a redress mechanism for disputing decisions. Notwithstanding this, machine learning is at the core of our methods to improve safety on our platform and increase trust in the integrity of the online review content we host. We believe that responsible use of machine learning delivers positive benefits and, for us, plays a key role in making sure that the online reviews consumers read and rely on are genuine.
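Again purely as an illustration of the human-in-the-loop pattern described above, the sketch below routes a model's confidence score either to an automated action or to a human moderator, with any disputed decision escalated to a human. The thresholds, names and routing bands are assumptions for the sketch, not Trustpilot's actual process.

```python
# Hypothetical sketch of human-in-the-loop routing: confident predictions are
# actioned automatically, uncertain ones go to a moderator queue, and any
# disputed decision is re-reviewed by a human. Thresholds are illustrative.

from enum import Enum


class Action(Enum):
    AUTO_REMOVE = "auto_remove"    # high-confidence fake: removed automatically
    HUMAN_REVIEW = "human_review"  # uncertain: escalated to a human moderator
    PUBLISH = "publish"            # high-confidence genuine: published


def route(fake_probability: float,
          remove_above: float = 0.95,
          publish_below: float = 0.20) -> Action:
    """Route a model score to an action, keeping humans in the loop for the
    uncertain middle band."""
    if fake_probability >= remove_above:
        return Action.AUTO_REMOVE
    if fake_probability <= publish_below:
        return Action.PUBLISH
    return Action.HUMAN_REVIEW


def on_dispute(previous: Action) -> Action:
    """Redress mechanism: a disputed decision always goes to a human,
    regardless of the prior (possibly automated) action."""
    return Action.HUMAN_REVIEW


if __name__ == "__main__":
    print(route(0.97))              # Action.AUTO_REMOVE
    print(route(0.50))              # Action.HUMAN_REVIEW
    print(on_dispute(route(0.97)))  # Action.HUMAN_REVIEW
```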
- It is therefore important that governance approaches, while addressing the complex challenges involved, also recognise and enhance the benefits of AI, in order not only to accommodate innovation but also to foster and reward it.
How effective is current governance of AI in the UK?
What are the current strengths and weaknesses of current arrangements, including for research?
- One potential weakness is the lack of a clear and detailed overview of the current state of play. To deliver an effective regime, it is imperative that the existing AI governance landscape is fully understood, so that future changes can be implemented in an informed manner, with an approach that focuses first on the areas of highest risk and greatest potential impact. On the legislative side, AI is currently addressed by a patchwork of different rules, principles, regulators and bodies, which makes the landscape challenging to navigate.
- To turn this challenge into a “strength”, we would therefore suggest that a more detailed review of the existing landscape is conducted to provide a holistic picture of the current domestic and international guidance, standards and regulations that relate to AI. Such a step would enable a full appreciation of the present overlaps, gaps and contradictions, and would create a solid foundation to build on with targeted legislation, as needed.
- There is also a large range of useful best practices and suggested AI governance frameworks around the development and deployment of AI systems that businesses can draw on and adapt to their specific circumstances. These can guide the creation of, or be applied as, internal frameworks, and touch on many of the same issues that AI legislation seeks to address, such as evaluating potential risks, ensuring safety and fairness, considering the quality of training data and any potential biases, and the need for robust testing and validation. There is no shortage of useful information; the challenge is drawing it together in a way that is clear, consistent and navigable for businesses. To address this, and to fully harness the value of this information, it could be beneficial to segment it by sector or industry, helping businesses identify useful practices applicable to their specific contexts. A range of bodies could take the lead here, and direction from the government would be beneficial.
- Where guidance is provided or will be provided by UK regulators, there is an opportunity to coordinate and streamline it, and to create an effective forum alongside ongoing cross-regulator engagement and communication. As part of such a coordinated approach, it is key that regulators align on important terms and principles such as ‘fairness’ and ‘transparency’, so that a consistent approach is taken to these foundational aspects across sectors.
- Finally, a potential challenge across any and all of the guidance, standards or legislation is balancing a pragmatic approach with the inevitable administrative burden that comes with processes and documentation, especially for areas of AI that are not considered high risk. The challenge, and the opportunity, for the UK is to create a framework that is effective, useful and practical, and that supports innovation: an approach that balances formal requirements such as documentation against the need to ensure that such processes do not become an administrative exercise that hampers, rather than enhances, the innovative development and deployment of AI.
What measures could make the use of AI more transparent and explainable to the public?
- An erosion of public trust in emerging technology, particularly in recent years, provides important context for questions around AI transparency and explainability. Globally, people’s trust in businesses in 2022 is reportedly[1] higher than in traditional institutions like governments, media or NGOs, yet digital technology and black-box tools like algorithms can be viewed with suspicion rather than curiosity.
- In the UK and across Europe, several high-profile public and private sector AI- or algorithm-related mishaps have reinforced a fear-driven narrative, while for the tech sector specifically, perceived untrustworthy practices by digital platforms and Big Tech companies have helped fuel a “techlash” that solidifies and perpetuates negative views.
- It can be argued that, against this backdrop, there is a risk that the benefits of AI could be lost due to a lack of trust. There is therefore a need to rebuild and earn trust in certain types of technology, especially AI systems, and particularly at the intersection of data and AI, so that the benefits can be realised by all. This will require actively addressing people’s uncertainties, and could take a range of forms.
- We suggest that the following may be useful:
● Continue to highlight the benefits of AI, especially where it is applied in a measured way with transparency, appropriate safeguards, sufficient human oversight, and respect for the technology’s limitations. Highlight a range of illustrative case studies that look at some of the details, placing AI within useful context. Bring to light stories from some of the smaller or lesser-known players across different sectors or industries, to complement information already produced by more established actors.
● Seek to counter oversimplified, fear-based narratives. While it is relevant to understand the challenges of using AI and recognise the potential risks involved, a nuanced distinction should be drawn between higher-risk and lower-risk applications and their different potentials for harm, to avoid increasing public distrust, scepticism and fear of all types of algorithmic processing.
● Promote clarity, not just transparency. Given the diverse range of businesses that use AI, and their myriad business models and features, a tailored or even custom approach to explaining technology use may be necessary to selectively provide clear and useful information without overwhelming users. Transparency can help empower people to understand where and how algorithmic processing is being used, but volume and detail do not necessarily translate to clarity; in practice, more information is not always better. Transparency guidelines for certain sectors could be useful in providing a baseline for explaining algorithmic systems, but the context and level of complexity involved will differ in each case, so flexibility is imperative. A particular limitation for businesses in this regard is the need to protect certain information, such as details that may enable bad actors to game the system or exploit security risks. It is also important that there is room to innovate in this space, so that companies are not bound by static rules and can continue to refine and improve how they inform audiences about their AI use.
● Encourage informed dialogue on the contentious issues relevant to AI decision making. A nuanced discussion of the use of algorithmic systems, and of their benefits and limitations, is desirable in this context. Where a lack of information fuels distrust and scepticism, it can lead to the assumption that an unexpected or undesirable outcome must indicate the presence of a harmful algorithm; yet even perfect algorithms cannot accommodate simple differences of opinion on subjective matters. It should also be acknowledged that businesses often operate within a global context, across different cultures, and that even if the UK has clearly advocated for one set of principles, companies operating within a global tech space may need to accommodate multiple frameworks. Improving people’s digital literacy and familiarity with new technologies will also help set expectations and conditions for useful, constructive dialogue, and for informed critique of decisions.
How should decisions involving AI be reviewed and scrutinised in both public and private sectors?
Are current options for challenging the use of AI adequate and, if not, how can they be improved?
- A level of focus on accountability, including redress and the contestability of decisions involving AI, is sensible, given the need to build trust in the use of AI. Routes to deliver this should be set out clearly, whilst also recognising that many businesses may already provide comprehensive redress mechanisms as a matter of transparency and trust.
- In order to build public trust against a backdrop of suspicion of technology, and given the multiplicity of AI systems any one organisation in the public or private sector could use, it is important to scrutinise AI use and decision making in the round, within the context of an overall balanced approach to AI governance. This, paired with other measures such as building a base level of digital familiarity, has a role to play in securing public trust in AI. Where people have more confidence in technology, it will be easier for them to understand explanations of how AI systems work, and to better discern where challenging a decision is warranted.
How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?
- We believe the following considerations are relevant to deliver effective oversight:
● A context-driven approach is more suitable than horizontal legislation, since the absence of static rules is likely to support rather than restrict future tech innovation. Such an approach should also leave room for low-tech approaches to continue unaffected, given their limited relevance and risk.
● Regulation that is appropriate for the sector, and for the actors within that sector, will have the best chance of success. Given the vast array of companies using AI, and how varied the use cases are, a context-driven approach that enables flexibility and accommodates sector-specific or even business-specific nuances is crucial.
● Principles-based over system-specific: any future policy should take a principled approach to AI rather than a system-specific one. This would be far better suited to the many ways in which AI is used across sectors, enabling universal application and a flexible regime that can adapt to future innovation. In contrast, a rigid structure that assesses each individual decision and system in isolation would raise concerns around suitability.
● Consistency, especially if multiple regulators are involved. There may be advantages to dividing regulatory responsibility between different bodies; for example, it helps meet the need for specialised sectoral knowledge. However, this brings the challenge of ensuring consistency of policy and application between the different regulators. An extreme example would be a business within the remit of two regulators, one of which deems its use of AI low risk while the other deems it high risk. If regulators diverge significantly in their interpretation and application of policy, the result is an undesirably inconsistent application of that policy across sectors.
● Flexibility, balanced with the need for legal certainty. A flexible regulatory approach involves a trade-off with legal certainty. To help address this, it is important that, wherever possible, the direction set is as clear-cut and unambiguous as possible, to enhance legal certainty.
● Clarity on key definitions. For example, one fundamental challenge with AI governance is the difficulty in defining what AI is and is not, given that different definitions bring different types of technology within scope. Precise technical definitions are not understandable to everyone, but plain language definitions can fail to capture some of the important nuances. Definitions need to be both sufficiently clear to provide a level of legal certainty, yet flexible enough to be future-proof in light of the increasingly rapid pace of technological and digital advancement.
To what extent is the legal framework for the use of AI, especially in making decisions, fit for purpose?
Is more legislation or better guidance required?
- As stated above, an overview of the current fragmented rules, and a consolidation of best practices, is likely to be a helpful step in highlighting where existing frameworks and guidance are sufficient, and where gaps can and should be addressed, in a proportionate way, by legislation.
- In particular, areas of AI use that are considered high risk may be more appropriately addressed via legislation, whereas other sectors or industries may be sufficiently covered by the current diverse array of guidance.
What lessons, if any, can the UK learn from other countries on AI governance?
- The UK should compare, contrast and draw learnings from the differing approaches that various jurisdictions have taken to AI governance and legislation.
- While many national governments have recognised the need for AI that respects clear values and ethics, and have therefore developed AI strategies or plans, the EU is notable for creating one of the first specific legal frameworks for AI. Positives of the EU’s draft AI Act include its risk-based approach, which ensures the most significant risks are addressed from the outset while allowing low-risk AI systems to adopt standards voluntarily as relevant. On the other hand, challenges with the EU approach include the fact that the original definition of AI proposed in the draft Act was particularly broad and risked capturing low-tech systems. The EU’s strong focus on preventing harm could also hamper innovation, and proposals to introduce provisions regulating general-purpose AI systems carry a particular risk of stifling the collaborative development of open-source software.
- The UK has an opportunity to take a lighter, more flexible approach to AI than the EU: one that addresses gaps in the current regulatory framework and consolidates best practices to help guide the actors concerned, such as businesses. Drawing on learnings from the EU, the UK’s AI governance framework should target high-tech (rather than low-tech) AI and avoid bringing non-AI systems within scope. It should prioritise tackling the highest-risk areas first, in order to have a real and tangible impact and improve consumer confidence in AI. The UK’s approach should be specific and targeted, avoiding blunt tools, and might incorporate a consistent cross-regulator approach that allows sector-specific expertise to be applied. Overall, it is critical that the adopted approach is, in practice and not just in theory, supportive of current innovation and flexible enough to adapt to future innovation.
November 2022