Written Evidence Submitted by RELX

(GAI0033)

 

 

Introduction

 

RELX welcomes the opportunity to respond to the Science and Technology Committee’s inquiry into the Governance of AI. The inquiry is timely given the government’s recently published policy paper on AI regulation and the white paper anticipated next year.

 

RELX plc is a UK-based global provider of information-based analytics and decision tools for professional and business customers, enabling them to make better decisions, get better results and be more productive. We operate across a range of sectors, including financial services, science, technology, medicine, healthcare and energy. We employ around 36,000 people, including nearly 6,000 in the UK, and support customers in 180 countries. We utilise the latest technology to help our customers improve their decision-making: helping scientists make new discoveries, doctors and nurses improve the lives of patients, and lawyers win cases; preventing online fraud and money laundering; and helping insurance companies evaluate and predict risk.

 

We are both a significant user and developer of the latest technologies, including AI. We invest approximately £1.5 billion annually in technology and employ around 10,000 technologists across our business. The combination of our rich data assets, technology infrastructure and knowledge of how to use next-generation technologies, such as machine learning and natural language processing, allows us to create effective solutions for our customers.

 

As a technology business we see the benefits AI offers in improving our offerings to customers, who in turn use our solutions to make better decisions. We are also cognisant of the risks of AI, and of wider concerns about the development of new technologies and the impact they may have on both the economy and society more generally. These include the impact of AI solutions on humans, the potential presence or exacerbation of bias, transparency and explainability concerns, and access to redress.

 

We take these challenges seriously. We recently published the ‘RELX Responsible AI Principles’, which provide guidance to those within RELX working on designing, developing, and deploying machine-driven insights. They build on pre-existing policies and processes, and we expect them to evolve over time based on customer and employee feedback. They demonstrate our ongoing commitment to Responsible AI and can be found here.

 

The committee’s inquiry is an important part of assessing the evolving approach to the governance of AI in the UK. As the committee will be aware, the UK government has issued policy statements about its intended approach but has yet to finalise its plans for AI regulation. We hope that our responses to your questions assist the committee in providing input into that process. We would be delighted to support your inquiry further if we can be of any additional assistance.

 


Responses to the Committee’s questions

 

1)      How effective is current governance of AI in the UK?

 

The UK has a long-held reputation for developing proportionate, coherent, and effective regulatory frameworks which tackle risk and protect citizens’ rights, while encouraging the adoption and development of new technologies.

 

It is through this lens that we should view the UK’s current approach to AI. It is often claimed that AI is unregulated in the UK and elsewhere. That is not the case. The UK, like other jurisdictions, has numerous pre-existing regulations which are relevant to AI. These have evolved over many years to mitigate risks and protect rights.

 

Latterly there have been significant advances in AI technology, and these need to be understood and addressed appropriately. However, many of the challenges and risks associated with the technology have existed for many years. For example, bias and discrimination are not new phenomena. The difference with AI is the scale of this risk, rather than the nature of the risk itself.

 

The UK’s current approach to governing AI has largely depended on applying the existing regulatory infrastructure and standards to AI applications. The UK should maintain this approach and regulate AI in a context-specific and sector-based manner, rather than seeking to establish a new specific horizontal regulatory approach to AI. Experienced regulators are best placed to respond to emerging risks within their domains.

 

2)      How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?

 

We believe that a context-based and sector-specific approach to AI regulation is the best way to secure the benefits AI offers to the economy and society, while mitigating the real risks that can be associated with its adoption.

 

As a company which operates across numerous sectors including financial services, insurance, healthcare, legal, agriculture and aviation, we see that context is key when it comes to determining risks. That is why, when we established our own Responsible AI Principles, we decided that while these would apply consistently across the Group, responsibility for detailed implementation and the determination of risk would sit within the individual business divisions.

 

The context in which an AI solution is delivered has a material impact on its risk level. For example, automated decision-making, whereby a human has little to no involvement in the system, might be considered high risk in one sector but may not be in another. Similarly, risks within different sectors vary. Healthcare might generally be considered a high-risk sector for AI applications, but there will still be instances of low-risk AI being used in a healthcare setting. For example, an AI system which autonomously makes a clinical decision is clearly a high-risk application of AI. However, a framework in which multiple AI systems provide support to help a human physician decide is quite different in terms of risk level.

 

Precision and expertise are required to determine where the use of AI may lead to unacceptable or damaging risks. That is why it is right to allow existing regulators, who have developed their domain knowledge, to be responsible for identifying those risks.

 

This approach should also have the benefit of avoiding duplication. One of the biggest challenges of introducing new specific regulatory frameworks for AI is that they cut across existing systems, creating compliance and policy difficulties.

 

Trusting individual regulators to tackle the risks of AI does not mean there is no role for government. There are several key actions that government must take to ensure that a sectoral system of regulation is effective, including:

 

 

The UK government published a policy paper in July 2022. The paper, ‘Establishing a pro-innovation approach to regulating AI’, set out an approach which would be based on existing sectoral regulation, be context-specific, and be underpinned by cross-sectoral principles. RELX agrees with the approach set out in this document. That said, there are some challenges with the proposed approach, albeit ones which we believe can be managed through the adoption of the following enhancements:

 

 

 

 

As noted, it will be important for the government to ensure that wider policy and regulatory frameworks, which influence organisations’ ability to deliver effective AI systems, are also managed carefully. In some areas of policy, such as data protection, there is a case for streamlining to enhance the capacity of organisations to use data to improve their governance approach. In other areas, such as copyright, we recommend an approach defined by stability and the avoidance of tinkering:

 

 

 

However, the UK’s reputation as a world leader in the protection of copyright is currently put at risk by a proposal to introduce a new exception to copyright which would allow the copying of content for the purposes of text and data mining (TDM) for any use, including commercial use. If enacted, this new exception would prevent rightsholders from issuing licences to interested corporate entities to access and perform TDM and related AI development on their content. This would go further than the current exception for non-commercial research and undermine the incentive to produce content for commercial use, including use in AI systems.

 

The proposal has been put forward as a pro-innovation policy that would benefit the development of AI in the UK. As a company that develops and invests in AI, we believe the proposed exception would lead to the opposite outcome. It would significantly disrupt the market for data and content, likely increasing the cost of accessing data for training AI and reducing consumer choice in a content market for AI use that has been developing at pace in recent years. It would drive investment in AI technologies into jurisdictions where such volatility does not exist, the exact opposite of the desired effect. We also believe it would contravene the UK’s international obligations under key IP treaties and would make the UK an outlier in weakening IP protections.

 

Beyond specific regulatory approaches, the UK has also been taking welcome non-regulatory measures on AI governance. These include the work of the Centre for Data Ethics and Innovation on AI assurance and algorithmic standards. RELX welcomes efforts to develop AI standards which can help address concerns about AI.

 

3)      What measures could make the use of AI more transparent and explainable to the public?

 

RELX is strongly committed to both transparency and explainability as they relate to AI and other data science initiatives. RELX’s Responsible AI Principles include a commitment to explain how our solutions work and to maintain an appropriate level of transparency, depending on the application and users involved.

 

It is important to remember that AI is highly context-specific, and as such the level of transparency and explainability required will vary according to the unique nature of the AI application in question. Different contexts and audiences require different explanations, and the level of transparency needed should be considered as part of the design process of an AI system. At RELX we are implementing an ‘explainable-by-design’ approach across our businesses. This is an ongoing and iterative process, with different parts of the organisation taking different approaches.

 

Recognising that transparency and explainability are highly contextual is an important part of helping the public understand and trust AI systems. It is important that different audiences receive a suitable level of information or explanation. For example, AI systems used in a health setting would need to provide different information or explanations to a clinician compared to a patient. Similarly, different uses within healthcare may require different responses.

 

There is no one-size-fits-all approach to transparency or explainability, and it would be a mistake for government to try to create one. Doing so would likely erect unnecessary barriers to entry for some AI applications and could undermine trust in AI if a blunt approach led to incorrect or unnecessary information being presented to end users. Instead, government should focus on establishing principles and adaptable standards relating to transparency and explainability, for example by further exploring ‘explainable-by-design’ principles.

 

4)      How should decisions involving AI be reviewed and scrutinised in both public and private sectors?

    1. Are current options for challenging the use of AI adequate and, if not, how can they be improved?

 

AI offers significant benefits to the economy and society and can assist with important public interest objectives. At the same time there are understandable concerns about the impact AI can have on individual rights. Redress and accountability must therefore be at the centre of AI systems to ensure they are trusted and allowed to work effectively.

 

At RELX we recognise that accountability of AI systems, particularly those that can have a significant impact on people’s lives, is very important. That is why accountability and human oversight feature in RELX’s Responsible AI Principles.

 

RELX’s technology is often used to assist our customers in making better-informed decisions in areas such as financial services and healthcare. We recognise that those decisions are likely to have an impact on end users, even if we have no direct relationship with them. We therefore believe that an appropriate level of human oversight is necessary throughout the lifecycle of our solutions. This is core to ensuring the quality and appropriate performance of our solutions. We also work closely with our customers to ensure that our solutions are governed by an agreed set of terms and conditions to avoid unintended use.

 

We have taken this approach based on what we believe is right for our customers and end users, given the contexts our solutions are used in, rather than in order to comply with a specific piece of regulation. It is important that legislative environments are sufficiently permissive to allow companies to develop their own responsible practices.

 

5)      To what extent is the legal framework for the use of AI, especially in making decisions, fit for purpose?

    1. Is more legislation or better guidance required?

 

RELX’s tools and analytics are used by our business and professional customers to assist with and improve their decision-making. While we do not make these decisions for our customers, we acknowledge the impact our tools may have on citizens. Our Responsible AI Principles make clear that we recognise our responsibilities go beyond our direct customers and that we must consider the real-world impact of our solutions on people. This is part of our wider company mission to benefit society by developing products that help researchers advance scientific knowledge; doctors and nurses improve the lives of patients; lawyers promote the rule of law and achieve justice and fair results for their clients; businesses and governments prevent fraud; consumers access financial services and get fair prices on insurance; and customers learn about markets and complete transactions.

 

By understanding the context in which a particular solution might be used, and how individuals might be affected, we are able to consider the impact of our products on people. This enables us to create trustworthy solutions in line with our company values.

 

To the extent that additional guidance is needed in this space, it must acknowledge that the context in which decision-making solutions are deployed is key. There is no one-size-fits-all approach that will suitably address the risks associated with AI decision-making systems. Various factors must be considered, including, but not limited to: whether the AI system is automatically making a decision or contributing to a human decision; how many systems are feeding into a decision and what weighting is given to a particular AI system in that process; what impact the decision is likely to have on an individual; and what access to redress the individual has if adversely affected by the decision.

 

AI offers significant opportunity to improve human decision-making, but it is important for those developing such systems to recognise the risks involved. Any policy intervention by government must recognise that context is key and that the influence or impact of AI in a decision-making process can vary significantly.

 

6)      What lessons, if any, can the UK learn from other countries on AI governance?

 

It is a fascinating time to be considering how the UK should govern AI. RELX operates across multiple jurisdictions, with customers in over 180 countries. It was in part our international footprint that led us to publish RELX’s Responsible AI Principles, as these speak to how we approach product development using machine-driven insights at a global level.

 

Given our global reach, we have been monitoring the development of differing AI governance mechanisms around the world. Countries are asking similar questions, but they are arriving at different answers, and over different timeframes.

 

One of the key factors we look at when considering differing approaches to AI governance is how compatible they are with one another. For global companies such as RELX, highly diverging regimes can be considerably more difficult to navigate than regimes which share common themes and approaches. Helpfully, we would see the approach recently set out by the UK government as supporting our current governance procedures, although of course this will largely depend on how individual regulators here approach their duties.

 

More generally, we support and encourage international collaboration on AI policy and governance as far as possible. The UK’s role within international groups such as the Global Partnership on AI, and more established fora including the OECD and the UN, is crucial. Bilateral discussions with countries that are also considering how to approach AI regulation will also be important. Through these engagements the UK has an opportunity to influence the global discussion on AI regulation in a way which promotes compatibility with the UK’s approach, delivers high standards and protections, and allows the UK to act as a bridge between different systems.

 

(November 2022)