Written Evidence Submitted by Wayve

(GAI0061)

Introduction to Wayve

Wayve is a London-based British startup pioneering a scalable way to bring automated vehicles (AVs) to the UK and beyond. Wayve’s unique ‘AV2.0’ approach – driven by data and AI – enables our fully driverless technology to learn and adapt its 'driving intelligence' to any city or vehicle type, much like how people learn to drive, but with the benefits of greater reliability and safety. Our Automated Driving System (ADS) is trained on vast amounts of data to continually improve driving performance and to adapt to changes on the road. We are demonstrating the promise of our AI-powered technology through daily testing on public roads across the country, using electric cars and light commercial vans. We are designing our technology to be sustainable – our AVs will always use EV platforms – and scalable: it is our ambition to be the first to deploy in 100 cities globally.

Major businesses recognise the potential of our technology, both here in the UK and for export globally, demonstrating the huge opportunity for foreign investment that AVs offer. We have signed commercial partnerships with Ocado Group, Asda and DPD, and this year raised $200 million in Series B funding, bringing our total equity raised to over $258 million since our inception in 2017. Our investors include world-leading UK institutions and angel investors such as Balderton Capital, Firstminute Capital, Baillie Gifford, Virgin, Microsoft and Zoubin Ghahramani. Our last-mile delivery trials began this year in London.

 

Overview

 

Wayve welcomes the pro-innovation and sector-specific approach to governing AI that the UK Government intends to adopt. AI is an evolving, cross-purpose technology that should be regulated according to its use. Regulation must therefore be devised in situ by sector experts in consultation with industry, such as the Vehicle Certification Agency (VCA), the Driver and Vehicle Standards Agency (DVSA) and AV developers in the case of AVs.

 

Regulators must be equipped with the AI skills they need to devise sector-specific measures. Regulators such as the VCA and DVSA must be equipped with the technical skills to assure an ADS which uses machine learning, for example. Building teams with the skills to assure the safety of AI systems is not trivial. Regulators should be supported with Government-led schemes to ensure that they have access to these sought-after skills.

 

The Government’s Connected and Automated Mobility (CAM) 2025 roadmap details plans for regulation that will require the machine learning used in AVs to be assured for safety. We welcome the alignment between the cross-sectoral principles and the aims of this sector-specific regulation, including measures to ensure safety and to clarify liability in incidents involving AVs, and we look forward to working with the VCA and DVSA.

 

Any additional guidance should ensure there is clarity for regulators and avoid creating confusion between regulatory frameworks. AI is unlocking huge benefits across the UK economy and society. It is being employed in a range of applications, including revolutionising mobility, improving health and reducing emissions. We must ensure that regulators across sectors, from transport and energy to health, are equipped with the tools to regulate AI.

 

AI Governance and Regulation

 

Wayve welcomes the Government’s intended approach of regulating based on the use of AI rather than the technology itself, thereby avoiding a detailed, universally applicable definition of AI. Supporting regulators to consider the core characteristics and capabilities of AI, and defining it at the level of individual applications, will give regulators the flexibility they need. This is crucial to avoid unnecessary burdens on businesses that are catalysts for the UK economy – the Government recognises that the AV market alone is predicted to be worth £41.7 billion by 2035 and to create up to 38,000 UK jobs[1]. Cross-sectoral regulation could ultimately make it harder to realise the benefits of these technologies and, most importantly, some of the biggest risks in the use of AI will be missed if systems are examined by non-domain experts outside their intended domain.

 

Regulators are already putting in place context-based frameworks for regulating AI, and paying due regard to the risks that the Government’s proposed cross-sectoral principles aim to address. Consider the CAM 2025 roadmap for example. Taking the proposed principles in turn:

 

Ensure AI is used safely. Safety will be central to the future authorisation scheme for AVs. As the CAM roadmap argues, “safety is the first priority for the development and introduction of self-driving vehicles.”[2] Wayve has submitted its feedback on the Government’s proposed safety standard for AVs, and looks forward to feeding into discussions around the National Safety Principles. In future, to authorise self-driving vehicles for commercial on-road use in the UK, an Authorised Self-Driving Entity (ASDE) will need to submit a Safety Case to the VCA, and a key part of this will be demonstrating the safety of any machine learning used in the ADS, within wider vehicle safety.

 

Ensure that AI is technically secure and functions as designed. The extent to which AVs “reliably do what they intend and claim to do” will also be central to the future authorisation scheme for AVs.[3] As set out in the CAM roadmap, the Government intends to take context into account when approving AVs for use on UK roads, noting that self-driving vehicles will likely operate within a defined operational domain. The Government will also take a proportionate approach, reserving the power to set “deployment conditions” for AVs which allow them to operate under certain conditions, but which do not prevent overall authorisation.[4]

 

Make sure that AI is appropriately transparent and explainable. The Government acknowledges in the CAM roadmap that the ability of a vehicle to “explain” its actions will be a fundamental component of in-use regulation, and has committed to working further to understand how explainability requirements will impact deployment. Wayve would welcome work to develop explainability requirements that are based on reaching an acceptable level of confidence in our systems, including clarification of the audience – whether public or specialist – to which an ADS will be expected to be explainable. The suggested transparency requirement of data sharing is also expected to be part of the in-use regulatory scheme for AVs: an ASDE and No User In Charge Operator (NUiC Operator) may be required to share data with the proposed Road Safety Investigation Branch (RSIB).

 

Embed considerations of fairness into AI. The Government’s proposed broad duties for an ASDE include making an assessment of fairness, including considerations of data bias. Wayve is keen to work with the Government to understand these requirements further.

 

Define legal persons’ responsibility for AI governance. As noted above, the Government intends to establish new legal actors in legislation which will be liable for incidents and disputes involving AVs. Wayve looks forward to working with partners to understand the liability requirements for AVs in more detail under the new safety framework.

 

Clarify routes to redress or contestability. Wayve expects the in-use regulator to have powers to issue redress orders, as well as to impose a range of other regulatory sanctions.

 

In Practice

 

Any additional guidance issued by the Government must maintain clarity for regulators who are already implementing frameworks for regulating AI technologies with regard to the risks that the cross-sectoral principles aim to address.

 

The success of the sector-specific system will rest on regulators having the skills and expertise to regulate AI in context. For example, the VCA must be supported in building the expertise to assure an ADS, elements of which are frequently based on machine learning.

 

Wayve welcomes the VCA’s commitment, in its 2022-23 Business Plan, to take steps to understand how it can approve and certify an ADS that employs AI and machine learning, and stands ready to support the VCA’s Automated Vehicle Technology Group. Yet other regulators and agencies, from the Medicines and Healthcare products Regulatory Agency to the Drinking Water Inspectorate, must also be supported as they assess the safety of new AI-based technologies.

 

Support from the Government, such as the extension of the Regulators’ Pioneer Fund, which has allowed regulators to grow an understanding of the impacts of AI, will be essential. The Government should also establish rotations or secondments to and from industry, to help the diffusion of skills. These measures will ensure that regulators are able to respond effectively to new developments in an emerging AI economy, ensuring the benefits of this multi-purpose technology accrue to the UK.

 

Transparency and Explainability

 

Wayve cautions against the Government’s recent assertion that “opacity regarding decision-making and corresponding lack of public trust” are potential regulatory impacts.[5] We believe that the decisions made by our AVs will be explainable to a level that can provide confidence about our systems. It may even be easier to ascertain the sequence of events leading up to an incident involving a Wayve AV than a traditional vehicle.

 

Wayve AVs will record 360-degree video data, which could help accident investigators and those involved in any incident to determine liability effectively. It will also be possible to interrogate the behaviour of the ADS and determine whether any of its actions led to an incident. Importantly, beyond the behaviour of the ADS, it will also be possible to determine whether defective software or a system failure caused an incident.

 

We expect the assignment of liability in the event of incidents to be supported through the creation of new legal entities, including an ASDE, a User-in-Charge (UIC) and a NUiC Operator, as part of the Government’s proposed regulatory framework for AVs. We expect these entities to be established in upcoming legislation.

 

 

(November 2022)

 


[1] Department for Transport, ‘Connected and Automated Mobility 2025’, 2022. https://www.gov.uk/government/publications/connected-and-automated-mobility-2025-realising-the-benefits-of-self-driving-vehicles.

[2] Department for Transport, ‘Connected and Automated Mobility 2025’, 2022. https://www.gov.uk/government/publications/connected-and-automated-mobility-2025-realising-the-benefits-of-self-driving-vehicles.

[3] Department for Business, Energy & Industrial Strategy, Department for Digital, Culture, Media & Sport, Office for Artificial Intelligence, ‘Establishing a pro-innovation approach to regulating AI’, 2022. https://www.gov.uk/government/publications/establishing-a-pro-innovation-approach-to-regulating-ai/establishing-a-pro-innovation-approach-to-regulating-ai-policy-statement.

[4] Department for Business, Energy & Industrial Strategy, Department for Digital, Culture, Media & Sport, Office for Artificial Intelligence, ‘Establishing a pro-innovation approach to regulating AI’, 2022. https://www.gov.uk/government/publications/establishing-a-pro-innovation-approach-to-regulating-ai/establishing-a-pro-innovation-approach-to-regulating-ai-policy-statement.

[5] Department for Business, Energy & Industrial Strategy, Department for Digital, Culture, Media & Sport, Office for Artificial Intelligence, ‘Establishing a pro-innovation approach to regulating AI’, 2022. https://www.gov.uk/government/publications/establishing-a-pro-innovation-approach-to-regulating-ai/establishing-a-pro-innovation-approach-to-regulating-ai-policy-statement.