Written evidence submitted by Professor Jack Stilgoe, UCL, Professor Alan Winfield, UWE, Dr Chris Tennant, UCL, Professor Peter Jones, UCL, Professor Graham Parkhurst, UWE and Dr Sally Stares, City University of London (SDV0009)

 

Driverless Futures? ran from 2019 to 2022, funded by the Economic and Social Research Council. The project (https://driverless-futures.com/) was a collaboration between UCL, UWE Bristol and City, University of London, with researchers from science and technology studies, robot ethics and transport studies. In addition, Jack Stilgoe has been working with the Centre for Data Ethics and Innovation and the Centre for Connected and Autonomous Vehicles to advise on ethical rules for AVs, following the Law Commissions’ review. He is a co-investigator on the new Responsible AI for Long-term Trustworthy Autonomous Systems (RAILS) project, funded by the Engineering and Physical Sciences Research Council.

 

We welcome the Committee’s new inquiry and would be happy to help. In this submission, we present those project findings that seem most relevant.

 

The research on which this submission is based includes a large public dialogue exercise (report here: https://sciencewise.org.uk/projects/connected-and-autonomous-vehicles/; a further public dialogue exercise funded by CCAV is ongoing), 50 expert interviews with developers and stakeholders, and a survey of more than 4,000 members of the UK public (supplemented by a small sample of experts taking the same survey (n=80) and a sample of the US public (n=1,800); full reports here: https://driverless-futures.com/2022/05/09/survey-reports/).

 

Our response uses the headings provided in the call for evidence:

 

 

The US has led the way in self-driving vehicle (SDV) development, and cars have been the priority vehicle type. The developers’ priority has been the technical question of how to get a vehicle to drive itself rather than the possible use cases for the technology. Public views should be at the heart of good policy for SDVs. Our survey asked people to engage with the possibilities of SDVs as potential users but also as citizens.

 

There is public scepticism that the technology would bring widespread benefits. 71% of UK respondents agree that the companies developing the technology will be the primary beneficiaries, and only 13% agree that poor people will benefit more than the rich. Respondents tend to think that the technology would make more sense for public transport than for private vehicles. Of seven suggested transport modes, only self-driving buses (shuttle or regular buses) and small delivery pods are considered useful by a majority, while self-driving taxis, self-driving private cars, car clubs and self-driving lorries are more likely to be rated not useful than useful. This preference for public transport deployment of the technology is weaker amongst the US respondents, who also tended to rate the seven modes of deployment more highly.

 

 

In 2015, the Council for Science and Technology recommended the establishment of a ‘real-world lab’ within the UK.[1] At the same time, the Government published a code of practice for testing AVs.[2]  A 2017 Act of Parliament sought to ‘ensure the next wave of self-driving technology is invented, designed and operated safely in the UK’. The Government’s Centre for Connected and Autonomous Vehicles has enabled a series of trials to take place in the UK. UK start-ups such as Wayve, Five AI and Oxbotica have tested their technologies on public roads. Meanwhile, trials in some parts of the US, notably Phoenix and San Francisco, have expanded rapidly. Waymo and Cruise have turned their trials into de facto deployments. Waymo now have more than 700 vehicles equipped for self-driving. In the US, some Tesla customers have been enrolled in a poorly regulated trial of their latest software – FSD Beta – with little guidance and few rules governing its use. In a 2022 earnings call, Tesla claimed that 100,000 drivers had access to the software, which had driven more than 35 million cumulative miles.[3]

 

 

The dominant narrative surrounding self-driving vehicles is that they will drive like humans (but more safely), on roads and with signage currently designed to be human-readable. The reality is that SDVs will have multiple dependencies and conditions that define their safe operation.[4] SDVs require, and will continue to benefit from, particular road types, well-maintained traffic signs and road markings, smart infrastructures and high-definition digital maps. In the future, as SDVs prove themselves in easy road environments, pressure will be exerted on other places to upgrade their infrastructures to accommodate the vehicles. In deciding on upgrades, infrastructure planners will need to consider whether such investments will provide wider benefits across modes of transport, or whether the benefits will accrue disproportionately to developers and users of SDVs. There are particular issues for low-technology modes of road use, notably walking and cycling, which cannot easily be brought into the vehicle-to-vehicle connectivity scenarios that some mobility futurists imagine.

 

 

The recommendations of the Law Commissions have gone some way towards building a framework for moving from testing to deployment of systems. This will require new regulatory functions, which should form part of proposed UK legislation in 2022/3. A clear majority of our survey respondents favour external regulation: 73% support regulation by national governments and 81% support international standards, while only 41% support the idea of technology companies regulating themselves. However, there is a widespread lack of confidence in these organisations to make decisions about SDV introduction and regulation (52% express medium or high confidence in international standards bodies; 43% in national governments; 51% in technology companies).

 

In addition to the engineering and legal questions surrounding liability and safety assurance, there remains an important political question: how safe is safe enough? Some technology developers have presumed that their target should be to make their systems safer on average than human drivers. However, average improvements in, for example, deaths and injuries per million miles are unlikely to convince those who are harmed by incidents involving SDVs. There is a need for more social research on the acceptability of risk. Evidence from other transport modes suggests that users and bystanders may want systems in which they rely on professional operators (as with buses, trains and aircraft) to be as much as 100 times safer than conventional driving. And, in the absence of independently verified historical data, establishing benchmarks for safety will not be straightforward.[5] Qualitative social research will also be required on the types of risk associated with SDVs and the perceptions of fairness and trust that underpin public views.

 

In our survey, we asked UK respondents ‘How safe do you think self-driving vehicles should be?’: 61% wanted a high bar for safety (either ‘Much safer than the safest human driver’, or ‘Never causing a serious collision’), while 18% set a low bar (either ‘As safe as the average human driver’ or ‘It doesn’t matter’). When asked the open question, ‘what first comes to mind when you hear the term SDVs?’, 44% mentioned safety concerns. When asked an open question on why the technology should be developed, only 12% suggested safety benefits.

 

A forthcoming report for the Centre for Data Ethics and Innovation will help fill out principles and approaches for the safety assurance and ethical governance of SDVs.[6]

 

Much of the governance discussion has focussed on short-term questions of liability and safety. Most technology developers are focussed on getting their systems to work and on demonstrating safety. Questions of business models, use cases and fit with other transport modes are being deliberately postponed. The default business model is the ‘robotaxi’, in which a fleet can be controlled by a single company, but the economics of this model look unstable.[7] If self-driving vehicles are the answer, what’s the question? Who is likely to benefit from these technologies? What problems do they solve? What else will be required to build systems that help places meet their transport needs?

 

Our research suggests that the public can articulate their views on SDV safety in various ways – not only thinking about outcome measures such as accident statistics, but also about the characteristics of SDV behaviours and how these might make other road users feel. Survey respondents tend to express a desire for SDVs to behave predictably (only 17% agree that SDVs should be allowed to break the formal rules of the road in some situations, though 42% agree that human drivers sometimes need to do this) and to drive more cautiously than human drivers (77% agree they should be programmed as such; 66% reject the idea that SDVs’ fast reaction times should give them licence to drive more closely than human drivers do). Members of the public overwhelmingly want to know when they are encountering an SDV: 86% of public survey respondents agree that it must be clear when a vehicle is driving itself, in contrast to only 46% of our expert survey respondents. Worries about the safety of ‘Level 3’ handover scenarios, well documented by human factors researchers, are echoed in our public survey research, with 69% of drivers and 66% of non-drivers saying they would be worried that SDV riders would not be able to react quickly enough if asked to take control.

 

In addition to the analysis described above,[8] there are important questions about on-road interaction, particularly with vulnerable road users, and about the rules of the road, which have received little attention.[9] Our survey respondents tend to express the desire for other road users to be able to ‘communicate’ with SDVs to negotiate interactions, and opinions are mixed on how safely and well SDVs would handle everyday scenarios such as passing a cyclist or negotiating a zebra crossing. Technology developers’ attempts to reduce such questions to algorithmic calculations (e.g. Intel’s Responsibility-Sensitive Safety) are technically and politically problematic.

 

 

There is a clear need for additional regulatory functions, which Government recognises. Government and its agencies will also need to maintain a watching brief as the technology scales to new places and use cases. Many of the scenarios that have been drawn for future use are speculative and take the technology (in the simplified form that is advertised by its developers) for granted.[10] Such scenarios and their implications for congestion, pollution, safety etc. hinge on assumptions about whether SDV use will substitute for car use, public transport or active travel. Despite early enthusiasm that SDVs would contribute to decarbonising road transport (one of the largest contributors to carbon emissions), scenarios suggest either improvement or exacerbation depending on patterns of use, types of vehicle etc. So-called ‘second order’ effects (e.g. how future cities and lifestyles evolve around SDVs)[11] are radically uncertain and contingent upon future policy.

 

There is often an assumption that SDVs will be used communally, in the manner of a shared taxi, and that this will lead to significant reductions in car ownership and overall car kilometres. But, outside inner-city neighbourhoods, there is no substantive evidence that this will be the case. More than one transport demand and supply modelling exercise has indicated that, even with maximal sharing, traffic and congestion would be slightly worse, whereas anything other than maximal sharing would significantly increase total car kilometres and urban congestion, as people shift from active travel and public transport and as vehicles run empty to pick up other passengers or to park.[12] How this would work for suburban and rural areas – where the majority of UK citizens live – has not been given much consideration.

 

The current narrative of ‘autonomous’ vehicles suggests that the relevant change will be the replacement of human drivers with computers. In reality, SDVs are far from autonomous. They have multiple relationships with the world around them, including, most clearly, infrastructure and other road users, relationships that will proliferate as the technologies scale.[13] Policy can and should play an important role in shaping the technology and its environment so that SDVs can contribute to alleviating rather than exacerbating social problems.

 

 

August 2022

 

Endnotes


[1] Council for Science and Technology, Letter to the Prime Minister on how the UK can get the greatest value from the autonomous and connected vehicles industry. 23 July 2015

[2] Department for Transport, 2015, The pathway to driverless cars: A code of practice for testing

[3] https://tesla-cdn.thron.com/static/EIUQEC_2022_Q2_Quarterly_Update_Deck_J8VLIK.pdf 

[4] See Tennant, C., & Stilgoe, J. (2021). The attachments of ‘autonomous’ vehicles. Social Studies of Science, 51(6), 846-870.

[5] See Stilgoe, J. (2021). How can we know a self-driving car is safe? Ethics and Information Technology, 23(4), 635-647. A study from Peng Liu and colleagues with a small sample of public respondents (n=499) suggested a “tolerable risk” threshold of four to five times as safe as human-driven vehicles and a “broadly acceptable risk” threshold of 100 times safer. See Liu, P., Yang, R., & Xu, Z. (2019). How safe is safe enough for self-driving vehicles? Risk Analysis, 39(2), 315-325.

[6] This piece of work has Jack Stilgoe and Professor John McDermid from the University of York as expert advisers.

[7] Nunes, A., & Hernandez, K. D. (2020). Autonomous taxis & public health: High cost or high opportunity cost? Transportation Research Part A: Policy and Practice, 138, 28-36.

[8] Stilgoe, J. (2021). How can we know a self-driving car is safe? Ethics and Information Technology, 23(4), 635-647.

[9] Tennant, C., Neels, C., Parkhurst, G., Jones, P., Mirza, S., & Stilgoe, J. (2021). Code, culture and concrete: Self-Driving Vehicles and the Rules of the Road. Frontiers in Sustainable Cities, 122.

[10] Cohen, T., Stilgoe, J., Stares, S., Akyelken, N., Cavoli, C., Day, J., ... & Wigley, E. (2020). A constructive role for social science in the development of automated vehicles. Transportation Research Interdisciplinary Perspectives, 6, 100133.

[11] See, for example, https://www.ben-evans.com/benedictevans/2017/3/20/cars-and-second-order-consequences

[12] Parkhurst, G. and Seedhouse, A. (2019). Will the ‘smart mobility’ revolution matter? Chapter 15 in: Docherty, I. and Shaw, J., eds. Transport Matters. Bristol: Policy Press, pp. 359-390. Available from: https://uwe-repository.worktribe.com/output/845686

[13] Tennant, C., & Stilgoe, J. (2021). The attachments of ‘autonomous’ vehicles. Social Studies of Science, 51(6), 846-870.