

Transport Committee

Oral evidence: Self-driving vehicles, HC 519

Wednesday 8 March 2023

Ordered by the House of Commons to be published on 8 March 2023.


Members present: Iain Stewart (Chair); Mike Amesbury; Mr Ben Bradshaw; Jack Brereton; Ruth Cadbury; Paul Howell; Karl McCartney; Grahame Morris; Gavin Newlands; Greg Smith.

Questions 188–232


I: Lisa Johnson, UK Director of Public Affairs, Starship Technologies; Dr Siddartha Khastgir, Head of Verification and Validation for Connected and Autonomous Vehicles, Warwick Manufacturing Group; and Peter Stephens, Policy and External Affairs Director, Stagecoach.

II: Ed Houghton, Head of Research and Service Design, DG Cities; Simon Morgan, Chair of Traffic Signs Panel, Institute of Highway Engineers; and Christian Wolmar, author.

Written evidence from witnesses:

Starship Technologies

Warwick Manufacturing Group


DG Cities

Institute of Highway Engineers

Examination of witnesses

Witnesses: Lisa Johnson, Dr Khastgir and Peter Stephens.

Q188       Chair: Welcome to today’s session of the Transport Select Committee where we are continuing our inquiry into self-driving vehicles. Before we start the questions, I invite the panel to introduce themselves, stating their name and organisation, for the purposes of our records.

Peter Stephens: My name is Peter Stephens. I am the policy and external affairs director for Stagecoach.

Lisa Johnson: Good morning, everyone. My name is Lisa Johnson. I am the director of public affairs for Starship Technologies.

Dr Khastgir: Good morning, everybody. I am Professor Siddartha Khastgir from WMG, University of Warwick. I also sit on the Department for Transport’s Science Advisory Council. I was the lead author of the first international standard for highly automated vehicles.

Q189       Chair: Thank you for your time and for taking our questions this morning. I will start with our two operators. Can you give us a little explanation of how your vehicles work? I have the advantage of Starship delivery robots, which are currently trundling around the streets of my constituency in Milton Keynes. I have on my phone an app with which I can select my groceries. I can select what time they are delivered and what tune the robot plays when they arrive—everything from “Happy Birthday” to “Baby Shark”, and various things in between.

Lisa Johnson: I apologise for “Baby Shark” being on your phone.

Q190       Chair: For my colleagues’ benefit, can you explain what then happens and how that delivery vehicle works in practice?

Lisa Johnson: We are a business-to-consumer delivery mode, with a personal delivery device or robot. Our little robots are just above knee height and about as wide as a pedestrian. You go on to the app, as the Chair described, look up what retailers are offering goods to you, choose what you want and pay. Our little robot will wend its way to you, and you unlock it with your mobile phone.

We have done 4 million of these deliveries so far globally and travelled over 1.7 million miles in the UK alone, crossing a road or driveway every three seconds. We do not have a warehouse full of stuff; we move stuff. We are the delivery device that brings goods to people’s residential addresses.

Q191       Chair: Can you say a little more about how it works? It works on the pavement or on what in Milton Keynes are called redways—segregated cycle and pedestrian lanes. They are not on the public road, other than when they cross it. Can you say a little more about how it operates in interacting with pedestrians, bikes and street furniture?

Lisa Johnson: We operate on the pavements. I always say that we behave like a cautious pedestrian. We are at level 4 autonomy, so our robots are autonomous 98% of the time, backed up by a remote assistant. If you order something, our robots travel down the pavement at an average speed of 3 mph, going up to 4 mph, in built-up areas. They have in-built obstacle avoidance technology, as you would expect from something that is autonomous, and will avoid anything in their path. We are built to be super-cautious.

When we go somewhere, it’s not like we say, “Hey, guys. The robots are here. Here’s robots.” We do a very sophisticated mapping exercise before we go. The robots are autonomous, but only within the confines that are set by us. We map where we go and tell them where they are allowed to cross and what confines they are allowed to be autonomous within. That helps us if there are problems locally. We can say, “Maybe there’s an accessibility issue here. We’re not going to take that route any more. We’re going to close it down.” If we cross on a bridleway and that is causing problems to horse riders, for example, we say, “We’re not going to do that any more.” As far as the robot is concerned, that route no longer exists.

Milton Keynes is a use case that is somewhat different from other areas because we have the redways. We operate in Northamptonshire, Cambridgeshire and Bedford. We have just launched in Leeds. We will be looking at Greater Manchester next week. We have the redways in Milton Keynes, but we operate in other areas, too. We also operate in Finland, Estonia and lots of places in the United States.

If you are walking down the street, the robot will trundle along very slowly, especially if it sees things around you. It takes a cautious approach and always tries to avoid everything in its path. It will navigate around street furniture and just wend its way. Essentially, they become normalised, as in Milton Keynes. People treat our little robots as if they are part of the community. When we were talking outside, we said that they treat them as pets or small children that they have adopted; they talk to them and help them out if they get stuck.

They avoid anything in their path. They go very slowly around things they are cautious about. If they have a problem, they call on a remote assistant to help them to figure out what is around them, but 98% of the time they are autonomous and just wend their way from the shop they are taking the goods from to the person’s house and then go on to the next order.

Q192       Chair: Mr Stephens, can you tell us a little about your autonomous bus?

Peter Stephens: We did a project a few years ago that looked at depot operations. It was in our depot, looking at when buses come in in the evening. Those buses would then be driven to be fuelled, go through the bus wash and be parked. The interest for us was that it was a high-traffic area, and if we could take people out of what is potentially a health and safety environment, we could see some safety benefits.

The project we are working on at the moment is called CAVForth, which is looking at a local bus service running from a park and ride site through to Edinburgh Park, a route of about 14 miles. It is a regular bus service, so ultimately it will be a licensed local bus service. We have not started that bus service, but it will do all the things that a local bus service would do—pick up passengers, drop them off and navigate to their bus stops.

We have a couple more projects in the pipeline that were announced earlier in the year and got funding. They are looking at slightly different use cases. One of them is a city centre shuttle. The other is looking at an on-demand service around an industrial park, linking into a transport hub. We are looking at how autonomous or self-driving technologies can help us with local bus services.

Q193       Chair: Will the Edinburgh example be operated on a public road, or is it in a segregated lane?

Peter Stephens: It is a public road. It is a public bus service. It will interact with other road users.

Q194       Chair: In that case, will it have an operator on board, in the same way as on the Docklands Light Railway? That is normally operated automatically, but there is someone on board, should there be an issue.

Peter Stephens: For the purpose of this trial, there will be two on-board staff. You will have a safety driver, who will be in the cab and ready to intervene. That is a regulatory requirement. It is a safety requirement.

The other thing we are piloting is the role of what we call a bus captain. Longer term, when you have a driver who is freed up from having to be in the cab, what other functions can they operate? We are very clear that our drivers do a brilliant job on a variety of things. It is not just about the vehicle operation. It is about giving customers the right ticket and handling the revenue side of things. It is about helping them on and off the bus. It is about giving them information about where they are going. It is also a public safety role. We see that as being really important. One of the things we want to do as part of this trial is to understand what that role will look like when you have the driver freed up from the cab. How can they then support bus passengers more effectively?

Q195       Chair: But for the initial stages of this, there will be someone in the cab, with a steering wheel, all the pedals and everything else?

Peter Stephens: Yes.

Q196       Chair: Does the liability still lie with the human being, ultimately?

Peter Stephens: Within the current regulatory framework, yes.

Chair: That is helpful.

Q197       Mike Amesbury: Good morning, everybody. Siddartha, what are the biggest practical and technical challenges to the widespread adoption of self-driving or autonomous vehicles and technology?

Dr Khastgir: The biggest challenge we have right now is the public perception of safety of this technology. We can have the safest technology at a technological level, but if we cannot convince the public that it is safe and get them to trust the system, they will never use it, so we will never reap the benefits of the system and these technologies.

The research that we and other universities in the UK and internationally have done has shown a direct correlation between trust and safety of the technology. We need to work a lot on getting people to accept and trust the technology. A key aspect of that is that we need to take a true systems-thinking approach to technology development, where people are part of the system. We should not develop the technology in isolation and just hope that people will accept it. We need to bring people along as part of the technology development process, to understand what the requirements are and how to get them to trust these systems. That is at a societal level and is the first challenge we have.

Once we capture that, the biggest challenge at the technological level is trying to establish the operating conditions of the technology itself. For example, you might say, “I have a self-driving vehicle that can operate on UK motorways,” but that is actually a very bad definition of an operating condition. Does it mean that you can operate during the day, at night and in all weather conditions—the snow or rain? We need to have a very detailed definition of the operating conditions of the technology itself. At a technological level, the term we use for this is operational design domain—ODD. We need a very detailed definition of the ODD and to do it in such a way that the end user understands it, so that they can use this technology correctly.

To do that, we have come up with the concept of what we call informed safety. We want people to understand the limitations as well as the capabilities of the technology. That is very important when it comes to safety. You can establish safety, but you also need to communicate the limitations of the technology to the end user.

Q198       Mike Amesbury: Both Lisa and Peter referred to ambitions to roll out particular products further afield in the UK. Is that being hindered by any of the challenges Dr Khastgir has referred to?

Lisa Johnson: Social acceptance is really important. We spend a lot of time on it at Starship because it matters. You cannot just turn up somewhere and say, “Here’s autonomous technology. You’re having this. It’s happening in your community.” We work closely with communities on making it work. Everywhere we go, we have had such a positive response. The response is not, “We don’t like this.” It is, “When can you come to my area?” We have people travelling into our operational areas because they want to use the robots, especially with their kids, because kids love them and they sing “Happy Birthday.” Who wouldn’t love that?

Our biggest challenge, honestly, is that we have more places wanting us than we can get to at the moment. The issue is regulation. At the moment, we are operating in a grey area legislatively, especially because we operate on pavements. We are at level 4 autonomy, with no user in charge. We are desperate for regulation. We will invest in the UK. We will create the jobs. We do some great work in schools. We have a Starship schools programme, where we talk to kids about STEM. I tell you what—it’s really easy to get kids interested in science when you have a robot, especially when it’s singing “It’s Coming Home” during the World Cup.

We have lots of stuff that we can do with social benefit. We want to be everywhere. We want to bridge the digital divide. We want to make sure that tech and innovation is for everybody, not just for certain people. To do that, you have to be everywhere, but we can’t. We can deliver anything, basically. We do small amounts of shopping, in a way that is environmentally and ethically sustainable. It is a niche that other people cannot really fill; it is very difficult to deliver £5-worth of shopping to someone’s home ethically and sustainably.

We are crying out for regulation because we want to expand. We want to be everywhere. We want to bring tech to the towns and cities, and to bring some rural solutions for people who struggle with delivery, but we have to get regulation. As a business, we are in the weird situation of saying, “Please regulate us.” That is our problem. Investors will not have confidence until they know what the regulatory conditions look like. We need clarity, consistency and certainty in order to be able to invest.

Peter Stephens: From our point of view, there are four things that need to be true. That is the way I would put it. First, it is around the customer experience. These technologies need to offer a better customer experience—a smoother, safer experience. Ultimately, our customers have a choice. We will have to convince them that this technology is safe, that it offers something for them and that it is a better experience. We have to take those customers with us.

The second thing is around the safety case. We believe this technology has the potential to be safer. That is something we want to understand from the trial as well, in terms of avoiding accidents. The third thing is that we need to understand what the business case is. What are the costs and benefits? We think there may be some benefits around fuel savings and, as I said, around safety. We need to understand that.

The final thing we need to understand is what it is like to operate these services practically. What is it like in terms of reliability on a day-to-day basis? How do you maintain the vehicles? Given how many miles our vehicles do every single day, we need to make sure the technology is able to cope with that.

Those are the four things in which we are particularly interested. Those are the reasons why we are running these trials and some of the things we are looking for from them, to understand more about what the technology can offer.

Q199       Mike Amesbury: Siddartha, with existing self-driving and automated technology, how easy is it to transfer from one mode of transport to another?

Dr Khastgir: That is a very interesting question. We believe there are quite a few research questions that are common across the different transport modes when it comes to autonomy, be it land-based systems, aviation or marine. Over the past 12 months, we have been working with lots of stakeholders in those three domains, and with the regulators of those domains, to come up with a common safety assurance methodology that we can adopt across the three domains. There will always be some questions that are very specific to aviation, very specific to marine and very specific to land, but there are aspects of policy, regulation and research that we can make common across the three domains.

For example, if we take the concept of operating conditions definitions, every mode of transport, be it aviation, marine or land, will need to define accurately the operating conditions of the automated system. The approach to doing that could be similar across the three domains. There is a lot that the three transport domains can learn from one another if we can create co-ordinated activity across those domains. We have been, by definition, very siloed in the three transport domains—land, air and marine—but autonomy provides us with an opportunity to bring them together so that they can learn from one another and so that we can make better use of Government resources and budgets.

Q200       Mike Amesbury: Is that the future? Do we expect that all vehicles will be autonomous, self-driving vehicles?

Dr Khastgir: I would like to think that is the utopian future, but I don’t think it is going to happen in the short to mid-term.

Lisa Johnson: We like to say that we are the future of delivery today. I think you will be looking at multimodal solutions for some of this stuff. I have described what we do. With a booming on-demand sector, where people want small amounts of stuff really quickly, is there a way of delivering that ethically and sustainably when you want to pay a fair wage and make sure that people are looked after? That is where we come in.

There will be things that humans should always do. Potentially, there will be environments that humans will always be the best people to navigate. If you are on a high street with really heavy footfall, our little robots will take a long time to get anywhere because they are super-cautious and every time something comes near them they will stop. If you have ordered fish and chips but you are going down Oxford Street, you will be waiting a while for them and they will probably be a bit cold by the time they get there because we are super-cautious. That is the approach we take.

I think it is the future—or potentially the now, if we can get the regulation right—but it will be multimodal. We don’t see it as, “The robots are coming; they’re going to take over everything.” It will be appropriate in certain situations. As autonomy gets better, there will be more situations where it is appropriate, especially at times of labour shortage—for example, some of the stuff you can do with autonomy in fields and fruit picking, where jobs are not necessarily desirable or profitable. There is a huge use case for some of these things. I do not think we should write humans out of the picture entirely, but there are certain cases where it is not even necessarily the future; it is something we should be looking at right now.

Peter Stephens: From our point of view, the technology is promising, but for the foreseeable future you will need to have a bus driver on board because they fulfil a wider role. We are entering these trials to understand better what the business case is and how we can meet the needs of our customers, but for the foreseeable future we think there will need to be a driver because we have a very high safety bar that we have to meet.

Q201       Chair: Before I pass over to Greg, can I pick up Lisa’s point about the need for better regulation? Are you looking at nationally set regulations, or is it more locally determined? What is appropriate for the streets of Milton Keynes will be very different from what is appropriate for a cathedral city, where the streetscape is very different. What balance do you see between national and local?

Lisa Johnson: That is a really important question and debate. There should be a national framework. There needs to be some sort of national regulation that underpins local decisions. We have invested a lot of time and energy in our safety case. For example, our head of safety and risk comes from civil aviation, so the processes we have would not be uncommon for an airline. That is how seriously we take safety. If we do not have regulation, what is to stop somebody who is not as safe as us coming into the market?

As Siddartha said, people are looking at us. If there is an accident with a car, people expect that and it is not really news. If there is an accident with an autonomous device, it is headline news and it suddenly becomes a reason why we cannot do this thing. We are really conscious of safety, but we are not sure that everybody will be, so we want some basic national standards that underpin what we do and make people feel comfortable and say to them that this is a regulated industry that is safe.

That said, we operate on pavements, which are a very grey area. We do not want a free-for-all, and we do not think local authorities or residents want a free-for-all. Local authorities know what is best for their areas. The ultimate decision making can be underpinned by a national framework that says what is safe, but we should let local authorities decide what is best for their local area. That would work for us, and it would work for communities, because at least then it is locally accountable. If people don’t like it, there is a very easy way to fix it for them locally. It brings the decision making much closer to the people it impacts.

Chair: I think colleagues will want to return to that point later. That is a helpful clarification.

Q202       Greg Smith: I have a brief supplementary on the point Lisa made about expanding to rural. I am very familiar with the robots in Milton Keynes—my constituency borders Milton Keynes—but my constituency is full of small country lanes. When I see the robots scuttling along the grid system on the redways in Milton Keynes, I cannot see how they would suddenly transfer on to a single-lane rural road that goes between villages where you could not get two cars past each other. You probably could not get a car and a robot past each other. How would it be feasible, without someone’s fish and chips being not just a bit cold but stone cold by the time they got through, to make that transition rurally on wheels, rather than by putting drones in the air or whatever it may be?

Lisa Johnson: We have to be realistic. I could sit here and say that we are perfect and can do everything. In that situation, maybe we can’t. We want to do what is appropriate and what works.

For rural communities, at the moment we are looking at a place in Cambridgeshire where we are up against that exact challenge. The problem is not really distance; it is the very narrowness of the pavements. People are saying, “Please come. We want the robots,” but we are looking at it and thinking, “The pavement is quite narrow. We can do this area, but if there is something oncoming, how do we square that circle?” Part of it is working with the local community so they understand how the robots are going to behave. We can be perfect, but people need to understand what the robots are going to do. We can be as perfect as you want, but humans are going to get involved and that will change the situation.

We want to do rural. As battery life gets better, we can do rural. We can go 18 hours backwards and forwards on our battery life at the moment. It is a good, low-carbon solution to some of this stuff. We mostly deliver top-up shopping. Your fish and chips might get cold, but that’s fine; we might be delivering your newspaper, your milk and your bacon and eggs for your Sunday morning breakfast, which we can do.

The biggest challenge we have is that if you only want two robots operating in a rural setting to deliver around a relatively confined area, what is the charging environment? We do not yet have the infrastructure nationally for them to go and charge themselves or plug in. There is an infrastructure challenge. We can do rural and we are working on it, but it comes with its challenges. It is a lovely challenge that we would like to overcome. It is the sort of thing we are working on trying to do, but we cannot do it until we scale. It is a thing we will meet once we are able to scale and the unit economics work for us to be able to do it.

Greg Smith: I think the Chairman should declare what song he chooses on his deliveries.

Chair: That is covered by a D-notice.

Q203       Paul Howell: I declare that I have been to see Starship as well, so I am very familiar with what you are talking about. I am in a rural-type context. I have mining villages on hills, with different challenges. Again, there is a potential in the same space. But this is not an advertising game for Starship today. We need to get into other things.

I want to start with Dr Khastgir and the steps that can be taken to make sure these vehicles are safe. To me, to be brutally honest, it should be easier to make a small pavement vehicle travelling at 3 mph safe than a bus going through the middle of a pedestrianised area or something like that. There are different routes to getting it safe.

We listened to some discussions yesterday where people talked about the different ways of designing how the thing works, whether it is using mapping or whether it is trying to look out, like a person would, to see what is around. What do you think should be done by these sorts of people to make sure the technology is safe?

Dr Khastgir: Again, that is a very interesting question. We believe you can take a similar approach to safety assurance whether you are trying to prove that Starship robots are safe or whether you are trying to prove that Stagecoach buses are safe. You can take the same approach; it is just about the level of rigour that would be required in the accepted level of safety.

The accepted level of safety for Starship robots, and the number or type of scenarios in which they would be used, are very different from the scenarios you would test a Stagecoach bus against. At one level of abstraction, the approach to safety assurance—trying to prove safety—can be the same. That is what our ambition should be. We do not want to create bespoke safety frameworks for different use cases. That would confuse industry. We want to create, at a level of abstraction, a framework that is the same for all types of use cases. If we do it like that, what would be specific for Starship or for Stagecoach would be their operating conditions definition. You were just having a conversation about the fact they can operate in Milton Keynes, but that it is likely to be challenging on single-lane rural roads. That is essentially a definition of the operating conditions.

Q204       Paul Howell: Does this come back to what you called an operational design domain? You are focusing on it from that side, rather than the technology side.

Dr Khastgir: Exactly. Regulations need to focus on that level of abstraction, as otherwise you will need to start creating regulations for every technology. We can take some inspiration from the work that has happened internationally. The European Commission has published a regulatory act, which was adopted in August last year, that is also taking a technology-neutral approach—an operational design domain-based approach—to proving that a system is safe. That would provide you with inputs on the kinds of scenarios you should be using, how many scenarios you should use and so on.

Q205       Paul Howell: Is there anything you want to elaborate on at this point regarding the concept of operational design domains? Do you think we have covered them enough?

Dr Khastgir: The only thing I would like to mention here is that you could define the operational design domain—essentially, the operating conditions that your technology can handle—at various levels of detail. You could very easily say, “My operating conditions are UK motorways,” or you could say, “My operating conditions are UK motorways that have hard shoulders and sunny days only.” There are different levels of detail.

What we are trying to suggest from a regulatory perspective is that we should go for detailed operating condition definitions, because no technology can handle everything. You cannot say, “I can handle all types of rainfall and all types of snowfall.” One key message I would have on the operating conditions definition is that you should go for a detailed definition. There should be some level of regulation or mandating required for a minimum level of detail when it comes to operational design domain definitions. That needs to be done in a manner that not only the regulator but the user understands, because they should be able to use it in the correct manner, based on that definition.
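The layered ODD definitions described above ("UK motorways" versus "UK motorways with hard shoulders, daytime only") can be made machine-checkable. The sketch below is illustrative only: the field names and limits are assumptions for the sake of the example, not any real taxonomy or Starship/Stagecoach definition.

```python
from dataclasses import dataclass

# Illustrative sketch of a machine-readable operational design domain
# (ODD). All fields and thresholds here are assumed for illustration.

@dataclass(frozen=True)
class ODD:
    road_types: frozenset          # e.g. {"motorway"}
    requires_hard_shoulder: bool
    max_rainfall_mm_per_h: float   # heaviest rain the system can handle
    daylight_only: bool

def within_odd(odd: ODD, road_type: str, has_hard_shoulder: bool,
               rainfall_mm_per_h: float, is_daylight: bool) -> bool:
    """Return True only if the current conditions fall inside the ODD."""
    return (road_type in odd.road_types
            and (has_hard_shoulder or not odd.requires_hard_shoulder)
            and rainfall_mm_per_h <= odd.max_rainfall_mm_per_h
            and (is_daylight or not odd.daylight_only))

# A detailed definition: motorways with hard shoulders, light rain, daytime.
motorway_odd = ODD(frozenset({"motorway"}), True, 2.5, True)
print(within_odd(motorway_odd, "motorway", True, 1.0, True))   # True
print(within_odd(motorway_odd, "motorway", True, 1.0, False))  # False: night
```

The point of the detailed form is exactly the one made in evidence: a vague ODD ("UK motorways") cannot be checked against current conditions, whereas a detailed one can be evaluated, and communicated, condition by condition.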

Q206       Paul Howell: You have answered my next point. It is exactly that. If you start talking in detail, how do you make sure the users are appropriately aware and understand it? You have covered that quite nicely.

I will turn the coin a bit now and talk to Lisa and Peter about the steps they have taken to ensure their technology is safe. I will put specific questions to each of you on that. Some things have been covered already.

Peter, I want you to elaborate slightly on things you have already touched on. If you have self-driving, how do you make sure your more vulnerable passengers are safe and get everything they need? It will probably flow from what you said about the fact there will still be a person on the bus, but is there anything else you see fitting into that?

Peter Stephens: Earlier in the year, we took our first passengers on the service. Some of the feedback was that it was just like a normal bus. It was perhaps a bit smoother. Everyone was hoping that it would be whizzo and starshippy, but actually it was a regular bus.

Paul Howell: Everything should be starshippy and whizzy.

Lisa Johnson: I wouldn’t object.

Peter Stephens: I wasn’t trying to do a plug.

It is about making sure that it is a smooth service. One of the things we want to understand through the pilots is what it is like for those who require extra assistance. One thing I would add, and I suspect you may get on to it with the second panel, is that roadside infrastructure is important. We can make our services accessible—buses are one of the most accessible forms of transport, if not the most accessible—but there are challenges to do with where bus stops are located. Are they lit? Are they safe? Can you draw up to them easily? The point is that partnership is needed. It is not just about the involvement of Government, operators and technology; we also need to work with local authorities, which have responsibility for both transport and highways.

Q207       Paul Howell: That leads nicely to a point I am going to bring to the other two panellists in a second. In terms of your determination of that safe environment, and establishing the different situations people would experience, how diverse is your workforce?

Peter Stephens: In terms of—

Paul Howell: You talk about vulnerable people and people with accessibility needs. Do you have those people in your design teams, looking at the problems you are trying to face?

Peter Stephens: We are working with user groups to get that user input. In terms of our staff, our drivers have to meet certain medical criteria, but we are keen to get accessibility input through the user groups and the trials. We want to make sure we are getting feedback on what it is like for those who require extra assistance.

Q208       Paul Howell: Lisa, I want to ask you the same question about your understanding of the different population groups you are impacting, but I would also like you to touch on what interactions you have experienced between your robots and people, and what changes have evolved from that. Are there geographical or environmental differences? Could you then talk a little about your use of focus groups or whatever to support your approach? Then I will come to Dr Khastgir to wrap it up.

Lisa Johnson: Because of the legislative grey area in which we operate, when we go to an area we do it with the permission of, and in partnership with, the local authority, so even if we did not want to be safe, it is not an option. We take it incredibly seriously. In terms of regulation, we are not trying to go on motorways at 80 mph. We are low speed and low weight, and if an accidental impact happened the kinetic energy transfer would be equivalent to something like 58 joules, which is like someone knocking into you when you are walking down the street. That is an aspect of it, but I will set that to one side, because we do not really do that.

I spend a lot of my time with accessibility groups and older people’s forums. We speak quite regularly to the RNIB. Before we set up, we did pilots with, for example, Guide Dogs UK to make sure the robots and the guide dogs got on okay. They just treat each other like another obstacle and don’t take much notice of each other. We changed the flag design because the RNIB said our initial flag was too pale and did not stand out for people with visual impairments. Alongside everything we do, we review every incident or near incident; we look at them and say, “What can we learn from that?” We have machine learning and AI, so our robots get safer with every interaction, which means that after 4 million deliveries and a road crossing every three seconds they are pretty safe. We have learned quite a lot, doing what we do; but we have what we call an accessibility pipeline.

We avoid anything in our path because we have inbuilt obstacle avoidance technology and will steer out of people’s way, but we want to put belt and braces on top of that. Our robots now recognise mobility devices. In the vast majority of situations, if there is a problem with a mobility device, our robots know, “Oh no, this is an emergency situation for us.” We do not want to impact on people who have an accessibility need. It will escalate straight to the top of the remote assistance customer call list. It will be: “Sort this out really quickly, because this is a serious situation.” We are working on that with a range of scenarios. We are working with the British Horse Society to make sure robots understand horses. It is not something that comes up very much, if I am honest. We might face it in Cambridgeshire, but we will not face it as much in Greater Manchester, but we have to have those interactions for the robots to be able to learn. In terms of safety, we avoid everything; then we can add layers and layers on top to build the safety case even further and, as I say, we review it.

On interactions, we have 90-year-old customers, as well as kids who see the robots at school and want to order. A Kinder egg was one we got in Leeds the other day; there was one Kinder egg trundling down the street to a kiddie. The parents wanted it as a surprise when they got home from school. The interactions are really positive. One of the things that has surprised me is that a quarter of our customers—just over 25%—have a disability or live with someone who has one. Neurodiverse kids especially seem to have taken to the robots. We do not really understand it yet. We are doing some more work on that. The interactions we have are overwhelmingly positive in those terms, but it comes down to the work you do beforehand, integrating your technology in a community, having conversations and having somebody readily available on the ground or in the area when people have questions. Beforehand, there will be people in Starship T-shirts wandering around while we are doing the trials and tests, so that people know they can ask stuff, and responding very quickly. It is not a case of everybody loving the tech straightaway, and “Hurrah, it’s all here.” You have to do the work. That is why we want the regulation, to be honest, because we do not want people to turn up who do not do the work.

Q209       Paul Howell: Dr Khastgir, is there anything you would like to expand on—maybe the discussion about the different applications?

Dr Khastgir: Building on what has already been said, there are three additional things that I would like to happen, when it comes to the accessibility and diversity aspects. The first is to take a true systems-thinking approach to designing the systems. Traditionally, in the automotive sector, most of the tech has been done as, “We’ll do the technology first and then bolt the user requirements on top.” We need to move away from that and have the user as part of the system design process. That will automatically give us the requirements of accessibility, inclusion, disability and so on.

Secondly, bringing it back to the operational design domain, the kinds of actors you will experience where you deploy need to be part of your operational design domain definition. Will you experience mobility scooters or horses? That needs to be part of it at the very beginning, at the requirement specification level, when you are defining your operational design domain.

Lastly, we need collaboration between different stakeholders. The developers on their own will not be able to do this right. They need to speak to local authorities, user groups and so on. Those are the three things I think we need.

Q210       Ruth Cadbury: You said, “If there are likely to be scooters or horses.” Even down Oxford Street there could be horses. Okay, there will not be horses and scooters—or hopefully not—on motorways, but anything is possible in our road environment, so all of that has to be factored in all of the time. Some of the technologies, as well as relying on mapping and reactive testing, also rely on visuals, but I suppose that does not apply if you have a complete level 5 vehicle, so why would you start defining as you say? You gave a good example about motorways but only on sunny days or when there is a hard shoulder, neither of which is the norm on a lot of our motorways.

Dr Khastgir: That is a brilliant intervention, because you could define your operational design domain or operating conditions in a factory setting. You could definitely have a self-driving vehicle in a factory setting, which is a controlled environment. You will not have horses and you may not have mobility scooters. Yes, in the case you give, you will have to factor in horses and mobility scooters if you are in Oxfordshire or Cambridgeshire.

Q211       Ruth Cadbury: Anywhere. Any public road.

Dr Khastgir: Any public road, absolutely. But what if you have a very different use case? Let’s say airside operations. There is a Coventry-based organisation called Aurrigo that is developing self-driving vehicles to be used airside. They will not use that definition. Again, it depends on what the use case is.

Q212       Chair: Before I turn to Gavin, can I ask you both, Lisa and Peter, whether you have had any instances of personal injury in your trial areas?

Peter Stephens: Not that I am aware of.

Lisa Johnson: Very, very occasional. We have done 4 million deliveries and travelled over 7 million km globally so, as you would imagine, there is an occasional incident, but because we are so low weight and low speed the amount of injury or damage that we can cause is limited. We are not able to cause that much concern to people. There is very limited incidence, and these things tend to be bumps and bruises—maybe the mapping has not been correct or something—or the occasional scratched car.

Chair: I was meaning more human injury than vehicle injury.

Q213       Gavin Newlands: We have heard a fair bit of evidence on safety. The Institution of Engineering and Technology claimed that, for every 10,000 errors that a human driver would make, an automated vehicle would make one. That being said, and turning to you first, Dr Khastgir, what are the biggest potential dangers posed by self-driving and automated vehicles? I am sure there are a number. An example would be when the automated vehicle wants to switch control back to a human driver, for example. What are the main issues?

Dr Khastgir: One of the biggest issues right now across the different types of use cases is the definition of what is safe enough. Right now, as a country and in the ecosystem internationally, we have not been able to come up with a definition of the minimum level of safety required. We can have all the frameworks that we might want to create, but we still do not have that threshold. The Government have come up with the concept, for the minimum threshold, of a “careful and competent” human driver. That is what we are calling it, but we do not know how to translate that abstract concept into something that can be implemented by engineering. That is the biggest challenge we have right now. Whatever we come up with as something that we say is safe will be unsafe if we do not define the threshold correctly.

The second thing arises when you are looking at partial automation, or gradual levels of automation. You pointed out nicely that the transition of control back from the automated system to the user remains a challenge. It might be the user inside the vehicle, or it might be a remote operator. There is a huge human factors challenge related to that. A lot of research has happened in the UK. The University of Leeds has done it, as have Warwick and Nottingham. The problem is that we do not have consensus on the minimum number of seconds we should wait before handing over. If you look at research papers, you get numbers of between two and 40 seconds. A lot can happen in that duration, so there is more work to be done in that space. One thing from a systems viewpoint is how you can make the driver more attentive when the transition is happening, or how you can keep the driver always in the loop even though they are not actually driving the system. There are visual and haptic cues, and design implementations can be made.

The other challenge we have is about the common agreement in the industry that we will use some level of simulation to test the systems. Virtual testing would be required, because you cannot do 11 billion miles on public roads. That is the equivalent of travelling to the moon and back around 20,000 times. That is not happening, so we will be using virtual testing; but right now we do not know how to prove that the virtual test itself is valid. How do we trust the results of the simulation? We do not know. Those, at the very top level, are the challenges we have in this space.

Q214       Gavin Newlands: If you could make human drivers more attentive when they are in control of a vehicle, that would be quite useful, too. Lisa, in the work that you guys have been doing, what have been the biggest risks, which you might not have foreseen, throughout the period you have been operating?

Lisa Johnson: We are in kind of a different situation, aren’t we, because we have a little robot that could not possibly have a human in it? I tried to ride it, and it didn’t go that well. We do not have a user in charge or anyone driving it, so we have removed that human element entirely. We know that our stuff works and that we are good at what we do with autonomy and how we operate.

The biggest risk for us is how we integrate, because not everybody around us understands how we operate. In a small use case scenario, when we go somewhere, we end up with people stopping at crossings to wait for our robots to cross the road, because they like the little guys. They say, “You go, robot; you go,” but the robot is not standing there waiting for someone to say, “You go, robot.” The robot has seen a car so it’s not moving anywhere. It is not going to do the human thing of “I can make it if I run.” It is just going to wait. I suppose that is not a safety risk; but it is the biggest challenge we come across—how humans accept us into their environment. Ultimately it takes a couple of weeks, but eventually people know that they are supposed to drive, because the robot will just wait. It is not going to move.

Also, there are the more challenging environments. Milton Keynes redways do not happen everywhere. The bigger risk for us is when we go into more challenging environments. Every time we go somewhere more complex, the robots have to learn more. We have to do more mapping, and more engagement with the community so that they understand us better. The bigger risk for us is human interaction and the more challenging environments. For us, the more challenging environments are, as you would expect, places where there are more things, narrower roads and faster things moving around you; but by and large, because we have been operating for so long, we are kind of okay with it. Of course there is going to be an incident every now and again. Iain asked whether there has ever been an incident. Of course there has. With pedestrians, cycles, cars or anything there will be an incident every now and again. Again, that is a challenge; it is convincing people that we are safe, when one incident becomes a test case for the entire technology.

Q215       Gavin Newlands: Peter, for Stagecoach the risk is at a level above Starship’s operation. What is Stagecoach’s biggest fear as to the biggest risks involved?

Peter Stephens: We have not started running the services yet, so we are trying to design out all the risks at this stage. Lisa touched on an interesting point about the interaction with other road users and how much implicit communication there is between other road users. For example, the bus will look like a regular bus and will be marked up as autonomous on its livery, but it is a bus; quite often, some subtle interaction takes place between a human driver and pedestrians and other road users. There is a question about how to replicate that when you start to use the self-driving technology. We will need to look at that in future. It is not currently part of the project, but there is a question about how to get some of that implicit signalling as well as the standard vehicle operation.

Q216       Gavin Newlands: I am conscious of time, and there are still a few questions to get through. Dr Khastgir, what cyber-security risks do self-driving vehicles and automated tech pose?

Dr Khastgir: We need to be careful when talking about cyber-security. If we are talking only about automated vehicles that have no connectivity—no connection with anything—the risks are very minimal. With connected and automated vehicles, yes, there are specific cyber-security risks. Again, taking a systems-thinking approach, cyber-security has traditionally been bolted on to existing systems. You design the system and suddenly realise, “Oh, we haven’t done any security analysis.” You do it, and you bolt on some cyber-security. We need to move away from that. We need to take a proper, secure-by-design approach where security requirements are in-built at the very beginning of the system.

Having said that, there will always be people, malicious or otherwise, who try to hack into your system, so you need to implement a mechanism where you put the system into a safe state to limit the damage when you detect that someone is trying to hack. That is very important, because you will never be able to guarantee that you cannot be hacked. What you can guarantee is: “I will detect that I am being hacked and implement a design mechanism that makes the system safe.”

Q217       Gavin Newlands: To move on to public awareness and confidence in automated and self-driving vehicles, surveys often show that the public are quite wary of them at the moment. I hold my hands up, as I have said a couple of times in this inquiry that I was one of them. I think I was at a dinner with you when I said the same thing, although, to be fair, I am a bit more confident than I was. How important is the perception of safety to public acceptance, and what level of safety should the public accept, compared with regular driving?

Peter Stephens: It is a very valid question. In terms of customer safety, they have to be convinced. Our customers have a choice about whether they want to take a bus or use another mode of transport, and if they are not convinced, they won’t. We need to convince them that it is a better experience in terms of both safety and being a smoother, more reliable bus journey. At the moment, you are right, there is not a lot of knowledge about it. Part of it will come from experience. When we carried our first passengers, very quickly it was, “Okay, this is how it normally is.” There is something about experience. As people experience self-driving technologies—potentially in passenger cars—that may well give them confidence and more understanding of what the technology is like in buses and other contexts.

Q218       Gavin Newlands: Lisa, obviously you deal with robots at the moment, but do you have any comment on the overall question?

Lisa Johnson: It depends on how you engage with people and where you go. We have this conversation with local authorities quite a lot. Are we going to consult people before we put the robots on the ground? Sure, do that, but people will not understand them yet. We do our own customer surveys that show our customers love us. Of course our customers love us, but what do the residents think?

Four weeks into any sort of trial that we do in a local area, local authorities do surveys independently of us. What has come back is that the residents like us, too. That is part of carefully integrating and then asking people what they think. If you said to them up front, “We are going to do autonomous robotic delivery, driving around your pavements,” I am not sure how that would go. A month in, when we have explained it carefully and they have seen it happen and have interacted, everyone is really comfortable and happy, and it becomes part of normal life.

It is how you do it and how you talk about it. Our little guys are quite cute and friendly, which helps. It is a different kind of conversation from talking about buses on roads, but it is how you communicate with people. The regulation is important in that, but so is the social responsibility in communities where they operate.

Gavin Newlands: On the overall question of communication, the Government do not have a particularly great track record on communicating some of these issues to the public.

Dr Khastgir: That is the most important thing we need to do to make this technology a success. If we do not do that, there is no way people are going to use it and no way we will get the economic benefits of the technology. Another important thing is that people expect the world from any technology, so it is very important to clearly articulate to them what it can and cannot do, and get them to understand that and bring them on that journey. Research has shown that if you do that they will trust it and accept it. However, you also need to translate their expectation levels, in engineering terms, into the safety threshold. Earlier I mentioned the “safe enough” definition, and that needs to be a function of how safe people want the technology to be. If they want it to be as safe as them, that is the benchmark we will try to work to. If they want it to be safer than them, that is the benchmark we want to work to.

Going back to an earlier point, this is not a question that self-driving cars on land are trying to answer on their own. Aviation and maritime have the same issue. People want to accept it for drones and autonomous vessels, and we will be answering the same questions again for them when we have similar discussions about those technologies. Why do we need to do that? Can’t we learn from the domains themselves? That is what we have been doing over the past 12 months. We will launch a report on 21 March with all the results across the domains.

Q219       Gavin Newlands: This is a quick question for you, Dr Khastgir. What level of knowledge would the public need to safely use and—a point that has been made quite often—interact with automated technology?

Dr Khastgir: If we do it right, we should not expect the public to have full knowledge of what an SAE level 3 or 4 is. There is a way of communicating a technical concept in a much more accessible manner, and that is what we need to look at. The Centre for Connected and Autonomous Vehicles and the SMMT have come up with a group called AV-DRiVE, which is trying to create a set of principles on how to do that communication. Our assumption should be that we do not expect people to be engineers or technologically very savvy. It is our duty to convey that message in an accessible manner.

Q220       Mike Amesbury: Today is International Women’s Day, and harassment of women on public transport is a big issue. There could also be disabled passengers and pensioners. How will the role of bus captains relate to that?

Peter Stephens: That is exactly it. We recognise that having someone on board the bus is really important for a sense of personal safety. It is a shared space. How do we make sure that more vulnerable passengers and customers feel safe? Very much at the heart of the bus captain role is being able to have a reassuring presence on board, being able to intervene if there are any unacceptable incidents, and giving people a sense of safety on board.

Q221       Mr Bradshaw: Lisa, how much do your robots cost to use?

Lisa Johnson: We do not have a minimum delivery value. People can order entirely what they want. It is distance-based. It goes from 99p for people in the immediate vicinity, up to about £3.99 for people further away.

Q222       Mr Bradshaw: Are people using it at the moment because it is cheaper than traditional methods or for the novelty factor?

Lisa Johnson: We see a spike in orders when we go somewhere, because, obviously, the novelty value is, “I’m going to get stuff from a robot.” That dies down and we have quite a steady user base from there. In the sort of areas where we operate, we tend to see that 60% of what we do is taking cars off roads. It is people who would have done small shops, or who forgot something for tea and need to pop back out to the shops, within a 3 km radius. The majority of what we do is that, in all honesty.

We are able to do that because we don’t have a minimum basket value and because we don’t have to pay a living wage; the robots are quite happy little workers. It is not something that would be sustainable for people like the Leeds resident who ordered a Kinder egg. You can’t do that, can you? That is why retailers like us as well, because an element of it is that people would not have bought the stuff they get. They are buying extra things because they do not have to get into the car and go to the shops.

Q223       Mr Bradshaw: It is great if you are taking unnecessary three-mile car journeys off the road, that’s for sure. I want to ask you about infrastructure. You mentioned narrow pavements, particularly in rural areas, and Peter, you mentioned the interface between the road, the pavement and bus stops. What changes are going to be needed to the infrastructure on our streets, pavements and road spaces to help the system to operate properly?

Peter Stephens: I know you have more witnesses coming who will pick this up. For the systems we operate, it is lines, road markings, traffic lights and the interaction with them, bus stop infrastructure and, finally, road condition. Potholes are important. It is not special technology as such; it is just making sure that we can safely use the existing road infrastructure.

Lisa Johnson: Yes, it is the same for us. We can deal with the environment that we are in, and we will not necessarily be conscious of where we are, in terms of narrow pavements and such things.

The biggest infrastructure challenge for us is charging. We are working on different solutions with the partners we work with. Other than that, it is traffic lights, because our robots don’t have arms and can’t press the button. Quite often, they say, “Could you please press the button for me?” They are very polite and they ask that. Someone will press the button and they will wait; or if they are waiting a long time, remote assistance will say, “It’s okay to go; there’s no traffic.” For us it is traffic management, essentially, but we are working on solutions to that with local authorities, because it is not that hard to plug autonomous and connected devices into the traffic management system. It is quite easy to overcome. We do not have huge infrastructure challenges other than charging and making sure traffic management works okay for us crossing roads.

Q224       Mr Bradshaw: Are there any extra bits of digital infrastructure that would be helpful, or that you might need?

Lisa Johnson: We need to ensure good connectivity, obviously. We are autonomous 98% of the time, but if we want someone to check in remotely, we need good connectivity. We have also been working on that with a couple of UK companies. We don’t tend to struggle. We are doing okay on that, to be honest.

Peter Stephens: We operate with existing digital infrastructure—4G networks and GPS networks that we have on board, and maps to back up when the signal drops out.

Q225       Ruth Cadbury: Dr Khastgir, I have some questions about legislation. By legislation we generally mean what affects public roads, but you gave some examples of uses in closed environments, such as Heathrow or industrial estates. Leisure parks would be another. Is legislation needed to use autonomous vehicles in those contexts, or is legislation not a problem?

Dr Khastgir: It is needed, again, to give some guidance on what is good. Right now, because it is so new, nobody knows what is good, so we have to take people at face value. Legislation provides a benchmark that everybody has to meet.

Q226       Ruth Cadbury: Legislation is required even in closed environments?

Dr Khastgir: Yes.

Q227       Ruth Cadbury: Moving to the wider public road environment, the Government published a White Paper in August 2022. Do you believe the Government’s approach was correct? Could they do more? Could they change their approach?

Dr Khastgir: At the level of abstraction in the White Paper, I would fully support what the Government are trying to achieve. The ambition and direction of travel are correct; the timeline of 2025 seems to be reasonable. The only issue I have right now is that, because of the level of abstraction, we are still talking in terms of primary legislation, because that is what the White Paper focuses on. One of the challenges we have right now is how we would approve this technology if we had primary legislation today. For example, if we had a magic wand and tomorrow we had primary legislation, we still could not approve the technology, because we do not know what the detailed safety requirements are. The White Paper provides an ambition and a direction of travel, but it does not provide the next level of detail that we need when it comes to approving this in terms of safety thresholds and benchmarks. That is somewhere that I feel they need to do more.

Q228       Ruth Cadbury: Obviously, legislation follows a White Paper after consultation, generally. What consequences would there be if the Government did not move ahead with legislation fairly quickly?

Dr Khastgir: The biggest issue we have is that it does not give a good sense of security to industry; it does not give confidence or reassurance to industry that, “This is coming, so please keep investing in the UK.” If that does not happen, we lose high-value jobs and an opportunity to be a world leader in bringing in this technology. The biggest issue we have is that we do not give a signal to industry that the legislation is coming when it was promised. The Government initially made a promise and for all sorts of reasons it has been delayed. Because of that, developers cannot make a promise to investors and the money does not come into the UK, and then we lose the industry itself. There is a huge ripple effect.

Q229       Ruth Cadbury: Ms Johnson, is there anything you want to say about the Government, legislation and what is needed and when?

Lisa Johnson: It is urgent now, in all honesty. We are told that it is coming and then that it is not coming. It is a big frustration for us. We have been operating since 2018 and we are good to go. We can scale tomorrow if we have the right legislative framework in place. I do public affairs in Finland as well. Finland has a framework. There are large supermarkets there saying, “Come on, do our last mile. That would be great.” What is the investment decision for us as a company? Do we say, “Let’s keep risking it in the UK where we don’t have a framework, or certainty or clarity about what is going to happen”? Or do we just move the robots to Finland? That is a real business decision that needs to be made.

We are really committed to the UK. Milton Keynes is our UK home, as the Chair knows. Our most sophisticated global operation is based in Milton Keynes, so we are very committed to the UK, but the frustration is that even 2025 feels like it is not happening, if I am honest. The intention and the dialogue we are having with Government is fantastic. Everything the Government are saying about where they want us to be is absolutely where we want it to be. All we are desperate for is the follow-up legislation to back it up. For Starship in particular, we do not want to miss out, because a lot of the self-driving stuff focuses, quite rightly, on regulating heavy, larger vehicles that go very fast on roads. We absolutely need to make sure that is taken care of, but we do not want to miss out on that. There is a holistic approach to self-driving that we want to be involved in.

For us the frustration is that it is urgent now. I speak to people across the sector; I was at a roundtable with techUK members a couple of weeks ago. We need it very soon because we cannot keep making decisions in the uncertain grey area we are working in.

Peter Stephens: We want to see legislation cover not only the vehicle approval process but a whole bunch of sectoral regulation we have to comply with for heavy-duty vehicles, on both the regulation of bus services and the licensing of drivers. Some excellent work is going on with the Law Commission in reviewing the regulatory book. Our plea to Government is that it is not just about the passenger car; you also need to look at some of the sectoral regulation that goes around that, because we need to have the whole regulatory environment reflect the introduction of these new technologies.

Q230       Ruth Cadbury: Moving on from regulation, how successful have the Government’s initiatives, such as the Centre for Connected and Autonomous Vehicles, been at driving progress?

Peter Stephens: We have a positive view of that. We found it really helpful.

Lisa Johnson: CCAV is very helpful. They know what they are talking about. We just need to see some of the stuff we are all talking about being translated into action, but they are very good.

Q231       Chair: I have one last question before we move on to our next panel. Peter, at the beginning you said that the person sitting at the wheel of your bus is ready to intervene if there is an issue. It strikes me that that is a level 2 operation; it is not fully automated, in the sense that there is a human ultimately responsible.

Peter Stephens: At this stage, that is the way we are approaching the trials.

Q232       Chair: What needs to happen for you to move it to at least level 3, where it is not a human who is ultimately responsible for the operation of that vehicle?

Peter Stephens: Rather than busking it, perhaps I can follow up. We need to be confident in both the technology and some of the responsibilities within the current regulatory framework, but I will follow up with the Committee, if that is all right.

Chair: That would be very helpful. Thank you all for answering our questions this morning. It has been a really interesting discussion and has given us much food for thought.