Written evidence submitted by Professor Burkhard Shaefer (SDV0030)
The focus of this response is on the regulatory implications of self-driving vehicles (henceforth AV). While this submission is made in my own name, it has benefited from the work of the Regulation Node of the UKRI Trusted Autonomous Systems Network.
With the joint report of the English and Scottish Law Commissions, there is now a roadmap for legislative reform available that is both detailed and comprehensive, and arguably one of the most advanced proposals from a comparative and international perspective. This response broadly endorses the approach taken by the two Commissions, and focuses on those aspects that, due to the remit given to them, were less extensively discussed.
Of these, the data protection implications are arguably the most significant, complicated further by the changing landscape of post-Brexit data protection law in the UK. To complicate these issues further, personal data from the AV that originates in the UK may (have to) be transmitted across national borders, or a car registered and controlled in the UK may physically cross national borders while continuing to transmit data back to the UK. A seemingly trivial task such as driving an AV from Scotland via Northern Ireland to the Republic of Ireland could create highly complex data protection questions that may impact the safe operation of the car, especially should the UK at some point in the future lose its adequacy finding under the GDPR, either as a result of a court challenge or because the divergence of the UK regime leads the European Commission to reconsider the situation. Northern Ireland in this case may, or may not, continue on a different path from Great Britain, which would make matters even more difficult than compliance with EU product regulations alone. While this issue obviously affects not just, and not even primarily, AVs, it points to some general constraints and issues for a future regulatory environment:
1) AVs are not just products and self-contained objects; they bundle a product together with a range of services, some but not all of them necessary for safe driving. This is implicit in a number of the Law Commissions’ recommendations, but it poses challenges especially when service provider and car are (permanently or temporarily) in different jurisdictions. Data protection and international data flows highlight this issue in a particularly obvious way, and also illustrate that international agreements geared towards reducing non-tariff barriers for goods may not be sufficient to ensure that AVs produced in the UK, with design decisions made to comply with UK law, can be sold, or indeed travel, abroad. The more emphasis is put on “law compliance by design” – a principle that is in many ways sound – the more difficult these questions become absent comprehensive and detailed international harmonisation. While human drivers can learn quickly enough to follow a new set of rules, especially given the international harmonisation of many aspects of road traffic, “by design” systems that hard-bake legal requirements into the system architecture frequently lack this adaptability.
2) The Law Commissions’ report addresses data protection where it has a direct impact on issues such as accident investigation and traffic policing. While welcome, there is a danger in treating data that is necessary for safe driving (core driving data, CDD) as separable from other data that an AV may generate. The Commissions mention, for example, data from sensors designed to prevent an impact, but would, one assumes, assign geolocation data of the AV that is used to recommend the best radio station for the driver to the wider data protection issues that are outside their remit. While intuitively appealing, this approach is increasingly undermined by the evolving business models of the car industry, and also by the approach to the design of AVs. While OEMs (original equipment manufacturers) will often be the point of data collection, the data is no longer used just to deliver safe driving services or improve the safety of the product. Rather, it generates secondary or tertiary income streams, where data is sold to infotainment and navigation service providers, and from these, potentially, to advertising and similar companies. Design decisions for cars reflect this “dual nature” of a significant amount of the data that modern cars generate: systems necessary for core driving functionality and those for other functionalities, including entertainment, are not separated at the hardware level, and instead communicate with each other.
From a cybersecurity perspective this is already undesirable, and a number of the publicly reported attacks on (assistive) driving technologies have exploited vulnerabilities in the “entertainment” functionalities of the car. From a regulatory perspective, it poses additional challenges, especially under proposed regimes that create new regulatory and supervisory entities whose remit could be artificially restricted and fail to match the complexity of the AV ecosystem. As one function of data protection law and the ICO is also to create incentives for cybersecurity, conflicts or duplication of effort between regulatory bodies are easy to imagine – under data protection law, for example, a hacking attack may trigger data breach disclosure obligations that are discouraged or even prohibited by a regulator whose remit is mainly the safety of the core driving functions (to avoid disclosure of the vulnerability, or of the police operation that detected it, for example).
This problem could be exacerbated by the proposed approach to AI regulation as expressed in the policy paper “Establishing a pro-innovation approach to regulating AI”. There, the UK approach, in contrast to the proposed EU AI Act, is designed to be sector-specific, with significant emphasis put on sectoral regulatory agencies and other statutory bodies. The issue of how more general rules can emerge from these, through appropriate channels of inter-regulator cooperation, is left largely for future discussion. The example of data protection, however, shows that even for a single “product”, this can mean multiple regulatory bodies dealing with individual aspects of the AV that in reality have long merged into one, both in terms of engineering solutions and business models.
3) The problem of overlapping regulatory bodies, together with the role of international trade agreements and the need for international harmonisation indicated above, threatens to create more fundamental challenges for the rule of law and the principle of democratic accountability. Both the UK and the EU proposals on AI regulation ultimately give significant rule-making competences to sector-specific regulatory bodies and, even more worryingly, to non-governmental organisations such as standard-setting bodies. To a degree this is inevitable, given the knowledge needed to develop efficient and workable rules, but it also carries significant risks. The latter in particular often have neither the competencies and skills, nor the incentives, to include legal and ethical issues in standards that on the face of it seem to address mere engineering questions about software safety. The report by the Law Commissions describes some of the issues with great clarity and sensitivity. A seemingly “technical” question about the way in which a sensor discerns patterns may have profound normative implications, potentially shifting the risks of the technology to one group in society and the benefits to another.
Legal duties under the Equality Act, or the Equality and Human Rights Commission as another possible body with a “stake” in AV regulation, will not alone address all of these issues – how would we respond, for example, if AVs were to reduce accidents significantly on rural roads (where hospitals may be particularly far away, or accident scenes difficult to reach by emergency services) but increase them in city environments? These decisions have to be taken by Parliament and after public discussion, not by software engineers in standard-setting bodies behind closed doors. The Commissions therefore correctly emphasise the need for primary legislation at the core of the new AV regulatory regime. They also rightly emphasise the need for broad representation on the regulatory body (or bodies), enshrined by statute and properly funded, as the groups most affected will have the least resources (financially, but also in wider social capital) to provide their input for free, and no explicit right to have their views considered. The fragmentation of regulatory competencies, however (and again also the need for international cooperation and harmonisation), could become an obstacle to this highly desirable vision; it is, for instance, not the model currently used by the ICO, even though its remit includes “fair data processing” duties. A practical step here would also be to increase the capacity of standard-setting bodies (including professional bodies such as the BCS) for interdisciplinary research and sensitivity to the legal and ethical implications of technical regulation.
The Law Commissions’ report correctly identifies the problem that primary legislation can and should deal foremost with abstract and general principles, and cannot be expected to anticipate every specific software issue. Regulation should also remain as far as possible technology-neutral, to ensure its longevity and relevance, and with that the level of legal certainty that generates trust and allows long-term investment. However, as the Commissions acknowledge, this then creates an accountability gap between the more specific and detailed rules set by regulators and standardisation bodies and the primary law. While the organisational methods outlined above can mitigate this problem, there is also potential to think about primary legislation in new and innovative ways. “Law as (computer) code” initiatives have attracted attention internationally; this approach would allow (or require) legislators to enact laws, where relevant, with a “computational translation” in addition to the more familiar text addressed at human readers only. While still in its infancy and not without ethical problems of its own, these new ways of thinking about parliamentary law-making have some potential to address the significant danger of a democratic deficit in AV (or AI) regulation.
4) The limitations in the remit of the Law Commissions, and the danger of regulatory fragmentation and mismatch, also become apparent in the discussion of policing. The need for police to access data as part of an accident investigation is highlighted. However, traffic policing extends far beyond accident investigation. It also plays a pro-active role in ensuring road safety (from speed cameras to tests for intoxication), in enforcement actions (stopping a getaway car after a robbery) and in directing traffic flows. The latter can have a direct impact on other democratic rights, for instance when traffic is managed as part of a lawful demonstration. A society with AVs is likely to require a substantial rethinking of the role, powers and accountability mechanisms of road traffic policing. This is possibly the most significant gap in the current debate on AV regulation.
When (traditional) cars were introduced on the roads in the UK, they quickly transformed policing on a considerable scale. New methods of collaboration between police forces were needed as the mobility of criminals increased, while the enforcement of road traffic rules allowed, for the first time, socially low-ranking police officers to give orders to, and if needed sanction, people rich and powerful enough to afford one of the early cars.
Nor was the question of road traffic regulation exclusively focussed on cars. Many jurisdictions created new duties for pedestrians too, introducing “jaywalking” as a new offence: as car technology was not safe enough to operate around pedestrians, the latter were increasingly forced out of spaces that they had previously considered their own as of right.
We should anticipate an equally significant impact of AVs on other groups today. For many if not most citizens, interaction with the police happens in a driving context. These interactions are governed by both formal and informal rules. Should officers, for example, still be allowed to stop an AV if the information they seek from the driver can also be accessed remotely? Should they have remote access to AV data outside an accident context, and if so, should they have to go through the AV’s data hub, possibly requiring a warrant, or can they access the data directly? Conversely, should they be allowed to interfere remotely with the AV, for instance as part of an arrest? In some jurisdictions, “pretextual searches” – that is, searches of a car for an alleged violation of road traffic law (e.g. worn-down tyres) with the ultimate intent of learning more about the driver, the passengers or possible contraband – are well documented and legally permissible. If a car could remotely certify and provably assure its compliance with road traffic rules, would these stops still be permissible, and would the police lose a valuable source of intelligence?
Conversely, recent controversy around stop and search methods in London showed how dashcam or phone recordings of an interaction with the police can create powerful evidence. While body-worn police cameras are becoming more common, AVs would potentially also record, as a matter of course, data about police-driver interactions. How should this data be treated? Should law enforcement agencies have a right to access it? How would a driver get access to this data from their own car for the purpose of a complaints procedure, and are the regulatory bodies that control policing in a position to evaluate it?