Written evidence submitted by AutoNorms Project (TFP0008)

 

The UK Government’s current approach to governing emerging technologies, more specifically military applications of Artificial Intelligence (AI), is reactive rather than proactive. If the UK continues on its current trajectory, it risks becoming a state that follows the AI governance norms set by others. We recommend that the UK Government acts to shape and directly influence AI governance norms to advance its own interests. We make three specific recommendations:

 

(1)   The FCDO should clarify its stance on the role and quality of human control it considers appropriate in the use of force. It should acknowledge that setting a positive obligation to maintain human control in specific use of force situations is a crucial step towards regulating weaponised AI and ensuring its adherence to international law. This includes aligning its definitions of autonomy and automation in weapon systems with those of like-minded states and non-governmental actors.

 

(2)   The FCDO should recognise the risks presented by norms on weaponised AI emerging from practices of use rather than from critical deliberation with relevant stakeholders. The various ways in which security partners as well as “systemic competitors” use weaponised AI may shape the norms that come to govern the use of AI in the absence of explicit, public debate about them. This process has the potential to create undesirable norms, which the UK should counteract before they take hold.

 

(3)   The UK should use its presidency of the G7, as well as its significant influence in other international forums, such as NATO and the UN’s Group of Governmental Experts (GGE) on emerging technologies in the area of Lethal Autonomous Weapons Systems (LAWS), to promote its policy position on governing (military) AI. This advances UK interests by cementing its role as a competent and reliable security partner, as well as a global leader shaping international regulation on emerging technologies.

 

Introduction

 

  1. This submission is authored by the European Research Council (ERC)-funded AutoNorms Project based at the Centre for War Studies, University of Southern Denmark: Dr Ingvild Bode, Anna Nadibaidze, Dr Hendrik Huelss, and Dr Tom Watts.

 

  2. The AutoNorms Project focuses on the practices of Artificial Intelligence (AI) development and usage in four states: China, Japan, Russia, and the United States. These states are important when thinking about the UK’s response to the opportunities and challenges presented by emerging technologies, as they include both long-standing security partners (Japan and the United States) and systemic competitors (China and Russia).[1] These states are also global leaders in the fields of AI and its associated technologies, such as robotics.

 

  3. The principal aim of the AutoNorms Project is to examine how the use of weapon systems with automated and autonomous features shapes international norms governing the use of force. For the purposes of our project, norms are broadly defined as understandings of appropriateness.[2] We understand norms as encompassing more than international law alone. The value of approaching norms in this way is that it allows us to capture the wider standards shaping what states consider appropriate uses of force, standards which do not necessarily derive from the law.

 

  4. What states consider as appropriate uses of force is important when thinking about whether emerging technologies are fundamentally altering the nature of international relations. Use of force norms are vital components of the current rules-based order. For example, UN Charter provisions regulating the right to self-defence create an important stability of expectations for state conduct. Changes in use of force norms could drastically modify the character of the rules-based order, for example, by reducing the level and quality of direct control which human agents exercise over specific targeting decisions.

 

What technologies are shifting power? What is the FCDO’s understanding of new technologies and their effect on the UK’s influence?

 

  5. AI, and especially its military applications, is a key technology that has the potential to shift global power structures and challenge global standards governing the use of force. In simple terms, AI automates the performance of specific human tasks and, in principle, can therefore affect any human domain.

 

  6. Weaponised AI raises many significant ethical, legal, operational, and political questions. Much of the debate on the development of Lethal Autonomous Weapons Systems (LAWS),[3] as they have been labelled, frames these issues as a concern for the future. In our view, however, the integration of automated and autonomous features into weapon systems has already shaped the character of global security competition, as well as global patterns in defence research funding and acquisition, in problematic ways. We want to draw attention to three particular observations here.

 

  7. First, existing applications of weaponised AI are shifting understandings of the appropriate levels of human control in specific targeting decisions and reducing the quality of what some analysts call meaningful human control. In our previous research, we have argued that the design, testing, and operation of air defence systems with automated and autonomous features in targeting have contributed to an emerging norm that diminishes the quality of human control over specific targeting decisions.[4] In short: as the role of human operators in air defence systems has changed from that of active controllers to passive supervisors, operators have lost both situational awareness and a functional understanding of how algorithmic systems make targeting decisions. This diminished role of human control has been gradually normalised over time.[5] The resulting loss of human control over the use of force is a central legal, normative, and political problem.

 

  8. Second, the development of LAWS could lower the threshold for the use of force. The prospect of projecting military force overseas without risking the physical security of a country’s armed forces would be particularly attractive for democracies, including the UK, which are highly sensitive to the deaths of their soldiers. Yet the development of LAWS could further erode public accountability over the use of force, creating even greater distance between the politicians authorising the use of force and the military on the one hand, and the British public on the other. If LAWS were used in asymmetrical conflicts, the side deploying them could suffer almost no casualties, diminishing the political costs of war for democratic leaders.[6] Meanwhile, using AI-driven weapon systems could also be attractive for authoritarian leaders wanting to engage in war while circumventing potential disloyalty from military elites.[7] In these and other ways, autonomy in weapon systems is likely to reinforce the current tendency towards asymmetrical and/or covert warfare in undeclared conflict scenarios.

 

  9. Third, the uncertainties surrounding the specific threats of weaponised AI heighten the risks of competitive, potentially destabilising geopolitical dynamics. States including China, Russia, and the United States consider the development of autonomous weapon systems to be a strategic priority. The recently published National Security Commission on Artificial Intelligence report, for example, calls for the United States to “embrace the AI competition” in order both to accelerate commercial innovation and to beat out the growing geopolitical and ideological challenge posed by China.[8] Developments in the fields of AI and LAWS are closely monitored by other states, which has led many to speak of an “AI arms race”. While this arms race is often presented as inevitable, in our view it describes only one possible trajectory among others; whether it unfolds depends on what states, including the UK, do.

 

  10. Without specific international legal regulation of LAWS, such changes may have major implications for global power structures. Governments using AI for military purposes argue that this technology improves the efficiency and speed of military operations, communications, command and control, data processing, and decision-making.[9] If states also come to see LAWS as morally justified and ethical, the spread of these technologies becomes even more likely.[10] Furthermore, in the absence of clear regulation of LAWS, we anticipate that novel understandings of appropriate AI usage will emerge through state practices, setting precedents for what is considered the human agent’s acceptable role in the use of force. This also increases the risk of growing distrust between the key players developing AI and of increasing asymmetries of power among them.

 

The UK’s Science and Technology Strategy

 

  11. The UK Government views international relations and emerging technologies through the prism of global competition. The 2021 Integrated Review of Security, Defence, Development and Foreign Policy describes science and technology as “an arena of intensifying systemic competition” and identifies it as a key area of strategic investment.[11] Several countries are investing in militarised AI, including two key actors that the UK Government considers “systemic competitors”: China and Russia.[12] To counter their influence, the UK seeks a “leading role in critical and emerging technologies”, including in the sphere of AI research and development.[13] The Ministry of Defence (MoD)’s 2020 Science and Technology Strategy includes the establishment of an Artificial Intelligence and Autonomy Unit to better understand emerging technologies and how to respond to them.[14]

 

  12. The UK’s stance towards weaponised AI reflects this goal. In the international debate on LAWS, which takes place within the framework of the UN Convention on Certain Conventional Weapons (CCW), the UK is opposed to negotiating a legally binding treaty banning the development and use of these technologies. The MoD has previously said that it considers a ban “premature” and, according to reports, fears that it could threaten the UK’s ability to exploit the military advantages that AI may bring.[15] At the same time, the UK Counter Proliferation & Arms Control Centre, which is housed in the MoD, stated in 2017 that “the UK commits to maintaining human control over its weapon systems as a guarantee of oversight and accountability. The UK does not possess fully autonomous weapon systems and has no intention of developing them”.[16] This is also confirmed in the Joint Doctrine Publication 0-30.2: Unmanned Aircraft Systems, dated August 2017.[17]

 

  13. However, the UK Government’s definition of what constitutes an autonomous system, provided in Joint Doctrine Publication 0-30.2, suggests otherwise. Crucially, it highlights that “an autonomous system is capable of understanding higher-level intent and direction”.[18] A 2018 House of Lords Select Committee report suggested that this emphasis on “higher-level intent and direction” was “clearly out of step with the definitions used by most other governments”. This “limits both the extent to which the UK can meaningfully participate in international debates on autonomous weapons and also hamstrings attempts to arrive at an internationally agreed definition”.[19]

 

  14. This ambiguous approach to defining LAWS allows the UK Government to simultaneously claim that it is opposed to LAWS in principle, whilst strengthening its “position as a global leader in developing AI technologies”, something which it considers key for its military and geopolitical competitiveness.[20]

 

  15. Given the rate of technological advancement in the field of AI and its potential to destabilise the existing global framework governing the use of force, we recommend that the FCDO adopt a more focused strategy for fulfilling its goals of supporting the maintenance of human control over weapon systems and being a global leader in emerging technologies. This strategy should be built on a thorough consideration of the various drawbacks of LAWS, not just their conceivable advantages.

 

  16. As it stands, the UK’s position is ambiguous. In our view, the UK Government’s position and reasoning on the emergence and consequences of LAWS require further elaboration. It is not sufficiently clear to what extent the UK is aware of these challenges or what its response to them will be. In particular, we want to highlight that the UK Government does not stand to benefit from the unregulated emergence of LAWS, not least because the UK is not among the leading developers of AI in weapon systems.[21]

 

How can the FCDO use its alliances to shape the development of, and promote compliance with, international rules and regulations relating to new and emerging technologies? Is the UK taking sufficient advantage of the G7 Presidency to achieve this?

 

  17. The intergovernmental discussion about LAWS taking place at the CCW is deadlocked. Attempts to regulate LAWS are complicated by varying perspectives on autonomy and on meaningful human control over the use of force. Strategic competition between key global actors, including the United States, China, and Russia, makes agreement on security-sensitive topics such as the regulation of AI even more difficult.

 

  18. Around 30 CCW high contracting parties, together with a range of non-governmental organisations, are calling for a complete ban on LAWS – that is, weapon systems without meaningful human control in targeting. Others (including the UK, as noted above) oppose this, arguing that existing international humanitarian law is sufficient to manage the concerns that these technologies create. It is noticeable that other CCW states parties listen when the UK takes the floor during sessions on LAWS. Yet, in our view, the UK Government has not fully capitalised on this influence: the UK’s current position lacks depth and coherence. If the UK formulates a concise and coherent position on LAWS, in full awareness of the risks they pose to the rules-based order, it could leverage its global influence to lead a group of like-minded states in promoting greater regulation of LAWS. This is a unique opportunity for the UK to spearhead the global governance of military AI.

 

  19. The development of LAWS could also be regulated through regional organisations and alliances. For instance, NATO’s Reflection Group recommended in 2020 that the Alliance should “serve as a crucial coordinating institution” for developing a common strategy towards emerging and disruptive technologies, including in the areas of AI and autonomous capabilities.[22] NATO’s own AI strategy is expected to include guidelines for an ethical military use of AI. The UK could contribute significantly to the development of such a common strategy.

 

  20. A first step towards UK leadership in this area would be clarifying its definitions of automation, autonomy, and LAWS and aligning them with those of NATO allies. In the 2021 Integrated Review, the Government mentions the importance of international collaboration on AI research, ethics, and regulation.[23] The UK is often considered the transatlantic bridge between the United States and the EU on foreign policy and defence matters. After Brexit, it has an opportunity to continue pursuing this role as a key actor linking these two global power centres.

 

  21. The EU has become a first mover on AI governance. On 21 April 2021, the European Commission proposed a legal framework for regulating the uses of AI, the first legislation of its kind.[24] This underlines the EU’s ambition to become a global leader in technology regulation. While the proposal does not touch upon security and defence, if implemented, it could pave the way for a regional approach to governing weaponised AI. The UK Government has yet to confirm whether it will follow this approach, shape an alternative regulatory framework, or continue with its existing approach to technology regulation. It should clarify these positions in its National AI Strategy, expected later this year, and take the opportunity to shape the norms influencing the use of these technologies rather than end up following those set by others. Furthermore, the FCDO should clarify its position towards institutionalised security and defence cooperation with the European Union, something that is lacking in the Integrated Review. Such a format of post-Brexit UK-EU cooperation could also include a common position on the ethics of AI used for military purposes. It would also align with the UK’s goal of remaining a key player in European security.

 

  22. The Integrated Review emphasises the role of bilateral relationships with several European partners, as well as the UK’s commitment to NATO and transatlantic relations. The FCDO could lead efforts to develop a common NATO approach towards militarised AI and to bridge transatlantic divisions on this issue. While it remains uncertain whether either a global or a European ban on LAWS will transpire, the FCDO should be more active in building trust between NATO allies in order to promote a common position towards an ethical use of AI.

 

  23. The UK’s G7 presidency in 2021 is an important opportunity to lead on issues of global concern. This should include LAWS as part of a wider policy priority around the opportunities and risks associated with AI. While economic issues have typically been the focus of the G7 and should not be disregarded, the potential political and social consequences of AI-driven technologies should also be highlighted, as they require global governance. The policy priorities for the June 2021 summit do not refer to new and emerging technologies. The Government should consider whether relevant issues could form part of the agenda, for example by linking them to existing priorities such as “championing our shared values”. The UK’s G7 presidency is therefore a chance to develop a shared understanding and policy programme for the ethical and legal governance of (military) AI. Having identified AI as one of its ten tech priorities,[25] the UK has a clear interest in shaping the evolving governance architecture on AI rather than following it.

May 2021


[1] The term systemic competitor has been taken from HM Government’s Integrated Review of Security, Defence, Development and Foreign Policy, which describes both China and Russia in these terms, p. 49.

[2] Ingvild Bode and Hendrik Huelss. “Autonomous weapons systems and changing norms in international relations”. Review of International Studies (2018, 44:3).

[3] The International Committee of the Red Cross (ICRC) defines LAWS as “any weapon system with autonomy in its critical functions. That is, a weapon system that can select (i.e. search for or detect, identify, track, select) and attack (i.e. use force against, neutralize, damage or destroy) without human intervention”. ICRC. Views of the ICRC on autonomous weapons systems, 11 April 2016.

[4] Ingvild Bode and Tom Watts. Meaning-less human control: Lessons from air defence systems for lethal autonomous weapons. Centre for War Studies & Drone Wars UK, February 2021.

[5] Hendrik Huelss. “Norms Are What Machines Make of Them: Autonomous Weapons Systems and the Normative Implications of Human-Machine Interactions”. International Political Sociology (2020, 14:2), p.121.

[6] Ingvild Bode and Hendrik Huelss. “The Future of Remote Warfare? Artificial Intelligence, Weapons Systems and Human Control”. In Remote Warfare: Interdisciplinary Perspectives. E-IR Info, 2021. See also Rubrick Biegon and Tom Watts. “Remote Warfare and the Retooling of American Primacy”. Geopolitics (2020), pp. 17-18.

[7] Ondřej Rosendorf. “Predictors of support for a ban on killer robots: Preventive arms control as an anticipatory response to military innovation”. Contemporary Security Policy (2021, 42:1), p.39.

[8] National Security Commission on Artificial Intelligence. Final Report, 19 March 2021, p.11.

[9] Ingvild Bode and Hendrik Huelss. “Autonomous weapons systems and changing norms in international relations”, p.18.

[10] Ingvild Bode and Hendrik Huelss. “Why “stupid” machines matter: Autonomous weapons and shifting norms”. Bulletin of the Atomic Scientists, 12 October 2017.

[11] HM Government. Global Britain in a Competitive Age: the Integrated Review of Security, Defence, Development and Foreign Policy, 16 March 2021, p.24.

[12] Ibid, p. 49.

[13] Ibid, p. 38.

[14] Ministry of Defence. Science and Technology Strategy 2020, October 2020, p. 15.

[15] Damien Gayle. “UK, US and Russia among those opposing killer robot ban”. The Guardian, 29 March 2019.

[16] Ministry of Defence. "Letter in Response to Natalie Samarasinghe, Executive Director of the United Nations Association UK, and Richard Moyes, Managing Director Article 36". UNA-UK, 8 December 2017.

[17] Ministry of Defence. Joint Doctrine Publication 0-30.2: Unmanned Aircraft Systems, August 2017, p.14.

[18] Ibid, p. 13.

[19] House of Lords Select Committee on Artificial Intelligence. AI in the UK: ready, willing and able?, 16 April 2018, p. 105.

[20] HM Government. Global Britain in a Competitive Age: the Integrated Review of Security, Defence, Development and Foreign Policy, p.39.

[21] Justin Haner and Denise Garcia. “The Artificial Intelligence Arms Race: Trends and World Leaders in Autonomous Weapons Development”. Global Policy (2019, 10:3), pp. 331-337.

[22] NATO. NATO 2030: United for a New Era, Analysis and Recommendations for the Reflection Group Appointed by the NATO Secretary General, December 2020, p. 29.

[23] HM Government. Global Britain in a Competitive Age: the Integrated Review of Security, Defence, Development and Foreign Policy, p.40.

[24] European Commission. Europe fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence, 21 April 2021.

[25] Department for Digital, Culture, Media and Sport (DCMS). Our Ten Tech Priorities, March 2021.