Professor Christian Enemark – Written Evidence (AIW0004)
Human-AI Interaction in the Operation of Weapon Systems:
An Ethical Perspective
Summary
This evidence addresses the definition of an autonomous weapon system (AWS), the ethical debate about the benefits and risks of AWS, the concept of ‘meaningful human control’ (MHC) over weapon systems, and the arrangement of human-machine interaction (HMI) in AI-enabled weapon systems. It recommends that the UK government: bring its definition of ‘autonomous system’ into closer alignment with the AWS definitions used by the US government and the ICRC; restrict the function of moral decision-making in the operation of weapon systems to humans; incorporate a specific concept of meaningful human control into its Defence AI policy; and consider what system-specific arrangements for ensuring meaningful human control are required in different kinds of AI-enabled weapon systems.
1. Introduction
1.1. Christian Enemark is Professor of International Relations at the University of Southampton. This evidence is submitted to the Committee because it draws upon the findings of publicly funded research. Professor Enemark is Principal Investigator (2018–2023) for the Emergent Ethics of Drone Violence (DRONETHICS) project, funded by the European Research Council (grant no. 771082) under the European Union’s Horizon 2020 research and innovation programme. He is also a Co-Investigator (2020–2024) for the Trustworthy Autonomous Systems Hub, funded by UK Research and Innovation (grant no. EP/V00784X/1).
2. Defining an autonomous weapon system
2.1. The Committee was appointed “to consider the use of artificial intelligence [AI] in weapon systems”, but its inquiry focuses on autonomous weapon systems (AWS). It is therefore worth noting, first, that a weapon system incorporating AI is not necessarily (nor usually) an AWS. In a technological system (such as a weapon system) comprising multiple functions, being AI-enabled could mean that AI performs one, some, or all of those functions. Any remaining functions would need to be performed by humans for the system to operate.
2.2. Incorporating AI into a system does not necessarily render the whole of that system AI-controlled. What it usually achieves is a new distribution of function-performance roles among the AI and human elements of the system. This assumes, though, a ‘narrow’ (functional) notion of AI.[1] For present purposes, given the current and foreseeable state of AI research, AI should be understood as a technology for replacing human control of a specific function with control by a narrowly intelligent machine (information-processor). The broader notion of a single AGI (artificial general intelligence) entity, exhibiting ‘human-level’ intelligence and controlling a complex combination of functions, is highly speculative and thus less relevant to the Committee’s inquiry.
2.3. Since 2014, government experts have met informally in Geneva, under the auspices of the 1980 Convention on Certain Conventional Weapons, to discuss questions related to emerging technologies in the area of ‘lethal autonomous weapons systems’. Progress in these discussions has often been frustrated by problems of terminology. Specifically, some states have insisted upon adopting either a broad or a narrow definition of ‘autonomy’ when considering what kinds of weapon systems would be covered by any future regime of international governance.
2.4. For example, since 2012 the US government has defined an ‘autonomous’ weapon system broadly as one that is able, after activation, to “select and engage targets without further intervention by a human operator”,[2] and the International Committee of the Red Cross (ICRC) adopted a similar definition in 2021.[3] The UK government, by contrast, has used a stricter definition: an “autonomous system is capable of understanding higher-level intent and direction … [and] of deciding a course of action … without depending on human oversight and control”.[4]
2.5. According to the broader definition, more systems (including some systems that already exist) would be covered, whereas the stricter (UK) definition would seem to cover only systems that are under the overall control of some highly sophisticated AI that might emerge in the future. The latter is presumably what the UK Ministry of Defence has in mind when it offers the assurance that “we do not operate, and do not plan to develop, any lethal autonomous weapons systems”.[5]
2.6. Unfortunately, in the Geneva-based discussions, the question of why a particular type of weapon system ought to be permitted or prohibited (because of the way that system incorporates AI technology) often gets overtaken by disagreement about what ‘autonomy’ in general ought to mean. Sometimes, too, the use of a weapon system is held to be justified because it is not ‘fully autonomous’ (in the manner of a ‘killer robot’),[6] yet this term misses the point that even an increase in the number of weapon system functions performed by AI (falling short of all of them) might reduce human control of that system to a morally unacceptable level.
Recommendation: to facilitate more productive diplomatic discussions, the UK government should bring its definition of ‘autonomous system’ into closer alignment with the AWS definitions used by the US government and the ICRC.
3. Benefits and risks of AWS: the ethical debate
3.1. In the ongoing ethical debate about deploying ‘lethal autonomous weapon systems’, a critical issue is whether any system can or should incorporate ethical decision-making by AI (functioning as an artificial moral agent). Proponents of AI ‘autonomy’ over whole weapon systems have generally claimed that this will produce ethically better outcomes than those expected when humans retain control. Opponents have usually insisted that weapons can only be sufficiently restrained by the exercise of human moral agency.
3.2. The debate is driven by different understandings of what it means to be ethical, and it has focused on the performance of the most morally significant function when it comes to violence: decision-making. Contributors to the debate who are agency-focused tend to argue that only humans can make moral decisions,[7] and contributors who are outcome-focused tend to be more open to the idea that AI involvement might result in better moral decisions overall.[8]
3.4. Against this, a different outcome-focused argument is: even if the introduction of AI-controlled violence demonstrably caused a reduction of unjust harms on an incident-by-incident basis, this benefit might yet be outweighed by the harm of an AI-driven increase in the overall risk of political violence occurring in the world. Such an objection to AWS reflects a fear that, if AI were to increase the tempo of attack-and-defence dynamics beyond humans’ capacity to comprehend it, violent conflicts would be more likely to escalate quickly, to the overall detriment of peace and people worldwide.
3.5. The ‘superior performance’ justification for replacing humans with AI could also be criticised as one that carries the potential for lowering the standard of acceptable ethical conduct in the exercise of violence on behalf of a state. AWS proponents tend to use as their moral benchmark the record of human frailty rather than the potential for human improvement. So, if the standard of violent human behaviour were to decline in the future, there would be room (according to proponents’ logic) to tolerate a deterioration in the ‘performance’ of AWS too.
3.7. In some predictions of how AI will ‘behave ethically’ in wartime, compliance with the ethical principle of discrimination (between legitimate and illegitimate targets) is discussed. And, sometimes, this is done in terms of AI sensing entities and phenomena in an environment of violence (using cameras, laser scanners and/or other tools). One idea is that a weapon system could incorporate ‘machine vision’ AI which is programmed, trained or enabled to learn not to target protected symbols (like the red cross) or small-statured people (like children). Or, AI could be tasked to distinguish, in video footage, an unarmed civilian from a motorcyclist carrying a machine gun. Or, AI could target ‘enemy’ weapons of a recognised type (like AK-47 rifles) which are presumed to be in the hands of hostile (legitimately targetable) individuals.
3.8. In each such instance, however, AI would arguably not be discriminating morally because it would not also be making contextual judgments, as a human would. Merely sensing a thing which has the size and shape of a rifle, for example, does not involve judging who (i.e. what kind of person) is carrying that rifle and why (i.e. for what kind of purpose). Such judgment is an inherently imprecise rather than physically determinate thing, and it is also essentially reflective in the sense that it brings to bear both learned ideas and lived experiences. The morality of a situation-in-context is thus only partially captured when dealing only with physical properties that stand as nonmoral proxies for moral requirements. Genuinely moral discrimination by AI alone is impossible, then, because the critical element of contextual judgment is missing.
3.9. When it comes to the ethical principle of proportionality, the purported exercise of moral agency by AI often relies on an assumption that ‘ethics’ is computable. The proportionality principle is thus approached, by some AWS advocates, as a kind of ‘moral arithmetic’. However, an enduring reality which is bypassed in any strictly numerical approach to proportionality is the incommensurability of different values to be counted and compared by the performer of a decision-making function. How, for example, is the relative value of a human life and a military objective to be measured and assessed?
3.10. On this anti-AWS argument, contextual judgment (about the instrumental and inherent worth of competing values) is, again, held to be an essential element of moral agency. And, as this element cannot be captured by AI (which is capable only of computation), it follows that ‘doing’ proportionality requires a human. Non-human (AI) adherence to this principle is impossible to achieve.
3.11. If the element of judgment were removed from our understanding of what it means to behave ‘ethically’ (such that the exercising of moral agency was reduced to abstraction and calculation), this would also deny the contestability of ethics. A narrow concept of what ‘ethics’ is might appeal to an engineer or user of AI, because it seems to make the challenge of being ‘ethical’ more tractable. However, it might not take long, then, for attempts by AI to ‘do the right thing’ to run into trouble.
3.12. Suppose, for example, that the rightness or wrongness of an AI decision were considered only to be a matter of following or violating a rule, respectively. Even if such ‘rules’ as discrimination and proportionality were somehow able to be encoded within the operating system of a weapon, a particular situation might yet prove that achieving good behaviour does not equate to rule-following. In some situations where competing values are at stake, it will appear to be the case that both following and violating a rule will lead to a bad outcome. This problem of ‘moral dilemmas’ often arises in war when lives are constantly on the line.
3.13. Alternatively, good behaviour might be difficult to equate to rule-following because a particular rule is itself unethical, either in principle or in certain kinds of circumstances. Here, the breaking of a rule can be considered a good action because it is the virtuous thing to do. Throughout military history, rule-breaking has occasionally taken the form of disobeying orders or even betraying a leader for the sake of a greater good, and it has sometimes required moral courage for a person to do this.[9]
3.14. Also, people have sometimes refrained from doing what a rule permits them to do (for example, killing an enemy soldier) and have instead exhibited the virtues of compassion and mercy toward a fellow human being. Such transcendence of a rules-based concept of ethics can sometimes be morally admirable in practice. And yet, as some opponents of AWS have argued, it is a form of goodness that is beyond the capacity of AI to achieve. For all that the incorporation of AI into weapon systems might seem to promise a moral gain (better humanitarian outcomes), there could also be a loss of moral opportunity (to be virtuous) when humans end up doing less.
Recommendation: in the operation of weapon systems, the UK government should restrict the function of moral decision-making to humans and avoid characterising AI as an artificial moral agent.
4. A safeguard: ensuring ‘meaningful human control’ over weapon systems
4.1. In the face of various ethical objections to the idea of AI agency in the operation of weapon systems, no government participating in the diplomatic debate about AWS has unambiguously advocated the full transfer of control from humans to AI. However, even if this never occurs, the use of a weapon system that incorporates AI might still be morally unacceptable. When human moral agency is at stake, it is not straightforwardly the case that non-autonomous weapon systems may be used and autonomous ones may not. Rather, much depends on the way in which a weapon system’s multiple functions are distributed among the AI and human elements of the system, and on the conditions under which each function is then performed. When approached in this way, moral permissibility becomes a matter of whether a weapon system is subject to ‘meaningful human control’ (MHC).
4.2. The MHC concept prompts consideration of whether, in the context of a given weapon system’s operation, human control over violence is genuine or illusory. The concept precludes the proposition that any form of human involvement is necessarily a sufficient safeguard against injustice. In considering whether human control of a system is ‘meaningful’, the central question is not whether humans or AI should exercise moral agency, but rather: how should AI technology be used in a way that assists (or avoids disrupting) the proper exercise of human moral agency?
4.3. As a guiding principle for thus limiting the transfer of control over violence from humans to AI, the content of MHC has yet to be precisely defined and agreed upon internationally. However, there are at least five indicators worth considering. Human control of a weapon system is ‘meaningful’ if: (a) the system’s critical functions are performed by humans; (b) humans can understand and intervene in the system’s operation in a timely fashion; (c) human trust in the system’s AI components is not excessive; (d) humans can fairly be held morally responsible for the system’s operation; and (e) the system’s design supports human control. Each of these indicators is discussed in turn below.
4.4. A weapon system’s ‘critical’ functions are generally understood to include selecting targets and engaging (firing at) targets. These are the functions that most directly enable violence to occur, and other functions are morally less significant. In the operation of an armed drone, for example, non-critical functions (take-off, landing, navigation, and in-flight manoeuvring) might be performed by AI technologies without raising any serious ethical concerns. The violence emanating from that drone could still be regarded as remaining under meaningful human control if its critical functions were performed by humans.
4.5. The meaningfulness of human control can also be understood in a temporal sense. Here, adherence to the MHC principle requires that relevant human function-performers within a system can understand and interact with its AI function-performers in a timely fashion. Human control of AI-assisted violence is temporally meaningful only if the opportunity exists to override AI and thus prevent or mitigate injustices resulting from malfunctions or mistakes. This opportunity might involve, for example, having time for a human to check whether the AI performer of an intelligence function has correctly tagged a target as ‘legitimate’. Or, it might involve the preservation of time for careful checking of an AI-generated estimate of the damage likely to be caused if the weapon system were used under certain circumstances.
4.6. Another indicator of MHC is non-excessive trust in the AI components of a weapon system. From an operational perspective, it is well understood that human function-performers within that system need somehow to establish a sense of trust in its AI components. However, an ethical concern can then arise from the potential for humans to over-trust AI. Sometimes, excessive trust in AI might result simply from human complacency and overconfidence. At other times, excessive trust might be largely attributable to information overload.
4.7. If human operators of weapon systems experience ‘automation bias’,[10] this can lead to an overestimation of the accuracy and reliability of information provided by AI. Alternatively, if humans become confused by situations that are fast-paced and complicated, they might feel that they have no choice but to trust in an AI-generated output that is possibly not trustworthy. Here, too, the meaningfulness of human control and the purchase of moral agency are undermined, because the bestowing of ‘trust’ in AI (to enable good outcomes) is forced and therefore false.
4.8. A capacity for moral responsibility, including the ability to be held accountable for wrongdoing, is another part of what it means to be genuinely in control. As an indicator of MHC, a commitment to human accountability for the operation of a weapon system serves as a check against the purported blaming and punishing of an AI technology that is inherently unblameable and unpunishable. Beyond this, however, more is required than a commitment simply to hold some human accountable. For human control to be meaningfully connected to moral responsibility, the holding of someone to account must also be fair. It would not satisfy the MHC principle if, for example, an instance of wrongdoing were blamed entirely upon one person who was clearly incapable of understanding and intervening in the performance of a certain function by AI. That person (a predesignated scapegoat) would not be genuinely responsible for the wrongdoing, and its true source (whatever it was) would remain unknown and unaddressed.
4.9. The strength of a commitment to meaningful human control is indicated, lastly, by the way in which a weapon system is designed. Here, part of the challenge is to ensure that human operators are not designed out of performing the system’s critical functions. Achieving MHC by design could involve building in limitations on AI behaviour,[11] including deliberately limiting the speed of AI information-processing so that human function-performers are less likely to be cognitively overwhelmed.
4.10. In addition, an important consideration is whether a particular function within a system is able to be performed only by a human. Where this is not the case (because that function is performable by a human or AI), adherence to the MHC principle is merely a matter of choosing how to use a weapon system. Choices are more easily changed than are the engineered features of a system. So, if control over a critical function were instantly transferrable to an AI function-performer (perhaps by a human finger flicking a switch), that design feature would be one that enabled the rapid fulfilment of any temptation to release a weapon system from human restraint. The human control over such a system would thus be less meaningful because it could so easily be given away.
4.11. A weapon system incorporating AI can have built-in restraints and still not be under meaningful human control, and this problem is well illustrated by one type of ‘loitering munition’ (manufactured by the Israel-based company IAI): the Harpy.[12] It is designed, once activated, to aerially survey a large area of land until it detects an enemy’s radar signal, whereupon it responds by crashing itself into the source of the signal and exploding. Thus, the performance of the Harpy’s critical functions (target selection and engagement) does not require the maintenance of a communication link to a remote operator. Even in the absence of such a continuous opportunity for human intervention, this loitering munition is temporally and spatially restrained by design. That is, its operation cannot exceed a set time limit and it cannot fly outside set geographical boundaries.
4.12. From an MHC perspective, however, the risk is that this weapon system will be insufficiently sensitive to morally significant changes in its operating environment that might arise even within a short timeframe. Although, by design, a Harpy’s only potential targets are non-human ones (radar installations), it is apparently not designed to deactivate itself (or to be able always to be deactivated remotely) if humans unexpectedly come close to its materiel target. Here, the moral problem with a lack of meaningful human control is essentially one of non-discrimination: the inability of this weapon system’s targeting function (as performed by AI) to spare any nearby human from harm, let alone a civilian human.
Recommendation: the UK government should incorporate a specific concept of meaningful human control into its Defence AI policy and commit to developing that concept in the light of future technological developments.
5. Human-Machine Interaction
5.1. In the 2022 Defence AI Strategy, the Ministry of Defence (MoD) places the concept of “Human-Machine Teaming” (rather than machine autonomy) at the heart of its approach to AI adoption.[13] In an accompanying policy statement, the MoD also expresses an intention to maintain human control over AI-enabled systems. This is reflected in three of its Ethical Principles for AI in Defence: Human-Centricity, Responsibility, and Understanding.[14] The explanation of the Responsibility principle, for example, includes the claim: “While the level of human control will vary according to the context and capabilities of each AI-enabled system, the ability to exercise human judgement over their outcomes is essential.”[15]
5.2. Varying the level of human control has the potential to be ethically permissible, because the achieving of MHC does not necessarily require the same approach to be applied to all types of weapon system. Rather, as the MoD’s statement implies, context is morally relevant: different systems operating in different situations could be made to satisfy the MHC principle in different ways. In some contexts, it would be morally more important to arrange human-machine interaction (HMI) in a way that is highly restrictive of AI function-performers within the weapon system. In other contexts, “milder forms of human control” would suffice.[16]
5.3. Sometimes, circumstances might give rise to an ethical requirement for HMI in a weapon system to be arranged on what could be called a ‘green-light’ basis. This arrangement would involve a built-in presumption against engaging a target that is selected or recommended by AI; the engagement function could only be performed if or when a human operator decided to go ahead. In different circumstances, only a ‘red-light’ HMI arrangement would be morally required for MHC purposes. There would then be a built-in presumption in favour of proceeding toward engaging an AI-selected target within a set timeframe, but this process could be overridden by a human operator deciding to stop it.
5.4. Green-light HMI would be suitable where, for example, there is a higher moral risk associated with excessive human trust in AI, and red-light HMI would be suitable where a lengthy opportunity for human intervention is morally less important. In other kinds of situations, however, it might be the case that neither mode of HMI would suffice because it is too risky to ever involve AI in performing any of a weapon system’s critical functions.
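The asymmetry between these two presumptions can be illustrated, purely as a hypothetical sketch rather than a description of any existing or proposed system, in simple decision logic. The function names, parameters, and the idea of a recorded human decision and a set engagement window are assumptions introduced for illustration only.

```python
# Illustrative sketch only: the 'green-light' and 'red-light' HMI presumptions,
# expressed as decision logic. All names and parameters are hypothetical and do
# not describe any actual weapon system or policy.

from typing import Optional


def green_light_engagement(human_decision: Optional[bool]) -> bool:
    """Green-light HMI: built-in presumption AGAINST engagement.

    An AI-selected or AI-recommended target may be engaged only if a human
    operator has positively decided to go ahead; the absence of a decision
    means no engagement.
    """
    return human_decision is True


def red_light_engagement(human_decision: Optional[bool],
                         seconds_remaining_in_window: float) -> bool:
    """Red-light HMI: built-in presumption IN FAVOUR of engagement.

    Engagement of an AI-selected target proceeds within a set timeframe
    unless a human operator intervenes to stop it; the absence of a
    decision means the process continues.
    """
    if human_decision is False:  # the human operator has overridden the process
        return False
    return seconds_remaining_in_window > 0.0


if __name__ == "__main__":
    # With no human input, the two arrangements default in opposite directions.
    print(green_light_engagement(human_decision=None))  # False: restraint by default
    print(red_light_engagement(human_decision=None,
                               seconds_remaining_in_window=12.0))  # True: engagement by default
```

The morally significant contrast is the default outcome: under a green-light arrangement, human silence results in restraint, whereas under a red-light arrangement it results in engagement.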
5.5. When it comes to determining which (if any) mode of HMI is morally acceptable in the exercise of AI-assisted violence, two examples of contextual factors worth considering are: (a) the type of target; and (b) the prevailing environmental conditions.
5.6. The meaningfulness (or otherwise) of an arrangement for human control can begin to be judged according to whether a weapon system incorporating AI is to be used against materiel targets (for example, incoming missiles) or against humans. Morally, it is a less serious matter to target inanimate objects than it is to target living people. So, to the extent that the MHC principle serves a humanitarian end, it is reasonable to distinguish anti-materiel targeting from anti-human targeting when determining what kind of HMI arrangement is ethically permissible.
5.7. Directing force against materiel can, of course, carry a risk of harm to untargeted humans nearby. For example, a drone’s release of onboard munitions to destroy an incoming air-to-air missile might result in those munitions striking a civilian aircraft further away. What matters more here, though, is that anti-materiel targeting by a specially-designed weapon system is ‘non-lethal’ in the sense that it is not intended (by the designer) to kill anyone. This important distinction between non-lethal and deadly violence has sometimes been overlooked in the broader debate about AWS, largely due to the efforts of some organisations to raise awareness and achieve prohibition of so-called “killer robots”.[17]
5.8. The ethical permissibility of a mode of HMI can also depend upon whether the use of force is to occur in an environment of armed conflict that is ‘cluttered’ or ‘low-clutter’. Or, permissibility could depend upon whether an AI-enabled weapon system is to be used in a non-war environment (for example, domestic law-enforcement) in which stricter ethical standards apply to uses of force on behalf of the state.
5.9. In the context of armed conflict, some operating environments can be described as ‘low-clutter’ in the sense that there are few (if any) civilians or friendly personnel present who could be endangered by a weapon system. Most aerial and maritime environments fit this description, although many land environments (such as cities) do not. Other environments are ‘cluttered’, so using a weapon system there against enemy targets (materiel or human) carries a greater risk of breaking the wrong things or killing the wrong people.
5.10. Fewer ethical problems would be likely to arise where a weapon system incorporating AI was used in a low-clutter environment, because here legitimate targets are more easily recognisable by analysing data from the system’s sensors. It could be morally more acceptable, then, to run the risk of having in place a mode of HMI in which less human control is exercisable over the weapon system’s critical functions. Even so, it would be important to ensure that that system stayed in a low-clutter environment, or that it could be swiftly removed if the environment suddenly became cluttered.
5.11. Whatever ‘meaningful’ human control means in a war scenario, it has a stricter meaning when the state uses a weapon system to violently enforce its domestic criminal law. Where a system is to be used for this purpose, the moral expectation for taking care with human lives is greater, and the scope for permitting AI-assisted violence is likely to be much narrower (if any exists at all). This is mainly because a law enforcement environment is heavily ‘cluttered’ by a strong, rights-based presumption against killing in peacetime. The essential objective of policing is not to defeat enemies but rather to protect the lives of all citizens. When it comes to the police use of force, adhering to the MHC principle should thus involve taking a stricter view of what weapon system functions should count as ‘critical’ and of when (if ever) AI may perform such functions, even under close supervision by humans. A mode of HMI that is ethically permissible in armed conflict might, then, be unjustifiable within a weapon system that is to be used for law enforcement.
Recommendation: the UK government should consider what system-specific arrangements for ensuring meaningful human control are required in different kinds of AI-enabled weapon systems.
6. Conclusion
6.1. It is possible to imagine a future in which humans suffer fewer unjust harms because some weapon systems are controlled entirely by AI. However, weighing against this imagined humanitarian gain, it is certain that much would be lost from the meaning of ‘being ethical’ if humans became less involved in decision-making at moments when other humans’ lives are immediately at stake.
6.2. Even if future AI technologies turned out to be better overall at following rules for restraining state violence, it would remain the case that only humans possess the valuable capacities to make judgments based on lived experience, to disobey rules when this is morally required, and to bear moral responsibility for wrongdoing. Thus, to preserve the beneficial effect of these capacities, the better approach to incorporating AI into weapon systems is to ensure that it leaves room for violence to remain under meaningful human control.
6.3. In shifting attention away from the vexed issue of what weapon systems count as ‘autonomous’, the MHC concept is useful because it allows distinctions to be drawn between the technical characteristics and ethical acceptability of different kinds of AI-assisted systems. Although there are certain minimal indicators of whether human control is ‘meaningful’, it is also the case that context matters when making moral judgments about how AI should assist violence. Standards of meaningfulness can and should differ according to, for example, the type of target to be struck and the kind of environment into which a weapon system is deployed.
6.4. When it comes to future incorporations of AI into weapon systems, it can be anticipated that some higher-risk circumstances would require a stricter (green-light) mode of HMI when selecting and engaging targets. In other circumstances, when a lengthy opportunity for human intervention is morally less important, a milder (red-light) mode of HMI would suffice in the performance of these critical functions. And, sometimes, when the risk of unjust harm is especially high, AI assistance in the application of force would never be morally permissible.
Professor Christian Enemark
March 2023
[1] See: Stuart Russell, Human Compatible: AI and the Problem of Control, London: Penguin, 2019: 46.
[2] DoD Directive 3000.09, ‘Autonomy in Weapon Systems’, US Department of Defense, 21 November 2012: 13.
[3] ICRC, ‘ICRC position on autonomous weapon systems’, International Committee of the Red Cross, 12 May 2021, <https://www.icrc.org/en/document/icrc-position-autonomous-weapon-systems>.
[4] Development, Concepts and Doctrine Centre, Human-Machine Teaming (Joint Concept Note 1/18). Swindon: UK Ministry of Defence, 2018: 60.
[5] Development, Concepts and Doctrine Centre, Human-Machine Teaming (Joint Concept Note 1/18). Swindon: UK Ministry of Defence, 2018: 58.
[6] See: Jean-Baptiste Jeangène Vilmer, ‘A French Opinion on the Ethics of Autonomous Weapons’, War on the Rocks, 2 June 2021, <https://warontherocks.com/2021/06/the-french-defense-ethics-committees-opinion-on-autonomous-weapons/>.
[7] See: European Parliament, Resolution of 20 January 2021 on artificial intelligence: questions of interpretation and application of international law in so far as the EU is affected in the areas of civil and military uses and of state authority outside the scope of criminal justice (2020/2013(INI)), para. 25.
[8] See: Development, Concepts and Doctrine Centre, Human-Machine Teaming (Joint Concept Note 1/18). Swindon: UK Ministry of Defence, 2018: 50.
[9] See: Nigel Jones, Countdown to Valkyrie: The July Plot to Assassinate Hitler, Barnsley: Frontline Books, 2009.
[10] See: Raja Parasuraman and Victor Riley, ‘Humans and Automation: Use, Misuse, Disuse, Abuse’, Human Factors 39 (2) (1997): 230–53.
[11] Tony Gillespie, ‘Good Practice for the Development of Autonomous Weapons’, The RUSI Journal, 165 (5–6) (2020): 58–67, at 62.
[12] ‘HARPY: Autonomous Weapon for All Weather’, IAI, <https://www.iai.co.il/p/harpy>.
[13] Ministry of Defence, Defence Artificial Intelligence Strategy, London: UK Government, 2022: 7.
[14] Ministry of Defence, Ambitious, Safe, Responsible: Our Approach to the Delivery of AI-Enabled Capability in Defence. London: UK Government, 2022: 9–10.
[15] Ministry of Defence, Ambitious, Safe, Responsible: Our Approach to the Delivery of AI-Enabled Capability in Defence. London: UK Government, 2022: 10.
[16] Daniele Amoroso and Guglielmo Tamburrini, ‘Autonomous Weapons Systems and Meaningful Human Control: Ethical and Legal Issues’, Current Robotics Reports 1 (2020): 187–94, at 192.
[17] See, for example: Campaign to Stop Killer Robots, <https://www.stopkillerrobots.org>.