
UK and Definitions of Autonomous Weapons Systems


A short report for the House of Lords select committee on Artificial Intelligence
Noel Sharkey

The UK Ministry of Defence definitions of Autonomous Weapons Systems (AWS), also known as Lethal Autonomous Weapons Systems (LAWS), and its interpretation of Automated Weapons Systems are out of step with how European and US allies and others describe them at United Nations meetings such as the CCW. The definitions are also at odds with those of the engineering community.
Part 1 of this brief report outlines the definitional differences between the UK and its
allies on Autonomous Weapons Systems. Part 2 focuses on how MoD documents draw the dividing line between Automated and Autonomous Weapons Systems differently from others. This creates a definitional conflation that clouds political judgments and
impacts negatively on the UK’s ability to develop coherent policies on autonomy in
weapons that are consistent with and relevant to the international community of
nations. Part 3 considers the need for a definition of Autonomous Weapons Systems that
includes the type of human control required for compliance with International Law.
There is an opportunity here for the UK to ‘get ahead of the game’ and show
international leadership on the issue.
1. Autonomous Weapons Systems1
According to the UK Ministry of Defence, two of the requirements for AWS status are
that they must be:
(i) “self-aware and their response to inputs indistinguishable from, or even superior to, that of a manned aircraft. As such, they must be capable of achieving the same level of situational understanding as a human.”2
(ii) “capable of understanding higher level intent and direction. From this understanding and its perception of its environment, such a system is able to take appropriate action to bring about a desired state. It is capable of deciding a course of action, from a number of alternatives.”3
These machines are unlikely to exist in the near future, if ever. As the MoD correctly points out, “machines with the ability to understand higher-level intent, being capable of deciding a course of action without depending on human oversight and control currently do not exist and are unlikely in the near future.”
Such ‘science fiction’ requirements can misdirect the UK into inferences such as, ‘since
they are unlikely to exist in the near future, we do not need to consider their impact on
the nature of armed conflict or consider prohibiting or regulating them.’ Others define AWS in a realistic way that is consistent with new developments in weaponry in hi-tech nations, including the UK.
In the field of robotics the terms autonomy and autonomous robot are used with specific
meanings that are only vaguely related to the political and philosophical definitions of
autonomy. They were first used to indicate that the robot had an onboard computer
(when computers got small enough). An autonomous robot is a mobile robot that can perform tasks in a (usually) unstructured environment without human supervision or guidance. Sensors on the robot send information to a computer or controller that operates motors to perform the tasks. A good example is the Roomba vacuum-cleaning robot. In contrast, a semi-autonomous robot can perform some of its tasks without human intervention. This differs from an automatic robot, which carries out a set of preprogrammed and predefined actions in a fixed environment, e.g. painting a car.
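To make the robotics distinction concrete, here is a minimal illustrative sketch (not drawn from any real system; all class and method names are hypothetical) contrasting the fixed action sequence of an automatic robot with the sensor-driven control loop of an autonomous robot:

```python
# Minimal illustrative sketch: the control-flow difference between an
# automatic robot and an autonomous robot. All names are hypothetical.

def run_automatic_robot(arm):
    # Automatic: a fixed, preprogrammed and predefined sequence of actions
    # in a fixed environment (e.g. painting a car). The same steps run
    # every time, so the output is predictable.
    for action in ["move_to_panel", "spray_panel", "retract"]:
        arm.execute(action)

def run_autonomous_robot(robot):
    # Autonomous: a closed sense-decide-act loop. Behaviour is driven by
    # sensor readings from a (usually) unstructured environment, with no
    # human supervision or guidance while the task is performed.
    while not robot.task_complete():
        readings = robot.sensors.read()       # sensors feed the controller
        command = robot.controller.decide(readings)
        robot.motors.apply(command)           # the controller drives the motors
```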
The key component of an autonomous weapons system is that it has autonomy in the
critical functions of target selection and the application of violent force. In other words,
a weapons system that can select targets and apply force without human supervision at
the time of attack. This is how AWS are discussed at the UN. Below are significant
extracts from the definitions of European state actors, the US and the International
Committee of the Red Cross that evidence this.
US “A weapon system that, once activated, can select and engage targets without further
intervention by a human operator.”4
International Committee of the Red Cross (ICRC) “Any weapon system with
autonomy in its critical functions. That is, a weapon system that can select (i.e. search for
or detect, identify, track, select) and attack (i.e. use force against, neutralize, damage or
destroy) targets without human intervention.” 5
France “[LAWS imply] a total absence of human supervision, meaning there is
absolutely no link (communication or control) with the military chain of command …
targeting and firing a lethal effector (bullet, missile, bomb, etc.) without any kind of
human intervention or validation.”6
Norway “weapons that would search for, identify and attack targets, including human
beings, using lethal force without any human operator intervening.”7
Austria “[AWS are] weapons that in contrast to traditional inert arms, are capable of
functioning with a lesser degree of human manipulation and control, or none at all.”8
Italy “[Lethal AWS are systems that make] autonomous decisions based on their own
learning and rules, and that can adapt to changing environments independently of any
pre-programming” and they could “select targets and decide when to use force, [and]
would be entirely beyond human control.”9
Switzerland “[AWS are] weapons systems that are capable of carrying out tasks
governed by IHL in partial or full replacement of a human in the use of force, notably in
the targeting cycle.”10
The Netherlands “a weapon that, without human intervention, selects and attacks
targets matching certain predefined characteristics, following a human decision to
deploy the weapon on the understanding that an attack, once launched, cannot be
stopped by human intervention.”11
The Holy See “An autonomous weapon system is a weapon system capable of
identifying, selecting and triggering action on a target without human supervision.”12

2. Automated v Autonomous Weapons Systems
An important issue in UN discussions about Autonomous Weapons Systems is that a number of weapons are currently being used for high-speed defence, such as shooting down missiles and mortar shells and countering swarm attacks on ships. Examples include C-RAM, Phalanx, NBS Mantis and Iron Dome. These systems complete their detection, evaluation and response process within a matter of seconds, making it extremely difficult for human operators to exercise meaningful supervisory control once they have been activated, other than deciding when to switch them off.

There is understandable concern that new regulations or prohibitions of autonomous weapons may impact on the use of such defensive weapons. Thus it is felt that there is a definitional need to separate these defensive systems from autonomy in weapons systems.
Some suggest calling the defensive weapons systems automatic or automated rather
than autonomous. For example, the International Committee of the Red Cross proposes
that, “An automated weapon or weapons system is one that is able to function in a self-
contained and independent manner although its employment may initially be deployed
or directed by a human operator.”13 The US Department of Defense suggests that “…the
automatic system is not able to initially define the path according to some given goal or
to choose the goal that is dictating its path.”14 There may also be other ways to make this
separation between defensive and autonomous weapons systems (See Appendix 1 on
SARMO systems).
UK definitions conflate autonomous and automated weapons systems: The UK
definition of automated weapons is similar to those of others: “… an automated or
automatic system is one that, in response to inputs from one or more sensors, is
programmed to logically follow a predefined set of rules in order to provide an outcome.
Knowing the set of rules under which it is operating means that its output is
predictable.”15 However, the MoD pushes autonomous weapons systems into the category of automated weapons systems, sometimes referring to them as advanced automation or
highly automated. This conflates autonomous and automated weapons systems in
contrast to the definitions of other nation states.
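To illustrate the definitional point with a minimal sketch (the rules and conditions below are invented, not taken from any MoD document), an automated system in this sense is predictable because its entire input-to-outcome mapping can be enumerated before deployment; as footnote 15 cautions, that predictability is lost once behaviour depends on sensor data from an open-ended environment:

```python
# Sketch of a 'predictable' automated system in the sense of the UK
# definition: a predefined rule set mapping sensor conditions to outcomes.
# The rules and conditions are invented for illustration only.

RULES = {
    ("radar_contact", "inbound"):  "raise_alert",
    ("radar_contact", "outbound"): "log_only",
    ("no_contact", None):          "idle",
}

def automated_response(condition):
    # Follows the predefined rules; unknown conditions fall back to 'idle'.
    return RULES.get(condition, "idle")

# Because the rule set is known, every behaviour can be listed in advance:
for condition, outcome in RULES.items():
    print(condition, "->", outcome)
```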
This definitional conflation leads to UK thinking about Lethal Autonomous Weapons Systems (LAWS) that is out of step with that of its allies: “the UK believes that LAWS do not, and may never, exist.” Yet according to the definitions of others, the US, China, Israel and
Russia are the front-runners in developing and testing prototype autonomous tanks,
fighter jets, submarines, ships and swarm technology. Swarm technology creates force multiplication by having large numbers of attack vehicles operate autonomously together.
The UK’s stance that LAWS may never exist is merely definitional. By setting an unrealistic requirement in its definition of LAWS, it places them in the category of automated weapons. This hides, whether inadvertently or deliberately, the UK’s views and plans for autonomous weapons. This shows up most in the MoD document Future Operating Environment 2035 (FOE35),16 where the term automated weapons systems is used to refer to what others call Autonomous Weapons Systems or Lethal Autonomous Weapons Systems.
Thus, whilst the UK continues to say that it will never develop autonomous weapons systems, and so does not see the need to support new regulations, a moratorium or a prohibition on them, it uses veiled comments in FOE35 to say that “our immediate
priorities should be: … investment in emerging technologies, especially automated
systems.” (p30) and “Defence will need to make exploiting emerging technology and
capability in automated systems a priority, as well as countering our opponents’
systems.” (p32)
It is clear that the UK, as a morally upstanding nation, has concerns about the ethical and legal use of what it calls advanced automated systems (read Autonomous Weapons Systems): “Our legal and societal norms will continue to apply restraint to the conduct of military operations, particularly violent conflict, out to 2035. This will be particularly true where this applies to new technologies such as automated systems and novel weapons.”
There are UK discussions that could directly engage with some of the arguments against, and problems with, LAWS at the UN. But the UK does not articulate these concerns there because they all fall under its definition of automated weapons systems. Here are three quotes from FOE35 demonstrating that concerns about the impact of LAWS on global security are being hidden under the term ‘automated’.
(i) Use of AWS by rogue nations and non-state actors: “Our potential adversaries may
not be so constrained, and may operate without restraint.” (p44) and “in the virtual
environment, swarm attacks could be planned through crowd-sourcing before being
executed through multiple access points in multiple countries, making deterrence and
defence against them almost impossible. These could be orchestrated by terrorists,”
(p41)
(ii) Proliferation of AWS: “Automated systems, including those that are armed, will
proliferate over the next 20 years. Advances in technology will almost certainly enable
swarm attacks, allowing numerous devices to act in concert. This may serve to counter
the advantage of high-end systems.” (p27) and “As they become cheaper and easier to
produce, technologically advanced systems are likely to proliferate, with developing
states and non-state actors having growing access to capable systems.” (p16)
(iii) Lowering the threshold for resorting to violent conflict: “…change the threshold for the use of force. Fewer casualties may lower political risk and any public reticence for a military response.”
The UK has rendered itself unable to make these arguments about AWS at the UN
because it has dismissed them out of hand by giving AWS a science fiction definition and
hiding what everyone else calls AWS or LAWS under the cloak of automated weapons
systems.
3. Human Supervisory Control of Weapons Systems
One necessary requirement for defining a weapons system as autonomous is common to
all: autonomous weapons systems are weapons that operate without human control.
The hub of the debate on autonomous weapons systems concerns what is meant by
human control. While all nation states say that their weapons will be under human control, they do not specify what this means.
The Parliamentary Under-Secretary of State, Lord Astor of Hever, stated that: ‘[T]he MoD
currently has no intention of developing systems that operate without human
intervention … let us be absolutely clear that the operation of weapons systems will
always be under human control’.17 What has not been made absolutely clear in the
United Kingdom, however, is exactly what type of human control will be employed.
To say that there is a human in the control loop does not clarify the degree of human
involvement. It could simply mean a human programming a weapons system for a
mission or pressing a button to activate it, or it could (hopefully) mean exercising full
human judgment about the legitimacy of a target before initiating an attack. The UK NGO
Article 36 coined the term meaningful human control18 to facilitate discussions about the
type of human control for every attack that is acceptable under international law.
The UK has a wonderful opportunity here to show international leadership by laying out
in detail what human control of weapons systems means. Clues to the type of process
required for this analysis, and to the problems involved, are provided in Appendix 2. This is not intended to be treated as dogma. It has been derived from the large scientific literature on human supervisory control of machinery.

APPENDIX 1: A draft definition of defensive weapons
There are currently weapons systems in use that operate automatically once activated.
Such SARMO (Sense and React to Military Objects)19 weapon systems automatically intercept high-speed inanimate objects such as incoming missiles, artillery shells and mortar grenades. Examples include C-RAM, Phalanx, NBS Mantis and Iron Dome. These
systems complete their detection, evaluation and response process within a matter of
seconds, making it extremely difficult for human operators to exercise meaningful supervisory control once they have been activated, other than deciding when to switch them off.
There are a number of common features of SARMO weapons20 that are necessary, although not sufficient, to keep them within legal bounds (a schematic sketch follows the list):
• fully pre-programmed to automatically perform a small set of defined actions
repeatedly and independently of external influence or control

• used in highly structured and predictable environments that are relatively
uncluttered with very low risk of civilian harm

• fixed base: although these are used on manned naval vessels, they are fixed base in the same sense as a robot arm on a ship would be.

• switched on after detection of a specific threat

• unable to dynamically initiate a new targeting goal or change mode of operation
once activated

• have constant vigilant human evaluation and monitoring for rapid shutdown in
cases of targeting errors, change of situation or change in status of targets

• the output and behaviour of the system are predictable

• only used defensively against direct attacks by military objects
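
The sketch below is purely illustrative (the names, object classes and control flow are assumptions, not a description of any fielded system); it shows how the features above combine into a tightly constrained engagement loop in which switching the system on and off is the only human decision:

```python
# Purely illustrative sketch of the SARMO features listed above.
# All names, classes and methods are hypothetical assumptions.

INTERCEPT_CLASSES = {"missile", "artillery_shell", "mortar_grenade"}

def sarmo_engagement_loop(radar, interceptor, operator):
    # Switched on only after a specific threat has been detected, and kept
    # under constant, vigilant human monitoring for rapid shutdown.
    while operator.keeps_system_active():
        track = radar.next_track()
        if track is None:
            continue
        # Fully pre-programmed response: engage only inbound, inanimate
        # military objects. The system cannot initiate a new targeting
        # goal or change its mode of operation once activated.
        if track.object_class in INTERCEPT_CLASSES and track.is_inbound:
            interceptor.engage(track)
    # Operator shutdown: the one human decision in the loop.
    interceptor.make_safe()
```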

The US Department of Defense calls these human supervised autonomous weapons:
“Human-supervised autonomous weapon systems may be used to select and engage
targets, with the exception of selecting humans as targets, for local defense to intercept
attempted time-critical or saturation attacks for:
(a) Static defense of manned installations.
(b) Onboard defense of manned platforms.”21
It is the human decision of when to use the weapon that is key to the legality of SARMO
weapons systems. It is essential for making such decisions that precautionary measures have been taken concerning the target’s significance: its necessity and appropriateness, and the likely incidental and possible accidental effects of the attack.22 It is also essential that vigilance is maintained during operation of the weapons system and that there is a means for rapidly deactivating the weapon if it becomes apparent that the objective is
not a military one or that the attack may be expected to cause incidental loss of civilian
life.23

APPENDIX 2: Ideas for the analysis of human control of weapons
In order to ensure the legality of human control of weapons, any interface between operators and weapons must be designed with an understanding of human psychological processes. This is required to guarantee that precautionary measures are
taken about the significance of a target, its necessity and appropriateness, and likely
incidental and possible accidental effects of the attack.
To see how this works, we can look at fundamental types of control as shown in Table
1.24

Table 1: A classification for levels of human supervisory control of weapons

1. human deliberates about a target before initiating any attack
2. program provides a list of targets and human chooses which to attack
3. program selects target and human must approve before attack
4. program selects target and human has restricted time to veto
5. program selects target and initiates attack without human involvement
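
One way to see the practical difference between the levels is to ask what happens by default when the human does nothing. The sketch below is an illustrative encoding of Table 1 (the function and its arguments are invented for exposition, not part of the original analysis): at Levels 1 to 3 an attack needs a positive human decision, at Level 4 it proceeds unless a veto arrives in time, and at Level 5 the human plays no part at all.

```python
# Illustrative encoding of Table 1: where the human decision sits at each
# level of supervisory control. Invented for exposition only.

from enum import Enum

class ControlLevel(Enum):
    HUMAN_DELIBERATES = 1  # human deliberates, then initiates any attack
    HUMAN_CHOOSES = 2      # program lists targets, human chooses which to attack
    HUMAN_APPROVES = 3     # program selects a target, human must approve
    TIMED_VETO = 4         # program selects, human has restricted time to veto
    NO_HUMAN = 5           # program selects and attacks without human involvement

def attack_proceeds(level, human_said_yes=False, veto_received=False):
    """Would an attack go ahead, given what the human did (or did not) do?"""
    if level in (ControlLevel.HUMAN_DELIBERATES, ControlLevel.HUMAN_CHOOSES,
                 ControlLevel.HUMAN_APPROVES):
        return human_said_yes      # no attack without a positive human decision
    if level is ControlLevel.TIMED_VETO:
        return not veto_received   # default is attack unless vetoed in time
    return True                    # Level 5: no human involvement at all
```

Note that Level 4 is the only level at which human inaction results in an attack; this default-to-attack property underlies the criticism of Level 4 below.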

Level 1 control is the ideal. A human commander (or operator) must have full contextual
and situational awareness of the target area at the time of a specific attack and be able to
perceive and react to any change or unanticipated situations that may have arisen since
planning the attack. There must be active cognitive participation in the attack and
sufficient time for deliberation on the nature of the target, its significance in terms of the
necessity and appropriateness of the attack, and likely incidental and possible accidental
effects. There must also be a means for the rapid suspension or abortion of the attack.
Level 2 control could be acceptable if shown to meet the requirement of deliberating on
the potential targets. The human operator or commander should be in a position to
assess whether an attack is necessary and appropriate, whether all (or indeed any) of
the suggested alternatives are permissible objects of attack, and to select the target
which may be expected to cause the least civilian harm. This requires deliberative
reasoning. Without sufficient time or in a distracting environment the illegitimacy of a
target could be overlooked.
A rank ordered list of targets is particularly problematic as there would be a tendency to
accept the top ranked target unless sufficient time and attentional space is given for
deliberative reasoning.
Level 3 is unacceptable. This type of control has been experimentally shown to create
what is known as automation bias in which human operators come to trust computer
generated solutions as correct and disregard, or do not search for, contradictory information. Cummings examined automation bias in a study of an interface designed for the supervision and resource allocation of in-flight GPS-guided Tomahawk missiles.25 She found that when the computer recommendations were wrong, operators using Level 3 control had significantly decreased accuracy.

Level 4 is unacceptable because it does not promote target identification and a short
time to veto would reinforce automation bias and leave no room for doubt or
deliberation. As the attack will take place unless a human intervenes, this undermines
well-established presumptions under international humanitarian law that promote
civilian protection.
The time pressure will result in operators neglecting ambiguity and suppressing doubt,
inferring and inventing causes and intentions, being biased to believe and confirm,
focusing on existing evidence and ignoring absent but needed evidence. An example of
the errors caused by a fast veto came in the 2003 Iraq war, when the U.S. Army’s Patriot missile system engaged in fratricide, shooting down a British Tornado and an American F/A-18 and killing three aircrew.26
Level 5 control means control by the computer alone, and it therefore describes an autonomous weapons system.
It should be clear from the above that research is urgently needed to ensure that human supervisory interfaces provide the level of human control needed to comply with the laws of war in all circumstances.

Footnotes

1 Thanks to Daan Kayser from PAX Netherlands for assistance in compiling the European definitions. See also the PAX report Keeping Control, October 2017, www.paxforpeace.nl
2 UK Ministry of Defence, Development, Concepts and Doctrine Centre (2011) The UK Approach to Unmanned Aircraft Systems, Joint Doctrine Note, 30 March 2011, https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/33711/20110505J
3 Ministry of Defence, ‘Joint Doctrine Publication 0-30.2: unmanned aircraft systems’, September 2017, https://www.gov.uk/government/publications/unmanned-aircraft-systems-jd-0-302
4 US Department of Defense (DoD), Autonomy in Weapon Systems, Directive 3000.09, 21 November 2012, and its amended version (still Directive 3000.09), 2017
5 Views of the International Committee of the Red Cross on autonomous weapon systems, CCW Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS), 11–15 April 2016, Geneva
6 Working Paper of France, ‘Characterization of a LAWS’, CCW informal meeting of experts on LAWS, Geneva, April 2016
7 Statement of Norway, CCW informal meeting of experts on LAWS, Geneva, 13 April 2016, http://www.reachingcriticalwill.org/images/documents/Disarmament-fora/ccw/2016/meeting-expertslaws/statements/13April_Norway.pdf
8 Statement of Austria, CCW informal meeting of experts on LAWS, Geneva, 13 May 2014, https://unoda-web.s3-accelerate.amazonaws.com/wp-content/uploads/assets/media/22D8D3D0ACB39CA8C1257CD70044524B/file/Austria%2BMX%2BLAWS.pdf
9 Statement of Italy, CCW informal meeting of experts on LAWS, Geneva, 12 April 2016
10 Informal Working Paper submitted by Switzerland, CCW informal meeting of experts on LAWS, 30 March 2016, http://www.reachingcriticalwill.org/images/documents/Disarmament-fora/ccw/2016/meeting-expertslaws/documents/Switzerland-compliance.pdf
11 AIV/CAVV, ‘Autonomous weapon systems: the need for meaningful control’, October 2015. A synopsis of the report can be found at http://aiv-advies.nl/8gr#advice-summary and the full report at http://aivadvies.nl/download/606cb3b1-a800-4f8a-936f-af61ac991dd0.pdf
12 Working Paper submitted by the Holy See, ‘Elements Supporting the Prohibition of Lethal Autonomous Weapons Systems’, April 2016, http://www.reachingcriticalwill.org/images/documents/Disarmamentfora/ccw/2016/meeting-experts-laws/documents/HolySee-prohibition-laws.pdf
13 ICRC (2011) International Humanitarian Law and the Challenges of Contemporary Armed Conflicts, p. 39
14 US Department of Defense (2013) Unmanned Systems Integrated Roadmap, FY2013–2038, p. 66
15 It is important to note here that when a mobile device is being controlled by information detected by its sensors, its exact behaviour cannot be predicted in an open-ended or unstructured environment.
16 UK Ministry of Defence, Strategic Trends Programme: Future Operating Environment 2035, 14 December 2015, https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/646821/20151203-FOE_35_final_v29_web.pdf
17 26 March 2013. Cf. http://bit.ly/1lZMQyW
18 Article 36, ‘Autonomous weapons, meaningful human control and the CCW’, 21 May 2014, http://www.article36.org/weapons-review/autonomous-weapons-meaningful-human-control-and-the-ccw/
19 The term SARMO weapons first appeared in Sharkey, N., ‘Towards a principle for the human supervisory control of robot weapons’, Politica & Società, 2 (2014), 305–24.
20 Fire-and-forget weapons such as radiation-detecting loitering munitions and heat-seeking missiles are not included here and require a separate discussion.
21 US Department of Defense (2012), op. cit., p. 7
22 As specified in Article 57 of Additional Protocol I to the Geneva Conventions, 1977, http://bitly.com/1hJF4GC (last accessed 5 March 2014)
23 See Article 57 2(iii)(b), op. cit., for a full account.
24 For a more in-depth understanding of these analyses see Sharkey, N.E. (2016) ‘Staying in the Loop: human supervisory control of weapons’, in Nehal Bhuta and Hin-Yan Liu (eds), Autonomous Weapons Systems and the Law, Cambridge University Press.
25 Cummings, M.L. (2006) ‘Automation and Accountability in Decision Support System Interface Design’, Journal of Technology Studies, 32, 23–31.
26 Cummings (2006), ibid.
