Autonomous Weapons and the Individualisation of War: The CCW Moves Ahead


This past week member states of the Convention on Certain Conventional Weapons (CCW) met at the United Nations in Geneva to discuss various disarmament issues and to prepare for the Convention's Fifth Review Conference in December 2016.  Also on the agenda was whether member states ought to continue debating a preemptive ban, under an additional CCW Protocol, on lethal autonomous weapons systems: weapons systems that can detect, select and attack a target without human intervention.  In the spring of both 2014 and 2015, the CCW held week-long meetings of invited experts in informal (i.e. off-the-record) settings, where the experts discussed the technical, operational, legal and moral considerations of lethal autonomous weapons.  This past week, the member states voted to hold a further week-long informal meeting of experts in April 2016.  Having participated as an invited expert in 2015, I am happy to see the discussion going forward.

The issue of lethal autonomous weapons is distinct from the current debate over the use of “drones,” or remotely piloted aircraft (RPA), but the two are linked in some respects.  Two worries present themselves, especially when viewed from the perspective of the individualization of war.  The first is whether, and to what extent, delegating the decision to use lethal force to a machine is legally, morally or prudentially permissible.  Much is debated on this front, and I cannot do it justice here.  The second is whether such weapons, if utilized against human targets and not solely against materiel, would have the capacity to attack only those liable to harm.

Drones are currently piloted by a human operator, and the decision to release weapons is made by a human.  Moreover, a myriad of humans in military and intelligence circles undertake the targeting process.  This may not, and more likely will not, be the case with autonomous weapons systems.  These systems will not be remotely piloted; they will be capable of navigating and moving on their own once deployed, or, even if stationary, will still be capable of targeting and firing without human intervention.  Moreover, how the detection and selection of a particular target takes place is still open to debate: whether, and to what extent, prior loading of preselected targets counts as a “human” intervening in the targeting process, thereby making the weapon “semi-autonomous,” or whether the weapon's own selection of a target object once deployed makes it “fully autonomous.”

For instance, take the Israeli Harpy.  This is a loitering munition (drone) that is capable of navigating to a particular area, loitering over that area for up to several hours, and utilizing its seeker to find hostile radar emitters.  Once a radar emitter is located, the munition inverts into a vertical dive and detonates its warhead just above the signal.  If one were to claim that the Harpy is semi-autonomous because it can only detect and select particular radar signals and compare those signals to a preprogrammed library of permissible targets (created by humans), then this munition would not fall under a preemptive ban.  However, if one were to claim that any commander who fields a Harpy over a target area does not actually know exactly which target the munition will strike, because the munition makes the “decision,” then one might want to consider this an autonomous weapon.  In essence, the commander can have no a priori knowledge of where a potential radar emitter will be, and thus she cannot know whether striking it is permissible.  As most radar emitters are mobile, an adversary may place them near protected areas or persons, such as hospitals, cultural heritage sites or civilian populated areas.  However, the commander has selected the target area, and so one might claim that this is sufficient for “selecting.”  Yet as we can see, such a system is quite unlike a Predator RPA flown to a particular area, where human operators have persistent surveillance of the area and control over each particular strike.
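To make the point concrete, the toy sketch below shows the kind of library-matching logic at issue. It is purely illustrative: every name, signal parameter and threshold is hypothetical, not a description of the actual Harpy software. The humans compile the library before launch, but only the munition, once loitering, decides whether any particular emission it encounters matches an entry.

```python
# Illustrative sketch only: a toy model of how a Harpy-like munition might
# compare detected radar emissions against a preloaded library of permissible
# targets. All names, parameters and thresholds are invented.

from dataclasses import dataclass

@dataclass
class EmitterSignature:
    name: str            # human-assigned label in the preloaded library
    frequency_mhz: float
    pulse_width_us: float

# Library of permissible targets, compiled by humans before launch.
TARGET_LIBRARY = [
    EmitterSignature("hostile_radar_type_a", 3000.0, 1.2),
    EmitterSignature("hostile_radar_type_b", 5600.0, 0.8),
]

def matches_library(freq_mhz: float, pulse_us: float, tolerance: float = 0.05) -> bool:
    """Return True if a detected emission is close enough to a library entry."""
    for sig in TARGET_LIBRARY:
        if (abs(freq_mhz - sig.frequency_mhz) / sig.frequency_mhz <= tolerance
                and abs(pulse_us - sig.pulse_width_us) / sig.pulse_width_us <= tolerance):
            return True
    return False

# The munition, not the commander, evaluates each detection once it is loitering.
# The commander chose the area; which emitter is actually struck depends on what
# the seeker happens to find and on where the adversary has moved it.
if matches_library(freq_mhz=3010.0, pulse_us=1.21):
    print("Detection matches library: munition would commit to a dive on this emitter.")
```

Even in this crude form, the sketch shows why the "semi-autonomous" label turns on where one locates the human: at the compilation of the library, or at the moment a specific emission is matched against it.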

Where questions of individual liability to harm come into play, from both an RPA and an autonomous weapons standpoint, is in the targeting process.  For RPA strikes against individuals, we consistently ask about the type of intelligence that led humans to judge that this particular individual was liable to lethal harm.  The same type of question will arise for autonomous weapons.  With autonomous weapons, however, we will need to rely on a variety of other technologies to identify potential or actual threats that justify lethal harm.

Leading up to the CCW meetings last week, I participated in two workshops that began to examine some of these issues.  The first was a workshop convened at the University of Zurich, where participants discussed the ethical design of present and future systems: that is, how we might “design in” ethics to ensure permissible outcomes.  The second was a workshop sponsored by the United Nations Institute for Disarmament Research (UNIDIR), which sought to examine the interconnectedness of increasingly autonomous technologies, particularly cyber, artificial intelligence and autonomous weapons.

At the Zurich workshop I presented my view that future autonomous weapons systems would require a variety of learning artificial intelligence capacities to comply with existing international humanitarian law.  That is, to identify permissible targets in a battlespace, a weapon system would have to identify friendly, neutral and hostile forces, as well as nonliable or protected persons.  Presently, militaries are good at identifying friendly and neutral dismounted soldiers, and quite good at identifying friendly, neutral and hostile platforms (such as planes, tanks, ships and other vessels).  However, present-day militaries suffer from an inability of platforms to easily identify dismounted soldiers (or other humans).  This is why such targeting most often requires visual confirmation from a human on the ground.
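One rough way to picture that capability gap, purely as an illustration with made-up labels, confidences and thresholds rather than any fielded combat-identification system, is a decision gate that only "clears" a contact when classification confidence is high and otherwise falls back on the human confirmation just described:

```python
# Illustrative sketch only: a toy decision gate expressing the capability gap
# described above. The classifier, labels and threshold are hypothetical.

from typing import Tuple

def classify_contact(sensor_track: dict) -> Tuple[str, float]:
    """Pretend classifier returning (label, confidence) for a sensed contact.
    Platforms (planes, tanks, ships) are far easier to classify than people."""
    if sensor_track.get("kind") == "platform":
        return ("hostile_platform", 0.95)   # platforms: relatively reliable
    return ("dismounted_person", 0.55)      # humans on foot: low confidence

def engagement_decision(sensor_track: dict, confidence_required: float = 0.90) -> str:
    label, confidence = classify_contact(sensor_track)
    if confidence >= confidence_required:
        return f"engageable under current rules: {label}"
    # This is where, today, a human on the ground provides visual confirmation.
    return f"hold: confidence {confidence:.2f} too low for {label}, human confirmation required"

print(engagement_decision({"kind": "platform"}))
print(engagement_decision({"kind": "person"}))
```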

Deploying autonomous weapons with an eye towards using them against human beings, however, means that the weapons will have to be able to identify humans as permissible targets.  I argued that they might have to do this through a variety of learning artificial intelligence software, such as facial recognition, gesture recognition, emotion recognition, and the processing of biometric data.  Such computing power, however, would require larger onboard processors, making a weapon larger and more expensive.  Alternatively, we may face new types of weapons systems in which something like a larger airframe loiters and deploys a series of smaller, cheaper munitions to attack.  In either case, the decision to attack might be made onboard or off-board a particular munition, but it would not be made by a human operator or commander.  Rather, it would be made by a series of artificial intelligence architectures that are capable of learning and adapting to changes in the battlespace.
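As a purely hypothetical sketch of that kind of architecture, with invented module names, weights and thresholds rather than any real system, the attack decision might look like a weighted fusion of several recognition components, with no operator anywhere in the loop at the moment of decision:

```python
# Illustrative sketch only: a crude picture of a multi-source architecture in
# which several learning components each contribute evidence, and software,
# not a human operator, fuses them into an attack decision. All module names,
# weights and thresholds are invented.

FUSION_WEIGHTS = {
    "facial_recognition": 0.4,
    "gesture_recognition": 0.2,
    "emotion_recognition": 0.1,
    "biometric_match": 0.3,
}

def fused_liability_score(evidence: dict) -> float:
    """Weighted combination of per-module scores, each in [0, 1]."""
    return sum(FUSION_WEIGHTS[name] * evidence.get(name, 0.0) for name in FUSION_WEIGHTS)

def onboard_attack_decision(evidence: dict, threshold: float = 0.85) -> bool:
    # Whether this runs onboard a small munition or off-board on a larger
    # loitering airframe, the salient point is the same: no human operator
    # is in this loop at the moment of decision.
    return fused_liability_score(evidence) >= threshold

example = {
    "facial_recognition": 0.95,
    "gesture_recognition": 0.7,
    "emotion_recognition": 0.5,
    "biometric_match": 0.95,
}
print(onboard_attack_decision(example))  # True with these invented numbers
```

The sketch also makes visible the legal and moral question raised earlier: every judgment about liability to harm is here reduced to weights and thresholds set long before any particular person is encountered.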

Thus the UNIDIR workshop is quite prescient in attempting to understand the intersection of all of these systems.  Cyber systems already have autonomous weapons capacities, and they already link into present-day operations, networks and communications.  Moreover, artificial intelligence will be a necessary feature of any future autonomous system if that system is to be operationally useful, cost-effective and compliant with international law.  The challenge ahead, then, is to get the international community and CCW member states to see the complexity of the future of war.  Balancing desires for state security, force protection and civilian protection may have wider-ranging effects than states presently estimate.