
A Guide to Lethal Autonomous Weapons Systems

The debate surrounding the development of autonomous weapons is skewed by the highly emotive nature of the arguments put forward. Yet to be able to adopt a position on the topic, it is essential to have a precise understanding of the issues involved. The researchers Raja Chatila and Catherine Tessier explain.

Over the last three years, "autonomous weapons," not to mention "killer robots," have been the subject of numerous press articles, open letters, and debates offering sensational and anxiety-provoking arguments, but virtually no scientific and technological background or evidence. Since 2014, a debate has been underway at the UN in Geneva, in the framework of the "Convention on Certain Conventional Weapons" (CCW), regarding a ban or moratorium on the development of such weapons.

"Guidance and navigation functions have been automated for a long time (aircraft or drone autopilots for instance) without raising significant concerns, and are therefore not critical to this debate. More sensitive are the automatic target identification and engagement systems (in other words opening fire)."

Words that hit hard

The term "killer robot" suggests that automata are driven by an intention to kill, and even that they are conscious, which obviously makes no sense in the case of a machine, even one designed and programmed to destroy, neutralize, or kill. No one speaks of "killer missiles," for example. This is the rhetoric of pathos, which by its very nature prevents discussion on ethics and seeks instead to spark public rejection.

The term "killer robot" suggests that automata are driven by the intention to kill, which obviously makes no sense in the case of a machine.

The term "autonomous" is also problematic, as it means different things to different stakeholders, scientists in particular. For example, "autonomous weapon" can designate a machine that reacts to certain predefined signals, or one that optimizes its trajectory to reach a target whose predetermined signature it recognizes automatically.

It could also refer to a device that automatically searches a given area for a target distinguished by certain characteristics. Hence, rather than speak of an "autonomous weapon," it would be more relevant to study which functionalities are or could be automated, that is to say assigned to computer programs, and with what limitations, as part of an authority shared with a human operator.

Automated functions

Guidance and navigation functions have been automated for a long time (aircraft or drone autopilots for instance) without raising significant concerns, and are therefore not critical to this debate. More sensitive are the automatic target identification and engagement systems (in other words opening fire). 

Existing weapons with target-recognition capabilities are equipped with software that compares the signals detected by sensors (radio, radar, cameras, etc.) against a list of signatures (predefined target models). This generally involves large objects that are "easy" to identify (radars, air bases, tanks, missile batteries). Target engagement remains under human supervision, not least because this type of software does not evaluate the situation surrounding these objects, for example the presence of civilians.
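To make the mechanism concrete, here is a minimal sketch of signature-based recognition as described above; the signature list, feature names, and matching tolerance are hypothetical illustrations, not the parameters of any fielded system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Signature:
    """A predefined target model (illustrative features only)."""
    label: str
    radar_cross_section: float  # expected sensor reading, arbitrary units
    infrared_peak: float        # expected sensor reading, arbitrary units

# The list of signatures the software compares detections against.
SIGNATURES = [
    Signature("radar installation", 85.0, 40.0),
    Signature("tank", 30.0, 70.0),
]

def match(detection: dict, tolerance: float = 0.15) -> Optional[Signature]:
    """Return the first signature whose features all lie within a
    relative `tolerance` of the detected values, or None."""
    for sig in SIGNATURES:
        rcs_ok = (abs(detection["radar_cross_section"] - sig.radar_cross_section)
                  <= tolerance * sig.radar_cross_section)
        ir_ok = (abs(detection["infrared_peak"] - sig.infrared_peak)
                 <= tolerance * sig.infrared_peak)
        if rcs_ok and ir_ok:
            return sig
    return None

# Note what this function does NOT do: it says nothing about the
# surroundings of a matched object, such as the presence of civilians,
# which is why engagement remains under human supervision.
```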

Sentinel military robot SGR-A1 manufactured by a subsidiary of Samsung.

Recognizing complex targets

Today’s discussions and controversies concern weapons that could be endowed with the capacity to recognize more elaborate targets (for example, combatants as opposed to the wounded who are hors de combat), and to do so in situations and environments that are themselves complex (for example, in an evolving context), with the faculty to engage such targets solely on the basis of this recognition.

These abilities entail the presence of a formal (mathematical) description of possible states within the environment, along with factors of interest and actions to be carried out, even though there is no such thing as a typical situation. These capacities also involve recognizing a particular state or element using sensors, and evaluating whether the actions under consideration adhere to the principles inscribed in international humanitarian law (IHL), such as humanity (avoiding unnecessary suffering), discrimination (distinguishing military objectives from civilian populations and property), and proportionality (the means used must be proportionate to the effect sought). The challenge is to automatically understand a situation on the basis of mathematical models.
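As a purely schematic illustration of why this is hard, the sketch below reduces the three IHL principles to a toy evaluation function; every field, class, and comparison is a hypothetical placeholder, and the point is precisely that each principle must first be compressed into such a formal model before a machine can apply it.

```python
from dataclasses import dataclass

@dataclass
class WorldState:
    """A formal description of the environment; necessarily a simplification."""
    combatants_present: bool
    persons_hors_de_combat: bool   # wounded or surrendering persons
    civilians_nearby: bool
    estimated_collateral: float    # output of some model, with its own errors
    military_advantage: float      # likewise a modelled estimate

def complies_with_ihl(state: WorldState) -> bool:
    """Toy check of the three principles named above."""
    # Discrimination: only military objectives may be engaged.
    if not state.combatants_present or state.civilians_nearby:
        return False
    # Humanity: persons out of action must not be attacked.
    if state.persons_hors_de_combat:
        return False
    # Proportionality: expected harm must not exceed the military
    # advantage sought; comparing the two presumes both are quantifiable,
    # which is precisely what the article questions.
    return state.estimated_collateral <= state.military_advantage
```

In practice the difficulty lies upstream of such a function: producing reliable values for each field from sensor data, in situations that never match a typical template.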

Who decides?

Beyond the technological and legal aspects, is it ethically acceptable to entrust the decision to eliminate a human being identified by a machine to that same machine? More specifically, the question is who would establish the characterization, modelling, and identification of the relevant targets, who would select certain elements of information (to the exclusion of others) for automatically computing the decision, and how this would be done. It is also a matter of knowing who would specify these algorithms, and how their compliance with international agreements and rules of engagement would be verified, especially if they are capable of learning. Moreover, who should be held responsible in the event of a violation of agreements, or of misuse? For beyond the weapon itself, it is also important to reflect, at the design stage, on the potential diversion of these so-called "autonomous" objects for purposes of attack. For example, how can self-driving cars or recreational drones be made more resistant to hacking and malicious modification?

Autonomous drone test to identify targets at the Cape Cod base in Massachusetts (US), in August 2016.

Defining a framework

Faced with these questions, a number of international organizations are calling for "autonomous" weapons to always remain under meaningful human control, that is to say that the decision to engage a target should always be made by a human being. From a technological standpoint, it must be ensured that in all circumstances the human being has sufficient and up-to-date information to make this decision. It must also be determined how the computed information selected and transmitted by the machine influences human understanding and decisions.
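Expressed as software, such control amounts to a gate that the machine cannot open on its own. The sketch below, whose function names and freshness window are illustrative assumptions rather than any standard's requirements, shows the shape of such a gate.

```python
import time

MAX_INFO_AGE_S = 5.0  # hypothetical bound on how old target data may be

def request_engagement(target_report: dict, operator_decision) -> bool:
    """Allow engagement only on explicit human confirmation, and only
    if the information presented to the human is sufficiently fresh.

    `operator_decision` is a callable that displays `target_report`
    to an operator and returns True only on explicit confirmation."""
    age = time.time() - target_report["timestamp"]
    if age > MAX_INFO_AGE_S:
        # Stale data: the human cannot meaningfully decide, so refuse.
        return False
    return operator_decision(target_report) is True
```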

A number of international organizations are calling for "autonomous" weapons to always remain under meaningful human control.

For example, the recommendations formulated by the "IEEE1 Global Initiative on Ethics of Autonomous and Intelligent Systems",2 which cover all autonomous and intelligent system technologies, state several principles applicable to lethal autonomous weapons systems (LAWS), including foreseeability, traceability, and the clear identification of human responsibility.

Adaptive learning devices should therefore be designed so that they are capable of clearly explaining their reasoning and decisions to the human operators who remain responsible for their use. Furthermore, systems whose individual behavior is predictable can become unpredictable when deployed as a swarm, in which unforeseen collective behavior may emerge. This type of use should therefore be banned.
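One concrete reading of the traceability principle is an append-only decision log that ties every automated recommendation to its inputs and to a responsible human. The record fields below are illustrative assumptions, not the IEEE initiative's specification.

```python
import json
import time

def log_decision(inputs: dict, rule_id: str, recommendation: str,
                 operator_id: str, path: str = "decision_trace.jsonl") -> None:
    """Append one auditable record per automated recommendation."""
    record = {
        "timestamp": time.time(),
        "inputs": inputs,                     # sensor data the system acted on
        "rule_id": rule_id,                   # which programmed rule fired
        "recommendation": recommendation,     # what the system proposed
        "responsible_operator": operator_id,  # the accountable human
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```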

Finally, it is important to emphasize that while autonomous weapons can help win a war, the psychological effect of their use on victim populations is unlikely to win the peace.

The analysis, views and opinions expressed in this section are those of the authors and do not necessarily reflect the position or policies of the CNRS.

For more on the subject: 
Vincent Boulanin and Maaike Verbruggen, Mapping the development of autonomy in weapon systems, Stockholm International Peace Research Institute (SIPRI), November 2017. 

Footnotes
  • 1. Institute of Electrical and Electronics Engineers, a professional and scientific international association.
  • 2. The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, Version 2. IEEE, 2017. http://standards.ieee.org/develop/indconn/ec/autonomous_systems.html
