LETHAL AUTONOMOUS WEAPONS

What are they?

Lethal autonomous weapons systems, sometimes called “killer robots,” are weapon systems that use artificial intelligence to identify, select, and kill human targets without human control. This means that the decision to kill a human target is no longer made by humans, but by algorithms. A common misconception is that these weapons are “science fiction.” In fact, amid growing interest in the militarization of AI, lethal autonomous weapons systems are already under development in several countries and could appear on the battlefield very soon.

Why are they problematic?

Lethal autonomous weapons systems are intrinsically amoral and pose grave threats to national and global security, including to superpowers. The United Nations Secretary-General has stated that “machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law.”

Beyond ethical issues, replacing human decision-making with algorithms is highly destabilizing. Cheap mass production of hardware and easy copying of the algorithms behind lethal autonomous weapons would fuel proliferation to both state and non-state actors, including domestic and foreign terrorists. This scalability has led some to classify lethal AWS as weapons of mass destruction (WMDs).

Further, because they can operate at speeds beyond human intervention, these systems introduce significant risks, such as rapid conflict escalation, unreliable performance, and unclear attribution. Combined with facial recognition software, they would be uniquely equipped to selectively target specific individuals or groups by gender, ethnicity, or other biometric data.

What can be done?

There are two key elements central to the global regulation of lethal autonomous weapons.

1. The Positive Obligation of Human Control: The first is a commitment by countries that all weapons systems must operate under meaningful human control. This means that humans, not algorithms, make the decision to kill (i.e., a human “presses the button” to use lethal force, not an AI system). This kind of collaborative human/AI decision-making is already employed by many semi-autonomous military systems today, such as lethal drones.

2. Prohibitions on Systems Incapable of Human Control: The second element is for countries to agree to prohibit weapons systems incapable of meeting the human control requirement. 

As we develop governance structures to steer our future with AI towards a positive outcome for humanity, we must set clear precedents on acceptable and unacceptable uses of AI. That begins with drawing a red line that the decision to take a human life must never be delegated to algorithms.