Lethal Autonomous Weapons

What Are They
Lethal autonomous weapons systems, sometimes called “killer robots”, are weapon systems that use artificial intelligence to identify, select, and kill human targets without human control, or “autonomously.” This means that the final decision to kill a human target is no longer made by humans, but by algorithms.
A common misconception is that this type of weapon is “science fiction.” On the contrary, driven by growing interest in the militarization of AI, lethal autonomous weapons systems are currently under development, and we could see them on the battlefield very soon.
Lethal autonomous weapons systems are fundamentally amoral and pose grave threats to national and global security, including to superpowers. The United Nations Secretary-General has stated that “machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law.”

Beyond ethical issues, replacing human decision-making with algorithms is destabilizing. Cheap mass production, or simple copying of the algorithms’ code, would fuel proliferation to non-state actors, including domestic and foreign terrorists, enabling very few to kill very many. Because these weapons can operate at speeds beyond human intervention, they introduce significant new risks to society, such as rapid conflict escalation, unreliable performance, and unclear attribution. The potential use of facial recognition software makes these weapons systems uniquely equipped to selectively target specific individuals or groups by gender, ethnicity, or other biometric data.
What Can Be Done
There are two key elements central to ensuring that lethal autonomous weapons are not developed and used by militaries.
The first is a commitment by countries that all weapons systems must operate under meaningful human control. This means that humans, not algorithms, make the final decision to kill (i.e., a human “presses the button” to use lethal force, not an AI system). This requirement for collaborative human/AI decision-making is already employed by many of today’s semi-autonomous military systems, such as lethal drones.
The second element is for countries to agree to prohibit weapons systems incapable of meeting the human control requirement, which means banning fully autonomous weapon systems.
As the global governance community grapples with how we want to use AI in society, we must draw a clear red line against AI systems that can autonomously kill.
