Slaughterbots are here.

The era in which algorithms decide who lives and who dies is upon us. We must act now to prohibit these weapons.

What’s the problem?

Weapons that use algorithms, rather than human judgement, to kill are immoral and a grave threat to national and global security.

1. Immoral: Algorithms are incapable of comprehending the value of human life, and so should never be empowered to decide who lives and who dies. Indeed, the United Nations Secretary-General, António Guterres, agrees that “machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law.”

2. Threat to Security: Algorithmic decision-making allows weapons to follow the trajectory of software: faster, cheaper, and at greater scale. This will be highly destabilising on both national and international levels because it introduces the threats of proliferation, rapid escalation, unpredictability, and even the potential for weapons of mass destruction.

Slaughterbots

Featured in: BBC Newsnight | BBC Panorama | News Show | Das Erste | The Atlantic | The Times | IEEE Spectrum | Futurism | The Guardian | The Economist | + more…

Reality – not science fiction.

Terms like “slaughterbots” and “killer robots” remind people of science fiction films like The Terminator, which features a self-aware, human-like robot assassin. This fuels the assumption that lethal autonomous weapons belong to the far future.

But that is incorrect.

In reality, weapons which can autonomously select, target, and kill humans are already here.

A 2021 report by the U.N. Panel of Experts on Libya documented the use of a lethal autonomous weapon system hunting down retreating forces. Since then, there have been numerous reports of swarms and other autonomous weapons systems being used on battlefields around the world.

The accelerating pace of these use cases is a clear warning that the time to act is quickly running out.

Milestones

Mar 2021: First documented use of a lethal autonomous weapon

Jun 2021: First documented use of a drone swarm in combat

What's being done about it?

The International Committee of the Red Cross (ICRC) Position

We do not need to be resigned to an inevitable future with Slaughterbots. The global community has successfully prohibited classes of weaponry in the past, from biological weapons to landmines.

As with those efforts, the ICRC recommends that states adopt new, legally binding rules to regulate lethal autonomous weapons.

Importantly, the ICRC does not recommend a prohibition of all military applications of AI – only of specific types of autonomous weapons. Many military AI applications already in use, such as automated missile defence systems, do not raise such concerns.


The ICRC is recommending three core pillars:

1: No human targets

Prohibition on autonomous weapons that are designed or used to target humans ("Slaughterbots").

2: Restrict unpredictability

Prohibition on autonomous weapons that exhibit highly unpredictable behaviour.

3: Human control

Regulations on other types of autonomous weapons combined with a requirement for human control.

Their full position can be found here:

The risks

What risks do lethal autonomous weapons pose?

Unpredictability

Lethal autonomous weapons are dangerously unpredictable in their behaviour. Complex interactions between machine learning-based algorithms and a dynamic operational context make it extremely difficult to predict the behaviour of these weapons in real-world settings. Moreover, these weapons systems are unpredictable by design: they are programmed to behave unpredictably in order to remain one step ahead of enemy systems.

Escalation

Given the speed and scale at which they are capable of operating, autonomous weapons systems introduce the risk of accidental and rapid conflict escalation. Recent research by RAND found that “the speed of autonomous systems did lead to inadvertent escalation in the wargame” and concluded that “widespread AI and autonomous systems could lead to inadvertent escalation and crisis instability.”

Proliferation

Slaughterbots do not require costly or hard-to-obtain raw materials, making them extremely cheap to mass-produce. They are also safe to transport and hard to detect. Once significant military powers begin manufacturing them, these weapons systems are bound to proliferate. They will soon appear on the black market, and then in the hands of terrorists wanting to destabilise nations, dictators oppressing their populace, and warlords wishing to perpetrate ethnic cleansing.

Selective Targeting of Groups

Selecting individuals to kill based on sensor data alone, especially through facial recognition or other biometric information, introduces the risk of selective targeting of groups based on perceived age, gender, race, ethnicity, or religious dress. Combine this with the risk of proliferation, and autonomous weapons could greatly increase the risk of targeted violence against specific classes of individuals, including even ethnic cleansing and genocide.

Learn more about the risks of lethal autonomous weapons:

Global debate

What is the current debate around lethal autonomous weapons?

UN CCW in Geneva – Development of a New Legally Binding Protocol

The United Nations Convention on Certain Conventional Weapons (CCW) in Geneva established a Group of Governmental Experts on Lethal Autonomous Weapons to debate this issue and to develop a new “normative and operational framework” for consideration by states.

In 2019, this group produced a set of eleven non-binding Guiding Principles on Lethal Autonomous Weapons from which to develop a new instrument. The group is expected to share the output of its discussions in a report to states for the Sixth Review Conference in December 2021.

This is the key opportunity for states to agree on a new legally binding protocol to the CCW that would prohibit autonomous weapons that target humans, as they have done in the past with other types of weapons, such as blinding lasers. With the rapid increase in use cases over the last year, this is the CCW's last chance to deliver on the ICRC's recommendation of new law.

Our Common Agenda – The Future of the United Nations

At the 2021 General Assembly, the U.N. Secretary-General presented Our Common Agenda, a 25-year vision for the future of global cooperation and for reinvigorated, inclusive, networked, and effective multilateralism. The report identifies “establishing internationally agreed limits on lethal autonomous weapons systems” as key to humanity's successful future.

Help us prevent Slaughterbots before it's too late.

Once lethal autonomous weapons are unleashed upon the world, there is no going back. Take the pledge to help us build a strong case against lethal autonomous weapons.

Organisations

What organisations are working on this issue?

News feed

See the latest news updates on the issue:

See the full history of lethal autonomous weapons coverage: