Slaughterbots are here.

The era in which algorithms decide who lives and who dies is upon us.
We must act now to prohibit these weapons.

What’s the problem?

Weapons that use algorithms to kill, rather than human judgement, are immoral and a grave threat to national and global security.

1. Immoral: Algorithms are incapable of comprehending the value of human life, and so should never be empowered to decide who lives and who dies. Indeed, the United Nations Secretary General António Guterres agrees that “machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law.”

2. Threat to Security: Algorithmic decision-making allows weapons to follow the trajectory of software: faster, cheaper, and at greater scale. This will be highly destabilising on both national and international levels because it introduces the threats of proliferation, rapid escalation, unpredictability, and even the potential for weapons of mass destruction.

A vision of the near future…


Featured in: BBC Newsnight | BBC Panorama | News Show | Das Erste | The Atlantic | The Times | IEEE Spectrum | Futurism | The Guardian | The Economist | + more…

Slaughterbots – if human: kill()

In 2017, Slaughterbots warned the world of what was coming. Its sequel shows the world that lethal autonomous weapons have arrived. Will humanity prevail?

Reality – not science fiction.

Terms like “slaughterbots” and “killer robots” remind people of science-fiction movies like The Terminator, which features a self-aware, human-like robot assassin. This fuels the assumption that lethal autonomous weapons belong to the far future.

But that is incorrect.

In reality, weapons which can autonomously select, target, and kill humans are already here.

A 2021 report by the U.N. Panel of Experts on Libya documented the use of a lethal autonomous weapon system hunting down retreating forces. Since then, there have been numerous reports of swarms and other autonomous weapons systems being used on battlefields around the world.

The accelerating rate of these use cases is a clear warning that the time to act is quickly running out.


Mar 2021

First documented use of a lethal autonomous weapon

Jun 2021

First documented use of a drone swarm in combat

What's being done about it?

The International Committee of the Red Cross (ICRC) Position

We do not need to be resigned to an inevitable future with Slaughterbots. The global community has successfully prohibited classes of weaponry in the past, from biological weapons to landmines.

As with those efforts, the International Committee of the Red Cross (ICRC) recommends that states adopt new legally binding rules to regulate lethal autonomous weapons.

Importantly, the ICRC does not recommend a prohibition of all military applications of AI - only of specific types of autonomous weapons. There are many applications of military AI already in use that do not raise such concerns, such as automated missile defense systems.

Find out more about the available solutions:

The ICRC Position:

The ICRC is recommending three core pillars:

1: No human targets

Prohibition on autonomous weapons that are designed or used to target humans ("Slaughterbots").

2: Restrict unpredictability

Prohibition on autonomous weapons with a high degree of unpredictable behaviour.

3: Human control

Regulations on other types of autonomous weapons combined with a requirement for human control.

Their full position can be found here:

The risks

What risks do lethal autonomous weapons pose?



Unpredictability

Lethal autonomous weapons are dangerously unpredictable in their behaviour. Complex interactions between machine learning-based algorithms and a dynamic operational context make it extremely difficult to predict the behaviour of these weapons in real-world settings. Moreover, these weapons systems are unpredictable by design; they’re programmed to behave unpredictably in order to remain one step ahead of enemy systems.


Escalation

Given the speed and scale at which they are capable of operating, autonomous weapons systems introduce the risk of accidental and rapid conflict escalation. Recent research by RAND found that “the speed of autonomous systems did lead to inadvertent escalation in the wargame” and concluded that “widespread AI and autonomous systems could lead to inadvertent escalation and crisis instability.”


Proliferation

Slaughterbots do not require costly or hard-to-obtain raw materials, making them extremely cheap to mass-produce. They’re also safe to transport and hard to detect. Once significant military powers begin manufacturing, these weapons systems are bound to proliferate. They will soon appear on the black market, and then in the hands of terrorists wanting to destabilise nations, dictators oppressing their populace, and warlords wishing to perpetrate ethnic cleansing.

Selective Targeting of Groups

Selecting individuals to kill based on sensor data alone, especially through facial recognition or other biometric information, introduces the risk of selective targeting of groups based on perceived age, gender, race, ethnicity or religious dress. Combine this with the risk of proliferation, and autonomous weapons could greatly increase the risk of targeted violence against specific classes of individuals, including even ethnic cleansing and genocide.

Learn more about the risks of lethal autonomous weapons:

Global debate

What is the current debate around lethal autonomous weapons?

UN CCW - ‘Road to Nowhere’

The United Nations' Convention on Certain Conventional Weapons (CCW) in Geneva has been discussing lethal autonomous weapons since 2013. That year, it set up informal ‘Meetings of Experts’ to address what was then an emerging issue. In 2016, this was formalised into a Group of Governmental Experts (GGE) to develop a new ‘normative and operational framework’ for states’ consideration. Three years later, the CCW adopted the GGE’s suggested eleven guiding principles.

In 2021 the world saw the first uses of lethal autonomous weapons in combat, making the need for a legally binding protocol self-evident if it had not been already. So, when the landmark CCW Review Conference of December 2021 could not even agree to start negotiating a protocol, the Convention was widely seen as ‘having failed’.

The GGE has since stalled without further progress. In July, the majority of delegations got behind a promising draft - which was then stripped of all that promise by a persistent blocking minority. As Ray Acheson, Director of Disarmament at the Women's International League for Peace and Freedom (WILPF), put it, the CCW has proven itself to be a ‘road to nowhere’.

Almost ten years after these discussions began, autonomous weapons are spreading fast, yet they remain unregulated. The world can no longer afford to wait for this Convention to act. States must find another forum through which to reach a protocol.

Our Common Agenda - The Future of the United Nations

At the 2021 General Assembly, the U.N. Secretary General presented a 25-year vision for the future of global cooperation and for reinvigorated inclusive, networked, and effective multilateralism. This report identifies "establishing internationally agreed limits on lethal autonomous weapons systems" as key to humanity's successful future.

Help us prevent Slaughterbots before it's too late.

Once lethal autonomous weapons are unleashed upon the world, there is no going back. Take the pledge to help us build a strong case against lethal autonomous weapons.


What organisations are working on this issue?

News feed

See the latest news updates on the issue:

See the full history of Lethal Autonomous Weapons coverage: