Shaking the Foundations: The Human Rights Implications of Killer Robots

Fully autonomous weapons have the potential to contravene the right to life, which the Human Rights Committee describes as “the supreme right.”[2] According to the International Covenant on Civil and Political Rights (ICCPR), “No one shall be arbitrarily deprived of his life.”[3] Killing is lawful only if it meets three cumulative requirements governing when and how much force may be used: it must be necessary to protect human life, constitute a last resort, and be applied in a manner proportionate to the threat. Each of these prerequisites for lawful force involves qualitative assessments of specific situations. Because the range of possible scenarios is effectively infinite, robots could not be pre-programmed to handle every circumstance, and fully autonomous weapons would thus be prone to carrying out arbitrary killings when they encountered unforeseen situations. Many roboticists consider it highly unlikely that robots could be developed in the foreseeable future to possess certain human qualities, such as judgment and the ability to identify with humans, that facilitate compliance with these three criteria.

The use of fully autonomous weapons also threatens to violate the right to a remedy. International law mandates accountability in order to deter future unlawful acts and to punish past ones, a process that in turn recognizes victims’ suffering. It is uncertain, however, whether meaningful accountability for the actions of a fully autonomous weapon would be possible. The weapon itself could not be punished or deterred because machines lack the capacity to suffer. Unless a superior officer, programmer, or manufacturer deployed or created such a weapon with the clear intent to commit a crime, these actors would probably not be held accountable for the robot’s conduct. The criminal law doctrine of superior responsibility, also known as command responsibility, is ill suited to fully autonomous weapons: superior officers might be unable to foresee how an autonomous robot would act in a particular situation, and they could find unlawful conduct difficult to prevent and impossible to punish. Programmers and manufacturers would likely escape civil liability for the acts of their robots; in the United States, at least, defense contractors are generally granted immunity for the design of weapons. In addition, victims with limited resources and inadequate access to the courts would face significant obstacles to bringing a civil suit.

Finally, fully autonomous weapons could undermine the principle of dignity, which holds that everyone has a worth deserving of respect. As inanimate machines, fully autonomous weapons could comprehend neither the value of individual life nor the significance of its loss. Allowing them to make determinations to take a life would thus conflict with the principle of dignity.

Read the full report by Human Rights Watch and the Harvard Law School International Human Rights Clinic.