Defending the Boundary – Constraints and Requirements on the Use of Autonomous Weapon Systems under International Humanitarian and Human Rights Law

Over recent years, there has been growing debate about the ethical, humanitarian, legal and security implications of autonomous weapon systems (AWS). The basic idea is that, once activated, such weapon systems would detect, select and attack targets without further human intervention. According to leading researchers in the field of artificial intelligence (AI) and robotics, AI technology has ‘reached a point where the deployment of such systems is — practically if not legally — feasible within years’. AWS are said to have the potential to revolutionize warfare and, although the point is made far less often, policing. Whilst success in the quest for AI may bring unprecedented benefits to humanity, it has also been argued to pose an existential threat to humankind.

A small number of states are actively engaged in research and development with the stated goal of increasing autonomy in weapon systems. Regarding the drivers of this trend, commentators cite a perceived need to react to threats more quickly, process growing volumes of data more efficiently (thereby speeding up the targeting decision cycle), improve performance in communications-denied environments, increase persistence and endurance, and reduce the exposure of states’ own security forces to physical harm.

The full report by the Geneva Academy of International Humanitarian Law and Human Rights is available online.