AI Now 2017 Report

Ethical questions of bias and accountability will become even more urgent in the context of rights and liberties as AI systems capable of violent force against humans are developed and deployed in law enforcement and military contexts. Robotic police officers, for example, recently debuted in Dubai. If these were to carry weapons, new questions would arise about how to determine when the use of force is appropriate. Drawing on analysis of the Black Lives Matter movement, Peter Asaro has pointed to difficult ethical issues involving how lethal autonomous weapons systems (LAWS) will detect threats or gestures of cooperation, especially when interacting with vulnerable populations. He concludes that AI and robotics researchers should adopt ethical and legal standards that maintain human control and accountability over these systems.

Similar questions apply to the military use of LAWS. Heather Roff argues in "Meaningful Human Control or Appropriate Human Judgment? The Necessary Limits on Autonomous Weapons," a briefing paper for delegates at the Review Conference of the Convention on Certain Conventional Weapons (CCW), Geneva, 12-16 December 2016, that fully autonomous systems would violate current legal definitions of war, which require human judgment in the proportionate use of force and guard against the targeting of civilians. Furthermore, she argues that AI learning systems may make it difficult for commanders even to know how their weapons will respond in battle situations. Given these legal, ethical, and design concerns, both researchers call for strict limitations on the use of AI in weapons systems.

Read the full AI Now report here.