The central concern raised by the development of autonomous weapons systems (AWS) is that they may lack the necessary human control over the critical functions of identifying, selecting and applying force to targets. Without such control, these systems might prevent the proper application of legal rules, produce interpretations of the legal framework that erode civilian protection, or lead to other negative outcomes for the morality of human interactions or the maintenance of peace and stability.
In this context, this paper argues that:
• Consideration of the form and nature of the human control deemed necessary is the most useful starting point for discussions on this issue.
• The existing legal framework of international humanitarian law should be understood as requiring human judgment and control over individual “attacks” as a unit of legal management and tactical action.
• Unless the requirement for human control is recognized as being in some way substantial or meaningful, the existing legal framework cannot prevent human legal judgment from being diluted to the point of meaninglessness as the concept of “an attack” is construed ever more broadly.
• Against that background, delineating the key elements of human control should be the primary focus of work by the international community.