The UK’s engagement to date in multilateral discussions on the implications of increased autonomy in weapons systems, enabled by robotics and AI, is not commensurate with the broad societal implications of the subject matter. How the relationship between human and machine decision-making is managed in matters of life and death is of fundamental importance to how society’s relationship with computers and AI will develop in the future.

In that context, the UK’s approach to policy making on autonomous weapons so far lacks a foundation in any vision of the future role of AI in society. It fails to engage with the key questions of immediate relevance, and it seeks to avoid movement towards multilateral agreement on the nature and form of human control that should be considered necessary in decisions over how force is applied. UK policy making in this area should be subject to a broad review to ensure that a policy driven by defence interests also reflects the position the UK wishes to take on the wider roles of AI and computer autonomy in society.