Throughout history, humanity has been plagued by its inability to live in peace, with conflicts over resources, territory, ideology, and power predating civilization itself. It was not until the devastating consequences of modern warfare became apparent in mid-nineteenth-century conflicts, such as the Battle of Solferino and the American Civil War, that world leaders began to recognize the need for formal, binding rules of war. This realization led to the first Geneva Convention in 1864, which established a foundation for the humane treatment of the wounded and sick in wartime and marked a crucial step toward regulating the brutality of armed conflict.
Today, the international community continues to grapple with the challenges posed by emerging technologies capable of fundamentally altering the nature of warfare. Just as the destructive potential of the Gatling gun and ironclad warships revolutionized warfare, sparking new discussions on the ethical and humanitarian challenges of mechanized conflict, AI-powered weapons now raise similar debates and concerns. These discussions are unfolding in key international bodies, where states are working to determine how to adapt existing laws, and possibly create new ones, to regulate the growing threat of autonomous weapons. Below is an overview of some of the key international forums where these critical conversations on the use of AI in war are taking place, and where the future of global security and military ethics is being shaped.
The United Nations Human Rights Council (UNHRC)
The UNHRC is an intergovernmental body within the United Nations system that works to promote and protect human rights worldwide. Established in 2006, it replaced the UN Commission on Human Rights as the primary UN body for human rights and is responsible for addressing human rights violations and promoting universal human rights standards. The Council is composed of 47 member states, which serve three-year terms with elections held annually to ensure a rotating membership. While the UNHRC is not the primary body dealing with autonomous weapons, it plays an important role in highlighting the ethical and humanitarian concerns associated with these technologies.
In 2013, Christof Heyns, the UN Special Rapporteur on extrajudicial, summary, or arbitrary executions, submitted a report on autonomous weapons to the UNHRC. The report recommended temporary bans or suspensions (moratoriums) on the development, deployment, and use of autonomous weapons, and called for the establishment of a high-level panel to explore the issue more thoroughly. Following the report, 30 states within the Council spoke out against autonomous weapons, with many advocating for a ban or stronger regulatory measures to address the ethical and humanitarian concerns surrounding these technologies.
Resolution 51/22, adopted by the UNHRC in October 2022, urged respect for international human rights law in the development and use of new military technologies. It stressed the need to ensure that the development and deployment of such technologies comply with existing legal frameworks, including the Geneva Conventions and human rights law, to protect civilians and prevent abuses. The resolution marked a crucial step in acknowledging the growing concern about the use of advanced military technologies, including autonomous weapons, and their potential impact on human rights.
The Convention on Certain Conventional Weapons (CCW)
Adopted in 1980, the Convention on Certain Conventional Weapons (CCW), also known as the Inhumane Weapons Convention, is an international treaty designed to prohibit or restrict the use of specific types of weapons that have indiscriminate effects or cause unnecessary suffering to combatants and civilians during armed conflict. The treaty includes multiple protocols, each addressing a specific category of weapons: landmines, incendiary weapons, blinding laser weapons, and explosive remnants of war, among others. In 2013, the states parties to the CCW agreed to initiate discussions on autonomous weapons, a pivotal moment that marked the beginning of formal international conversations about the risks these systems pose.
In 2017, the CCW’s Group of Governmental Experts (GGE) held its first formal meeting dedicated to autonomous weapons, building on a series of informal expert meetings that began in 2014. Experts and states analyzed the technological, legal, and ethical issues surrounding autonomous weapon systems, raising concerns about accountability, the risk of unintentional harm, and the need for clear legal norms governing their use. One significant conclusion was that autonomous weapons should not replace human decision-making in critical areas such as targeting and engagement. In 2019, the GGE adopted 11 guiding principles on autonomous weapons, which emphasized that such systems must remain subject to human control and oversight to ensure compliance with international law, particularly the principles of distinction and proportionality.
The CCW operates by consensus, meaning that no decision can be adopted unless every state party agrees to it. Because countries with vested interests in military technology, such as the United States and Russia, have been reluctant to accept binding restrictions on autonomous systems, progress on the issue within the CCW has been slow: no new protocol or instrument has been agreed at the CCW since 2003.
United Nations Educational, Scientific and Cultural Organization (UNESCO)
UNESCO is a specialized agency of the United Nations established in 1945 to promote international collaboration in education, science, culture, and communication. Its mission is to build peace through these areas by fostering understanding, cooperation, and the exchange of knowledge among nations. In recent years, UNESCO has expanded its focus to address the ethical and regulatory concerns related to emerging technologies, such as AI, biotechnology, and autonomous weapons.
In 2021, all 193 member states of UNESCO voted in favor of adopting the Recommendation on the Ethics of Artificial Intelligence, a comprehensive framework designed to ensure that AI technologies, including those with military applications like autonomous weapons, are developed and used in ways that align with fundamental human rights, ethics, and international law. Critically, the framework emphasizes that, “in scenarios where decisions (…) may involve life and death decisions, final human determination should apply.”
The United Nations Security Council (UNSC)
The UNSC is one of the six main organs of the United Nations, responsible for maintaining international peace and security. It is composed of 15 member states, including five permanent members with veto power (China, France, Russia, the United Kingdom, and the United States). The UNSC has the authority to impose sanctions, authorize military interventions, and address threats to global stability. As autonomous weapons technologies advance, the UNSC plays a critical role in addressing the potential risks posed by these systems, particularly regarding their implications for conflict, human rights, and international security.
In July 2023, under the presidency of the United Kingdom, the UNSC held its first session focused specifically on the threats posed by AI. During the meeting, representatives from various member states and international experts emphasized the need for human oversight over autonomous weapons. UN Secretary-General António Guterres briefed the Council, expressing his support for the creation of a new United Nations entity to govern emerging technologies such as AI. He proposed that this new entity be inspired by existing organizations such as the International Atomic Energy Agency (IAEA), the International Civil Aviation Organization (ICAO), and the Intergovernmental Panel on Climate Change (IPCC).
The UN General Assembly (UNGA)
The UNGA is the main deliberative body of the United Nations, where all 193 member states have equal representation and participate in discussions and decisions on global issues. Recently, attention has shifted to the UNGA as a potential forum for the development of a legally binding instrument on autonomous weapons. In 2023, the General Assembly passed a resolution on autonomous weapons with 164 states in favor, signaling broad international support for regulating these technologies. A year later, the UNGA adopted a second resolution on autonomous weapons systems, emphasizing the “serious challenges and concerns that new and emerging technological applications in the military domain, including those related to artificial intelligence and autonomy in weapons systems, [raise] from humanitarian, legal, security, technological and ethical perspectives.” The resolution passed with 161 states in favor and 3 against, with 13 abstentions.
Summit for Responsible AI in the Military Domain (REAIM)
REAIM serves as a platform for fostering international collaboration to ensure that AI technologies in the military domain are developed and deployed ethically, transparently, and with accountability. It aims to balance technological advancements with the protection of human rights and international law, ensuring that the potential benefits of AI do not come at the cost of ethical standards or humanitarian principles.
At REAIM 2023, held in The Hague, sixty countries participated, engaging in in-depth discussions on the ethical challenges posed by AI in military applications. The culmination of these discussions was a “Call to Action,” a non-binding commitment by the participating nations to ensure that AI technologies, particularly those used in warfare, are developed and deployed in accordance with international humanitarian law and human rights. While the Call to Action set out important ethical principles, it notably refrained from advocating for legally binding regulations, leaving room for future negotiations on more concrete, enforceable frameworks.
The forum also provided a platform for the United States to introduce its Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy. This declaration addresses autonomous weapons systems and military AI, expressing the US commitment to ensuring the responsible use of these technologies. However, the declaration has faced criticism for being one of the weakest non-binding commitments proposed to date, with many viewing it as insufficient in addressing the urgent need for stronger regulation and accountability in the development and deployment of military AI.
A second summit took place in Seoul, South Korea, in September 2024, where participating states endorsed a non-binding “Blueprint for Action” that built on the 2023 Call to Action.