Regulating Military AI through International Law
The conduct of militaries and individuals in warfare is governed by International Humanitarian Law (IHL), a body of law comprising conventions and treaties as well as customary norms, most of which evolved in the aftermath of large-scale wars. At its core are the four Geneva Conventions of 1949 and the two Additional Protocols of 1977. These instruments are buttressed by customary rules that have developed through a combination of established state practice and interpretations of the law as it stands.
The emergence of new technologies has posed significant legal (and in some cases moral) questions that point to the need for either revised laws or, at the very least, creative interpretations of existing provisions. Failing to respond to these challenges gives the deployment of emerging technology in war a free pass from accountability, leaving it in a grey area that can be exploited to serve the powerful. While some of the impact of the military use of emerging technologies on human life is already known, AI also has the potential to fundamentally alter the nature of warfare altogether. The integration of AI into defence capabilities raises very real questions around accountability, liability, necessity and proportionality, and introduces further vulnerabilities.
International Humanitarian Law and Military AI
IHL fundamentally defines the critical constraints on the use and deployment of force in conflict, namely distinction, proportionality, and precaution (Garkusha-Bozhko, 2021). The principle of distinction requires combatants to distinguish between civilian and military targets. The principle of proportionality requires that anticipated incidental harm to civilians not be excessive in relation to the military advantage expected. The principle of precaution requires all parties to a conflict to take every feasible step to minimize civilian casualties. Whatever the nature of the force used or the weapons involved, these principles must be adhered to in armed conflict at every level. The Martens Clause adds a further constraint: in situations not covered by specific treaties, civilians and belligerents remain under the protection and authority of the principles of international law, the laws of humanity, and the dictates of public conscience (Ticehurst, 1997). IHL does not name or directly regulate AI, but the harmful impacts of military AI fall within its scope through interpretation.
The current state of AI technology struggles with the complex, contextual judgments essential to applying these principles effectively, especially in dynamic conflict environments where it is difficult to distinguish between civilians and combatants (Del Monte, 2018). Human judgment is foundational to applying the principle of distinction, which requires that human control be maintained over targeting decisions (Garkusha-Bozhko, 2021). Rapidly advancing AI systems have already begun to displace humans in the loop, making decisions about whom to target based on predefined parameters applied to the datasets on which they have been trained (Roff, 2014). Enabling a machine to make life-and-death decisions without any human intervention whatsoever goes against the fundamental principles of humanity. Nor is military AI free of bias, given that it is trained on datasets that re-entrench racial and colonial stereotypes (Coxe, 2021). It can also hamper civilian life in significant ways, both by targeting the wrong people and by causing spatial damage that harms civilians (Sharkey, 2012).
As decisions are left to machines, there is a real challenge in identifying a chain of command that ensures liability and responsibility for the harm caused. IHL has established rules that identify a chain of command, a hierarchical structure that ensures discipline and respect for IHL within armed forces, holding commanders responsible for ensuring their forces obey the law and for punishing subordinates who violate it (Green, 1995). This doctrine requires commanders to train subordinates, ensure their orders comply with IHL, and prevent or punish violations, establishing both individual criminal responsibility for war crimes and a system of accountability for commanders. An additional complication is that military AI is often deployed and used by private military contractors, who are able to escape liability under IHL owing to the lack of harmonized domestic regulations and enforcement mechanisms, despite initiatives such as the Montreux Document and the ICoCA Code (Crawford & Pert, 2024).
With humans out of the loop, the question of liability becomes complicated: who is responsible for training the military AI? What levels of accountability apply to those who built the dataset, the algorithms, and the other critical infrastructure?
Arms Control and Disarmament Frameworks
Arms control refers to agreements that limit the development, stockpiling, or use of weapons, in terms of both quantity and quality. It differs from disarmament, which refers to doing away with weapons altogether. Both reduce the likelihood of war, limit the damage and suffering war causes, and are key components of dialogue and negotiation among parties to a conflict. A range of institutional mechanisms support arms control and disarmament at the global level, comprising bilateral and multilateral legal instruments that call for reducing the proliferation of different kinds of weapons. AI does not fall under the strict category of military technology; it is far more versatile and can be used in ways that have both direct and indirect military components and impacts (Osimen et al., 2024). This makes it a complex tool to regulate in relation to its use in war. Nonetheless, several states are already actively developing, deploying, and/or using AI-enabled weapons (Bhatt & Bharadwaj, 2024; Hiebert, 2024).
A number of analysts warn that a clash between AI superpowers could produce a dynamic of mutually assured destruction (Payne, 2018). Some have called for limits on, or a complete ban of, particular kinds of AI-enabled weapon systems (Scharre & Lamberth, 2022), while others have flagged that regulating AI is exceptionally challenging given both its versatility and its civilian uses (Mittelsteadt, 2021). The absence of comprehensive verification mechanisms to gather, organize, and analyse information on compliance with standards for AI deployment and use can render any AI regulation ineffective (Dahlman, 2010).
The Convention on Certain Conventional Weapons (CCW) has emerged as the primary multilateral forum for addressing lethal autonomous weapons systems (LAWS). Since 2014, expert discussions under the CCW, formalized as a Group of Governmental Experts in 2016, have examined the technical, ethical, and legal dimensions of LAWS. However, progress has been limited, with states divided on whether new legally binding instruments are necessary or whether existing IHL is sufficient (Gill, 2018). Several proposals have been put forward within the CCW framework: some states have called for a comprehensive prohibition on fully autonomous weapons systems, whereas others argue for a more nuanced approach focused on human-machine interaction and maintaining meaningful human control (Singh & Aravindakshan, 2020). The concept of "meaningful human control" has become central to these discussions, although consensus on what it precisely means remains elusive.
Gaps in the Law
Even as some ask big questions about the regulation of military AI, very few pay attention to its gendered implications, particularly from the standpoint of the law. LAWS can perpetuate and exacerbate gender-based violence and discrimination (Acheson, 2020). Military AI systems trained on biased datasets can misidentify or under-protect vulnerable groups and expose them to significant harm. The specific targeting of men of colour has racial and gendered connotations that cannot be sidelined either. Military AI is inherently coded with the values of militarized masculinity and racial stereotypes, in which the racialized other is segregated and targeted by a larger system that does little to introspect on its own warmongering ways (Enloe, 2016).
Military AI has very real consequences for war crimes and for the perpetration of targeted harms in campaigns of genocide, ethnic erasure, and crimes against humanity. The use of machines in conflict complicates questions of accountability and responsibility and obscures the application of the law as it stands. When machines cannot distinguish between civilians and combatants, cause disproportionate harm, and take no care to minimize it, they plainly violate the laws of war, but who becomes accountable for these harms? Is it those who built the dataset used to train the machine, those who designed and developed it, those who deployed it, or those who directed its use? The rush to deploy these tools on the assumption that they bring military advantages ignores very real threats that the law is not yet equipped to handle.
References:
Acheson, R. (2020). Autonomous Weapons and Patriarchy. Reaching Critical Will. https://reachingcriticalwill.org/images/documents/Publications/aws-and-patriarchy.pdf
Bhatt, C., & Bharadwaj, T. (2024). Understanding the Global Debate on Lethal Autonomous Weapons Systems: An Indian Perspective. Carnegie Endowment for International Peace.
Coxe, C. (2021). Gender and lethal autonomous weapons systems. Feminist Legal Studies, 29(2), 151-175.
Crawford, E., & Pert, A. (2024). International humanitarian law. Cambridge University Press.
Dahlman, O. (2010). Verification: To detect, to deter and to build confidence. In K. Vignard & J. Linekar (Eds.), Arms control verification (pp. 3–13). United Nations.
Del Monte, L. A. (2018). Genius weapons: Artificial intelligence, autonomous weaponry, and the future of warfare. Prometheus Books.
Enloe, C. (2016). Globalization and Militarism: Feminists Make the Link. Rowman & Littlefield.
Garkusha-Bozhko, S. Y. (2021). Application of the Principles of International Humanitarian Law (Principles of Distinction, Proportionality, and Precaution) to Armed Conflicts in Cyberspace. Russian Journal of Legal Studies (Moscow), 8(3), 73-90.
Gill, A. S. (2018). The Role of the UN in Addressing Emerging Technologies in the Area of Lethal Autonomous Weapons Systems. https://www.un.org/en/un-chronicle/role-united-nations-addressing-emerging-technologies-area-lethal-autonomous-weapons
Green, L. C. (1995). Command Responsibility in International Humanitarian Law. Transnational Law & Contemporary Problems, 5, 319.
Hiebert, K. (2024). The United States Quietly Kick-Starts the Autonomous Weapons Era. Centre for International Governance Innovation.
Mittelsteadt, M. (2021). AI Verification. Center for Security and Emerging Technology.
Osimen, G. U., Newo, O., & Fulani, O. M. (2024). Artificial intelligence and arms control in modern warfare. Cogent Social Sciences, 10(1), 2407514.
Payne, K. (2018). Artificial intelligence: a revolution in strategic affairs? Survival, 60(5), 7–32.
Roff, H. M. (2014). The strategic robot problem: Lethal autonomous weapons in war. Journal of Military Ethics, 13(3), 211-227.
Scharre, P., & Lamberth, M. (2022). Artificial intelligence and arms control. Center for a New American Security.
Sharkey, N. (2012). The evitability of autonomous robot warfare. International Review of the Red Cross, 94(886), 787-799.
Singh, S., & Aravindakshan, S. (2020). Killer Robots or Soldiers of the Future: Legal Issues and India's Role in the Lethal Autonomous Weapons Debate. Indian Journal of Law and Technology, 16, 103.
Ticehurst, R. (1997). The Martens Clause and the laws of armed conflict. International Review of the Red Cross (1961-1997), 37(317), 125-134.