The Human Cost of Military AI

By Kirthi Jayakumar

The deployment of AI for military use has raised significant questions about civilian protection and the gendered impacts of technological warfare. As AI systems are increasingly used to make targeting decisions, conduct surveillance operations, and engage in strategic planning, the human cost of these technologies falls disproportionately on people who are systemically disadvantaged and marginalized on the basis of gender, race, caste, and ability, among other factors. In effect, the deployment of AI exacerbates existing vulnerabilities and creates new forms of harm that demand urgent attention.

Even as AI has been considered a valuable addition to military strategy, the “black box” nature of many military AI systems undermines the values of transparency, accountability, and attention to differential impacts. When targeting decisions are mediated by algorithmic processes that military organizations treat as classified or proprietary, it becomes impossible to assess whether these systems adequately account for civilian protection principles or gender-specific vulnerabilities. Betty Reardon’s (1985) analysis of militarism as a system of domination provides insight into how AI systems can perpetuate and amplify militaristic logics. By reducing complex human situations to algorithmic calculations, military AI systems embody what Reardon identifies as masculinist approaches to security that prioritize control, prediction, and domination over care, relationship, and protection.

The Human Cost of AI Warfare

War affects people differently depending on their social roles, economic positions, and historic experiences of marginalization (Enloe, 2014; Cockburn, 2012). The ICRC (2019) noted that women and children constitute nearly 90% of war casualties in contemporary armed conflicts. This shift toward civilian targeting has accompanied the infusion of technology into warfare, including the deployment of AI systems that alter how military forces identify, track, and engage targets.

Armed conflicts cause significant direct harm through the deployment of weapons and the strategic use of sexual violence. However, armed conflict also produces impacts that transcend direct violence: displacement, both internal and across borders; economic disruption; the breakdown of the security sector and of the public infrastructure that offers social support; and an exacerbated care burden, imposed most often on women, are equally significant forms of harm. Against this backdrop, the use of AI exacerbates these vulnerabilities, especially because AI systems are designed to prioritize speed and purported efficiency over civilian protection.

AI-enabled targeting often diminishes human oversight, escalating the scale and magnitude of civilian harm (Abraham, 2024). The acceleration of targeting processes fundamentally alters the decision-making architecture that governs the use of force, reducing the time available for human judgment about civilian protection and the proportionality assessments that are essential under international humanitarian law. Several of these AI tools are built and tested by studying their impact in militarized zones such as Gaza, and are then put into circulation as part of the global arms trade. For example, the Israeli military’s use of surveillance technologies and AI-enabled military tools helps determine targets to attack in Gaza, with an escalated risk of civilian harm (Human Rights Watch, 2024). AI systems like Lavender assign numerical scores to individuals based on an algorithmic assessment of their likelihood of being “militants.” This transforms how targeting decisions are made and lowers the threshold for civilian harm. AI-enhanced surveillance also causes psychosocial harm: pervasive monitoring means that people are constantly watched, which affects their mobility, autonomy, and access to resources. In Gaza, for instance, the deployment of widespread surveillance infrastructure combined with AI analysis creates “digital panopticons” that restrict civilian movement and normalize states of exception that disproportionately constrain women’s access to public spaces and economic opportunities (Puar, 2017).

AI-powered surveillance and targeting systems also exacerbate displacement as they make civilian areas increasingly untenable through persistent monitoring and the lingering threat of algorithmic targeting. In Ukraine, for instance, where AI systems have been deployed for battlefield management and target identification, over 6 million people have been displaced, with women constituting approximately 57% of that number (UNHCR, 2023). The unpredictability and perceived omnipresence of AI systems create distinctive forms of psychological harm, as civilians struggle to understand or predict the algorithmic decision-making processes that determine their safety. This makes access to psychosocial care essential, but given the breakdown of public infrastructure during conflict, such access becomes difficult if not impossible.

As AI systems process intelligence and coordinate military responses at great speed, the warning time available to civilian populations shrinks, forcing hasty evacuations that break families apart and increase the care burden on women. It also heightens the vulnerability of people across the SOGIESC spectrum, who are particularly targeted for their sexual and gender identity, gender expression and performance, and sexual orientation at border checkpoints and by security forces and armies. The conflict in Yemen demonstrates how AI-enabled military systems interact with existing gender inequalities to produce compounded vulnerabilities: at least 26% of displaced households are headed by women, of whom 20% are under 18 years of age (UNHCR, 2024). This also shows how conflict creates new patterns of vulnerability that AI systems may fail to recognize or account for in targeting algorithms. The precision claimed by AI systems masks their inability to account for complex social dynamics, such as the presence of women and children in households targeted on the basis of algorithmic assessments of male family members’ activities. This technological mediation of targeting decisions can obscure the human judgment necessary to implement the principles of distinction and proportionality that form the foundation of international humanitarian law.

The gendered impacts of these AI deployments reflect broader patterns of how technological warfare affects civilian populations. Ukrainian women have borne primary responsibility for protecting children and elderly family members during evacuations that AI-enhanced military operations have accelerated (Tomko, 2025). The speed at which AI systems can identify and engage targets has reduced the time available for civilian protection measures, forcing women to make rapid decisions about family safety with limited information about impending threats. AI-powered systems are also used to deliberately target public infrastructure such as power grids, water systems, healthcare, and educational facilities, and these strategic disruptions push women to take on additional care roles for their families. Cynthia Enloe (2014) and Christine Sylvester (2013) have shown how technological warfare transforms intimate spaces and relationships, often in ways that remain invisible to traditional security analyses focused on state-level strategic interactions.

Policy Gaps in Responding to These Challenges

The human cost of military AI lays bare significant gaps in technology development, military doctrine, and international law. Current military systems do not account for the differential impacts of armed conflict, nor of the technologies now deployed within it. Failing to acknowledge the ways in which algorithmic decision-making affects women, children, and other vulnerable populations differently will not only exacerbate the harm, but will also set a new norm that too easily relegates this level of harm to collateral damage.

International humanitarian law as it stands lacks the capacity to conceptualize the harms and legal conundrums that emerge from military AI use, let alone address the specific challenges posed by AI-mediated warfare. The law currently lacks clear standards for human oversight of algorithmic targeting decisions, requirements for algorithmic transparency in civilian harm assessments, and accountability mechanisms for AI-enabled violations of civilian protection principles. Experts on the laws of war, already alarmed by the emergence of AI in military settings, are concerned that its use in Gaza, as well as in Ukraine, may be establishing dangerous new norms that could become permanent if not challenged (The Hindu, 2024). Without deliberate intervention to ensure that AI systems advance rather than undermine civilian protection and gender equality, these technologies risk institutionalizing forms of harm that will affect the most vulnerable populations for generations to come. The choices we make today about how to develop and deploy military AI will fundamentally shape the prospects for civilian protection and gender equality in future conflicts.
