Non-LAWS AI and the Human Costs of Military AI

By Kirthi Jayakumar

Non-LAWS Military AI

There has been a heavy focus on LAWS in most policy, regulatory, and legislative spaces globally. This focus has come at the cost of the attention that is due to the full spectrum of military AI already in wide deployment (Nadibaidze et al., 2024). Military AI does not start and end with LAWS (Rosen, 2024). Rather, it includes a growing range of applications, tools, and use cases (Mohan, 2023). Aside from unlocking autonomy in weapons systems, AI has been put to use in command-and-control operations, information management, logistics, and training (UNIDIR, 2023).

One of the most commonly used technologies is the AI-based Decision Support System (AI DSS). Used to inform decision-making on targeting and the use of force (Nadibaidze et al., 2024), AI DSS typically analyse vast amounts of data, identify patterns, offer recommendations for action, and make predictions (Zhou & Griepl, 2024). In effect, they guide decisions on what to attack, when, and how, in order to maximise precision and accuracy while economising on resources (Zhou & Griepl, 2024).

AI DSS are not intended to replace human involvement in decision-making; instead, they require machines and humans to work collaboratively. These tools have been considered useful and even ideal because they bring together precision, reliability, and efficiency without doing away with human intelligence (Scharre, 2016). In practice, however, decisions are outsourced to machines in order to create psychological distance from the military operation, and sometimes decisions are left to the machine because the window between receiving an insight and determining an outcome is so short that there is little room for nuanced judgement (Provan, 2025). The result, too often, is that the human’s role in these processes is reduced to a step of approval – the quintessential “rubber stamp,” at best.

Many of these tools have been used in contemporary military operations and warfare. For instance, in Ukraine, Russia has been using AI to monitor military manoeuvres, intercept and translate communications, and make decisions on what and when to target (SIPRI, 2025). In Gaza, Israel has used the Gospel to generate lists of buildings and structures and categorize them into tactical, underground, residential, and power targets for strikes (SIPRI, 2025); Lavender to produce lists of individuals to be targeted based on a numerical rating of the perceived likelihood of their affiliation with particular armed groups (Human Rights Watch, 2024); and Where’s Daddy? to trace the movements of individuals marked as military targets using geolocation, and then target them along with their families and other residents of the building they are in.

The Use of Military AI for Non-Military Purposes: Surveillance, Borders, and Control

The military origins of AI have invariably shaped its civilian applications, with military ideas and value systems underpinning most of these engagements. One major concern around the increased proliferation of military AI is its potential use for non-military purposes in the larger security sector (Khlaaf et al., 2024). The convergence of AI with surveillance technologies has enabled this proliferation, transforming the way states monitor, control, and police populations in public places, at borders, within marginalized communities, and in refugee camps (Madianou, 2024). Through these technologies, the exercise of state power over people has undergone a qualitative transformation in ways that exacerbate dominance and control. Communities made vulnerable along intersecting lines of race, gender, class, nationality, and other markers of marginalisation bear the heaviest burden of this power (Madianou, 2024).

Understanding AI Surveillance Architecture

Military AI surveillance systems operate through integrated networks of sensors, databases, and analytical algorithms that process vast quantities of data in real time (Viveros Álvarez, 2024). They combine traditional surveillance technologies, such as cameras, microphones, and motion detectors, with more advanced AI capabilities such as facial recognition, behavioural analysis, and predictive modelling. They rely on machine learning algorithms trained on massive datasets to identify patterns, flag anomalies, and make automated decisions about threat levels and appropriate responses.

Because these datasets do not accurately represent diverse populations and their lived experiences, these systems incorporate embedded assumptions around threat detection that become problematic when applied to civilian populations (Acheson, 2020). Further, because these datasets may include data that individuals never consented to sharing, such mechanisms do not always operate with the full, free, and informed consent of those whose data was vital to building the system. Military AI prioritizes rapid decision-making, pattern recognition, and threat neutralization. These priorities translate poorly to complex civilian environments, where context, nuance, human rights, and ethical considerations – rather than efficiency alone – should inform policy and legal responses.
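To make the training-data concern concrete, the following is a minimal sketch, not drawn from any deployed system; every number, group name, and modelling choice is an illustrative assumption. A simple "anomaly" detector learns what counts as normal almost entirely from a well-represented group, and then flags perfectly ordinary members of an under-represented group at a far higher rate, simply because their data looks unlike what the system was trained on.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical behavioural feature (e.g. a movement-pattern statistic).
# Group A dominates the training data; group B is barely represented,
# and its ordinary values sit slightly apart from group A's.
group_a_train = rng.normal(loc=0.0, scale=1.0, size=9_500)
group_b_train = rng.normal(loc=1.5, scale=1.0, size=500)
training_data = np.concatenate([group_a_train, group_b_train])

# "Normality" is learned almost entirely from group A.
mean, std = training_data.mean(), training_data.std()

def is_flagged(values, threshold=2.5):
    """Flag anyone whose feature deviates 'too far' from the learned norm."""
    return np.abs(values - mean) / std > threshold

# Evaluate on new, perfectly ordinary members of each group.
new_a = rng.normal(loc=0.0, scale=1.0, size=10_000)
new_b = rng.normal(loc=1.5, scale=1.0, size=10_000)

print(f"flag rate, well-represented group A:  {is_flagged(new_a).mean():.1%}")
print(f"flag rate, under-represented group B: {is_flagged(new_b).mean():.1%}")
# Group B is flagged many times more often, not because its behaviour is
# threatening, but because the system never learned what "normal" looks
# like for that group.
```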

The Harms of AI Surveillance

Military-grade AI systems offer precision and speed in surveillance, and in the process amplify existing patterns of discrimination and social control (Madianou, 2024). This takes the form of high-precision border security systems, population monitoring technologies, and predictive policing algorithms. Technologies developed for battlefield reconnaissance have been deployed to monitor migration routes, analyse travel patterns, and predict unauthorised border crossings (Jones, 2016; Magnet, 2011). The same algorithms used to track insurgent movements are used to monitor “suspicious” behaviour among asylum seekers, refugees, and economic migrants (Madianou, 2024). In the security sector, military AI technologies are used for predictive policing (O’Donnell, 2019), where they analyse historical data, demographic information, and behavioural patterns to identify “high-risk” individuals and locations for intensive surveillance and intervention.

Predictive policing is built on algorithms embedded with historical and systemic biases, which mirror and amplify existing patterns of discriminatory enforcement (Amoore, 2020; O’Donnell, 2019). For example, systems trained on historical arrest data perpetuate racial disparities in policing, as prior instances of discrimination become encoded into algorithmic decision-making (Noble, 2018). Regions with minority populations and historically marginalised groups receive disproportionate algorithmic attention. This creates feedback loops where increased surveillance results in more arrests – effectively reinforcing discriminatory assessments of these areas as “high-crime” (a dynamic sketched in the simulation below). There are also gendered dimensions of predictive policing that are sidelined in policy spaces. Algorithms designed to predict domestic and intimate partner violence fail to account for the complex dynamics of such violence, often misreading threat patterns and placing survivors at greater vulnerability and risk.
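The feedback-loop dynamic can be illustrated with a deliberately simplified simulation; this is a sketch under stated assumptions, not a model of any real deployment, and the district names, rates, and allocation rule are invented for illustration. Two districts have identical underlying offence rates, but one begins with a larger recorded arrest count, so an allocation rule that sends patrols wherever past arrests are concentrated keeps generating more arrests in that district.

```python
import random

random.seed(42)

# Two districts with the SAME true underlying offence rate.
TRUE_OFFENCE_RATE = 0.05
PATROLS_PER_DAY = 100

# Historical arrest counts differ only because District B was policed
# more heavily in the past -- the bias already encoded in the data.
arrests = {"District A": 50, "District B": 150}

for day in range(365):
    total = sum(arrests.values())
    for district, past_arrests in list(arrests.items()):
        # "Predictive" allocation: patrols go where past arrests are recorded.
        patrols = round(PATROLS_PER_DAY * past_arrests / total)
        # More patrols in a district means more offences are observed and
        # recorded there, even though the true rate is identical everywhere.
        new_arrests = sum(
            1 for _ in range(patrols) if random.random() < TRUE_OFFENCE_RATE
        )
        arrests[district] += new_arrests

print(arrests)
# After a year, District B has accumulated far more recorded arrests than
# District A despite identical underlying behaviour, and the allocation
# rule reads that gap as evidence that B is "high-crime".
```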

Women and non-binary people, especially those from communities subject to intensive predictive policing, face greater risks of family separation through the arrest and incarceration of partners and relatives flagged by algorithmic risk assessments, often without attention to context. Facial recognition systems demonstrate significantly higher error rates for people of colour, and misidentification leads to wrongful detention, harassment, and violence (Gilliom & Monahan, 2013). Class compounds these vulnerabilities, as people from low-income communities of colour are often subject to more intense surveillance. Immigration status complicates this dynamic even further, as AI systems designed to identify “illegal” presence treat unauthorized immigrants as inherently suspicious, regardless of their actual behaviour. Military AI systems are primarily trained on datasets dominated by white male subjects of a privileged class with citizenship status, and are thus beset with these biases.
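A back-of-the-envelope calculation shows how unequal error rates compound with unequal exposure to surveillance. The figures below are assumptions chosen purely to illustrate the arithmetic, not measurements of any particular system: a higher false match rate, multiplied across far more searches, yields orders of magnitude more misidentifications.

```python
# Hypothetical figures, chosen only to illustrate the arithmetic.
scenarios = {
    "heavily surveilled community": {"searches": 500_000, "false_match_rate": 0.010},
    "lightly surveilled community": {"searches": 100_000, "false_match_rate": 0.001},
}

for community, s in scenarios.items():
    wrongful_flags = s["searches"] * s["false_match_rate"]
    print(f"{community}: ~{wrongful_flags:,.0f} expected wrongful flags")

# 5,000 versus 100: a tenfold error-rate gap combined with five times the
# exposure produces fifty times as many misidentifications, each one a
# potential wrongful stop, detention, or worse.
```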

Addressing Surveillance Through a Feminist Lens

The use and deployment of military AI in surveillance amplifies existing patterns of discrimination and creates new forms of algorithmic oppression (Zuboff, 2019). Seen through an intersectional lens, the harmful impacts of these systems drive home the need for policy that recognises the harms of surveillance itself, alongside the added complexities that AI introduces. The military origins of these technologies shape their civilian applications in ways that prioritize security over human rights and efficiency over equity; addressing the deployment of AI in surveillance can therefore happen meaningfully only if the harmful nature of surveillance itself is recognised. With the continued expansion of military AI surveillance, the social, political, and technical dimensions of algorithmic discrimination must all be addressed. This requires challenging biased implementations and questioning the fundamental assumptions underlying the militarization of civilian surveillance. The stakes of this struggle extend beyond immediate policy debates to fundamental questions about democracy, human rights, and social justice in an age of algorithmic governance.
