Cyber Warfare and Information Operations: AI's Role in Digital Violence
By Kirthi Jayakumar
With the emergence of new forms of digital violence, conventional boundaries between war and peace, and between public and private spheres, are blurring (Citron, 2014). The proliferation of AI-powered information operations marks a paradigm shift from kinetic warfare to psychological and social manipulation at unprecedented scale and sophistication. AI equips a wide variety of actors to wage campaigns of disinformation, harassment, and intimidation that target particular individuals, communities, and even states on the basis of markers of identity (Noble, 2018). This is not, however, an isolated development: AI-enabled cyber warfare and information operations are an extension of existing patterns of violence and marginalization into digital spaces, where the deployment of algorithms amplifies and magnifies harm (Noble, 2018; Banet-Weiser, 2018). Disinformation, cyberattacks, and harassment have long unfolded within cyberspace, but the infusion of AI has turned them into scalable weapons that can target particular communities, such as women and non-binary people, people from marginalized backgrounds, and dissenting voices.
The Anatomy of AI-Powered Information Warfare
Information warfare makes use of AI across multiple domains, mainly to manipulate public opinion, suppress dissent, and destabilize democratic institutions (Coombs, 2024).
Machine learning algorithms have the potential to generate convincing fake content at a massive scale, analyze target audiences for maximum psychological impact, and coordinate simultaneous attacks across multiple platforms and media channels – enhancing traditional propaganda and disinformation techniques significantly (Buffett Brief, 2025).
Natural language processing can generate human-like text that floods social media platforms, comment sections, and online forums with coordinated messaging (Hyperscience AI, n.d.). AI systems can simulate authentic human voices, adapt messaging to specific demographic groups, and respond dynamically to counter-narratives or fact-checking efforts. This level of sophistication makes detection difficult, even for expert observers.

Computer vision and deepfake technologies allow for the creation of convincing fake images and videos that can damage reputations, spread false information, or create evidence of events that never occurred. These synthetic media capabilities have particular implications for gendered violence, as they enable the creation of non-consensual intimate imagery and fake sexual content designed to humiliate and silence targets.

Recommendation systems designed to maximize engagement often prioritize sensational or emotionally provocative content, creating ideal conditions for AI-generated disinformation to achieve viral reach.
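The engagement dynamic can be made concrete with a minimal, purely illustrative sketch of an engagement-maximizing ranker. Nothing here reflects any real platform's system: the post fields, the weights, and the explicit provocation multiplier are all assumptions introduced only to show how a ranking objective built solely on predicted interaction tends to surface the most provocative content.

```python
from dataclasses import dataclass

# Illustrative toy ranker (not any platform's actual system): posts are
# scored purely by predicted engagement, so content that provokes strong
# reactions rises to the top without any explicit intent to promote it.

@dataclass
class Post:
    text: str
    predicted_clicks: float   # model-estimated click probability (assumed)
    predicted_shares: float   # model-estimated share probability (assumed)
    provocation: float        # 0.0-1.0 emotional charge (assumed feature)

def engagement_score(post: Post) -> float:
    # Hypothetical weights: engagement objectives typically blend several
    # interaction signals into a single ranking score.
    base = 1.0 * post.predicted_clicks + 2.0 * post.predicted_shares
    # In practice, provocative content earns higher *predicted* engagement;
    # the explicit multiplier here just makes that dynamic visible.
    return base * (1.0 + post.provocation)

feed = [
    Post("Measured policy analysis", 0.10, 0.02, 0.1),
    Post("Fabricated outrage about a public figure", 0.30, 0.25, 0.9),
]

for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):.2f}  {post.text}")
```

The point of the toy example is the objective, not the arithmetic: when the only optimized quantity is engagement, provocative disinformation needs no special treatment to win the ranking.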
The intersection of AI-enabled harassment with other forms of marginalization reveals how digital violence particularly impacts women of color, LGBTQ+ individuals, and others facing multiple forms of discrimination (Banet-Weiser, 2018). AI systems trained on biased datasets may generate harassment that incorporates racist, homophobic, or transphobic elements alongside misogynistic content, creating intersectional forms of digital violence that reflect offline patterns of marginalization.
This symbiosis between AI-generated content and platform algorithms creates information ecosystems where truth becomes increasingly difficult to distinguish from manipulation. The consequences can be disastrous: the use of radio to spread propaganda during the Rwandan genocide demonstrated how mass media and misinformation can shape the course of a conflict and enable genocide, ethnic erasure, and war crimes. When such dynamics are amplified by AI, the potential for harm grows accordingly. With the deployment of AI, state and non-state actors alike have a new resource with which to amplify propaganda and harmful rhetoric.
Targeted Disinformation Campaigns: An Intersectional Analysis
AI-powered disinformation campaigns have targeted communities that are vulnerable on account of their identity attributes, as well as women and non-binary people who are visibly engaged in social changemaking as political candidates, journalists, and activists (Jane, 2017; Doucet, 2019). These targets are often attacked through gendered narratives about competence, appearance, and sexual behavior, or through manipulated images that create social stigma designed to drive them out of public life. AI systems have also been used to generate thousands of fake accounts that amplify misogynistic messaging, manufacturing artificial consensus around harmful stereotypes while drowning out supportive voices.
Deepfake technologies are among the most concerning applications of AI in information warfare, with particular implications for gendered violence. AI systems can create convincing fake videos by training neural networks on existing footage of target individuals, enabling synthetic content that shows people saying or doing things they never actually did. The development of deepfake technology has been driven significantly by demand for non-consensual pornography, which overwhelmingly targets women for sexual exploitation (Powell & Henry, 2017; West et al., 2019). Female politicians and journalists are particularly vulnerable to deepfake attacks that sexualize, demean, or misrepresent their words and actions (Sobieraj, 2020). The mere threat of deepfake creation can deter women from public participation even when actual attacks do not occur, as the possibility of sophisticated character assassination creates anticipatory self-censorship.
Detection technologies struggle to keep pace with generation capabilities, creating situations where false content may circulate widely before identification and removal. The lag between creation and detection creates windows of opportunity for maximum damage to targets' reputations and psychological wellbeing. Even after identification, the rapid spread of synthetic content across multiple platforms makes complete removal nearly impossible.
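The size of that window can be illustrated with a simple back-of-the-envelope model. The sketch below is hypothetical rather than empirical: the seed audience, hourly spread rate, and detection delays are assumed values, and unchecked exponential growth is an upper-bound simplification, but it shows why even modest detection lags translate into very large audiences reached before removal.

```python
import math

# Toy model of the detection window: synthetic content spreads roughly
# exponentially at rate r (per hour) from an initial seeded audience until
# it is detected and removed after `delay` hours. All numbers are assumed.

def reach_before_takedown(seed: int, r: float, delay: float) -> int:
    """Approximate audience reached before detection, assuming unchecked
    exponential spread: N(t) = seed * e^(r * t)."""
    return int(seed * math.exp(r * delay))

SEED_AUDIENCE = 500   # hypothetical initial bot-amplified views
GROWTH_RATE = 0.5     # hypothetical per-hour spread rate

for delay_hours in (2, 6, 12, 24):
    reach = reach_before_takedown(SEED_AUDIENCE, GROWTH_RATE, delay_hours)
    print(f"detected after {delay_hours:>2}h -> ~{reach:,} views")
```

Under these assumed parameters, a detection lag of a few hours limits reach to the low thousands, while a one-day lag puts the content in front of millions; the exact figures matter far less than the shape of the curve.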
Regulating AI-Powered Digital Violence
Understanding AI-powered digital violence requires recognizing its connections to broader systems of oppression rather than treating it as an isolated technological phenomenon. AI-powered digital violence does not stand alone: it is a technological extension of offline patterns of violence and silencing, and it effectively gives a fillip to human behavior that normalizes, chooses, and invests in violence as a norm rather than an exception. The technological sophistication of AI-generated content amplifies the harm potential of existing propensities for violence and expands the pool of individuals targeted by it.
The global reach of AI-powered harassment campaigns creates new scales of revictimization, where survivors of offline violence may face ongoing digital persecution that transcends geographic and temporal boundaries. This persistence transforms isolated incidents of violence into ongoing campaigns of intimidation that can last for years and span multiple platforms and jurisdictions. The targeting patterns, tactics, and impacts of AI-enabled harassment mirror established forms of gender-based violence, suggesting that effective responses must address both technological capabilities and underlying social dynamics that enable their weaponization (Mantilla, 2015).
Developing effective responses requires coordinated efforts across technical, legal, and social domains. Detection technologies, platform policies, legal frameworks, and community support systems must all evolve to address the scale and sophistication of AI-enabled digital violence. However, technical solutions alone cannot address the social dynamics that enable the weaponization of AI technologies against marginalized communities. Ultimately, countering AI-powered digital violence requires confronting the broader patterns of inequality and discrimination that these technologies amplify and extend into digital spaces.
References
Banet-Weiser, S. (2018). Misogyny and Digital Culture. NYU Press.
Buffett Brief. (2025). The Rise of AI and Deepfakes. https://buffett.northwestern.edu/documents/buffett-brief_the-rise-of-ai-and-deepfake-technology.pdf
Citron, D. K. (2014). Hate Crimes in Cyberspace. Harvard University Press.
Coombs, A. (2024, October 25). Persuade, Change, and Influence with AI: Leveraging Artificial Intelligence in the Information Environment. Modern War Institute. https://mwi.westpoint.edu/persuade-change-and-influence-with-ai-leveraging-artificial-intelligence-in-the-information-environment/
Doucet, A. (2019). The slow violence of algorithmic governance: A feminist perspective. Signs: Journal of Women in Culture and Society, 44(2), 427-453.
Hyperscience AI. (n.d.). Natural Language Processing. https://www.hyperscience.ai/resource/natural-language-processing/
Jane, E. A. (2017). Misogyny Online: A Short (and Brutish) History. SAGE Publications.
Mantilla, K. (2015). Gendertrolling: How Misogyny Went Viral. Praeger.
Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press.
Powell, A., & Henry, N. (2017). Sexual Violence in a Digital Age. Palgrave Macmillan.
Sobieraj, S. (2020). Credible Threat: Attacks Against Women Online and the Future of Democracy. Oxford University Press.
West, S. M., Whittaker, M., & Crawford, K. (2019). Discriminating Systems: Gender, Race and Power in AI. AI Now Institute.