Building a Feminist AI Governance Framework
While AI Governance and Responsible AI practices call for ethics and the centring of human values and rights, many governance mechanisms risk re-entrenching the same systemic and structural violence that underpins most technologies and systems at the heart of critical operations the world over. Embedding a feminist approach offers one lens for moving beyond compliance as a check-box exercise toward a proactive posture that treats ethics and responsibility as foundational to engaging with AI rather than as an afterthought.
Broadly, a feminist approach calls on us to:
Reflect on Power: A feminist approach invites us to reflect on who holds power and who does not, and who bears the impact when power is held and exercised by some to the exclusion of others. A simple way to put this into practice is to map decision makers and impacted communities and identify the gaps between the two (a minimal sketch of such a mapping appears below). From there, identify structural inequalities and systemic barriers to equity and justice, with a meaningful commitment to shifting participation toward marginalized voices. Shifting participation could look like embedding participatory design, community consultation, and power mapping. These practices must be carried out regularly and consistently, and their findings acted on meaningfully.
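As a concrete illustration, here is a minimal sketch of that mapping exercise. The stakeholder names and groupings are hypothetical placeholders; a real exercise would surface them through participatory consultation rather than a hard-coded list.

```python
# A minimal, assumption-laden sketch of power mapping: list who decides
# and who is affected, then surface impacted groups with no seat at the table.
# All group names below are illustrative, not a prescribed taxonomy.

decision_makers = {"product leadership", "ML engineering", "legal"}
impacted_communities = {
    "data annotators",      # labour behind the training data
    "loan applicants",      # subject to the automated decisions
    "ML engineering",       # both builds and is affected by the tool
}

# The "gap" is every impacted group excluded from decision-making.
representation_gap = impacted_communities - decision_makers

print("Impacted but excluded from decision-making:")
for group in sorted(representation_gap):
    print(f"  - {group}")
```

Even this toy version makes the core feminist question legible: the output is precisely the set of people who bear the impact of power without holding any of it.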
Embed Accountability: A feminist approach calls on us to recognize that the design, development, deployment, and use of AI can produce harm and distribute benefits inequitably. It offers the value of accountability, which helps determine who is responsible when harm occurs. In practice, this looks like establishing clear ownership, conducting audits and documenting audit trails (sketched below), maintaining public documentation, deploying community-centric grievance and redress mechanisms, and relying on external and community oversight.
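To make the audit-trail idea concrete, here is a minimal sketch assuming a simple append-only JSON-lines file. The field names and the record_decision helper are illustrative assumptions, not a standard schema; what matters is that every consequential action is written down with a named, accountable owner.

```python
# A minimal sketch of an append-only audit trail (assumed JSONL format).
import json
from datetime import datetime, timezone

AUDIT_LOG = "ai_audit_trail.jsonl"  # hypothetical file path

def record_decision(system: str, action: str, owner: str, rationale: str) -> None:
    """Append one audit entry that names a responsible owner."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "action": action,
        "owner": owner,        # clear ownership: a named, accountable party
        "rationale": rationale,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

record_decision(
    system="loan-scoring-v2",
    action="deployed to production",
    owner="credit-risk team",
    rationale="passed fairness review",
)
```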
Practice Inclusion: Given the structural and systemic factors that underpin the AI lifecycle, inclusion is not the default. A few benefit from AI far more than most, while the most vulnerable bear its harms and perils. Inclusion is not only about bringing multiple, diverse, and critical voices into decision-making and operations across the AI lifecycle, but also about embedding clear and ethical practices of benefit sharing and harm mitigation. In practice, it looks like relying on diverse datasets, creating and implementing inclusive design processes, paying attention to accessibility, conducting bias audits (one simple check is sketched below), and facilitating AI literacy and red-teaming exercises.
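As one concrete example of a bias-audit check, the sketch below compares a model's positive-outcome rates across groups using the widely cited four-fifths heuristic. The groups and counts are fabricated for illustration, and a real audit would go well beyond a single metric.

```python
# A minimal sketch of one common bias-audit check: comparing selection
# rates across groups against the "four-fifths rule" heuristic.
# The groups and decision counts below are fabricated illustrations.

outcomes = {
    # group: (positive decisions, total decisions)
    "group_a": (80, 100),
    "group_b": (55, 100),
}

rates = {g: pos / total for g, (pos, total) in outcomes.items()}
reference = max(rates.values())  # highest selection rate as the baseline

for group, rate in sorted(rates.items()):
    ratio = rate / reference
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, ratio {ratio:.2f} [{flag}]")
```

A single ratio like this is a starting point, not a verdict; an inclusive audit would pair it with intersectional breakdowns and community input on which outcomes count as harm.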
Prioritize Care: A feminist approach centres care as a core value. Exploitation and harm have been known to characterize different stages of the AI lifecycle. The labour and data that make AI possible are often not meaningfully accounted for in the AI journey: those who help make an AI tool what it is seldom benefit from it, often face exploitation, and have their stories erased from mainstream narratives on AI. A care-centric framework pays attention to who is being exploited, harmed, disadvantaged, and set back as a result of AI, and embeds a just, inclusive, and care-oriented system to minimize, mitigate, and counter these harms. A feminist lens here would prioritize well-being over productivity, anticipate and address harm before deployment, and build systems for repair and well-being rather than stopping at prevention.
Recognize Desirable and Undesirable Consequences: While harm from AI may not always be intended or deliberately designed, accountability is too often evaded by hiding behind good intentions. As a result, those facing harm continue to bear the brunt of it with no justice. Moving beyond the familiar debate over intent versus impact, Sheila Jasanoff's idea of desirable and undesirable consequences offers greater value. In attending to whether a consequence is desirable or undesirable, the focus shifts away from intention and toward embedding accountability without exception.
Prioritize Decoloniality: For the longest time, colonial structures and systems have underpinned the way we operate. Historically, raw materials were extracted from colonies for free or at low cost, and finished goods were sold back to them at high prices, creating a profound imbalance and depleting the colonies' resources and economies. We see this replicated in the Global North's dominance in developing and building AI, using data extracted from the Global South with scant regard for consent. Decoloniality allows us to interrogate these power structures and avoid replicating them, through practices like data sovereignty and community-owned data and AI models.
Embed Intersectionality: Feminist approaches go beyond simply adding gender and stirring the mix. They recognize that gendered experience is complicated by the presence and interplay of other identity attributes such as caste, race, religion, ability, age, and much more. As a result, strategies that focus exclusively on correcting gender bias in AI may not go very far. Recognizing the diverse lived experiences of everyone involved in building an AI tool, and everyone who will face the impact of its use, is critical to addressing harm, mitigating risk, and prioritizing benefit-sharing in meaningful ways. Kimberlé Crenshaw's concept of intersectionality helps us do just that.