Feminist Approaches to Tech: Interview with Monique Munarini

Monique Munarini is a Brazilian-qualified lawyer who received a Master's in Human Rights and Multi-Level Governance from the University of Padova in Italy and an LLM in Law, Economics and Management from the University of Grenoble-Alpes in France. She joined the National PhD in Artificial Intelligence coordinated by the University of Pisa, where she researches the intersection of law, ethics, and computer science in AI governance. Her research interests include AI ethics, business and human rights, AI due diligence, and the impact of AI on women's rights and human rights more broadly.

Can you tell us about yourself, the work you do, and how your journey into this field began?

I was born in Brazil and studied law. I was always drawn to human rights, but working in human rights at that time meant working for the UN, which was far from my reality. I began working at the Prosecutor's office. I wasn't happy because I felt that my work was theoretical – I wanted to convert that into reality. I decided to pursue a master's degree in Human Rights and Multi-Level Governance in Italy with a full scholarship. During my master's, I had a lecture on facial recognition systems. This was when I began to get interested in AI. I decided to do my master's thesis on gender biases in AI, and everyone was skeptical because they wondered what AI had to do with human rights. This was in 2019. There was also a UNESCO report that discussed how Siri was considered biased against women because of how these devices responded to sexist and misogynistic prompts. That made me dig deeper, and I noticed that there was a whole pipeline of women who were not included within the scope of AI research and who are not encouraged enough to stay in the field.

This was eye-opening, and I wanted to do more. That was when I decided to pursue a PhD on AI and Society, which was advertised as a multidisciplinary program. There, I began to see how gender and feminist theory could be incorporated into computer science. The way I found to integrate the two was ethics.

That's when I began to work with equity as an ethical value, because it is easier for computer scientists to understand. If we talk about women's rights and non-discrimination, they may disengage. But when we talk about ethics, they tend to engage.

You are currently building an equity-based framework rooted in feminist theory, participatory action research, and design justice principles, with the aim of creating AI audits that genuinely account for the needs and voices of all communities affected by AI. What has your journey been like so far?      

A rollercoaster ride! It was very difficult to find space to do the work I am doing, but I was lucky to find supervisors who did not gatekeep the field or prevent me from pursuing the areas of interest I wanted to engage with. They helped me flourish and develop my research, and they supported me in every way they could. I was supervised by men from the Global North, but they were supportive and offered what they could, with the full understanding that they wouldn't be able to support me 100% in the process.

The design work was influenced by my previous research associated with a visiting period at King's College London. There are three key milestones in my journey that I would like to share. First, I focused on proposing equity as an alternative to fairness, because it is an ethical value associated with equality and non-discrimination. In the computer science space, people had begun to invent fairness metrics, which I found were not working. I published my framework as a paper and shared it at conferences on computer science and policy, making a case for equity over fairness. At the International Conference on Electronic Governance in South Africa, I won the best research paper award. This was a powerful confirmation of the importance of what I was doing.

Second, one of my supervisors was part of a project involving feminist action research, in collaboration with an organization for survivors of domestic violence. The project wanted to hear what the survivors thought of explainable algorithms in relation to providing feedback on CV screening processes. I was able to observe those sessions. This motivated me to incorporate participatory research into my own PhD journey, and it was a key moment because I was able to put a face to my research. The women I engaged with in the feminist participatory action research sessions were the ones I wanted to impact with my work. This made it all the more real. When I began talking to women who were struggling with AI systems, and considering the vast diversity of identities that complicated the gendered experiences they faced, it became clear to me that fairness metrics are inadequate – they look at things through a binary lens and do not capture nuance and intersectionality. This helped me see the importance of adopting this framework in my research.

The third piece was finishing my PhD thesis and being able to connect all the dots to build something meaningful that can be implemented in action. Up until now, my focus was on research and the theoretical components; now I can see ways to bring all of this to life through real-world applications.

What are equitable AI audits? Can you share what can change because of the implementation of equitable AI audits? 

I used participatory methods in my research and conducted workshops with both AI experts and affected groups. I selected youth as the impacted group, as they will be the ones to face the long-term impacts of AI systems. They are also quite engaged, so I knew I would get a lot of intersectional perspectives. By drawing from both the affected groups and the experts, I was able to build a set of indicators for what equitable AI audits could be. The mixed backgrounds helped me draw meaningful insights – from policymakers and civil society organizations alike. This helped me make sure that the technical insights were in place, alongside the lived experiences that often get overlooked.

We must also be aware of systemic biases and how they enter the AI lifecycle – both through data and design. This goes beyond the AI lifecycle itself because, ultimately, AI is developed and used within society. The mindsets of those involved, the systems and structures in place, and the different lived experiences are all equally implicated. It is not just about how the system was designed, developed, tested, or monitored. It is important to see where and how the system is being implemented, who is using and assessing its outcomes, what impacts the system itself has, and how it alters and affects other aspects of life.

A very interesting result of my workshops was that a lot of people talked about the "human in the loop" – the need for a human in any decision-making process that involves automated systems, not just because of what the GDPR says, but also because of the technical aspects of it. This pointed to the need for deeper capacity-building efforts, because it is important to examine the kind of training that people deploying, using, and engaging with an AI tool have. In a lot of the workshops I did, it was clear that people were very aware of how the systems are impacting them, and they were curious about who was monitoring the systems and how.

Another intriguing finding was that even when participants recognized they were vulnerable, they did not see that as a justification to absolve themselves of responsibility. They all talked about the shared responsibility tying the developer, designer, deployer, and user across the AI lifecycle. They reflected on how a user can provide feedback and contest the results of a system following an audit.

All of this has informed the work I do now. I am partnering with the AI Accountability Lab (AIAL) led by Dr Abeba Birhane, where we are working on a justice-oriented framework for audits that can drive accountability by addressing power imbalances.   

 

Do you think that if technology were truly owned, developed, and deployed by people as communities instead of by capitalists for profit motives, we might not find ourselves in as many of the crises we face at the moment?

Technology is already owned, developed, and deployed by people as communities, but such examples are hard to find. This is why I do what I do – part of my work involves bringing the voices of these communities to the forefront, to show how technology can be community-owned and a tool for empowerment. I want to share a powerful example: a South African civil society organization called GRIT developed a chatbot to support women who are facing domestic or other forms of violence. The chatbot was developed entirely using participatory methods, which means it was built with insights from women who have faced violence in their lives. Eventually, the chatbot itself emerged as an older, wiser sister who can offer advice. There is also a lot of brilliant work in Australia from researchers of Indigenous origins, who are working to bring Indigenous languages into large language models, as this can impact their communities positively. These technologies are developed based on need.

However, technologies developed in Silicon Valley are not necessarily based on need. Their makers care more about ways to increase profit, alter behavior to suit their ends, and serve their own goals. How can we get different outcomes if we are not solving the overlooked issues of people on the ground? So we have both kinds of initiatives: one based on need and the other based on profit. We need to invest more in needs-based tech, find ways to connect it, facilitate an exchange of knowledge, and direct funds toward it.

What might a decolonial feminist lens bring to our collective engagements with tech, whether as consumers or participants?

One of my supervisors is a philosopher. When I was learning to pitch my research, I consistently said that I was working with equity from a feminist and decolonial perspective. He told me that these are big words and that people might stop listening to me if I used this phrase. My Latin American side has always been stubborn, so I kept doing it and continue to emphasize that I work from a feminist, decolonial perspective. 

What does this framework bring to engagements with tech? Well, shared knowledge of how to empower communities; different perspectives that break hegemonic approaches to human-centric AI and ensure that the "human" represents the full spectrum of diversity; and respect for different streams and sources of knowledge. I look up to a play that Simone de Beauvoir wrote, called "Les Bouches inutiles" in French and "Who Shall Die?" in English. In it, she speaks about equity, the notion of community, and who is included and who is not. I also lean on the wisdom of Paulo Freire and center a feminist, decolonial approach. The goal is not to decide which is better than the other, but to bring multiple streams of knowledge together.

 
