Feminist Approaches to Tech: Interview with Paola Galvez-Callirgos

Paola Galvez-Callirgos

Paola Galvez-Callirgos is a recognised expert in digital technology policy, with over a decade of experience delivering strategic advisory, building communities, and leading high-impact initiatives across the private sector, public institutions, and multilateral organisations. She holds a Master of Public Policy from the University of Oxford and has been appointed to two prestigious UNESCO networks: AI Experts Without Borders and Women for Ethical AI. Paola's passion and deep understanding of the tech ecosystem have earned her recognition as a sought-after keynote speaker at global conferences.

Can you tell us about yourself, the work you do, and how this journey into this field got started for you?

I am a Peruvian lawyer and I specialize in technology policy. My work is all about regulatory frameworks, which include laws, regulations, national strategies, and action plans that enable a country to leverage the potential of a specific technology. Lately, I have focused more on artificial intelligence. This journey started in 2013, when I joined the Peruvian subsidiary of Microsoft as an intern. I was interested in technology, and I was moved to bring equality to my country. Peru is a developing country where many disparities exist. In Peru, AI was a huge novelty, and my first interaction with this technology was a demo of Seeing AI, an AI system that helps people with disabilities navigate their surroundings. That made me see how meaningful technology can be when used for inclusion. As a lawyer, I realized that I could contribute by helping create the regulations that would enable my country to leverage the potential of these technologies. Now, I don't practice law, as I am focused on the development of public policies and regulatory frameworks. In other words, I work at the point where a law or regulation is developed, approved, and brought into existence.

My journey began in the private sector with Microsoft and a few other companies before I joined the Peruvian government in 2021, at the Presidency of the Council of Ministers, as strategic advisor to the Secretary of Digital Transformation. That was my most meaningful role, as I got to contribute to creating the rules that built institutional foundations. I was privileged to have the trust of my boss at the time, which allowed me to create programs that fostered digital skills, a critical component if technology is to benefit all.

I created a program called Digital Girls Peru, which was wonderful! I encountered a lot of pushback from people in the government, who told me that I should focus on a wider pool than girls. "We need many people, especially the new generation, to learn how to deal with technology, particularly artificial intelligence," they would tell me. However, we must adopt a gender lens and incorporate intersectionality to highlight and bridge all the gaps in the system. After my work with the government, I went to Oxford to pursue my Master's in Public Policy. That changed the way I see life and how I wanted to contribute my services. I decided that I was going to work for organisations whose values align with my own, so that together we can create value in society.

I didn't return to the private sector, but became an independent consultant. I now work with the OECD, UNESCO, and several other organisations. I also work part-time as AI Ethics Manager at Globethics, a Swiss NGO with global impact.

Why is embedding social justice principles in the development of tech an afterthought more often than a norm?

In all the years that I've been advising governments, companies, and even international bodies, one thing is clear: it's not that they lack good intentions, because most do try to act with good intentions. I think it's more a question of where and how power is distributed, and what the incentives are. Unfortunately, in a lot of tech projects, the key focus is on delivering concrete outputs. In roles I've held previously, I have pushed for a focus on equity, accessibility, gender impacts, and sustainability as well. These are, however, very hard to quantify; they show returns over the long term. But they are conscious choices that we need to make for long-term social good.

On the one hand, there are economic pressures on both government and the private sector. In the private sector, there is also the race to innovate, with many competing to be the first to launch a new AI tool or large language model. These actors don't focus on the social justice side of things and do not necessarily pay attention to community-level realities. On the other hand, many decision-makers think technology is neutral. But this is not true. We need to understand that datasets, algorithms, and platforms embed both human choices and existing inequities. Without deliberate effort and conscious choices, existing biases are replicated at scale.

There are claims that data is the new oil and that tech is changing the face of geopolitics. From your seat at the table, what stands out for you, especially as you look at it through a tech ethics lens?

This analogy has been around for a while – in fact, I remember it coming up back in 2013. It is true that data is fueling economic growth. Unfortunately, countries are racing at the moment. When any actor seeks to develop or deploy AI, my first question is to understand what data they have. If an organization does not have data that is good quality, up to date, and diverse, no AI project can succeed. Countries are racing for this leadership. They're also racing to secure semiconductor supply chains. Technology has become a source of geopolitical tension, or a geopolitical driver. Most dialogues have centered on investment and innovation. As a result, powerful nations are ahead of the curve, while countries in the global majority are left behind as their data is extracted. Ethical concerns include labor exploitation in data labeling, the environmental impacts of massive computing and resource use, and the placement of data centers in water-scarce places like Chile. The ethical challenge is worsening with geopolitical tensions.

You help shape human-centric tech policies by mobilising communities and offering strategic guidance. What are some of your observations on human-centric tech policies? Are we there yet?

No. We're not there yet. We've made progress, yes, because key concepts are now mainstream on the policy agenda. Many national artificial intelligence strategies reference human-centric principles or claim to take a human-centric approach. Unfortunately, these words remain aspirational: they're just on paper and are not backed by clear duties or indicators. When we speak about human-centric policies, participation and inclusive policymaking are key, and this is currently underdeveloped. I've been doing research on how civil society from the global majority participates in global AI governance. The reality is that they learn about a proposal or regulation too late, because consultations are too short and expert panels are dominated by industry.

Grassroots groups, women's organizations, and marginalized and vulnerable communities are not invited to be part of these processes. When they are invited, the host organization doesn't provide the resources or support for them to contribute meaningfully. With such exclusions in place, policies rarely account for how gender, race, economic status, and geography interact. Without this lens, even solutions that seem human-centric on paper still reproduce existing inequalities.

I'd love for you to imagine how this might pan out, out loud: "If technology was owned, developed, and deployed by people as communities rather than by people as capitalists, we wouldn't be in the heart of the many, many crises we find ourselves in." Perhaps you agree, perhaps you don't – but we'd love to hear what this brings up for you!

I partially agree. On the one hand, if communities collectively owned and governed AI priorities, we would not be thinking of user growth metrics or the returns of a new technology or development. We would be thinking, instead, of equitable access, social wellbeing, long-term resilience, how we are going to prepare our community to use this technology, and more. On the other hand, if communities governed technology on their own, they might struggle to fund large-scale research and development. We don't need extremes; rather, the development of AI needs global coordination. We need high-end expertise in this new technology, which small communities might find hard to access. This is why we need to start working as a collective. We must all ask ourselves whether we are where we thought we would be with this disruptive technology. We must strive to understand how the technology works and how small steps can make a huge impact.
