Feminist Approaches to Tech with Aubrey Blanche
By Kirthi Jayakumar
Aubrey Blanche is an expert and executive in equitable people & culture, ethical business operations, and responsible AI. Her work brings an empathic and intersectional approach to helping organizational leaders achieve their operational and growth goals. From fair talent processes and bias-resistant AI products to inclusive communications strategy, she helps organizations think holistically about evolving to meet the needs of a rapidly diversifying and globalizing world.
Can you tell us about yourself, the work you do, and how your journey into this field began?
I think of myself primarily as an ethicist in some ways. I work across workplaces and AI, although the balance is shifting, and I'm moving more and more into AI. I think I am just being myself in the sense that I have always been preoccupied with issues of fairness and justice. If you asked my parents, they would tell you I was a six-year-old who just couldn't handle it when things weren't fair. My interest is really in structural analysis and critical theory, and in trying to understand the ways in which we can imagine a world that is better, which I define in terms of the quality of life, or the opportunity to flourish, for people who are currently marginalized. I look at ways to imagine the world as better for the people who are most vulnerable right now, because I believe that's how we get the broadest benefits for everyone. My role involves working with leading technology firms to think about scaling their people and culture, sustainability, philanthropy, and responsible technology practices. I am a director at an independent nonprofit called the Ethics Center, where I work with our clients to think about ethical decision-making and the ethics of artificial intelligence. I also run a consultancy, where I work with organizations on those issues, and I am a master's student in AI ethics at the University of Cambridge.
What are some of the critical topics in the conversations around ethics and AI that are drawing your attention right now?
Given the week in which we're talking, the showdown between the US Department of War and Anthropic is on my mind. Before I went into technology, my academic training was in political science, specifically security studies; I studied US Government contracting. Everything happening right now falls squarely within the area I've read a lot of books about, and I am really existentially terrified about what's going on, because the existing theories for understanding how power works in the world assume rational actors. That's not what we're dealing with right now, so I think the level of uncertainty and risk is incredibly high. As a human, I am deeply concerned about innocent people who are being put in harm's way or who have been killed.
As for the other AI-related issues that are top of mind for me, one is that I have grown frustrated with the AI hype cycle around productivity and efficiency as objectives. I think that pursuing productivity and efficiency as ends in themselves is really harmful economically. It will disadvantage people who are already marginalized and, quite frankly, it speaks to an incredible lack of imagination among the people driving the dominant discussions in the field. It's as if a human is not valuable if they are not productive.
The opposite idea is value creation, innovation, and creativity. I am really inspired by women who are pushing this – one of my classmates at Cambridge talks about co-intelligence and teaches people how to think about the respective strengths of AI and humans, and how we can actually combine them to create value, which implicitly suggests doing things in new ways. To me, that opens up so much optimistic possibility for imagining a better world and charting a new path to get there. I don't think the people who are running budgets or making big decisions are thinking that expansively, though, about what is actually possible.
I am a bit cynical now, but I refuse to take off my optimist's hat and give up on the idea that things are possible. I see part of that in my own work: the platform I have involves actually speaking to those people, helping them see how their methods can benefit more people, and reorienting ourselves collectively away from some of the trade-offs and zero-sum thinking shaping the way things are devolving right now.
You engage with folks who might not be thinking about ethics upfront as they build technology, and you help them see the value in embedding ethics. What do these conversations look like?
Having done this work – I spent years in diversity, equity, and inclusion, corporate sustainability, and philanthropy – I think part of it is needing to change the language a little bit. What I am having a lot of success with right now is actually having a conversation about risk management. While I would love everyone I work with to treat ethics as the first priority, I also recognize that in the current business environment you're not really rewarded directly for ethics, or for calling for ethics. I think there is some value to that approach. Consumer tastes are changing, and people who are quite senior in their careers have not received training on how to think about ethics in their contexts, so they may not think about ethics with the same level of criticality as they might about hitting their revenue numbers.
Asking the ethical question of what we ought to do head-on can be unproductive for people who haven't been given the skills to adjudicate those questions. Moving to an enterprise risk management frame, I think, makes it easier to couch the concerns in familiar questions. For example, if I am talking to a company that is using AI in an employment context, I would care about the potential discrimination that might arise for marginalized individuals interacting with those systems. I could say that outright, but that is a moral argument that may or may not be persuasive, depending on who's in the room. I can also conceptualize the same problem as a reputational risk, a legal risk, and an ethical risk whose likelihood and impact we can quantify.
I find that even though we're driving toward the same destination, we're taking a slightly different route; because the folks I'm working with already have that framework in their heads, we can have a productive discussion about trade-offs. That's the part of the work that those of us in the justice and ethics space sometimes find really frustrating, and rightfully so: we are constantly translating ourselves. But it is also true that not being willing to be flexible about our communication strategies, and the ways we frame issues, ultimately undercuts our ability to do impactful, good work in the world. My particular ethical praxis is that I would rather carry the emotional labor of that personal frustration if it results in a safer, more responsible outcome in the world.
In order to figure out what our work is, the first step is to really understand both our identity and our positionality. Identity is who you are; positionality is the set of dynamics you experience in a relational context in the social world. I am Mexican-American and European. I'm disabled, but invisibly so. I'm queer, but you can't necessarily tell from looking at me. Even though I have a collection of marginalized identities, I live at the very privileged end of those categories, which are spectrums. How I grapple with that weird tension is critical to me. I try to think about what emotional labor is fair for me to take on, because I do not carry the emotional labor of navigating oppressive systems the way someone who is visibly marginalized does every day. To me, that feels like a balancing of the scales: should I say this thing in this meeting? I know I look like a Karen, so I'm going to get less blowback for saying it, and therefore I should take the risk that this idea enters the room through me, rather than through someone who may experience more abuse or harm for bringing that viewpoint in. Obviously, sometimes you need to step back and open the door for other voices where the risk is appropriate. But that is an example of where I've answered that question for myself. As an academic, I believe that part of my work is theorizing these problems and framing them in ways that people can take forward.
That's what I would encourage everyone who's interested in these issues to do: work toward realizing that we each have a unique role in creating systems change, and that this role is determined by who you are, where you're positioned in the world, and what sort of resources you have access to.
Why are social justice perspectives so often an afterthought in tech development, if not a set of words that gets co-opted rhetorically to push a particular agenda through technology?
What is anecdotally known is that the AI ethics, safety, and responsibility fields tend to be more female, more queer, more brown, and more disabled than the technical side of AI, which tends to have a higher concentration of folks from dominant or majority groups. That makes sense to me, but it feeds an existing problem: in general, in our education systems (and I'm speaking of the West, because that's where I have authority to speak from), there has been a general disdain for and disrespect of kinds of knowledge that aren't engineering-based or technical. There are confounding factors in the kind of epistemic injustice that those of us who are marginalized face, in that our lived experience, and the insight based on that lived experience, is not seen as valid or valuable. Those dynamics play out in this field. But a lot of it also comes down to the fact that a huge number of what I will call technical AI folks are good people who are well-intentioned and want to have a positive impact in the world; their training has simply not provided them with the skills or background knowledge to effectively adjudicate the scale of the ethical challenges they are entangled in. There are certainly some difficult people out there, and I am not denying that. I've worked in tech long enough to know that there are some people with personality disorders running things. But there is also a huge number of people who have been failed by an educational system that did not give them the grounding they would need to act as responsibly as their intentions would suggest.
Do you think that if technology were owned, developed, and deployed by people as communities rather than people as capitalists, we wouldn't find ourselves at the heart of so many of the crises we face?
I'm excited for a future in which more diverse imaginaries are adopted, and I think that comes from bringing together people who have different sets of lived and professional experiences. When I think about the possibility of democratizing AI, to me that means thinking about different purposes, uses, and limitations of AI, up to and including bans on it. I think communities should have the right to opt out of engaging with the technology if the community itself, from a position of sovereignty, decides that it is not appropriate for them. I am excited about projects that are looking at non-English LLMs and sovereign AI in middle- and low-income countries, and about the promise of small language models that solve some of the environmental issues we are seeing. That's where I immediately get filled with hope about the world that could be!
What do you think a decolonial feminist lens could bring to how we engage with AI?
I think the most significant contribution is actually asking what the goal we are moving towards is, and redefining what counts as a worthwhile pursuit. Once you do that redefinition and wayfinding, it opens up possibilities for new methods and new ways of moving forward that encompass ideas like a rejection of human commoditization and an embrace of slower processes that achieve objectives in different ways. These things are so exciting to me as someone who has spent most of my career in high-growth technology, which is the epicenter of those particular dynamics. I want to see a world where more options are available. In my vision of the world, the linear, fast-paced route is an option on the table, but not the dominant mode when it has been shown to prioritize things that are anti-life.