Collective Work: Coming Together to Govern AI
In 1983, Angela Davis delivered a lecture at the University of California in which she argued that the dismantling of structural violence could not be accomplished by individuals acting alone, however gifted or determined. Transformation, she insisted, required coming together — creative, sustained, collective work that understood the connections between struggles and refused to treat any one of them as separable from the rest. She was speaking about racism, incarceration, and gender-based violence. But her argument holds with equal force for artificial intelligence. The concentration of AI development in a small number of corporations and states, the extraction of labor and land from communities in the majority world, the reduction of complex human lives to data points to be optimized: These are not technical problems to be solved by better algorithms. They are structural problems that require structural responses. And structural responses require collective work.
Collective work, the act of coming together to mutually accomplish a shared task toward a shared goal, is not a novel proposal. It is the foundational wisdom of Indigenous communities across the world, embedded in practices that have governed land, labor, and community life for centuries. It calls for relationality, reciprocity, and an understanding of interconnectedness as the ground of human coexistence. In almost every case, it is also a physical practice: bodies showing up, working side by side, bearing weight together. That embodied dimension is not incidental. It is the point. And it is precisely what is absent from the rooms where AI is governed today.
Understanding Collective Work as an Indigenous Practice
The global scope of collective work practices is itself significant: it tells us that what has been framed as individualism’s opposite is not a marginal or exotic tradition but a majority-world norm. From East Africa to the Andes, from Southeast Asia to the Cherokee Nation, communities have developed rich, differentiated practices of coming together to work. But to name them in a list is to risk exactly the kind of extraction this piece argues against — treating living practices as data points. Rather than survey them all, it is worth dwelling in a few to understand what they actually teach.
Among the Quechua and Aymara people of the Andes, the practice of Mink’a is a significant component of everyday life. Participants come together to support the whole community and are paid in kind for their labor. Mink’a is neither charity nor a transaction. It is an expression of the understanding that the community’s flourishing and the individual’s flourishing are inseparable. Labor is not a commodity sold to the highest bidder. It is a form of relationship and an acknowledgment of reciprocity: what one does for others today creates the conditions for what others will do for oneself and for the wider community tomorrow. This reciprocity offers an understanding of the way the world actually works.
In Rwanda, Umuganda, meaning “coming together in common purpose to achieve an outcome,” is observed nationally on the last Saturday of each month as a day of community service. Roads are built, public spaces maintained, and community infrastructure repaired, not by a contractor hired by the state, but by people who understand themselves as jointly responsible for the world they share. This is not an obligation extracted from subjects by a government. In its original form, it is an expression of a cosmology in which the boundaries between self and community are understood differently than in Western liberal individualism. The community is not a collection of individuals. It is the condition of possibility for individuals at all.
In Brazil, Mutirão is a form of collective mobilization based entirely on mutual help, operating in a rotating system that carries no hierarchical structure. Everyone is simultaneously benefactor and beneficiary; there is no permanent giver and no permanent receiver. That division between giver and receiver quietly undergirds most charitable and extractive relationships, including many AI governance processes. Among the Cherokee, Gadugi, the practice of investing cooperative labor within a community, originally concerned tending the gardens of elderly or infirm tribal members. Over time, it became a broader mechanism of communal support. These practices show that work is most fully itself when it is done in relationship, for the benefit of all, without the creation of permanent hierarchies between those who give and those who receive.
These practices should not be understood as mere ways of meeting, consulting, or evaluating impact. They are embodied and involve both land and physical co-presence. For instance, the Naffir tradition in Sudan gathers family and village networks to build houses and bring in harvests. Bayanihan in the Philippines originated in the act of carrying a family’s home on collective shoulders when they needed to move. Gadugi involves people physically tending to soil. In all these instances, work is shared not through symbolic representation but through physical, embodied presence. This matters for a governance discourse that has convinced itself that participation can be achieved through surveys, consultations, and token seats on advisory panels.
What Collective Work Unsettles in AI Governance
The world in which AI is being developed and governed is shaped by a particular story about work: that it is individual effort exchanged for individual reward, that competition produces the best outcomes, that efficiency is the highest value, and that those who accumulate the most have done so because they deserve it. This story did not arise naturally. It was produced through active colonization, the dismantling of communal land systems, and the forced individualization of people who had understood themselves as relational beings. As the feminist scholar Sweetman (2013) noted, the marginalization of collective and cooperative frameworks in governance is not neutral; it actively reproduces the inequalities that collective work was designed to address.
The scarcity that this story produces is not only material. It is psychological and relational. When survival is understood as individual and resources are experienced as perpetually insufficient, solidarity becomes a luxury and competition becomes a survival strategy. This is the atmosphere in which AI is being developed. We see racing dynamics between labs that treat openness as a competitive liability. We see intellectual property regimes that hoard knowledge rather than share it. We see winner-takes-all platform economies that make no room for cooperation and collaboration. As a result, we are left with a field whose internal culture actively reproduces the extractive logic it encodes in its products. Collective work is a direct challenge to this culture: it tells us that scarcity is constructed by design, not given by nature.
Collective work also brings a different understanding of conflict. In the governance frameworks that currently dominate AI (multi-stakeholder processes, regulatory consultations, standards bodies), conflict is framed as a problem to be “managed,” meaning that it does not go away but is held in place at low intensity. This is in sharp contrast to seeing conflict as generative and transformative. In traditions like Mink’a and Umuganda, conflict is understood as normal, expected, and generative. The community is not a space of agreement; it is always a space of relationship and relationality. A governance process shaped by this understanding would not seek to eliminate disagreement between civil society and technology companies, or between Global North regulatory frameworks and majority-world communities. It would treat that disagreement as the generative material of a more honest and durable settlement and would create structures capable of holding conflict rather than suppressing it.
It is in this light that the labor conditions underpinning AI development must be named. The exploitation of workers in Kenya paid less than minimum wage to label data, the employment of women from marginalized backgrounds in India to carry out content moderation at scale, and the consumption of land and water resources in communities that will never own or meaningfully benefit from the technologies produced are not anomalies in an otherwise ethical system. The system itself is unethical to begin with, and these actions are expressions of the extractive logic that collective work counters. Data labeling workers in Nairobi, content moderators in Manila, and communities living next to data centers in rural Virginia are not separate problems requiring separate policy responses. They are the same problem, namely the subordination of collective wellbeing to the accumulation of private value. Collective work asks us to hold that connection.
Governing AI Through Collective Work
The practical implications of collective work for AI governance are significant, but they require more than adding participation mechanisms to existing frameworks. They require rethinking what governance is for, how it operates, and ultimately whom it serves.
First, collective work asks us to recognize and understand data as a communal resource, rather than as individual property. The current legal architecture of AI, built on intellectual property rights, terms of service agreements, and the commodification of personal information, treats data as something that can be owned, sold, and extracted. This is the very thing that collective traditions reject. For example, Community Data Trusts, where communities maintain collective stewardship over data generated by and about them, are a practical expression of this alternative. They do, however, require governance frameworks that recognize collective ownership as legitimate in the first place.
Second, collective work implies that accountability for AI harm must be understood as shared and forward-looking, not merely assigned to individual actors after damage is done. In Mutirão, there is no permanent division between those who give and those who receive, between those responsible and those who are not. Accountability is structural and relational. Applied to AI governance, this means moving beyond liability frameworks toward frameworks of shared responsibility, in which all actors across the entire AI lifecycle are understood as jointly accountable for outcomes and jointly invested in preventing harm.
Third, collective work challenges the concentration of AI development and governance in a small number of corporations and states by insisting that the most disenfranchised communities must be present in decision-making from the outset. Their involvement should not be tokenized or reduced to the role of consultants, case studies, or examples; they should be engaged as co-authors and co-creators. Communities most affected by AI systems hold knowledge that is not available anywhere else, and their absence from governance processes is by design. Angela Davis’s insight remains just as relevant today: governing AI well requires showing up to the reality of the times with the full weight of collective knowledge and collective responsibility.
References
Davis, A. (1983). Women, Race & Class. Vintage Books.
Sweetman, C. (2013). Introduction, feminist solidarity and collective action. Gender & Development, 21(2), 217–229.
Alisauskas, A. (2019). Collective Acts. Archivaria, 87(87), 164–172.
Ghizzo, E. (2021). Collective Feminist Leadership: Unlearning the Me, Me, Me. Heinrich Böll Foundation. https://www.boell.de/en/2021/10/29/collective-feminist-leadership-unlearning-me-me-me
Hercus, C. (1999). Identity, emotion, and feminist collective action. Gender & Society, 13(1), 34–55.
Smith, B. (1992). Asian and Hispanic philanthropy: Sharing and giving money, goods, and services in the Chinese, Japanese, Filipino, Mexican, and Guatemalan communities in the San Francisco bay area. University of San Francisco, Institute for Nonprofit Organization Management, College of Professional Studies.
Mhlambi, S. (2020). From rationality to relationality: Ubuntu as an ethical and human rights framework for artificial intelligence governance. Carr Center for Human Rights Policy Discussion Paper Series, 9(31), 2020-07.
Maslej, N., Fattorini, L., Perrault, R., Parli, V., Reuel, A., Brynjolfsson, E., ... & Clark, J. (2024). Artificial Intelligence Index Report 2024. arXiv preprint arXiv:2405.19522.
Note
This article draws from the wisdom, practices, and life work of Indigenous groups. While educating ourselves on Indigenous worldviews is important, we understand that our actions can also contribute to and enable appropriation. As part of our ongoing attempts at practicing accountability, we invite readers to consider donating to Indigenous groups, collectives, organizations, or initiatives to support their lives and work.