HITL Reading Group: Meeting #2
In our second session, we reflected on James Ball's article "Anthropomorphism is Breaking Our Ability to Judge AI," published by Tech Policy Press.
The opening discussion focused on how human beings interact with AI. Reports show that people tend to be kinder to AI than to other people, often because they find it easier to humanize a machine than to extend the same grace to fellow humans. Some members observed that this design choice arrived at a time when people are increasingly isolated from one another and therefore more likely to connect with AI and anthropomorphize it. Others considered it a serious threat: we have reached a point where few people can reliably tell whether there is an AI system or an actual person on the other side. One member felt a sense of resignation about the anthropomorphic nature of AI, recognising that a capitalistic model drives it: inviting people to engage with AI fundamentally requires some level of trust, and a human-like identity makes that trust easy to establish.

Most participants expressed discomfort with how relationships are built with AI, with people turning to generative AI tools for intimacy, therapy, or simply comfort and validation. Another member noted that young children and teens are turning to AI for these kinds of relationships, too. This led to a reflection on what violence might mean in such a relationship dynamic – what do child abuse and intimate partner violence mean or look like when one builds intimacy with an AI tool? Is there any free will at all? Who would be held accountable and responsible, and what would that even look like? The group reflected on how we are moving at a pace that far outpaces any attempt at regulation, or even comprehension, of these impacts.
One participant referenced the Turing Test – a measure of a machine's ability to exhibit behaviour indistinguishable from a human's – to explain this design choice. Another noted a deliberate dimension to embedding anthropomorphism, especially given that people are isolated, busy, and perhaps losing the skills they need to build social connections. In some cases, the fear of burdening another human prevents meaningful relationships from forming. Making models easier to talk to has created an addictive ecosystem that springboards off the algorithmic feeds of our social media platforms: there, the addiction is to whatever the algorithm adds to your feed; here, it is to the purported human nature of the AI tool.
Some participants observed that convenience has also paved the way for trust. Earlier, one used a search engine to understand something and had to sift through unrelated results to find the answer. Now an AI tool offers to cut out all of those iterations (while possibly hallucinating, no less), and that convenience breeds trust. One participant referenced the case of children tending to trust Alexa over their parents for advice or knowledge.
Another participant raised sycophancy in AI tools like GPT 5.x, and censorship as a key challenge with Chinese models like DeepSeek. While users tend to anthropomorphise AI, those shaping and creating these tools look at business metrics and capitalize on the profits that anthropomorphism brings.
Drawing from this, the participants identified ways of engaging with AI that keep overreliance and blind trust in check. One recommended the practice of friction maxing: deliberately introducing friction – some inconvenience, some discomfort – into one's use of AI, so that there is a human in the loop after all. Rather than anthropomorphizing AI, the group preferred humanising the entire process, which means treating humans as humans and tools as tools. That mindset can also reframe how we prompt, treating the process as work and keeping every interaction focused on the task at hand.