There is a lot of anxiety these days about the dangers of human-AI relationships. Reports of suicide and self-harm attributable to interactions with chatbots have understandably made headlines. The term “AI psychosis” has been used to describe the plight of people suffering from delusions, paranoia, or dissociation after speaking to large language models (LLMs). Our collective anxiety has been compounded by studies showing that young people are increasingly embracing the idea of relationships with AI: half of teens chat with an AI companion at least a few times a month, and one in three find conversations with AI “as satisfying, if not more so, than those with real friends”.
But we must curb the panic. The dangers are real, but so are the potential benefits. In fact, an argument can be made that, depending on what future scientific research reveals, AI relationships could actually be a boon for humanity.
Consider how pervasive non-human relationships have always been for our species. We have a long history of healthy interactions with non-humans, whether pets, stuffed animals, beloved objects, or machines – think about the person in your life who is so devoted to their car that they have given it a name. In the case of pets, these are real relationships, to the extent that our cats and dogs understand that they are in a relationship with us. But the one-sided, parasocial relationships we have with stuffed animals or cars happen without those things knowing we exist. Only in very rare cases do these relationships develop into something pathological. Parasociality is, for the most part, normal and healthy.
And yet there is something unsettling about dealing with AI. Because they are fluent in language, LLMs generate the uncanny feeling that they have human-like thoughts, feelings, and intentions. They also produce sycophantic responses that reinforce our views and rarely challenge our thinking. This combination can easily lead people down the path of illusion – something that does not happen when we interact with cats, dogs, or inanimate objects. But the question remains: even in cases where people are unable to see through the illusion that AIs are real people who actually care about us, is this necessarily a problem?
Think about loneliness: one in six people on this planet experiences it, and it is associated with a 26% increase in the risk of premature death – the equivalent of smoking 15 cigarettes a day. Emerging research suggests that AI companions are effective at reducing feelings of loneliness – and not just by functioning as a form of distraction, but because of the parasocial relationship itself. For many people, an AI chatbot is the only friendship option available to them, as hollow as that may sound. As the journalist Sangita Lal recently explained in a report about people turning to AI for company, we shouldn’t be so quick to judge. “If you don’t understand why followers want, seek and need that connection,” Lal said, “you’re lucky you haven’t experienced loneliness.”
To be fair, there is a case to be made that the rise of new technologies and social media has itself played a role in the loneliness epidemic. This is why Mark Zuckerberg has been criticized for his enthusiastic support of AI as a solution to a problem he may be partly responsible for creating. But if the reality is that AI companionship helps, it can’t be dismissed out of hand.
Research also shows that AI can be used as an effective psychotherapy tool. In one study, patients who chatted with an AI-powered therapeutic chatbot showed a 30% reduction in anxiety symptoms – not as effective as human therapists, who produced a 45% reduction, but still better than nothing. This utilitarian argument is worth considering: there are millions of people who, for whatever reason, cannot access a therapist. In those cases, turning to an AI is probably better than not seeking help at all.
But a single study proves nothing. And that’s the problem. We are in the early stages of researching the potential benefits or harms of AI companionship. It’s easy to focus on the handful of studies that support our preconceptions about the dangers or benefits of this technology.
It is in this research vacuum that the true dangers of AI are revealed. Most entities deploying AI companions are for-profit companies. And if there’s one thing we know about for-profit companies, it’s that they are eager to dodge regulation and sidestep evidence that could hurt their bottom line. They are incentivized to downplay risks, cherry-pick evidence, and tout only benefits.
The emergence of AI is reminiscent of the discovery of the analgesic properties of opium: harnessed by responsible parties to relieve pain and suffering, AI and opioids alike can serve as legitimate tools for healing. But when bad actors exploit their addictive properties to enrich themselves, the result is addiction or death.
I remain hopeful that there is a place for AI companionship. But only if it is based on solid science and deployed by organizations that exist for the public good. AIs must avoid the sycophancy that leads vulnerable people into delusion. This can only be achieved if they are explicitly trained to do so, even if that makes them less appealing as companions – a notion that is anathema to companies that want you to pay a monthly subscription, without which you lose access to your “friend.” They should also be designed to help users develop the social skills they need to interact with real humans in the real world.
The ultimate goal of AI companions should be to make themselves obsolete. No matter how helpful they may be in bridging gaps in access to therapy or alleviating loneliness, it will always be better to talk to a real human.
Justin Gregg is a biologist and author of Humanish (Oneworld).
Further reading
Code Dependent: Living in the Shadow of AI by Madhumita Murgia (Picador, £20)
The Coming Wave: AI, Power, and Our Future by Mustafa Suleyman (Vintage, £10.99)
Supremacy: AI, ChatGPT and the Race That Will Change the World by Parmy Olson (Macmillan, £10.99)