Using Generative AI for therapy might feel like a lifeline – but there’s danger in seeking certainty in a chatbot | Carly Dober


Tran* sat in front of me, phone in hand, scrolling. “I just wanted to make sure I wasn’t saying the wrong thing,” he said, referring to a disagreement with his partner. “So I asked ChatGPT what I should say.”

He read the message the chatbot had generated aloud. It was articulate, logical and composed – too composed. It didn’t sound like Tran. It certainly didn’t sound like someone in the middle of a complex emotional conversation about the future of a long-term relationship. Nor did it mention any of Tran’s own behaviour that had contributed to the relational tension he and I had been discussing.

Like many others I see in therapy, Tran had turned to AI in a moment of crisis. Under immense pressure at work and facing uncertainty in his relationship, he had downloaded ChatGPT on his phone “just to try it”. What started as curiosity quickly became a daily habit: asking questions, drafting texts, even seeking reassurance about his own feelings. The more Tran used it, the more he began to second-guess himself in social situations, turning to the model for advice before responding to colleagues or loved ones. He felt strangely comforted, as though “no one knew me better”.

His partner, on the other hand, started to feel as though she was talking to someone else entirely.

ChatGPT and other generative AI models present a tempting accessory, even an alternative, to traditional therapy. They are often free, available 24/7, and can offer personalised, detailed responses in real time. When you are overwhelmed, sleepless and desperate to make sense of a messy situation, typing a few sentences into a chatbot and receiving what feels like sage advice can be very appealing.

But as a psychologist, I am increasingly concerned by what I see in the clinic: a quiet shift in how people process distress, and a growing reliance on artificial intelligence in place of human connection and therapeutic support.

AI can feel like a life raft when services are overstretched – and make no mistake, services are overstretched. Worldwide, in 2019, one in eight people were living with a mental illness, and we face a dire shortage of trained mental health professionals. In Australia, a growing mental health workforce shortage is limiting access to trained professionals.

Clinicians’ time is one of the scarcest resources in healthcare. It is understandable (even expected) that people look for alternatives. But turning to a chatbot for emotional support is not without risk, especially when the lines between advice, comfort and emotional dependence become blurred.

Many psychologists, including me, now encourage clients to set boundaries around their use of ChatGPT and similar tools. Its appealing “always on” availability and friendly tone can unintentionally reinforce unhelpful behaviours, especially for people with anxiety, OCD or trauma-related issues. Reassurance-seeking, for example, is a key feature of OCD, and ChatGPT, by design, offers reassurance in abundance. It never asks why you are asking again. It never challenges avoidance. It never says: “Let’s sit with this feeling for a while and practise the skills we’ve been working on.”

Tran often rewrote his prompts until the model gave him an answer that “felt right”. But this constant tweaking meant he wasn’t just seeking clarity; he was outsourcing his emotional processing. Instead of learning to tolerate distress or explore nuance, he sought AI-generated certainty. Over time, this made it harder for him to trust his own instincts.

Beyond the psychological concerns, there are real ethical issues. Information shared with ChatGPT is not protected by the same confidentiality standards that bind AHPRA-registered professionals. Although OpenAI states that user data is not used to train its models unless permission is given, the sheer volume of fine print in user agreements often goes unread. Users may not realise how their inputs can be stored, analysed and potentially reused.

There is also the risk of harmful or false information. Large language models are autoregressive: they predict the next word based on the words that came before. This probabilistic process can produce “hallucinations” – confident, polished responses that are entirely false.

AI also reflects the biases embedded in its training data. Research shows that generative models can perpetuate and even amplify stereotypes around gender, race and disability – not intentionally, but inevitably. Human therapists also bring clinical skills: we notice when a client’s voice trembles, or when their silence might say more than words.

This is not to say AI can have no place. Like many technological advances before it, generative AI is here to stay. It can offer useful summaries, psychoeducational content, or even support in regions where access to mental health professionals is severely limited. But it must be used with care, and never as a replacement for relational, regulated care.

Tran wasn’t wrong to seek help. His instinct to make sense of his distress and to communicate more thoughtfully was sound. But relying so heavily on AI meant his own skill development suffered. His partner began to notice a strange detachment in his messages. “It didn’t sound like you,” she told him later. It turned out it wasn’t.

She also grew frustrated by the lack of accountability in his messages to her, which caused further friction and communication breakdown between them.

As Tran and I worked together in therapy, we explored what had led him to seek certainty from a chatbot. We unpacked his fear of disappointing others, his discomfort with emotional conflict and his belief that the perfect words could prevent pain. Over time, he began writing his own responses – sometimes messy, sometimes uncertain, but authentically his.

Good therapy is relational. It thrives on imperfection, nuance and slow discovery. It involves pattern recognition, accountability and the kind of discomfort that leads to lasting change. A therapist doesn’t just respond; they ask, and they challenge. They hold space, offer reflection and walk alongside you – even while holding up an uncomfortable mirror.

For Tran, the turning point wasn’t just limiting his use of ChatGPT; it was reclaiming his own voice. In the end, he didn’t need a perfect answer. He needed to trust that he could navigate the messiness of life with curiosity, courage and care – not perfect scripts.

* Name and identifying details have been changed to protect client confidentiality
Carly Dober is a psychologist living and working in Naarm/Melbourne
In Australia, support is available at Beyond Blue on 1300 22 4636, Lifeline on 13 11 14, and MensLine on 1300 789 978. In the UK, the charity Mind is available on 0300 123 3393 and Childline on 0800 1111. In the US, call or text Mental Health America at 988 or chat 988lifeline.org
