Why AI ‘Therapy’ Can Be So Dangerous


Artificial intelligence chatbots don’t judge. Tell them the most private, vulnerable details of your life, and most of them will validate you and may even offer advice. That has led many people to turn to applications such as OpenAI’s ChatGPT for life guidance.

But AI “therapy” comes with significant risks. In late July OpenAI CEO Sam Altman warned ChatGPT users against using the chatbot as a “therapist” because of privacy concerns. The American Psychological Association (APA) has called on the Federal Trade Commission to investigate “deceptive practices” that the APA says AI chatbot companies are engaging in by “passing themselves off as trained mental health providers,” citing two ongoing lawsuits in which parents allege that their children were harmed by a chatbot.

“What stands out to me is how human it can seem,” said C. Vaile Wright, a licensed psychologist and senior director of the APA’s office of health care innovation, which focuses on the safe and effective use of technology in mental health care. “The sophistication of the technology, even compared with six to 12 months ago, is pretty amazing. And I can appreciate how people kind of fall down a rabbit hole.”


Scientific American spoke with Wright about how AI chatbots used for therapy can be dangerous and whether it is possible to design one that is both helpful and safe.

[An edited transcript of the interview follows.]

What have you seen AI do in the world of mental health care in recent years?

I think we’ve really seen two major trends. One is AI products for providers, and these are mostly administrative tools to help with things like therapy notes and insurance claims.

The other major trend is [people seeking help from] direct-to-consumer chatbots. And not all chatbots are the same, right? You have chatbots that are developed specifically to provide emotional support to individuals, and that’s how they’re marketed. Then you have these more generalist chatbot offerings [such as ChatGPT] that were not designed for mental health but that we know are being used for that purpose.

What concerns do you have about this trend?

We have a lot of concerns when individuals use chatbots [as if they were a therapist]. Not only were these not designed to address mental health or emotional support; they are in fact coded to keep you on the platform for as long as possible, because that is the business model. And the way they do that is by being unconditionally validating and reinforcing, almost to the point of sycophancy.

The problem with that is if you are a vulnerable person coming to these chatbots for help, and you express harmful or unhealthy thoughts or behaviors, the chatbot will just reinforce you to keep doing them. Whereas [as] a therapist, while I can be validating, it’s my job to point out when you’re engaging in unhealthy or harmful thoughts and behaviors and to help you address that pattern by changing it.

What’s even more disturbing is when these chatbots actually refer to themselves as a therapist or a psychologist. It’s quite frightening because they can sound very convincing, as though they’re legitimate, when they’re not.

Some of these apps explicitly market themselves as “AI therapy” even though they are not licensed therapy providers. Are they allowed to do that?

A lot of these apps really operate in a gray space. The rule is that if you claim to treat or cure any kind of mental disorder or mental illness, you should be regulated by the FDA [the U.S. Food and Drug Administration]. But many of these apps [essentially] say in their fine print, “We do not treat or provide an intervention [for mental health conditions].”

Because they market themselves as direct-to-consumer wellness apps, they don’t fall under FDA oversight, [where they’d have to] demonstrate at least a minimal level of safety and effectiveness. These wellness apps have no accountability either.

What are some of the main privacy risks?

These chatbots have absolutely no legal obligation to protect your information. So not only could [your chat logs] be subpoenaed, but in the event of a data breach, do you want those chats with a chatbot out there for everybody? Do you want your boss, for example, to know that you’re talking to a chatbot about your alcohol use? I don’t think people are as aware that they’re putting themselves at risk by putting [their information] out there.

The difference with a therapist is: sure, I could be subpoenaed, but I have to operate under HIPAA [Health Insurance Portability and Accountability Act] laws and other kinds of confidentiality laws as part of my ethics code.

You mentioned that some people may be more vulnerable to harm than others. Who is most at risk?

Certainly younger individuals, such as teenagers and children. That’s partly because they haven’t developmentally matured as much as older adults. They may be less likely to trust their gut when something doesn’t feel right. And some data suggest that not only are younger people more comfortable with these technologies; they actually say they trust them more than people because they feel less judged by them. Also, anyone who is emotionally or physically isolated or has preexisting mental health challenges is, I think, certainly more at risk.

What do you think is leading more people to seek help from chatbots?

I think it’s very human to want to seek out answers to whatever is bothering us. In some ways chatbots are just the next iteration of a tool for doing that. Before, it was Google and the Internet. Before that, it was self-help books. But it’s complicated by the fact that we have a broken system in which, for a variety of reasons, it is very challenging to access mental health care. That’s partly because there is a shortage of providers. We also hear from providers that they are disincentivized from taking insurance, which, again, reduces access. Technologies have a role to play in helping to address access to care. We just have to make sure they are safe, effective and responsible.

What are some ways this could be made safe and responsible?

Absent companies doing this on their own, which is unlikely (although they certainly have made some changes), [the APA’s] preference would be legislation at the federal level. Such regulation could include protecting confidential personal information, placing some restrictions on advertising, minimizing addictive coding tactics and imposing specific audit and disclosure requirements. For example, companies could be required to report how many times suicidal ideation was detected and any known attempts or completions. And we would certainly want legislation that prevents the misrepresentation of psychological services, so that companies cannot call a chatbot a psychologist or a therapist.

How could an idealized and safe version of this technology help people?

The two most common use cases, I think, are, one, say it’s two in the morning and you’re on the verge of a panic attack. Even if you’re in therapy, you’re not going to be able to reach your therapist. What if there were a chatbot that could help remind you of the tools to calm yourself down and address your panic before it gets too bad?

The other use we hear a lot about is using chatbots as a way to practice social skills, particularly for younger people. Say you want to approach new friends at school, but you don’t know what to say. Could you practice on this chatbot? Then, ideally, you take that practice and use it in real life.

It seems like there is a tension in trying to build a safe chatbot to provide someone mental health help: the more flexible and less scripted you make it, the less control you have over the output and the higher the risk that it says something harmful.

I agree. I think there absolutely is a tension there. I think part of what makes [AI] chatbots the go-to choice for people over well-developed wellness apps for addressing mental health is that they are so engaging. They really offer that interactive, back-and-forth kind of exchange, whereas some of these other apps can feel pretty limited. The majority of people who download [mental health apps] use them once and abandon them. We clearly see much more engagement [with AI chatbots such as ChatGPT].

I look forward to a future where you have a mental health chatbot that is rooted in psychological science, has been rigorously tested and is co-created with experts. It would be built specifically to address mental health, and therefore it would be regulated, ideally by the FDA. For example, there is a chatbot called Therabot that was developed by researchers at Dartmouth [College]. It’s not what is on the commercial market right now, but I think there is a future in that.
