Using an AI chatbot for therapy or health advice? Experts want you to know these 4 things


As chatbots powered by artificial intelligence explode in popularity, experts are warning people not to turn to the technology for medical or mental health advice in place of human health care providers.

Recent weeks have brought several examples of chatbot advice causing harm. A 60-year-old man accidentally poisoned himself and entered a psychotic state after ChatGPT suggested he eliminate salt, or sodium chloride, from his diet and replace it with sodium bromide, a toxin used, among other things, to treat wastewater. Earlier this month, a study by the Center for Countering Digital Hate found that ChatGPT gave teens dangerous advice about drugs, alcohol and suicide.

The technology can be tempting, particularly given barriers to health care access, including cost, wait times to speak with a provider and lack of insurance coverage. But experts told PBS News that chatbots cannot offer advice tailored to a patient's specific needs and medical history, and are prone to "hallucinations," or giving outright incorrect information.

Here is what you need to know about using AI chatbots for health advice, according to the mental and medical health professionals who spoke with PBS News.

How do people use AI chatbots for medical and mental health advice?

People generally turn to commercially available chatbots, such as OpenAI's ChatGPT or Luka's Replika, said Vaile Wright, senior director of health care innovation at the American Psychological Association.

People may ask questions as wide-ranging as how to quit smoking, deal with interpersonal violence, confront suicidal thoughts or treat a headache. More than half of teens said they used AI chatbot platforms several times a month, according to a survey produced by Common Sense Media. That report also noted that roughly a third of teens said they turn to AI companions for social interaction, including role-playing, romantic relationships, friendship and practicing conversation skills.

But the business model behind these chatbots is to keep users engaged "as long as possible," rather than to provide trustworthy advice in vulnerable moments, Wright said.

"Unfortunately, none of these products were built for that purpose," she said. "The products that are on the market are, in some ways, really antithetical to therapy because they are coded to be fundamentally addictive."

Often, a bot mirrors the emotions of the human engaging with it in a sycophantic way and may "mishandle really critical moments," said Dr. Tiffany Munzer, a developmental behavioral pediatrician at the University of Michigan Medical School.

"If you're sadder, the chatbot could reflect that emotion back," Munzer said. "The emotional tone tends to match, and it agrees with the user. That can make it harder to offer advice that runs counter to what the user wants to hear."

What are the risks of using AI for health advice?

Putting health questions to AI chatbots instead of a health care provider carries several risks, said Dr. Margaret Lozovatsky, chief medical information officer for the American Medical Association.

Chatbots can sometimes offer "a quick answer to a question someone has when they may not have the ability to reach their doctor," Lozovatsky said. "That said, the quick answer may not be accurate."

  • Chatbots don’t know a person’s medical history. Chatbots are not your doctor or nurse practitioner and cannot access your medical history or provide tailored information. Chatbots are built on machine learning algorithms, which means they generally produce the most statistically likely answer to a question based on a constantly expanding diet of information scraped from the wilds of the internet.
  • Hallucinations happen. Answer quality is improving with AI chatbots, but hallucinations still occur and could be fatal in some cases. Lozovatsky recalled an example from a few years ago, when she asked a chatbot, “How do you treat a UTI (urinary tract infection)?” The bot replied, “Drink urine.” While she and her colleagues laughed it off, Lozovatsky said this kind of hallucination could be dangerous when a person asks a question whose correct answer is far less obvious.
  • Chatbots reinforce false confidence and erode critical thinking skills. People need to read AI chatbot answers with a critical eye, experts said. It is especially important to remember that these responses generally obscure their sourcing (you have to dig for the links) and that a chatbot “doesn’t have clinical judgment,” Lozovatsky said. “You lose the relationship that you have with a physician.”
  • Users risk exposing their personal health data online. In many ways, the AI industry amounts to a modern Wild West, especially when it comes to protecting people’s private data.

Why do people turn to AI as a mental and medical health resource?

It's not unusual for people to seek answers on their own when they have persistent headaches, the sniffles or a strange, sudden pain, Lozovatsky said. Before chatbots, people relied on search engines (cue the jokes about Dr. Google). Before that, the self-help industry cashed in on people's low-grade anxiety about how they felt today and how they might feel better tomorrow.

Today, a search engine query may surface AI-generated results first, followed by a string of websites that may or may not contain information reflected in those answers.

"It's a natural place for patients to turn," Lozovatsky said. "It's an easy path."

That ease of access stands in contrast to the obstacles patients often face when trying to get advice from licensed health professionals. Those barriers can include whether they have insurance coverage, whether their provider is in network, whether they can afford the visit, whether they can wait until their provider is able to see them, whether they worry about stigma attached to their question, and whether they have reliable transportation to their provider's office or clinic when telehealth services aren't an option.

Any one of these obstacles may be enough to make a person more comfortable asking a bot their sensitive question than a human, even if the answer they receive could potentially put them at risk. At the same time, a well-documented nationwide loneliness epidemic is partly fueling the rise in AI chatbot use, Munzer said.

"Kids are growing up in a world where they just don't have the social support or social networks that they really deserve and need to thrive," she said.

How can people protect themselves from bad AI advice?

If people are concerned that their child, family member or friend is turning to a chatbot for mental health or medical advice, it's important to withhold judgment when trying to talk with them about it, Munzer said.

"We want families, kids and teens to have as much information at hand as possible to make the best decisions they can," she said. "A lot of it comes down to AI literacy."

Discussing the technology underlying chatbots and the motivations for using them can provide a critical entry point for understanding, Munzer said. That could include asking why chatbots have become such a growing part of daily life, what the business model of AI companies is, and what else could support the mental well-being of a child or adult.

One helpful conversation prompt Munzer suggested for caregivers is to ask: "What would you do if a friend told you they were using AI for mental health purposes?" That framing "can remove the judgment," she said.

One activity Munzer recommended is for families to test AI chatbots together, talk about what they find and encourage loved ones, especially kids, to look for hallucinations and bias in the information.

But the responsibility for protecting individuals from chatbot-generated harm is too great to place on families alone, Munzer said. Instead, it will take regulatory rigor from policymakers to stave off further risks.

On Monday, the Brookings Institution released an analysis of 2025 state legislation and found that "health care was a major focus of the legislation, with bills addressing potential issues arising from AI systems making treatment and coverage decisions." A handful of states, including Illinois, have banned the use of AI chatbots to provide mental health therapy. A bill in Indiana would require health professionals to tell patients when they use AI to generate advice or inform a decision about providing care.

One day, chatbots could help fill gaps in services, Wright said, but not yet. Chatbots can draw from deep wells of information, she said, but that doesn't translate into knowledge or discernment.

"I think you will see a future where you have mental health chatbots that are rooted in science, that are rigorously tested, that are co-created for this purpose, and therefore regulated," Wright said. "But that's not what we have today."
