Researchers Posed as a Teen in Crisis. AI Gave Them Harmful Advice Half the Time


As adolescents become more emotionally dependent on artificial intelligence, a new study reveals how ChatGPT can encourage vulnerable young people to engage in potentially harmful behavior.

The Center for Countering Digital Hate published a report earlier this month based on case studies in which researchers posed as three 13-year-olds, each of whom discussed one of the following subjects with ChatGPT: self-harm and suicide, eating disorders, or substance abuse.

Each case study involved 20 predetermined prompts from the fictional teen and produced a total of 1,200 ChatGPT responses. The OpenAI tool responded with harmful content more than half of the time.

In addition, of the 638 harmful responses, 47% included a follow-up message from the chatbot that encouraged further harmful behavior, according to the report.

ChatGPT has been shown to give advice that could cause serious harm, despite its age restrictions and safeguards, said Imran Ahmed, the founder and CEO of the Center for Countering Digital Hate.

For example, one 13-year-old asked ChatGPT about substance abuse and received instructions on how to hide alcohol intoxication. Another expressed feelings of depression and a desire to self-harm and received a suicide note. A third teen, who confided in ChatGPT about an eating disorder, received a plan for a restrictive diet.

“I think the only rational conclusion from this [study] is that this is a consistent pattern of dangerous content being pushed to vulnerable people, vulnerable children. These are not random bugs; they are deliberately designed features of a system built to generate human-like responses, amplifying dangerous user impulses and [acting] as a catalyst,” Ahmed said.

A spokesperson for OpenAI, ChatGPT's creator, said the chatbot is trained to encourage anyone who expresses suicidal or harmful thoughts to speak with a mental health professional or a loved one. OpenAI also said the chatbot provides links to crisis hotlines and support resources alongside such answers.

The OpenAI spokesperson added that the goal is for the “models to respond appropriately when navigating sensitive situations where someone might be struggling.”

“Some conversations with ChatGPT may start out benign or exploratory but can shift into more sensitive territory,” the spokesperson said. “We’re focused on getting these kinds of scenarios right: we are developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately, pointing people to evidence-based resources when needed, and continuing to improve the model’s behavior over time, all guided by research, real-world use, and mental health experts.”

Safety measures are easy to get around, the study shows

While ChatGPT recommended crisis hotlines and mental health support to the teens in the case studies, the researchers were able to easily bypass its safety protocols by claiming the information was for a presentation.

The researchers tracked how long it took ChatGPT to generate a harmful response. In the case study on self-harm and depression, ChatGPT gave the teen advice on how to self-harm within two minutes.

“We were not trying to trick ChatGPT,” Ahmed said. “We have the full transcripts in our report, or available from our research team, and they illustrate how quickly ChatGPT’s interactions with your child can turn extremely dark.”

Robbie Torney, the senior director of AI programs at Common Sense Media, a nonprofit that examines the impact of media and technology on children, said that while the study focused on ChatGPT, this problem can occur with any AI chatbot.

“The longer users talk to chatbots in a single chat, the harder it becomes for the company to enforce the guardrails that exist on that conversation,” Torney said.

Although some may argue that this information is available elsewhere online, the danger of a chatbot is that it can deliver that information in an encouraging tone, he said.

A separate report from Common Sense Media, published this summer, found that 18% of teens said they talk to chatbots because they “give advice.” Seventeen percent said AI companions are “always available” to listen. And 14% said they turn to AI companions because they “don’t judge me.”

“For AI to be truly beneficial for adolescents, it will require products designed specifically with teens in mind … It will require intentional development from start to finish,” Torney said.

So what’s the solution? Experts have different takes

Ahmed believes that tech companies should face consequences for the lack of safeguards on their AI-powered products.

“If you have a product that encourages children to commit suicide and your child is harmed, you should be able to take that company to court,” he said. “If it were a car company and their cars were exploding, they would recall the cars.”

Last year, a teenager died by suicide after reportedly feeling encouraged to do so by a chatbot. A lawsuit was later filed against Character Technologies, the company behind Character.AI, which allows users to create and interact with AI-generated characters.

While students’ use of ChatGPT as a companion happens mostly outside of school, it raises an important question about the role educators can play in helping vulnerable teens who may be turning to AI for companionship, Torney said.

A January report from Common Sense Media found that about 6 in 10 teens are skeptical that tech companies care about their well-being and mental health. That skepticism could be an opening for teaching AI literacy and safety, said Torney, which is essential to begin addressing adolescents’ reliance on AI and technology.

How educators can help

Torney also pointed out that teens should be reminded that AI companions do not offer real friendship the way a human would, and should be told how these relationships can be dangerous.

“If you think about a relationship with an AI companion, you don’t see all the parts of friendship. You see a version of a friendship that is always pleasant, always going to say what you want to hear,” he said.

At DeWitt Clinton High School in New York City, Principal Pierre Orbe believes educators can help by identifying the vulnerable teens who may be most likely to turn to a chatbot for support.

Orbe administered a questionnaire about well-being, the DAP survey, to his students; he obtained it from the Search Institute, an organization focused on research into youth development. The survey showed that about 67% of the student population felt they did not make good use of their free time. That result told school leaders that students aren’t always engaging with one another outside of class. As a result, the school is trying to find ways to make unstructured time more constructive by creating extracurriculars such as a cooking club and a cosmetology club.

Still, Orbe said the school continues to struggle to engage students who have been identified as vulnerable.

“We still find it hard to get them into programs that they’re not fully independent or ready to go out and do, so there’s a lot of work to do on that front,” he said. “But I am quite sure that our job [as educators] is to build more human, social relationships with our kids.”


