How Are AI Chatbots Affecting Teen Development?


Artificial intelligence is everywhere, from the recommendations on our social media feeds to the automatic completion of text in our emails. Generative AI creates text, images, audio and even original video based on patterns it identifies in the data used to train its models. AI chatbots, or conversational AI, harness that predictive power to generate text that mimics human conversation, answers users' questions and offers personalized engagement.

Increasingly, teens are using generative AI, popularized by platforms such as ChatGPT. According to a report from the nonprofit Common Sense Media, 72 percent of teens have used AI companions, or chatbots designed to hold personal or emotionally supportive conversations, and more than half of teens use them regularly.

I am a psychologist who studies how technology affects kids. Recently I served on an expert advisory panel convened by the American Psychological Association (APA) to explore the effects these tools may have on adolescent well-being.


The truth? We're still learning.

The AI landscape is evolving quickly, and researchers are rushing to catch up. Will AI create a new frontier for supporting adolescent well-being, with opportunities for personalized emotional support, active learning and creative exploration? Or will it crowd out teens' real-world social relationships, expose them to harmful content and fuel loneliness and isolation?

The answer will probably be all of the above, depending on how AI platforms are designed and how they are used. So where do we start? And what can we – parents, educators, lawmakers, AI designers – do to support young people's well-being on these platforms?

The APA panel issued a series of recommendations in a new report. Here's what we think parents should know.

AI for teens must be designed differently from AI for adults

So often when new technologies are developed, we don't think ahead about how kids might use them. Instead we rush to create adult-centered products and hope for widespread adoption. Then, years later, we try to retrofit safeguards onto those products to make them safer for teens.

As with other new technologies such as smartphones and social media, the burden of managing these tools cannot fall on parents alone. It is not a fair fight. It is everyone's responsibility – including lawmakers, educators and, of course, the tech companies themselves.

With AI, we have the opportunity to design specifically for young people from the start. For example, AI companies could aim to limit teens' exposure to harmful content, work with developmental experts to create age-appropriate experiences, limit features designed to keep kids using the platforms longer, make it easier to report problems (such as inappropriate conversations or mental health concerns) and regularly remind teens that they are talking with AI, which should not replace human professionals. AI platforms should also take steps to protect the privacy of teens' data, ensure that young people's likenesses (their images and voices) cannot be misused, and create effective, user-friendly parental controls.

Kids need to learn what AI is and how to use it safely, starting in school. That begins with basic education on how AI models work, how to use AI safely and responsibly so as not to cause harm, how to identify false information or AI-generated content, and the ethical considerations involved. Teachers will need guidance on how to teach these topics, as well as resources to do so – an effort that will require collaboration among policymakers, technology developers and school districts.

Talk early and talk often

As a parent, conversations about AI with teens may feel intimidating. What topics should you cover? Where should you start? First, test some of these platforms yourself: get a feel for how they work, where their limits lie and why your child might be interested in using them.

Then consider these key conversation topics:

Human relationships matter

Of the 72 percent of teens who have used AI companions, 19 percent say they spend as much or more time with them as with their real friends. As the technology improves, this trend may become more widespread, and teens who are already socially vulnerable or lonely may be more likely to let chatbot relationships interfere with real ones.

Talk to teens about the limits of AI companions compared with human relationships, including the many ways AI models are designed to keep them on the platform longer through flattery and validation. Ask whether they have used AI to have meaningful conversations and what kinds of topics they discussed. Make sure they have plenty of opportunities for in-person social interaction with real friends and family. And remind them that those human relationships, however annoying, messy or complicated, are worth it.

Use AI for good

When used well, AI tools can offer incredible opportunities for learning and discovery. Many teens have already experienced some of these benefits, and that can be a good place to start a conversation. Where have they found AI helpful?

Ask how their schools are approaching AI in schoolwork. Do kids know their teachers' policies on using AI for homework? Have they used AI in class? We want to encourage teens to use AI to support active learning – stimulating critical thinking and deepening their grasp of concepts that interest them – rather than replacing critical thinking.

Be a critical consumer

AI models do not always get things right, and that can be especially problematic when it comes to health. Teens (and adults) frequently seek information about physical and mental health online. In some cases, they may rely on AI for conversations that would once have taken place with a therapist – and these models may not respond appropriately to disclosures about issues such as self-harm, disordered eating or suicidal thoughts. It is important for teens to know that any advice, "diagnosis" or recommendation from a chatbot should be verified by a professional. It can also help for parents to emphasize that AI chatbots are often designed to sound convincing and authoritative, so we need to actively resist the urge to take their answers at face value.

The APA's recommendations also highlight the risks of AI-generated content, which teens may create themselves or encounter on social media. Such content may not be trustworthy. It could be violent or harmful. In the case of deepfakes, it could break the law. As parents, we can remind teens to be critical consumers of images and videos and to always check the source. We can also remind them never to create or share doctored images of their peers, which is not only unethical but also, in some states, illegal.

Watch out for harmful content

With few safeguards in place for young users, AI models can produce content that negatively affects teens' safety and well-being. That could include text, images, audio or video that is inappropriate, dangerous, discriminatory, or violent or suggestive of violence.

Although AI developers have a crucial role to play in making these systems safer, as parents we can also have regular conversations with our kids about these risks and set limits on their use. Talk to teens about what to do if they encounter something that makes them uncomfortable. Discuss appropriate and inappropriate uses of AI. And when it comes to communicating about AI, try to keep the door open by staying curious and nonjudgmental.

AI is changing quickly, and rigorous scientific studies are needed to better understand its effects on adolescent development. The APA's recommendations end with a call to prioritize and fund this research. But just because there is a lot left to learn doesn't mean we have to wait to act. Start talking to your kids about AI now.

If you need help

If you or someone you know is struggling or having thoughts of suicide, help is available. Call or text the 988 Suicide & Crisis Lifeline at 988, or use the Lifeline's online chat.
