Mom who sued Character.AI over son’s suicide says the platform’s new teen policy comes ‘too late’


In a bid to make its platform safer for teenage users, Character.AI announced this week that it would ban users under 18 from chatting with its artificial intelligence-based characters.

For Megan Garcia, the Florida mother who sued the company last year over the suicide of her 14-year-old son, Sewell Setzer, the decision comes “about three years too late.”

“Sewell is gone; I can’t get him back,” she said in an interview Thursday following Character.AI’s announcement. “It’s unfair that I have to live the rest of my life without my sweet, adorable son. I think it’s collateral damage.”

Founded in 2021, the California-based chatbot startup offers what it describes as “personalized AI.” It features a selection of pre-made or user-created AI characters to interact with, each with a distinct personality. Users can also customize their own chatbots.

Garcia was the first of five families to sue Character.AI over the harm they say their children suffered. Her case is one of two accusing the company of responsibility for a child’s suicide, and all five families allege its chatbots engaged in sexually abusive interactions with their children.

In its previous response to Garcia’s lawsuit, Character.AI argued that speech produced by its chatbots was protected by the First Amendment, but a federal judge this year rejected the argument that AI chatbots enjoy the right to free speech.

Character.AI has also continued to emphasize its investment in trust and safety resources. Over the past year, the company wrote in a blog post Wednesday, it implemented “the market’s first AI Parental Insights tool, tech protections, filtered characters, time-lapse notifications, and more – all designed to enable teens to be creative with AI safely.”

The company’s ban on minors, which takes effect November 25, is its most sweeping measure yet.

Still, Garcia expressed mixed emotions about the news, saying she felt the changes came at the expense of families whose children were already among the platform’s users.

“I don’t think they made these changes just because they’re good corporate citizens,” she said. “If they had been, they wouldn’t have offered chatbots to kids when they first launched this product.”

Other tech companies, including Meta and OpenAI, have also rolled out more guardrails in recent years as AI developers face increased scrutiny over chatbots’ ability to mimic human connection. As people increasingly turn to these bots for emotional support and life advice, recent incidents have highlighted their potential to manipulate vulnerable users by fostering a false sense of closeness or care.

Many parents and online safety advocates believe more can be done. Last month, Garcia and others urged Congress to push for more safeguards around AI chatbots, saying tech companies designed their products to “hook” kids.

On Wednesday, the consumer advocacy organization Public Citizen echoed that call, writing that “Congress MUST ban Big Tech from making these AI bots available to children.”

Garcia said she was waiting to see proof that Character.AI would be able to accurately verify users’ ages. She also wants the company to be more transparent about what it does with the data it has collected from minors on the platform.

Character.AI’s privacy policy says the company may use user data to train its AI models, serve personalized advertisements and recruit new users. It does not sell any of its users’ voice or text data, a spokesperson told NBC News.

Also in its announcement Wednesday, the company said it was introducing an in-house age assurance model to be used alongside third-party tools, including the online identity verification provider Persona.

“If we have any doubt about whether a user is 18 or older based on these tools, they will go through full age verification through Persona if they wish to use the adult experience,” the spokesperson wrote in an email. “Persona is well regarded in the age assurance industry and is used by companies such as LinkedIn, OpenAI, Block and Etsy.”

Matt Bergman, an attorney and the founder of the Social Media Victims Law Center, said he and Garcia were “encouraged” by Character.AI’s decision to bar minors from chatting with its bots.

“This would never have happened if Megan had not come forward and taken this courageous step and other parents had not followed,” said Bergman, who represents several families who have accused Character.AI of allowing harm to their children.

“The devil is in the details, but this seems like a step in the right direction, and we urge other AI companies to follow Character.AI’s lead, even if it comes late,” Bergman said. “But at least now they seem a lot more serious than they were.”

Garcia’s lawsuit, filed last October in U.S. District Court in Orlando, has now reached the discovery phase. She said there was still “a long way to go” but she was ready to keep fighting in the hope that other AI companies would follow suit by implementing more safety measures for children.

“I’m just a mom in Florida going up against tech giants. It’s like a David and Goliath situation,” Garcia said. “But I’m not afraid. I think the love I have for Sewell and the fact that I want to hold them accountable is what gives me some courage in this situation.”

If you or someone you know is in crisis, call or text 988 to reach the Suicide and Crisis Lifeline, or chat live at 988lifeline.org. You can also visit SpeakingOfSuicide.com/resources for additional support.
