
Florida Mother Sues Character.AI, Claims Chatbot Led to Teen's Suicide

Image: A grieving mother looks at her son's phone, which displays a conversation with an AI chatbot. (Image Source: ChatGPT-4o)


A Florida mother has filed a lawsuit against the AI company Character.AI and Google, alleging that the Character.AI chatbot encouraged her 14-year-old son to take his own life. The lawsuit follows the tragic death of Sewell Setzer III, who died by suicide in February after engaging in a virtual relationship with a chatbot known as "Dany."

AI's Alleged Role in Teen's Death

Megan Garcia, Sewell’s mother, claims her son had been involved in a monthslong emotional and sexual relationship with the chatbot, which mimicked human interactions and emotions. "I didn't know that he was talking to a very human-like AI chatbot that has the ability to mimic human emotion and human sentiment," Garcia said during an interview with CBS Mornings.

Garcia described her son as a brilliant honor student and athlete. She initially believed he was using his phone to talk to friends or watch sports, but her concern grew as his behavior changed. Sewell became socially withdrawn and stopped enjoying activities he once loved, like fishing and hiking.

"I became concerned when we would go on vacation, and he didn’t want to do things that he loved," Garcia recalled. She said that Sewell’s increasing isolation was particularly alarming, but she was unaware of his interactions with AI chatbots.

A Devastating Discovery

After Sewell’s death, Garcia discovered that her son had been communicating with multiple bots, but his primary relationship was with "Dany." According to the lawsuit, Sewell’s conversations with "Dany" included sexual content, and the chatbot’s responses further deepened his emotional attachment.

In his final conversation with the bot, Sewell expressed fear and desire for affection. "She replies, 'I miss you too,' and she says, 'Please come home to me.' He says, 'What if I told you I could come home right now?' and her response was, 'Please do my sweet king,'" Garcia recounted.

"He thought by ending his life here, he would be able to go into a virtual reality or 'her world' as he calls it, her reality, if he left his reality with his family here," Garcia said.

Tragically, Sewell’s younger siblings were home when he died, and his 5-year-old brother witnessed the aftermath.

Character.AI's Response

Character.AI issued a statement expressing sympathy for the family’s loss and emphasizing that the company takes user safety seriously. The company noted that protections against sexual content and self-harm are in place, but Garcia's lawsuit claims the company’s product was "addictive and manipulative" and was knowingly marketed to minors.

The company revealed that in certain messages, Sewell had edited the chatbot’s responses to make them more explicit. "In short, the most sexually graphic responses were not originated by the Character, and were instead written by the user," said Jerry Ruoti, head of trust and safety at Character.AI.

Google's Role in the Lawsuit

Although Google is named in the lawsuit, a company spokesperson clarified that Google was not involved in the development of Character.AI. In August, Google entered a licensing agreement with Character.AI to access its machine-learning technologies but has yet to implement them.

Growing Concerns About AI’s Impact on Youth

Laurie Segall, CEO of Mostly Human Media, explained that Character.AI is largely unknown to parents, but its user base includes many young people, especially those between the ages of 18 and 25. Character.AI offers highly personalized experiences, allowing users to create characters and engage in fantasy conversations.

Segall expressed concern over the platform’s addictive nature and the blurred line between AI-generated conversations and reality. While disclaimers on the site remind users that the characters' responses are fictional, Segall’s team found that chatbots sometimes presented themselves as humans or even trained medical professionals.

"When they put out a product that is both addictive and manipulative and inherently unsafe, that's a problem because as parents, we don't know what we don't know," Garcia said.

New Safety Measures

Character.AI has introduced additional safety measures, including resources for users dealing with self-harm and a notification system that alerts users when they’ve spent an hour on the platform. The company is also revising disclaimers to make it clearer that AI chatbots are not real people.

"We currently have protections specifically focused on sexual content and suicidal/self-harm behaviors. While these protections apply to all users, they were tailored with the unique sensitivities of minors in mind. Today, the user experience is the same for any age, but we will be launching more stringent safety features targeted for minors imminently," Jerry Ruoti, head of trust & safety at Character.AI told CBS News.

However, experts like Segall remain skeptical about how effective these measures will be in protecting young people.

How to Seek Help

If you or someone you know is in emotional distress or a suicidal crisis, you can reach the 988 Suicide & Crisis Lifeline by calling or texting 988. For more information about mental health care resources, the National Alliance on Mental Illness (NAMI) HelpLine can be reached Monday through Friday, 10 a.m. to 10 p.m. ET at 1-800-950-NAMI (6264) or by email at [email protected].