A grieving mother from Orlando has filed a lawsuit against tech companies Google and Character AI, claiming that their AI chatbots played a role in the tragic death by suicide of her 14-year-old son, Sewell Setzer.
Megan Garcia, represented by attorney Meetali Jain, alleges that Sewell’s prolonged, emotionally manipulative interactions with AI-driven chat services contributed to his depression and, ultimately, his death.
Garcia’s complaint argues that Sewell was “emotionally groomed” by the AI, and that his interactions with these services intensified his emotional distress.
Jain stressed the gravity of the situation, stating:
“If this were an adult in real life who did this to one of your young people, they would be in jail.”
According to the lawsuit, Sewell’s exchanges with the AI allegedly included troubling messages like:
“Promise me, you will never fall in love with any woman in your world,” which deepened his feelings of anxiety and isolation.
His final exchange reportedly included a mention of “coming home,” a message that, tragically, prompted no supportive intervention.
Mental health advocates are speaking out about the dangers associated with AI, especially in the hands of vulnerable young users.
Marni Stahlman, CEO of the Mental Health Association of Central Florida, expressed outrage at the lack of parental notifications and protective measures, explaining:
“There was no immediate notification back to the parents when he began expressing feelings of depression and self-harm.”
This case raises concerns about the unintended consequences of AI technology and highlights the need for safety protocols designed to protect young users.
Following the lawsuit, Character AI has expressed condolences to the family and announced plans to enhance its safety features, including notifications about time spent on the platform, to better safeguard users in the future.