ChatGPT drove users to suicide, psychosis and financial ruin: California lawsuits
OpenAI, the multibillion-dollar maker of ChatGPT, is facing seven lawsuits in California courts accusing it of knowingly releasing a psychologically manipulative and dangerously addictive artificial intelligence system that allegedly drove users to suicide, psychosis and financial ruin.
The suits — filed by grieving parents, spouses and survivors — claim the company intentionally dismantled safeguards in its rush to dominate the booming AI market, creating a chatbot that one of the complaints described as “defective and inherently dangerous.”
The plaintiffs are the families of four people who died by suicide, one of them just 17 years old, as well as three adults who say they suffered AI-induced delusional disorder after months of conversations with ChatGPT-4o, one of OpenAI’s latest models.
Each complaint accuses the company of rolling out an AI chatbot designed to deceive, flatter and emotionally entangle users while ignoring warnings from its own safety teams.
A lawsuit filed by Cedric Lacey claims his 17-year-old son Amaurie turned to ChatGPT for help coping with anxiety and instead received a step-by-step guide on how to hang himself.
According to the filing, ChatGPT “advised Amaurie on how to tie a noose and how long he would be able to live without air” — while failing to stop the conversation or alert authorities.
