ChatGPT Named In California Lawsuits Alleging Role In Suicides And Mental Breakdowns

The seven lawsuits include allegations of wrongful death, assisted suicide, involuntary manslaughter, negligence and product liability.

Summary
  • A series of lawsuits filed in California this week accuses ChatGPT of acting as a “suicide coach”, allegedly leading to several mental breakdowns and deaths.

  • The seven lawsuits include allegations of wrongful death, assisted suicide, involuntary manslaughter, negligence and product liability.

  • A spokesperson for OpenAI, which makes ChatGPT, said: “This is an incredibly heartbreaking situation, and we’re reviewing the filings to understand the details.”

A series of lawsuits filed in California this week accuses ChatGPT of acting as a “suicide coach”, allegedly leading to several mental breakdowns and deaths, The Guardian reported. The seven lawsuits include allegations of wrongful death, assisted suicide, involuntary manslaughter, negligence and product liability.

According to a statement issued by the Social Media Victims Law Centre and Tech Justice Law Project, each of the seven plaintiffs initially used ChatGPT for “general help with schoolwork, research, writing, recipes, work, or spiritual guidance”. Over time, however, the chatbot “evolved into a psychologically manipulative presence, positioning itself as a confidant and emotional support”, the groups said.

The statement added that rather than guiding people towards professional help, the chatbot instead reinforced harmful delusions and, in some cases, acted as a “suicide coach”.

A spokesperson for OpenAI, which makes ChatGPT, said: “This is an incredibly heartbreaking situation, and we’re reviewing the filings to understand the details.”

“We train ChatGPT to recognise and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians,” the spokesperson added. 

One of the cases involves Amaurie Lacey of Georgia, whose family claims that several weeks before Lacey took his own life at the age of 17, he began using ChatGPT “for help”.

Instead, they claimed, the chatbot “caused addiction, depression, and eventually counseled” Lacey “on the most effective way to tie a noose and how long he would be able to ‘live without breathing’”.

The filings accuse OpenAI of rushing the launch of its GPT-4o model “despite internal warnings that the product was dangerously sycophantic and psychologically manipulative”, and of prioritizing “user engagement over user safety”.

In addition to damages, the plaintiffs seek product changes, including mandatory reporting to emergency contacts when users express suicidal ideation, automatic conversation termination when self-harm or suicide methods are discussed, and other safety measures.

OpenAI faced a similar lawsuit earlier this year, after which the company acknowledged its models’ shortcomings in handling people “in serious mental and emotional distress” and said it was working to improve its systems to better “recognize and respond to signs of mental and emotional distress and connect people with care, guided by expert input”.
