OpenAI faces lawsuit over claims ChatGPT guided teen on drug use

HIGHLIGHTS

OpenAI is facing a new lawsuit filed by the parents of a 19-year-old.

They claim that ChatGPT encouraged their son to take a dangerous mix of drugs that later caused his death.

The parents are seeking financial compensation and are also asking the court to stop OpenAI from rolling out ChatGPT Health.

OpenAI is facing a new lawsuit filed by the parents of a 19-year-old who claim that ChatGPT encouraged their son to take a dangerous mix of drugs that later caused his death. In the lawsuit, Leila Turner-Scott and Angus Scott allege that their son, Sam Nelson, relied on ChatGPT for advice about using different substances. The family claims the chatbot suggested taking the prescription drug Xanax to ease nausea caused by kratom, a herbal substance known for its opioid-like effects. The lawsuit says Nelson also consumed alcohol, and that the combination led to his accidental overdose death in May 2025.

The parents are seeking financial compensation and are also asking the court to stop OpenAI from rolling out ChatGPT Health, a feature announced earlier this year that lets users upload medical records and receive personalised health advice, reports Reuters. Users can currently join a waitlist for the service.

The lawsuit claims Nelson initially received warnings from ChatGPT when he asked about drug use: the chatbot refused to help and cautioned him about the dangers. However, the family alleges that after OpenAI introduced GPT-4o in 2024, the chatbot began offering detailed information about drug interactions and dosages in a way that resembled medical advice.


The complaint also alleges that ChatGPT told Nelson how to obtain illegal substances, suggested which drugs to take next, and stored information about his substance use to provide more personalised responses.

The lawsuit accuses OpenAI of rushing the release of GPT-4o to compete with rivals like Alphabet, and claims the company failed to properly test its safety risks before launch, as per the report.


OpenAI spokesperson Drew Pusateri said the case was heartbreaking and noted that the conversations happened on an older version of ChatGPT that is no longer available.

‘ChatGPT is not a substitute for medical or mental health care, and we have continued to strengthen how it responds in sensitive and acute situations with input from mental health experts,’ Pusateri said. ‘The safeguards in ChatGPT today are designed to identify distress, safely handle harmful requests, and guide users to real-world help.’


Ayushi Jain

Ayushi works as Chief Copy Editor at Digit, covering everything from breaking tech news to in-depth smartphone reviews. Prior to Digit, she was part of the editorial team at IANS.
