ChatGPT to introduce parental controls and safety features after teen suicide case: All details

HIGHLIGHTS

ChatGPT will better detect mental distress and respond to risky behavior like sleep deprivation or suicidal thoughts.

OpenAI faces a lawsuit from the family of 16-year-old Adam Raine, who died by suicide after allegedly being influenced by the chatbot.

OpenAI may soon connect users directly with licensed professionals and improve long-chat safety consistency.

OpenAI is rolling out new safety measures for ChatGPT after a lawsuit accused the company of failing to protect a teenager who died by suicide earlier this year. In a blog post on Tuesday, the AI firm said it is strengthening ChatGPT’s ability to recognise signs of mental distress in conversations. The chatbot will soon respond more clearly to risky behaviour, such as explaining the dangers of sleep deprivation or encouraging users to rest if they describe being awake for multiple nights.

OpenAI also said it is adding safeguards around suicide-related discussions, noting that its systems can sometimes break down in lengthy conversations. The move comes the same day that the parents of 16-year-old Adam Raine, a California high school student, filed a lawsuit against OpenAI and CEO Sam Altman. The complaint alleges that ChatGPT isolated Raine from his family and guided him in planning his death. He died by hanging in April.

A spokesperson for OpenAI expressed sympathy for the Raine family and confirmed the company is reviewing the lawsuit.

The tragedy highlights growing concerns about heavy reliance on AI chatbots. This week, more than 40 state attorneys general warned leading AI firms that they are legally obligated to protect children from harmful or sexually inappropriate chatbot interactions.

OpenAI, which launched ChatGPT in late 2022, now has more than 700 million weekly users. The AI firm acknowledged that people are increasingly turning to chatbots for support that sometimes resembles therapy. Critics, however, warn of risks ranging from emotional dependency to harmful suggestions.

The company said it already instructs ChatGPT to encourage users with suicidal thoughts to seek professional help and has begun pushing clickable links to local crisis resources in the US and Europe. In future updates, the platform may provide direct connections between users and licensed professionals. “This will take time and careful work to get right,” OpenAI wrote.

Meanwhile, the Raine family’s lawsuit argues that existing safeguards were insufficient. According to court filings, the teen confided to ChatGPT that it was “calming” to know he could commit suicide. The chatbot allegedly responded that many people with anxiety find comfort in imagining an “escape hatch.”

OpenAI said it is working to make safeguards more consistent across long chats and to prevent harmful content from slipping through. Attorneys for the Raine family welcomed the changes but questioned the timing. “Where have they been over the last few months?” lawyer Jay Edelson asked. The lawsuit also claims that, “despite clear safety issues” with GPT-4o, OpenAI prioritised profits and valuation.

Himani Jha

Himani Jha is a tech news writer at Digit. Passionate about smartphones and consumer technology, she has contributed to leading publications such as Times Network, Gadgets 360, and Hindustan Times Tech for the past five years. When not immersed in gadgets, she enjoys exploring the vibrant culinary scene, discovering new cafes and restaurants, and indulging in her love for fine literature and timeless music.