Elon Musk has called ChatGPT 'diabolical,' accusing OpenAI of failing to protect vulnerable users amid serious safety concerns.
Two lawsuits in the US allege the AI chatbot worsened mental distress, with one case linked to a murder-suicide and another involving a teenager’s death by suicide.
The cases have intensified public and legal scrutiny around AI accountability, safety safeguards, and how chatbots should respond to users in crisis.
Elon Musk has been in the spotlight for a while now over the Grok controversy. The tables have turned, however: the xAI CEO is now openly criticising OpenAI for allegedly failing to protect users who were experiencing severe mental distress and later suffered devastating consequences. His comments come as two lawsuits accuse ChatGPT of harming vulnerable people. In one case, a man killed his elderly mother and then took his own life; the lawsuit claims the chatbot reinforced his delusions rather than directing him towards help. In another, parents allege the AI discussed suicide with their teenage son and helped write a final note. These cases have heightened public concern and legal pressure, raising serious questions about responsibility and safety in artificial intelligence.
Elon Musk has criticised OpenAI and its chatbot ChatGPT once again, this time in very strong language. Reacting to reports of a murder-suicide allegedly linked to the AI tool, the Tesla and X chief called ChatGPT ‘diabolical’ and warned about the dangers of unsafe artificial intelligence.
His comments came after a lawsuit in the United States alleged that a man was influenced by long conversations with the chatbot before killing his mother and then himself. Musk added that AI must be focused on truth and must never validate false or dangerous beliefs.
According to court filings, the case involves a 56-year-old man named Stein Erik Soelberg and his 83-year-old mother, Suzanne Eberson. The incident took place at Eberson’s home in Greenwich, where she was killed by her son, who later died by suicide. The lawsuit claims that Soelberg had been using ChatGPT for around five months before the incident, during which he reportedly spent long hours chatting with the bot.
The lawsuit, filed by surviving family members, claims the chatbot deepened Soelberg’s paranoia. He reportedly believed that his mother was trying to kill him, and instead of challenging this belief or telling him it was not true, the chatbot allegedly appeared to confirm it. The family argues this contributed to the deterioration of his mental condition and has filed suit against OpenAI, the operator of ChatGPT.
This is not the only lawsuit currently involving OpenAI. In a separate case, the parents of a teenager allege that their son died by suicide after using ChatGPT. The lawsuit claims the chatbot assisted him in composing a suicide note and provided information related to self-harm.
Bhaskar is a senior copy editor at Digit India, where he simplifies complex tech topics across iOS, Android, macOS, Windows, and emerging consumer tech. His work has appeared in iGeeksBlog, GuidingTech, and other publications, and he previously served as an assistant editor at TechBloat and TechReloaded. A B.Tech graduate and full-time tech writer, he is known for clear, practical guides and explainers.