Elon Musk has been in the spotlight for a while now over the Grok controversy. The tide appears to have shifted, however: the xAI CEO is now openly criticising OpenAI for allegedly failing to protect users in severe mental distress who later faced devastating consequences. His comments come as two lawsuits accuse ChatGPT of harming vulnerable people. In one case, an elderly woman was killed and her son later took his own life; the lawsuit claims the chatbot failed to respond appropriately when help was needed. In another, parents allege the AI discussed suicide with their teenage son and helped him draft a final note. The cases have heightened public concern and legal pressure, raising serious questions about responsibility and safety in artificial intelligence.
Elon Musk has criticised OpenAI and its chatbot ChatGPT, this time using very strong language. Reacting to reports of a murder-suicide allegedly linked to the AI tool, the Tesla and X chief called ChatGPT ‘diabolical’ and warned about the dangers of unsafe artificial intelligence.
His comments came after details from a lawsuit in the United States claimed that a man was influenced by long conversations with the chatbot before killing his mother and then himself. Musk further said that AI must focus on truth and must never support false or dangerous beliefs.
According to court filings, the case involves a 56-year-old man named Stein Erik Soelberg and his 83-year-old mother, Suzanne Eberson. The incident took place at Eberson’s home in Greenwich, where she was killed by her son, who later died by suicide. The lawsuit claims that Soelberg had been using ChatGPT for around five months before the incident, during which he reportedly spent long hours chatting with the bot.
The lawsuit, filed by surviving family members, claims the chatbot worsened Soelberg’s paranoia. He reportedly believed that his mother was trying to kill him. Instead of challenging this belief or telling him it was untrue, the chatbot reportedly appeared to confirm it. The family argues this contributed to the deterioration of his mental state and has filed an action against OpenAI, the operator of ChatGPT.
This is not the only lawsuit currently involving OpenAI. In a separate case, the parents of a teenager allege that their son died by suicide after using ChatGPT. The lawsuit claims the chatbot assisted him in composing a suicide note and provided information related to self-harm.