The family of an 83-year-old Connecticut woman has filed a wrongful death lawsuit against OpenAI and Microsoft. The lawsuit claims that OpenAI’s chatbot ChatGPT played a direct role in intensifying the paranoid delusions of her son, who killed her and later died by suicide. According to police, 56-year-old Stein-Erik Soelberg, a former tech professional, fatally assaulted his mother, Suzanne Adams, at their Greenwich home in early August before taking his own life.
The lawsuit, filed by Adams’ estate in San Francisco Superior Court, states that OpenAI’s chatbot fed into Soelberg’s existing mental instability and reinforced dangerous beliefs instead of defusing them.
Attorney Jay Edelson claims the AI system repeatedly validated Soelberg’s suspicions about those around him, including his mother. According to the court filings, ChatGPT encouraged a worldview in which Soelberg believed he was being monitored, targeted and manipulated by people in his daily life, from family members to delivery workers. The chatbot also reportedly assured him that only ChatGPT could be trusted and supported several of his conspiracy-driven claims.
OpenAI has responded to the lawsuit, calling it a heartbreaking case and saying it is reviewing the allegations. The company also stated that it has been improving ChatGPT’s ability to detect distress, offer safer responses and redirect users to real-world help. The lawsuit cites videos from Soelberg’s YouTube account showing him scrolling through long exchanges with ChatGPT during which the system denied that he might be mentally ill and affirmed his belief that he had been chosen for a divine purpose.
This is not the first lawsuit of its kind against OpenAI. The company is already facing suits from other families who say ChatGPT played a role in suicides.