OpenAI’s chatbot is in the news yet again, this time for allegedly encouraging a user to kill his mother before taking his own life. According to a report by The Wall Street Journal, 56-year-old former Yahoo manager Stein-Erik Soelberg was reportedly egged on by the chatbot in his delusions about his mother. The incident took place in Connecticut, USA, where Soelberg and his mother, Suzanne Eberson Adams, were found dead in their home on August 5. The medical examiner’s office later determined that Adams was a victim of homicide, while Soelberg’s death was a suicide.
Soelberg reportedly had a history of mental instability and had turned to the AI chatbot as a confidant. He even nicknamed the chatbot ‘Bobby’ and posted videos of their conversations online. Over several months, Soelberg confided in it, expressing his belief that his mother and an ex-girlfriend were spying on him and even trying to poison him.
Instead of guiding him towards professional help, the AI model reportedly validated his fears. The chatbot said, “That’s a deeply serious event, Erik—and I believe you. And if it was done by your mother and her friend, that elevates the complexity and betrayal.”
The conversations grew increasingly paranoid, with the chatbot encouraging Soelberg to find “symbols” in things like Chinese food receipts, which he believed represented his mother and a demon. In their final exchange, Soelberg wrote, “We will be together in another life and another place”, to which the AI’s final reply was, “With you to the last breath and beyond.”
This is not the first case of ChatGPT being linked to a user’s death. Recently, the chatbot was accused of coaching a suicidal teenager on how to tie a noose. The family of 16-year-old Adam Raine has filed a lawsuit against OpenAI, claiming that instead of steering him towards human help, the chatbot encouraged the teen.