Elon Musk has once again drawn attention as he publicly criticised OpenAI’s chatbot, warning that children and people struggling with mental health should avoid using ChatGPT. The comment comes amid an ongoing public dispute between Musk and OpenAI CEO Sam Altman over the safety of AI technologies.
Musk made the remark in a post on X, responding to another user’s post about a recent school shooting in Canada by stating that people should keep ChatGPT away from children and those who may be mentally ill. The post quickly gained traction online as the debate over AI safety heated up.
The controversy stems from reports of a fatal school shooting in the Canadian town of Tumbler Ridge. According to The Wall Street Journal, the suspect spoke with ChatGPT multiple times in the days leading up to the attack. The conversations reportedly included discussions of violent scenarios, which were flagged by the company’s automated monitoring system.
According to the report, several employees internally debated whether to share the chat logs with law enforcement. However, the company ultimately determined that the activity did not meet the threshold for notifying authorities. A spokesperson later confirmed that the user’s account had been suspended following the incident.
The issue has also prompted legal action. The mother of a student injured in the shooting has filed a lawsuit against OpenAI in Canada, alleging that the chatbot provided guidance that assisted the suspect in planning the attack. The case has been filed with the Supreme Court of British Columbia.
Meanwhile, Musk has been vocal about such incidents, blaming ChatGPT for several deaths and urging people not to let their loved ones use the chatbot. In response, Altman raised safety concerns about Musk’s own companies, citing incidents involving Tesla vehicles on Autopilot and criticising decisions related to Grok.
OpenAI has described the shooting as an “unspeakable tragedy” and stated that it is collaborating with experts and authorities to improve safeguards that could aid in the detection and prevention of potential real-world threats associated with AI conversations.