OpenAI has introduced a new safety feature for ChatGPT users. The feature allows adult users to add a trusted emergency contact in case conversations indicate possible mental health risks. It is designed to notify a chosen friend, family member or caregiver if OpenAI's systems detect discussions related to self-harm or suicide, and it joins the existing crisis helpline recommendations.
The new Trusted Contact option is entirely optional and can be enabled via ChatGPT account settings. Users can assign another adult as their emergency contact, and that person must approve the request before being linked to the account. Once the feature is enabled, the contact may receive a notification if OpenAI believes the user may be facing a serious mental health crisis.
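OpenAI has not published implementation details, but the opt-in flow described above amounts to a consent-gated link between two accounts. Purely as an illustration, a minimal sketch of such a flow might look like the following; all names here (TrustedContactLink, LinkStatus) are hypothetical and not from OpenAI:

```python
from dataclasses import dataclass
from enum import Enum


class LinkStatus(Enum):
    PENDING = "pending"    # request sent, awaiting the contact's approval
    APPROVED = "approved"  # contact consented; alerts may now be sent
    DECLINED = "declined"  # contact refused; no link is created


@dataclass
class TrustedContactLink:
    user_id: str
    contact_id: str
    status: LinkStatus = LinkStatus.PENDING

    def approve(self) -> None:
        # Per the report, the contact must approve before being linked.
        self.status = LinkStatus.APPROVED

    def decline(self) -> None:
        self.status = LinkStatus.DECLINED

    @property
    def can_receive_alerts(self) -> bool:
        # Alerts are only possible once the contact has opted in.
        return self.status is LinkStatus.APPROVED


# Example: a link starts pending and only becomes active after approval.
link = TrustedContactLink(user_id="u-123", contact_id="c-456")
link.approve()
assert link.can_receive_alerts
```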
As per the company, the alerts will not include private chat transcripts or detailed conversation history. Instead, the notification will simply inform the trusted person that the user may need support. Before any alert is sent, ChatGPT will reportedly encourage the user to reach out to their trusted contact directly.
OpenAI says the system combines automated detection with human review. If conversations are flagged for possible self-harm concerns, a specially trained safety team will assess the situation before any message is shared with the emergency contact. Notifications may arrive through email, text or inside ChatGPT itself.
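Again, the internal pipeline is not public; as a rough sketch of the gating described above, the following hypothetical function captures the logic under stated assumptions: an alert is built only if the contact opted in, a human reviewer confirmed the automated flag, and the message itself carries no conversation content. SafetyFlag, build_alert and both boolean parameters are illustrative, not OpenAI's API:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class SafetyFlag:
    user_id: str
    reason: str  # label from the automated detector, e.g. "possible self-harm concern"


def build_alert(flag: SafetyFlag, contact_opted_in: bool, reviewer_confirmed: bool) -> Optional[str]:
    """Return an alert message only if every gate in the pipeline passes.

    contact_opted_in and reviewer_confirmed stand in for the consent
    record and the human safety team's judgment, respectively.
    """
    if not contact_opted_in:    # the feature is strictly opt-in
        return None
    if not reviewer_confirmed:  # a human reviews before anything is shared
        return None
    # Per the report, the alert includes no transcripts or chat history,
    # only a plain notice that the user may need support.
    return "Someone who chose you as a trusted contact may need support right now."
```

The key design point, as reported, is that the notification is deliberately minimal: it signals that support may be needed without exposing any conversation content.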
This comes at a time when many AI chatbots are facing increasing scrutiny over how they handle emotionally vulnerable users. In recent months, concerns around AI companionship, emotional dependence and chatbot-driven mental health risks have intensified across the industry.
For the unversed, the company has already introduced several related features, including parental controls. Similar safeguards have also begun appearing on other platforms, including social media apps that monitor repeated searches related to self-harm or suicide.