OpenAI has announced new steps to balance privacy, freedom, and teen safety as people use AI for more personal conversations. In a recent blog post, CEO Sam Altman explained how the company is approaching these challenges.

The first principle is privacy. OpenAI believes that conversations with AI should be protected in the same way as confidential conversations with a doctor or lawyer. To support this, OpenAI is developing advanced security features so that even its own employees cannot access user data. Still, the company notes there will be rare exceptions, such as when automated systems detect serious risks like threats of violence, plans to cause major harm, or emergencies involving someone's life.
The second principle is freedom for adults. OpenAI wants adult users to be able to use AI as they choose. For example, the system normally avoids flirtatious conversations, but if an adult requests that kind of exchange, the AI should allow it. Similarly, while the AI will not provide instructions on how to commit suicide, it can still help an adult write a fictional story that includes those themes.
The third principle concerns protecting teens. ChatGPT is designed for people aged 13 and older, and to enforce this age limit, OpenAI is building an age-prediction system that estimates a user's age based on how they interact with the AI. If there is uncertainty, the system will default to treating the person as under 18. In some places, OpenAI may also require an ID check.
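To make that conservative fallback concrete, here is a minimal sketch of the decision rule as described. OpenAI has not published implementation details of its age-prediction system, so the data shape, function names, and confidence threshold below are all hypothetical.

```python
# Illustrative sketch only: the classifier output, names, and threshold
# are assumptions, not OpenAI's actual implementation.
from dataclasses import dataclass


@dataclass
class AgeEstimate:
    predicted_age: int   # classifier's best guess from interaction signals
    confidence: float    # certainty in that guess, from 0.0 to 1.0


def apply_teen_policy(estimate: AgeEstimate, min_confidence: float = 0.9) -> bool:
    """Return True if the stricter under-18 rules should apply.

    Mirrors the stated policy: when the system is uncertain about
    a user's age, it defaults to treating them as under 18.
    """
    if estimate.confidence < min_confidence:
        return True  # uncertain -> assume the user is under 18
    return estimate.predicted_age < 18


# A low-confidence "adult" guess still triggers the teen safeguards:
print(apply_teen_policy(AgeEstimate(predicted_age=25, confidence=0.6)))   # True
print(apply_teen_policy(AgeEstimate(predicted_age=25, confidence=0.95)))  # False
```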
For teens, stricter rules will apply. The AI will not allow flirtatious exchanges or discussions of suicide, even in creative writing. If a teen shows signs of suicidal thoughts, OpenAI will attempt to contact their parents and, in cases of imminent danger, alert the authorities.
“We realise that these principles are in conflict and not everyone will agree with how we are resolving that conflict. These are difficult decisions, but after talking with experts, this is what we think is best and want to be transparent in our intentions,” Altman explained.