OpenAI has started rolling out a new age prediction system on ChatGPT. The goal is simple: to better understand whether an account likely belongs to someone under 18, so the platform can provide the right protections for teens while allowing adults to use ChatGPT more freely. Until now, ChatGPT mainly relied on the age users entered when signing up. Teens who said they were under 18 automatically received extra safety features. The new age prediction system adds another layer of protection, especially for cases where a user’s age may be unclear or inaccurate.
The age prediction model considers signals such as how long an account has existed, typical times of use, usage patterns over time, and the age a user has stated. By combining these signals, the system estimates whether an account is likely run by a minor. OpenAI says accuracy should improve as the system learns from these signals over time.
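OpenAI has not published how these signals are weighted or combined, but the idea of turning several account signals into a single likelihood can be sketched as a simple logistic scoring function. Everything below is a hypothetical illustration: the field names, thresholds, and weights are invented for the example and are not OpenAI's actual model.

```python
from dataclasses import dataclass
import math


@dataclass
class AccountSignals:
    # Hypothetical inputs loosely mirroring the signals described in the
    # article; these are NOT OpenAI's actual model features.
    account_age_days: int
    late_night_usage_ratio: float  # share of sessions at atypical hours, 0.0-1.0
    stated_age: int


def estimate_minor_probability(s: AccountSignals) -> float:
    """Combine signals into a rough probability via a logistic function."""
    score = 0.0
    # Self-reported age is one signal among several, not the final answer.
    score += 1.5 if s.stated_age < 18 else -1.0
    # Newer accounts carry less history, so they contribute a small bump.
    score += 0.8 if s.account_age_days < 90 else -0.2
    # Usage timing patterns contribute proportionally.
    score += 1.2 * s.late_night_usage_ratio
    # Squash the raw score into a probability in (0, 1).
    return 1.0 / (1.0 + math.exp(-score))


signals = AccountSignals(account_age_days=30, late_night_usage_ratio=0.6, stated_age=16)
print(round(estimate_minor_probability(signals), 2))  # → 0.95
```

In a real system the weights would be learned from data rather than hand-set, which is consistent with OpenAI's statement that accuracy improves as the system learns over time.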
If ChatGPT believes an account may belong to someone under 18, it will automatically apply extra safeguards. These are designed to limit exposure to content that may not be appropriate for teens. Restricted content includes graphic violence, sexual or violent role play, risky viral challenges, depictions of self-harm, and content that promotes extreme beauty standards or unhealthy dieting.
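The gating logic described above — a predicted-minor flag that switches on a fixed set of content restrictions — can be sketched in a few lines. The category names below mirror the article's list, but the check itself is an illustration, not OpenAI's implementation.

```python
# Hypothetical sketch: a set of restricted categories applied only when
# an account is flagged as likely under 18.
RESTRICTED_FOR_TEENS = {
    "graphic_violence",
    "sexual_or_violent_roleplay",
    "risky_viral_challenges",
    "self_harm_depictions",
    "extreme_beauty_or_dieting",
}


def is_allowed(category: str, predicted_minor: bool) -> bool:
    """Block a restricted category only for accounts flagged as likely minors."""
    return not (predicted_minor and category in RESTRICTED_FOR_TEENS)


print(is_allowed("graphic_violence", predicted_minor=True))   # → False
print(is_allowed("graphic_violence", predicted_minor=False))  # → True
```

The key design point is asymmetry: adult accounts are unaffected, while flagged accounts get the stricter default until age is confirmed.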
These choices are based on research about child and teen development. Studies show that teens differ from adults in areas like risk-taking, impulse control, emotional regulation, and peer influence.
OpenAI also acknowledges that mistakes can happen. If an adult is incorrectly placed in the under-18 experience, they can restore full access through a simple age confirmation: submitting a selfie via Persona, a secure identity verification service.
The age prediction feature is rolling out globally, with the European Union following in the coming weeks due to regional requirements.