OpenAI Launches Age Prediction System for ChatGPT in 2025

OpenAI is redesigning how age is handled on ChatGPT by replacing static, self-declared age fields with a continuous, probability-based age prediction system. The new approach estimates whether an account is likely operated by someone under 18 and automatically applies enhanced safety protections when that threshold is reached.

This change addresses long-standing gaps in teen safety systems that relied on users accurately disclosing their age during signup.

Key Changes in Age Detection

From Self-Declared Age to Continuous Prediction

Previously, ChatGPT relied primarily on a one-time age declaration provided during account creation. This method often failed to identify teenage users who chose not to disclose their real age.

The new system evaluates a range of behavioral and account-level signals, including:

  • Account age and long-term usage patterns
  • Typical activity hours
  • Historical interactions and previously stated age
  • Consistency of behavior over time

Based on these signals, the model determines whether an account should default to an under-18 experience.

How the Age Prediction System Works

OpenAI treats age classification as a dynamic safety decision rather than a fixed profile attribute. During rollout, the company continuously evaluates how individual signals perform by comparing predictions against “ground truth” data from users who later verify their age.

Signals that prove reliable gain more weight, while those associated with frequent misclassification are down-weighted or removed. When age information is incomplete or uncertain, the system defaults to the safer, under-18 experience, prioritizing protection over unrestricted access.
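The reweighting step described above can be pictured as a simple update loop: each signal's weight grows when its individual prediction matches the later-verified age and shrinks when it misses, with persistently unreliable signals dropped entirely. The update rule and cutoff below are assumptions for illustration, not OpenAI's disclosed method.

```python
def reweight_signals(weights: dict[str, float],
                     signal_votes: dict[str, bool],
                     verified_under_18: bool,
                     lr: float = 0.1,
                     drop_below: float = 0.05) -> dict[str, float]:
    """Boost signals that agreed with the verified outcome, shrink those
    that missed, and drop any signal whose weight becomes negligible."""
    updated = {}
    for name, weight in weights.items():
        correct = signal_votes[name] == verified_under_18
        weight *= (1 + lr) if correct else (1 - lr)
        if weight >= drop_below:  # remove signals that keep misclassifying
            updated[name] = weight
    # Renormalize so weights remain comparable across update rounds.
    total = sum(updated.values())
    return {name: w / total for name, w in updated.items()}
```

Run over many verified accounts, a loop like this concentrates weight on the signals that generalize, which is the behavior the rollout evaluation is meant to produce.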

Safeguards and Appeal Mechanisms

To reduce the risk of adult users being incorrectly classified, OpenAI has introduced an appeal process. Adults placed into an under-18 experience can restore full access through selfie-based age verification handled by Persona.


This design allows OpenAI to rely on inference as the default, while reserving stronger identity verification only for disputed cases. The rollout also incorporates regional compliance requirements, including specific considerations for the European Union.

Enhanced Safety Features for Under-18 Users

When an account is predicted to belong to someone under 18, ChatGPT applies stricter controls across several sensitive areas, including:

  • Graphic violence and sexual or violent role play
  • Risky viral challenges
  • Self-harm content
  • Body-image issues and unhealthy dieting material

Additional parent-oriented tools include quiet hours, controls over memory and model training, and notifications if the system detects signs of acute emotional distress.
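The parent-oriented tools listed above amount to a small bundle of per-account settings. The structure and field names below are hypothetical, inferred from the features described; they are not OpenAI's actual API.

```python
from dataclasses import dataclass
from datetime import time

# Hypothetical parental-controls record; field names are assumptions
# based on the features described above.
@dataclass
class TeenAccountControls:
    quiet_hours: tuple[time, time] = (time(22, 0), time(7, 0))  # no-access window
    memory_enabled: bool = False   # parents can disable chat memory
    training_opt_out: bool = True  # exclude conversations from model training
    distress_alerts: bool = True   # notify parents on signs of acute distress

def is_quiet_hour(controls: TeenAccountControls, now: time) -> bool:
    """Quiet hours may span midnight, so check both orderings of the window."""
    start, end = controls.quiet_hours
    if start <= end:
        return start <= now < end
    return now >= start or now < end
```

The midnight-spanning check matters because a typical quiet window (22:00 to 07:00) wraps around the day boundary, so a naive range comparison would fail.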

Why This Matters

By focusing on behavioral signals rather than self-reporting, OpenAI is introducing a more adaptive and scalable approach to youth safety. The continuous monitoring framework allows protections to adjust as usage patterns evolve, offering greater responsiveness than traditional age-gating methods.

If effective, this model could serve as a blueprint for how consumer AI platforms implement teen protections at scale without mandating universal identity checks.

Outlook for 2025

With more than 150 safety controls planned for rollout in 2025, OpenAI’s age prediction system represents a significant shift in platform governance. The approach balances accessibility for adults with stronger, default protections for younger users, potentially redefining how age-based safety is enforced across AI-driven consumer platforms.

Sources: Twitter post by Rohan Paul
