OpenAI Adds Age Prediction and Parental Controls to ChatGPT: A New Model for Teen Safety in AI

OpenAI rolled out an age prediction system and expanded parental controls for ChatGPT to limit graphic content and improve self-harm prevention, aiming to strengthen teen online safety while balancing privacy concerns.

OpenAI announced new safety measures for ChatGPT users under 18 on September 16, 2025. The update centers on an age prediction system and expanded parental controls designed to limit access to graphic or sexual content and improve responses to self-harm risks. Given rising public and congressional scrutiny of how AI affects young people, these changes matter because they attempt to balance targeted safety interventions with user privacy.

Why AI teen safety is now central to conversational tools

AI chatbots are increasingly used by adolescents for homework help, social interaction, and mental health guidance. Policymakers and parents have grown concerned about exposure to harmful content and about how AI might amplify risks around self-harm or sexual material. OpenAI is responding by applying ChatGPT age prediction so that content and safety responses can be tailored when a session is likely to involve someone under 18.

What is age prediction in plain language?

An age prediction system is an algorithm that estimates whether a user is likely under 18 based on signals available to the service, rather than asking for ID. In practice, the system assigns a probability that a session belongs to someone underage and then applies stricter safety settings when that probability is high. The goal is to reduce reliance on explicit age attestations that can be falsified, while avoiding intrusive identity checks and preserving teen online privacy.
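
OpenAI has not described its implementation, but as a minimal sketch of the idea above, the gating step could look something like the following. Everything here is an assumption for illustration: the probability score, the 0.7 threshold, and the SafetySettings fields are hypothetical, not OpenAI's API.

    from dataclasses import dataclass

    @dataclass
    class SafetySettings:
        """Hypothetical per-session safety configuration (illustrative only)."""
        block_graphic_sexual_content: bool
        escalate_self_harm_responses: bool
        parental_controls_available: bool

    # Assumed threshold: treat the session as likely under 18 when the estimated
    # probability crosses this value. A real system would tune it against
    # false positive and false negative rates.
    UNDERAGE_THRESHOLD = 0.7

    def settings_for_session(p_underage: float) -> SafetySettings:
        """Map an estimated probability that the user is under 18 to stricter
        or default safety settings."""
        if p_underage >= UNDERAGE_THRESHOLD:
            # Likely a minor: apply the stricter teen profile.
            return SafetySettings(
                block_graphic_sexual_content=True,
                escalate_self_harm_responses=True,
                parental_controls_available=True,
            )
        # Otherwise keep the default adult profile; self-harm care still applies.
        return SafetySettings(
            block_graphic_sexual_content=False,
            escalate_self_harm_responses=True,
            parental_controls_available=False,
        )

    # Example: a session the (assumed) age model scores at 0.82
    print(settings_for_session(0.82))

The point of the sketch is the shape of the decision, not the numbers: stricter defaults switch on when the inferred likelihood of an underage user is high, rather than relying on a self-reported birth date.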

Key changes in this update

  • ChatGPT age prediction: Sessions likely to involve a user under 18 will trigger tailored content policies and safeguards.
  • Expanded parental controls for AI platforms: Parents or guardians will have broader ability to limit graphic sexual content and adjust interaction settings for teens.
  • Improved self-harm prevention: The assistant will be adjusted to provide safer and more supportive responses when it assesses a user may be at risk.
  • Privacy emphasis: OpenAI frames the design to preserve user privacy while still applying targeted protections.

Numerical context

The update was announced on September 16, 2025, and specifically targets users under 18. For broader context, surveys show roughly 95 percent of US teens report access to a smartphone, underscoring why online safety measures for adolescents are consequential.

Implications for parents, businesses, and regulators

There are trade-offs to consider. Age prediction can reduce exposure to harmful material for minors, but it introduces risks of false positives and false negatives. Misclassifying an adult as a minor could restrict legitimate access, while failing to identify a teen would leave them unprotected. Any system that infers age from behavior also raises questions about what data is used and how long inferences are stored. Transparency about data inputs and model behavior will be essential to build trust.
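
To make that trade-off concrete, the small sketch below uses invented counts to show how a single classifier produces both kinds of error at once; published error rates of this kind are exactly what transparency advocates are asking for.

    # Hypothetical evaluation of an age prediction classifier on 10,000 sessions.
    # Every count below is invented purely to illustrate the two error types.
    minors_flagged = 900      # minors correctly given the teen safety profile
    minors_missed = 100       # minors left on adult settings (false negatives)
    adults_flagged = 450      # adults incorrectly restricted (false positives)
    adults_passed = 8550      # adults correctly left unrestricted

    false_negative_rate = minors_missed / (minors_missed + minors_flagged)
    false_positive_rate = adults_flagged / (adults_flagged + adults_passed)

    print(f"Unprotected minors (false negative rate): {false_negative_rate:.1%}")  # 10.0%
    print(f"Restricted adults (false positive rate):  {false_positive_rate:.1%}")  # 5.0%

Lowering the threshold shrinks one rate while growing the other, which is why accuracy figures only mean something alongside a statement of which error the system is tuned to avoid.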

The move also arrives amid increased legislative attention to AI and children. Companies may use targeted interventions to show they are mitigating risk, but regulators could still demand stronger safeguards or independent audits. For schools and youth organizations, enhanced parental controls mean they will need clear guidance on integrating AI tools safely.

Industry precedent and best practices

If a major AI provider applies age-based safety settings, competitors will likely follow, creating pressure to standardize approaches and definitions of acceptable content thresholds. Recommended best practices include combining platform controls with education and oversight, and adopting transparency measures such as published accuracy rates, data retention limits, opt-out mechanisms, and appeals processes.

Expert perspective

OpenAI's approach aligns with a broader trend toward contextual safety. Using models to detect situations that require extra care and adjusting behavior accordingly is preferable to a one-size-fits-all restriction. However, the approach will need independent evaluation to assess its effectiveness for AI teen safety and self-harm prevention.

Practical takeaways

  • Parents should review parental controls for AI platforms and set boundaries that match their family's privacy preferences.
  • Businesses deploying AI should prepare policies for age inference and parental control integration, and create audit processes for safety features.
  • Regulators and advocates should seek transparency on how age prediction works and demand independent testing of accuracy and harms.

Frequently asked questions

Is ChatGPT safe for underage users?

OpenAI is adding measures aimed at improving safety, but no system is perfect. Combining platform controls with education and supervision remains important.

How accurate is ChatGPT at predicting user age?

OpenAI has not published comprehensive accuracy rates. Independent evaluation of age prediction performance and error rates will be important for public trust.

What privacy concerns should parents know about AI for teens?

Key concerns include what signals are used to infer age, how long inferences are stored, and whether data is shared. Parents should look for transparency features and data controls in parental settings.

Conclusion

OpenAI's rollout of age prediction and broader parental controls is a notable step toward making conversational AI safer for young people. The initiative recognizes that adolescent users require different guardrails, but it also highlights the trade-offs between protection and privacy. Observers should watch for transparency on how age is inferred, independent evaluations of accuracy and harms, and whether regulators set binding standards for AI teen safety.
