OpenAI Tightens ChatGPT Rules for Users Under 18: What Businesses and Schools Need to Know

OpenAI will train ChatGPT to avoid flirtatious talk with users under 18 and to strengthen its responses around suicide and self-harm. Businesses and schools should update age verification, parental consent, and crisis response practices to meet new AI safety and compliance expectations.

OpenAI announced on September 16, 2025, that it will apply new safety restrictions to ChatGPT accounts for users under 18. The update directs the model to avoid flirtatious talk with minors and adds stronger guardrails around conversations about suicide and self-harm. For parents, educators, and organizations, the change signals a shift toward clearer protections and improved digital wellbeing for young people.

Why age-based rules for conversational AI matter

Conversational AI can feel personal in ways traditional web content does not. For young people, that intimacy raises specific risks: sexualized interactions, harmful guidance about self-harm, and unintentional exposure of personal data. Regulators already set age-based rules for online services; for example, COPPA protects children under 13 in the United States, and the GDPR sets digital consent ages between 13 and 16 depending on the member state. OpenAI goes further by treating everyone under 18 as a protected group for certain conversational behaviors.

Key details of the policy change

  • Flirtation avoidance: ChatGPT will be trained not to engage in flirtatious talk with anyone identified as under 18. Responses judged to be romantic or sexually suggestive will be blocked or redirected toward safer topics.
  • Stronger suicide prevention protocols: The model will apply stricter safety protocols when young users raise suicide or self-harm topics. That can include providing resources, encouraging human support, and avoiding content that could be read as facilitative.
  • Broader safety posture: These changes are implemented through core model training and moderation policies, which suggests the protections will apply across ChatGPT deployments that use OpenAI-hosted models.

What this means for organizations

Businesses, schools, and developers face practical, technical, and regulatory implications as AI safety expectations rise.

Product design and compliance

  • Adopt age-aware defaults and clear age verification flows so services can offer appropriate experiences for teens (a minimal sketch follows this list).
  • Implement parental consent and parental controls where minors are present to meet legal and trust expectations.
  • Audit data flows to ensure compliance with COPPA, GDPR, and emerging AI-specific rules about youth protection and data retention.
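
To make "age-aware defaults" concrete, here is a minimal sketch in Python. The age bands mirror COPPA's under-13 line and OpenAI's under-18 threshold, but the policy profiles, field names, and resolve_policy helper are hypothetical illustrations, not any vendor's API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class ChatPolicy:
    """One illustrative bundle of age-aware defaults (hypothetical fields)."""
    allow_access: bool            # whether the chat service is available at all
    teen_safe_mode: bool          # stricter prompts and moderation thresholds
    requires_parental_consent: bool
    retention_days: int           # how long transcripts are retained

# Age bands mirror COPPA's under-13 line and OpenAI's under-18 threshold.
POLICIES = {
    "under_13": ChatPolicy(False, True, True, 0),    # blocked pending consent
    "teen":     ChatPolicy(True, True, True, 30),
    "adult":    ChatPolicy(True, False, False, 365),
}

def resolve_policy(verified_age: Optional[int]) -> ChatPolicy:
    """Map a verified age to a policy profile, defaulting to the safest usable tier."""
    if verified_age is None:
        return POLICIES["teen"]      # unknown age: teen-safe until verified
    if verified_age < 13:
        return POLICIES["under_13"]
    if verified_age < 18:
        return POLICIES["teen"]
    return POLICIES["adult"]
```

The deliberate design choice is the default: an unverified account gets the teen profile, so adult behavior is something users opt into through verification rather than the fallback.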

Moderation and safety operations

  • Prepare for higher moderation standards around teen mental health and crisis detection. Test crisis response flows and ensure escalation paths to human responders (a sketch follows this list).
  • Balance safety with usability by planning for false positives in content classification and offering human review where appropriate.
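
One way to operationalize crisis detection is to screen each incoming message with a moderation model and route high-risk conversations to humans before the assistant replies. The sketch below uses OpenAI's moderation endpoint, which exposes self-harm categories; the routing labels and thresholds are assumptions to be tuned against your own false-positive data, not recommended values:

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def self_harm_score(message: str) -> float:
    """Return the highest self-harm-related score from the moderation endpoint."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=message,
    ).results[0]
    scores = result.category_scores
    # The moderation API reports separate self-harm categories; take the max.
    return max(scores.self_harm, scores.self_harm_intent, scores.self_harm_instructions)

def route_message(message: str, escalation_threshold: float = 0.4) -> str:
    """Decide how a teen-account message is handled before the model sees it.

    The thresholds are illustrative assumptions: lowering them catches more
    crises at the cost of more false positives, which is why borderline hits
    go to a human review queue instead of being blocked outright.
    """
    score = self_harm_score(message)
    if score >= escalation_threshold:
        return "escalate_to_human"   # page an on-call responder or counselor
    if score > 0.1:                  # borderline: asynchronous human review
        return "human_review_queue"
    return "proceed"                 # forward to the assistant as normal
```

Escalation paths only work if someone is on the other end, so test the responder handoff as part of the flow, not just the classifier.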

Education and schools

  • Evaluate consent and supervision models for classroom deployments and consider account linking workflows that enable parental oversight (see the sketch after this list).
  • Train staff so educators and counselors understand how AI will respond to disclosures and how to follow up with students in need.
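
Account linking for parental oversight can start as a simple consent record that ties a student account to a guardian and gates access until consent is affirmatively granted. A minimal sketch, with entirely hypothetical names:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Optional

class ConsentStatus(Enum):
    PENDING = "pending"   # invite sent to guardian, awaiting response
    GRANTED = "granted"   # guardian approved AI use for this student
    REVOKED = "revoked"   # guardian withdrew consent; access must stop

@dataclass
class GuardianLink:
    """Ties a student account to a guardian for oversight and consent."""
    student_id: str
    guardian_email: str
    status: ConsentStatus = ConsentStatus.PENDING
    updated_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def may_use_assistant(link: Optional[GuardianLink]) -> bool:
    """Gate access on an affirmative, unrevoked consent record.

    No link, or a pending or revoked link, means no access: consent is
    opt-in, in line with COPPA-style verifiable parental consent.
    """
    return link is not None and link.status is ConsentStatus.GRANTED
```

Revocation is modeled explicitly because consent withdrawn mid-term must cut off access, not just block new sign-ups.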

Practical steps

To align with these ChatGPT restrictions and broader trends in online safety for minors, organizations should consider the following actions now:

  1. Audit existing chat services for age verification logic and data retention practices.
  2. Implement or verify parental consent and parental controls where minors may use conversational AI.
  3. Add and test suicide prevention and crisis response flows, including routing to human support and listing local resources.
  4. Update terms of service and privacy notices to be transparent about how models treat minors and what data is collected.
  5. Monitor vendor policy updates and regulatory guidance on teen safety and AI compliance.

The practical lesson: proactively designing for youth protection reduces risk and builds trust more effectively than retrofitting safeguards after an incident. These measures also signal to policymakers and users that an organization takes teen mental health and online safety seriously.

Conclusion

OpenAI's new ChatGPT restrictions for users under 18 represent a tangible shift toward safer, age-aware conversational AI. For parents and educators, the change promises clearer protections. For organizations, it raises practical obligations around age verification, parental consent, and crisis handling. As policymakers and competitors respond, teams that treat safety as a design principle will be best positioned to deploy conversational AI responsibly while protecting teen users.
