OpenAI will train ChatGPT to avoid flirtatious talk with users under 18 and to respond more robustly to mentions of suicide and self-harm. Businesses and schools should update their age verification, parental consent, and crisis response processes to meet rising AI safety and compliance expectations.
OpenAI announced on September 16, 2025 that it will apply new safety restrictions to ChatGPT accounts for users under 18. The update directs the model to avoid flirtatious talk with minors and adds stronger guardrails around conversations about suicide and self-harm. For parents, educators, and organizations, the change signals a shift toward clearer protections and improved digital wellbeing for young people.
Conversational AI can feel personal in ways traditional web content does not. For young people, that intimacy raises specific risks: sexualized interactions, harmful guidance about self-harm, and unintentional exposure of personal data. Regulators already set age-based rules for online services. For example, COPPA protects children under 13 in the US, and GDPR's default digital age of consent is 16, though member states may lower it to 13. OpenAI is extending safety expectations to anyone under 18 by treating minors as a protected group for certain conversational behaviors.
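The age thresholds above vary by jurisdiction, so many services encode them as a lookup. A minimal sketch, assuming a hypothetical `needs_verified_parental_consent` helper; the country values shown are illustrative examples of the thresholds cited above, not a complete legal reference:

```python
# Illustrative only: COPPA applies under 13 in the US; GDPR lets member
# states set the digital age of consent between 13 and 16 (default 16).
GDPR_CONSENT_AGE = {
    "DE": 16,  # Germany keeps the GDPR default
    "FR": 15,  # France lowered it to 15
    "IE": 16,  # Ireland keeps 16
    "UK": 13,  # UK GDPR uses 13
}
US_COPPA_AGE = 13

def needs_verified_parental_consent(age: int, country: str) -> bool:
    """Return True if this user falls below the local consent threshold."""
    if country == "US":
        threshold = US_COPPA_AGE
    else:
        # Conservative default of 16 for unlisted GDPR jurisdictions.
        threshold = GDPR_CONSENT_AGE.get(country, 16)
    return age < threshold
```

A service would run this check at sign-up and route under-threshold users into a parental consent flow before enabling conversational features.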
Businesses, schools, and developers face practical, technical, and regulatory implications as AI safety expectations rise.
To align with these ChatGPT restrictions and broader trends in online safety for minors, organizations should consider the following actions now:

- Review age verification so accounts used by minors are reliably identified, and apply stricter conversational policies to those accounts.
- Update parental consent workflows, particularly for users below the COPPA and GDPR consent thresholds.
- Strengthen crisis response protocols so conversations touching on suicide or self-harm are escalated to trained staff or appropriate resources.
- Audit AI deployments in schools and youth-facing products against these new safety expectations.
Proactively designing for youth protection reduces risk and builds trust more effectively than retrofitting safeguards after an incident. These measures also signal to policymakers and users that an organization takes teen mental health and online safety seriously.
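The age verification and crisis handling steps described above can be sketched as a single policy check. This is a hypothetical illustration of the design, not OpenAI's implementation; the function name, keyword list, and decision fields are all assumptions:

```python
from dataclasses import dataclass

# Illustrative crisis triggers; a real system would use a trained
# classifier rather than keyword matching.
CRISIS_KEYWORDS = {"suicide", "self-harm", "kill myself"}

@dataclass
class SafetyDecision:
    allow_romantic_roleplay: bool   # blocked entirely for minors
    escalate_to_crisis_team: bool   # stronger guardrails for under-18s
    require_parental_consent: bool  # COPPA threshold for under-13s

def evaluate_request(age: int, message: str) -> SafetyDecision:
    """Apply stricter conversational rules to users under 18."""
    is_minor = age < 18
    mentions_crisis = any(k in message.lower() for k in CRISIS_KEYWORDS)
    return SafetyDecision(
        allow_romantic_roleplay=not is_minor,
        escalate_to_crisis_team=mentions_crisis and is_minor,
        require_parental_consent=age < 13,
    )
```

The point of the sketch is that age-aware policy is cheap to enforce at request time once reliable age signals exist; the hard part is the verification and consent plumbing behind it.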
OpenAI's new ChatGPT restrictions for users under 18 represent a tangible shift toward safer, age-aware conversational AI. For parents and educators, the change promises clearer protections. For organizations, it raises practical obligations around age verification, parental consent, and crisis handling. As policymakers and competitors respond, teams that treat safety as a design principle will be best positioned to deploy conversational AI responsibly while protecting teen users.