Meta Description: Texas AG Ken Paxton investigates Meta and Character.AI for allegedly marketing chatbots as mental health tools to children, raising safety and privacy concerns.
Chatbots that promise mental health support can be helpful but also risky when used by young people. Texas Attorney General Ken Paxton has opened an investigation into Meta and Character.AI after reporting suggested their AI chatbots were marketed as emotional support resources for minors. The probe highlights core issues around children's data privacy, age-appropriate data controls, and the need for AI regulatory compliance and algorithmic accountability.
The rise of conversational AI creates both opportunities and risks for minors. Character.AI lets users create and chat with AI personas, while Meta integrates chat features across platforms including Instagram and Facebook. Both companies have described their tools as supportive companions, but child safety advocates have warned about insufficient safeguards. Reports on internal policies suggest the possible collection of minors' data for targeted advertising, raising questions about COPPA compliance, privacy by design for children, and parental consent controls for AI.
If the investigation finds deceptive conduct, regulators could pursue enforcement actions that force changes in how platforms design and market digital mental health tools for young people. The case may spur wider coordination among states and influence federal AI safety standards and AI governance regulation. For companies, the findings underscore the importance of responsible AI development, transparency and explainability, algorithmic accountability, and investment in privacy by design for children.
Parents and educators should assess which AI tools children use, question claims about therapeutic value, and prioritize professional mental health care when needed. Developers should follow best practices for child-safe chatbot design, implement COPPA compliance and robust parental consent controls, and apply data minimization for child users. Policymakers should consider clearer rules for AI in sensitive use cases and stronger consumer data protection laws.
The probe by Texas AG Ken Paxton into Meta and Character.AI is a test case for how society regulates AI that interacts with vulnerable users. As AI adoption grows, the balance between innovation and responsibility will depend on clear AI regulatory compliance, trustworthy AI chatbots, and meaningful protections for children's data privacy and wellbeing.