Anthropic to Train AI Models on Claude Chats

Your conversations with Claude may be used to improve future AI models. On August 28, 2025, Anthropic announced it will use Claude chat transcripts from Free, Pro, and Max users for model training by default. You can opt out in account settings, but many people never change defaults, so it is important to review your privacy controls.

Why Anthropic is using chat transcripts

AI development needs large amounts of real-world data. Live conversations provide authentic examples of how people ask questions and solve problems with AI. Anthropic says access to Claude chat transcripts will help Claude better understand user intent and deliver more useful responses. This mirrors the approach of other companies that use user interactions to improve model accuracy and helpfulness.

Key details you should know

  • Default opt-in: All Free, Pro, and Max users are enrolled in data collection unless they manually opt out in Claude account settings
  • Five-year retention: Conversations used for training may be kept for up to five years
  • 30-day deletion: Chats from users who opt out are removed from backend servers within 30 days
  • Policy violation storage: Data flagged for safety or policy issues may be retained longer for review and safety metrics
  • Enterprise and API customers: Business accounts and API customers typically have separate contracts that limit training use or retention

How to opt out of Anthropic data use

To stop your conversations from being used to train models, open Claude account settings and find the privacy or data preferences section. Look for the option to opt out of data use for model training and follow the steps provided. After you opt out, Anthropic says your backend copies will be deleted within 30 days. If you have a business account, check your contract or contact your account manager to confirm data handling terms.

Privacy implications and practical advice

Default opt-in policies often catch users who do not review updates. If you have used Claude to draft legal documents, discuss personal matters, or brainstorm confidential business ideas, consider opting out before those conversations are stored for training. Even if you opt out, remember that some flagged content may be retained longer for safety analysis.

Simple checklist

  • Open Claude account settings
  • Find privacy or data preferences
  • Choose to opt out of model training if you want your chats excluded
  • For business use check your API or enterprise contract terms
  • Delete any sensitive chats you do not want retained

Bottom line

Anthropic's change reflects the tension between improving AI and protecting user privacy. Better training data can lead to more capable AI, but users deserve clear control over how their data is used. If you want to keep your Claude chats private, act now: review your settings and opt out of data use for model training.
