Your conversations with Claude may be used to improve future AI models. On August 28, 2025, Anthropic announced that it will use Claude chat transcripts from Free, Pro, and Max users for model training by default. You can opt out in account settings, but many people never change defaults, so it is important to review your privacy controls.
AI development needs large amounts of real-world data, and live conversations provide authentic examples of how people ask questions and solve problems with AI. Anthropic says access to Claude chat transcripts will help Claude better understand user intent and deliver more useful responses, an approach similar to that of other companies that use customer interactions to improve model accuracy and helpfulness.
To stop your conversations from being used to train models, open Claude's account settings and find the privacy or data preferences section. Look for the option to opt out of data use for model training and follow the steps provided. After you opt out, Anthropic says its backend copies will be deleted within 30 days. If you have a business account, check your contract or contact your account manager to confirm its data handling terms.
Default opt-in policies often catch users who do not review updates. If you have used Claude to draft legal documents, discuss personal matters, or brainstorm confidential business ideas, consider opting out before those conversations are stored for training. Even if you opt out, remember that some flagged content may be retained longer for safety analysis.
Anthropic's change reflects the tension between improving AI and protecting user privacy. Better training data can lead to more capable models, but users deserve clear control over how their data is used. If you want to keep your Claude chats private, act now: review your settings and opt out of data use for model training.