Your private conversations with Claude may be used to improve future versions of the chatbot. Anthropic announced on August 28, 2025 that it will update its Consumer Terms and Privacy Policy to begin using user chat transcripts for model training. The change affects users on the Free, Pro, and Max plans and gives them until September 28, 2025 to opt out.
Here is what to know about the Anthropic data retention policy update and how it affects you.
To stop your chats from being used for model training, you will need to change a setting in your account. Multiple tech outlets report that the setting is enabled by default, so users who take no action will have their chats included in training data. If you are wondering how to stop Claude from using your chats, this setting is the place to start.
Real user conversations provide high-quality context and tone for training, but they can also contain sensitive material. For individuals, the concern is personal data sitting in chat history. For companies, the risk is that proprietary information or internal strategy shared in conversations could influence future model behavior.
Anthropic set September 28, 2025 as the date when training with user chats begins, and the opt-out instructions apply until then. After that date, review the company's privacy pages for any changes to the opt-out options.
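As a quick sanity check on the timeline, a few lines of Python can compute how much of the opt-out window remained on any given day. The dates come from the announcement above; the function name is just illustrative:

```python
from datetime import date

# Deadline from Anthropic's announcement: training with user chats
# begins September 28, 2025.
DEADLINE = date(2025, 9, 28)

def days_left(today: date) -> int:
    """Days remaining in the opt-out window (negative once it has passed)."""
    return (DEADLINE - today).days

# On the announcement date, August 28, 2025, users had a full month:
print(days_left(date(2025, 8, 28)))  # 31
```

Note that August 28 to September 28 is 31 days, so the window was slightly longer than a flat 30 days.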
Disabling the model-training option should stop future chats from being used, but it may not delete data Anthropic has already retained. Check the privacy pages and, if available, follow the process for deleting individual conversations.
This update marks a significant moment in the balance between model improvement and user control. If you value keeping your conversations private, update your Claude privacy settings before September 28, 2025 to prevent your chats from being used for model training.