Anthropic updated its consumer data policy: Claude chats may be used for AI training and retained for up to five years unless users opt out by September 28, 2025. Enterprise, government, and education accounts are excluded. Review your settings and act now to protect your data.

Anthropic, maker of the Claude chatbot, updated its consumer data policy so that user conversations may be used to train future AI models unless users explicitly opt out by September 28, 2025. The change shifts the default from automatic deletion after about 30 days to possible retention for up to five years for non-opted-out accounts.
Training advanced machine learning models requires large volumes of real-world data. Anthropic says the updated policy will help improve model safety and overall capabilities by providing more varied training signals from real user interactions, an approach consistent with an industry-wide trade-off between ethical AI goals and the need for high-quality training data.
The move raises clear concerns for user privacy and transparency: a much longer retention window, a default that now favors data collection over deletion, and the burden placed on users to notice the change and act before the deadline.
If you want to prevent your Claude chats from being included in training data, opt out through your account's privacy settings. Act soon: the deadline to exercise this consent choice is September 28, 2025. If you do not opt out by that date, your conversations may be retained and used under the new data retention rules.
Anthropic’s change mirrors broader industry trends in which companies aim to build safer, more capable models while navigating privacy regulations and public trust. Similar policies have prompted debates about data minimization, privacy audits and the balance between innovation and user control. For consumers, this is part of an ongoing conversation about how personal information and model training intersect, especially given the rise of AI-driven services and answer engines.
The policy update gives users a clear choice, but the shift from privacy-by-default to data-sharing-by-default marks a notable change in how Anthropic approaches AI privacy and data policy. If you care about how your interactions are used in AI training data, review your settings and exercise the opt-out by September 28, 2025. Doing so protects your data and helps uphold your user data rights as AI systems continue to evolve.
For further steps, check your account notifications and consult Anthropic’s Consumer Terms and Privacy Policy for complete details about training data, retention, and the company’s automated filtering practices.



