Anthropic Updates AI Privacy Policy: Opt Out by Sept 28

Anthropic, maker of the Claude chatbot, updated its consumer data policy so that user conversations may be used to train future AI models unless users explicitly opt out by September 28, 2025. The change shifts the default from automatic deletion after roughly 30 days to retention of up to five years for accounts that do not opt out.

What changed: key points

  • Data retention extended: Conversations from accounts that have not opted out can be retained for up to five years, replacing the previous roughly 30-day deletion window.
  • Training data usage: Anthropic may use consumer chat data to improve model safety and capabilities.
  • Opt-out deadline: Users must opt out by September 28, 2025 to prevent their chats from being used in training datasets.
  • Enterprise exclusion: Enterprise, government and education customers are reportedly excluded from this change and will not have their data used for training.
  • Automated filtering: Anthropic says it applies automated filters to remove sensitive information before data is used, though full filter details are limited (a conceptual sketch of this kind of filtering follows this list).
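
Anthropic has not published the details of its filtering pipeline, but the general technique it describes is well established: scan each conversation for patterns that look like personal identifiers and replace them with placeholders before the text enters a training set. The Python sketch below is a minimal, hypothetical illustration of that idea; the patterns, names, and placeholder format are assumptions for illustration, not Anthropic's actual implementation.

```python
import re

# Hypothetical patterns for a few common categories of sensitive data.
# These are illustrative assumptions, not Anthropic's published filters.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_sensitive(text: str) -> str:
    """Replace each pattern match with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED_{label.upper()}]", text)
    return text

if __name__ == "__main__":
    sample = "Reach me at jane.doe@example.com or +1 (555) 123-4567."
    print(redact_sensitive(sample))
    # Output: Reach me at [REDACTED_EMAIL] or [REDACTED_PHONE].
```

Real production filters are far more sophisticated (machine-learned entity recognizers, context-aware redaction, human audits), which is precisely why users and advocates are asking for more detail than pattern lists like this can capture.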

Why Anthropic made the change

Training advanced machine learning models requires large volumes of real-world data. Anthropic says the updated data policy will help improve model safety and overall capabilities by providing more varied training signals from real user interactions. This approach aligns with industry trends where companies balance ethical AI goals against the need for high-quality training data.

Privacy implications and user data rights

The move raises clear concerns for user privacy and transparency. Key topics to consider:

  • User consent: The model has shifted to data-sharing-by-default, making an active opt-out the mechanism for users who prefer not to contribute their chats to AI training datasets.
  • Data retention and security: A five-year retention period increases exposure if a security incident occurs and challenges data minimization practices often advocated under privacy regulations like GDPR and CCPA.
  • Algorithmic transparency: Users and advocates have asked for clearer explanations of how automated filters anonymize or remove sensitive information and how data is audited before being used for training.
  • Fairness and parity: The enterprise exclusion raises questions about whether all user groups should be treated equally when it comes to data usage and privacy protections.

How to opt out of AI training data collection

If you want to prevent your Claude chats from being included in training data, follow these steps:

  1. Sign into your Anthropic/Claude account.
  2. Open Account settings and look for privacy or data controls.
  3. Find the opt-out option for training data or analytics and toggle it on to withhold your conversations from training datasets.
  4. Watch for confirmation emails or in-app prompts from Anthropic and verify that your choice was recorded.

Act soon: the deadline to exercise this user consent choice is September 28, 2025. If you do not opt out by that date, your conversations may be retained and used subject to the new data retention rules.

Practical advice for users

  • Where possible, review past conversations for sensitive content and delete any chats you do not want retained.
  • Use account privacy controls and confirm your opt-out setting is active if you do not wish to contribute to training data.
  • Consider alternative accounts or dedicated workflows for sensitive work that should remain private, especially if enterprise protections do not apply.
  • Stay informed about updates to Anthropic’s privacy policy and any additional details on data anonymization, retention, and safeguards.

Context in the AI ecosystem

Anthropic’s change mirrors broader industry trends in which companies aim to build safer, more capable models while navigating privacy regulations and public trust. Similar policies have prompted debates about data minimization, privacy audits and the balance between innovation and user control. For consumers, this is part of an ongoing conversation about how personal information and model training intersect, especially given the rise of AI-driven services and answer engines.

Final thoughts

The policy update gives users a clear choice, but the shift from privacy-by-default to data-sharing-by-default marks a notable change in how Anthropic approaches AI privacy and data policy. If you care about how your interactions are used in AI training data, review your settings and exercise the opt-out by September 28, 2025. Doing so protects your data and helps uphold your user data rights as AI systems continue to evolve.

For further steps, check your account notifications and consult Anthropic’s Consumer Terms and Privacy Policy for complete details about training data, retention, and the company’s automated filtering practices.
