Anthropic to Train AI Models on User Chats Unless Opted Out by September 28

Anthropic will begin training its AI models on user data, including new and resumed chat transcripts and coding sessions, unless users actively opt out. The company is also extending its data retention period to five years for users who do not opt out. All users must make a decision by September 28, 2025.

Users who select "Accept" allow Anthropic to use their new or resumed conversations and coding sessions to train and improve its AI models, with that data stored for up to five years. The policy does not apply to past chats or coding sessions that are never resumed; if a user continues an existing conversation or coding session, the new terms apply to all activity from that point forward.

These changes affect all consumer subscription tiers of Claude, including Claude Free, Pro, and Max, as well as Claude Code for users on those plans. The update does not apply to commercial tiers such as Claude Gov, Claude for Work, Claude for Education, or API usage through platforms like Amazon Bedrock or Google Cloud's Vertex AI.

New users will choose their preference during signup. Existing users will be prompted via a pop-up notification, which can be deferred by clicking "Not now," but a decision is required by the September 28 deadline.

The pop-up interface may lead to unintended consent. It prominently displays "Updates to Consumer Terms and Policies" in large text, followed by the message: "An update to our Consumer Terms and Privacy Policy will take effect on September 28, 2025. You can accept the updated terms today." A large black "Accept" button sits at the bottom. Below it, in smaller text, is a toggle switch labeled "Allow the use of your chats and coding sessions to train and improve Anthropic AI models," which is set to "On" by default. Many users may click "Accept" without adjusting the toggle, unknowingly agreeing to data use.

To opt out, users can set the toggle to "Off" when prompted. Those who have already accepted can change their choice later under Settings > Privacy > Privacy Settings by turning off the "Help improve Claude" option. While users can update their preference at any time, the change applies only to future data; data already used for training cannot be removed from training sets. Anthropic states that it uses automated tools and processes to filter or obscure sensitive information and emphasizes that it does not sell user data to third parties.