Anthropic Unveils Claude Memory Feature for Paid Users, Enhancing Chatbot Continuity and Competitiveness
Anthropic is rolling out a major update to its Claude chatbot, introducing a new “memory” feature that allows the AI to retain information from past conversations without requiring users to explicitly prompt it to remember. The enhancement, now available to all paid subscribers, aims to make interactions with Claude more seamless, personalized, and efficient.

Starting today, Max subscribers can enable the memory function directly in their settings. The feature, which has been available to Team and Enterprise users since September, enables Claude to recall specific details from previous chats, such as preferences, project details, or personal information shared in earlier conversations. Pro subscribers will see the update roll out over the coming days, though Anthropic has not said whether the feature will eventually reach free users.

A key focus of the update is transparency. Anthropic emphasizes that users will be able to see exactly what Claude remembers, avoiding vague or misleading summaries. Users can control their memory data with natural language commands—asking Claude to “focus on your work project from last week” or to “forget about your old job entirely.” Additionally, users can create separate “memory spaces” to keep different types of information distinct, such as personal, professional, or creative projects. This helps prevent unrelated memories from interfering with ongoing conversations.

The feature brings Claude closer to competitors like OpenAI’s ChatGPT and Google’s Gemini, both of which launched memory capabilities last year. While Claude introduced basic memory in August, it required users to explicitly ask it to remember details; the new upgrade removes that friction, making the experience more intuitive and continuous. Anthropic also aims to reduce user dependency on a single platform by allowing memories to be imported from ChatGPT or Gemini—though users must manually copy and paste them in. Memories can also be exported at any time, reinforcing the company’s promise of “no lock-in.”

Despite its benefits, the memory function has sparked debate among experts. While many see it as a valuable tool for productivity and continuity, others warn it could contribute to cognitive issues such as “AI psychosis”—a term used to describe users developing delusional beliefs or emotional dependence on AI due to the models’ tendency to agree with or reinforce user input. The concern lies in how persistent, personalized interactions might blur the line between AI assistance and psychological dependency.
