Judge Rules Claude AI Chats Not Privileged, Setting Precedent for Legal AI Use
A federal judge has ruled that conversations with Anthropic’s AI chatbot Claude are not legally privileged, even when the user is in the process of seeking legal advice. The decision, handed down by Judge Jed Rakoff, allows prosecutors to access 31 chat transcripts between Claude and Brad Heppner, a former finance startup executive, found on electronic devices seized during his arrest.

Heppner, co-founder of the now-defunct financial firm Beneficient, is accused of orchestrating a $150 million fraud scheme that contributed to the collapse of GWG Holdings. His lawyers argued that the chats were protected by attorney-client privilege because Heppner used Claude to draft defense strategies and legal arguments he intended to discuss with his attorneys. Judge Rakoff rejected that claim, finding that Heppner had shared the information with a third party—an AI system whose privacy policy explicitly states that user inputs may be disclosed. The judge also held that the chats were not shielded by the work product doctrine, since Heppner’s lawyers had not directed him to use Claude.

The ruling has sparked concern among legal professionals. Moish Peltz, an attorney whose commentary on the case went viral on social media, called the decision “directionally correct” while highlighting the risks of using AI tools to handle sensitive legal information. Others described the growing use of chatbots in legal contexts as a “discovery nightmare,” with vast amounts of potentially privileged data now at risk of exposure in litigation.

The case is not isolated. In November, a dispute over a video game company acquisition revealed that an executive had used ChatGPT to attempt to avoid paying an earn-out, a detail later included in court filings. And in the ongoing lawsuit between The New York Times and OpenAI, a judge ordered the preservation of millions of chat logs to assess potential copyright violations.
Noah Bunzl, an employment lawyer, said the outcome may surprise many users, underscoring the risks of treating AI tools as private spaces. While some legal experts believe AI could eventually enhance legal communication, the current legal landscape remains unsettled. Arlo Devlin-Brown, a white-collar defense attorney, noted that this may be the first known case in which AI use led to a loss of privilege, though courts may treat enterprise-grade AI tools differently from consumer-facing ones. He advised lawyers to caution clients that sharing privileged information with AI platforms could result in unintended disclosure during litigation. Representatives from the U.S. Attorney’s Office for the Southern District of New York, Anthropic, and Heppner’s legal team did not respond to requests for comment.
