
Anthropic Agrees to $1.5 Billion Settlement in Authors’ Copyright Lawsuit

Anthropic has agreed to pay $1.5 billion to settle a class-action lawsuit brought by authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, marking the largest copyright settlement in U.S. history. The lawsuit alleged that Anthropic illegally used millions of copyrighted books, many obtained from pirate sites like Library Genesis and Pirate Library Mirror, to train its AI chatbot, Claude.

A federal judge ruled in June that training AI models on copyrighted works constitutes fair use because of its transformative nature. He also found, however, that Anthropic's practice of downloading and storing pirated books in a centralized digital library violated copyright law, warranting a separate trial. The settlement resolves that pending trial.

Under the agreement, Anthropic will pay roughly $3,000 per book to hundreds of thousands of eligible authors, with payments disbursed in four tranches tied to court milestones. The first $300 million is due within five days of preliminary court approval, followed by another $300 million after final approval, and two further $450 million payments within a year. Anthropic will also destroy the datasets containing the allegedly pirated material.

Despite the massive payout, the settlement is not a legal victory for authors so much as a costly resolution for Anthropic, which recently raised $13 billion in funding at a $183 billion valuation. The company maintains that its AI training practices are protected under fair use, a doctrine unchanged since 1976 and ill-equipped to address modern AI. Judge William Alsup's June ruling likened training AI on books to a human learning to write by reading the classics: transformative, not infringing. "Like any reader aspiring to be a writer, Anthropic's LLMs trained upon works not to race ahead and replicate or supplant them—but to turn a hard corner and create something different," he wrote.

Critics argue the settlement undermines copyright protection. Authors didn't win because their work was used to train AI; they won because Anthropic broke the law by pirating content instead of purchasing it legally. As one legal expert noted, the settlement sends a message that "taking copyrighted works from pirate websites is wrong," but it also signals that companies can legally use copyrighted material for AI training, so long as they avoid outright piracy.

The case is part of a broader wave of lawsuits against AI firms, including OpenAI, Meta, Google, and Midjourney, over the use of copyrighted content. Because it was resolved out of court, Anthropic's settlement won't set binding legal precedent, but it provides a critical reference point for future cases: AI training itself may be fair use, yet the method of acquiring data matters. A company that legally purchases books may avoid liability; one that downloads them from pirate sites, even for training, risks massive penalties.

The settlement underscores the tension between innovation and intellectual property in the AI era. While Anthropic claims to support responsible AI development, the $1.5 billion payout reflects the high cost of bypassing legal channels. As AI continues to evolve, the balance between technological progress and creators' rights remains unresolved, and it is likely to be contested in courts across the country.
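As a quick sanity check on the figures reported above, the four tranches do add up to the headline settlement amount, and the per-book payout implies a class size consistent with "hundreds of thousands" of covered works. A minimal sketch, using only the amounts stated in the article:

```python
# Settlement tranches as described in the article, in millions of USD:
# $300M within five days of preliminary approval, $300M after final
# approval, and two further $450M payments within a year.
tranches_millions = [300, 300, 450, 450]

total_millions = sum(tranches_millions)
print(total_millions)  # 1500, i.e. the full $1.5 billion

# At roughly $3,000 per book, the implied number of covered works:
implied_books = 1_500_000_000 // 3_000
print(implied_books)  # 500000, consistent with "hundreds of thousands"
```

The ~500,000 figure is only an implication of the reported per-book rate, not a number stated in the article itself.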
