Law Firms Combat AI Hallucinations with Specialized Detection Tools Amid Rising Legal Risks
Lawyers are turning to artificial intelligence to combat the very AI tools that are causing problems: specifically, the hallucinations that plague generative chatbots such as ChatGPT, Claude, and Gemini. These systems, trained to predict the next word in a sequence, often invent plausible-sounding but entirely fictional cases, statutes, and legal precedents.

The consequences can be serious. In one high-profile case, two lawyers at Cozen O'Connor were sanctioned after submitting a brief filled with fake citations generated by ChatGPT, despite the firm's ban on using public chatbots. In response, Cozen O'Connor is now testing Clearbrief, a startup that offers an AI-powered citation checker.

The tool functions as a Microsoft Word plug-in, scanning legal briefs for fabricated or inaccurate citations. Using natural language processing, it identifies false references, flags potential errors, and links directly to real case law or documents. It also highlights instances where a cited source does not actually support the claim being made.

The goal is not just to catch mistakes but to create a verifiable audit trail. Cozen O'Connor is integrating the tool into its workflow so that every draft can be accompanied by a cite-check report. The report logs who ran the check, when, and what was flagged, providing a paper trail that could protect lawyers if a judge ever questions the accuracy of a filing.

The problem is widespread. Legal data analyst Damien Charlotin has tracked more than 660 documented cases of AI hallucinations in legal filings between April 2023 and May 2025, with the rate accelerating to four or five new cases per day. Most involve solo practitioners or junior staff at larger firms, often during routine tasks such as footnote formatting or summarizing case law.

To reduce risk, major legal tech providers such as Thomson Reuters and LexisNexis are emphasizing their proprietary databases: curated, vetted collections of case law and legal texts.
Their AI tools are restricted to these trusted sources, drastically reducing the chance of hallucinations. LexisNexis has deepened its advantage by partnering with Harvey, a legal AI startup valued at $8 billion, feeding its vast legal database into Harvey's generative models. Harvey also works directly with AI model providers such as OpenAI and Anthropic, limiting the datasets the models can access and adding proprietary legal content. This allows lawyers to trace how an answer was generated and review the sources used.

Despite these safeguards, experts agree that hallucinations won't disappear anytime soon. The best defense remains a combination of training and technology: teaching lawyers to treat AI output as a starting point, not a final product, and using AI tools to verify that output. As one Cozen O'Connor partner put it, the solution to AI hallucinations may be more AI, used not to write but to check.
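The cite-checking workflow described above can be illustrated with a minimal sketch. This is not Clearbrief's implementation: it is a hypothetical example that assumes a small hard-coded index of trusted reporter citations standing in for a vetted case-law database. It scans a draft for citation-shaped strings, flags any the index does not recognize, and records who ran the check and when, producing the kind of audit-trail report the article describes.

```python
import re
from datetime import datetime, timezone

# Hypothetical trusted index; a real tool would query a vetted
# case-law database rather than a hard-coded set.
TRUSTED_CITATIONS = {
    "123 F.3d 456",  # placeholder entries, not real cases
    "456 U.S. 789",
}

# Matches simple reporter citations like "123 F.3d 456" or "456 U.S. 789".
CITATION_RE = re.compile(r"\b\d+\s+(?:F\.3d|F\. Supp\. 2d|U\.S\.)\s+\d+\b")

def cite_check(brief_text: str, reviewer: str) -> dict:
    """Scan a draft for citations, flag any not found in the trusted
    index, and return an audit-trail report of the check."""
    found = CITATION_RE.findall(brief_text)
    flagged = [c for c in found if c not in TRUSTED_CITATIONS]
    return {
        "reviewer": reviewer,  # who ran the check
        "checked_at": datetime.now(timezone.utc).isoformat(),  # when
        "citations_found": found,
        "flagged": flagged,  # citations needing human review
    }

report = cite_check(
    "See Smith v. Jones, 123 F.3d 456, and Doe v. Roe, 999 F.3d 111.",
    reviewer="associate@example.com",
)
print(report["flagged"])  # prints the unrecognized citation: ['999 F.3d 111']
```

String matching only catches fabricated references; checking whether a cited source actually supports the proposition it is attached to, as the article notes the commercial tools attempt, requires natural language processing well beyond this sketch.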
