AI in UK Justice System Risks Masking Deep-Rooted Funding Issues Rather Than Solving Them
The UK’s justice system, already strained by over a decade of underfunding, is now turning to artificial intelligence as a solution to its mounting challenges. With significant case backlogs, cancelled court dates, and widespread logistical failures, the system—fragmented across England and Wales, Scotland, and Northern Ireland—is under immense pressure. In response, the Labour government has pledged to "unleash" AI across public services, aiming to boost efficiency and modernize operations.

While AI tools, particularly large language models (LLMs) like those behind ChatGPT, are being promoted as productivity enhancers, their real-world impact in justice remains deeply contested. These tools can draft statements, schedule meetings, transcribe conversations, and summarize legal documents—tasks that could free up time for frontline staff. Early examples, such as the Old Bailey’s use of AI to process evidence overviews, have reportedly saved £50,000 per case.

However, the benefits are not evenly distributed. The risks are greatest in areas with limited resources and where vulnerable individuals—often with little means to challenge decisions—rely on the system. A Home Office pilot using LLMs to summarize asylum cases found that 9% of outputs contained inaccuracies, including missing interview references, and that 23% of users lacked confidence in the summaries despite the time savings.

The Ministry of Justice’s AI Action Plan for Justice, published in July 2025, acknowledges these risks. It creates a chief AI officer role, establishes ethical guidelines, and emphasizes that AI should support, not replace, human judgment. The plan aims to roll out AI tools to 95,000 justice staff by December, including Microsoft’s Copilot Chat for judicial officers.

Yet concerns remain. A senior UK judge warned in June 2025 that LLMs can "hallucinate"—generate false or fabricated information—posing serious dangers when used in legal proceedings. International cases have already shown AI-generated citations and evidence being submitted in court, leading to errors and legal challenges. Moreover, previous digital reforms—such as online guilty pleas and automated convictions—have disproportionately affected marginalized groups, particularly women who may plead guilty to crimes they did not commit because they lacked support or understanding of the process. These patterns suggest that AI-driven automation, if not carefully managed, could deepen existing inequalities.

Ultimately, while AI has the potential to reduce administrative burdens and improve efficiency, it cannot solve systemic failures rooted in years of underinvestment. Without addressing the underlying lack of funding, staffing, and access to justice, AI risks becoming a band-aid solution—masking deeper flaws rather than fixing them. The real danger lies not in the technology itself, but in using it to justify further cuts while failing to ensure that justice remains fair, transparent, and accessible to all.