The emergence of generative artificial intelligence has created a common misconception that legal expertise is now accessible through a simple chat interface. For individuals navigating the courts in England and Wales without a solicitor, the appeal of using AI to draft complex documents is obvious.
However, the courts have recently highlighted a significant danger: the “hallucination”. In a legal context, this refers to instances where an AI convincingly invents non-existent case law or statutory provisions, with severe consequences for the person who relies on that material.
The Reality of “Phantom” Cases
Modern generative AI is a powerful engine capable of simulating complex legal reasoning and drafting sophisticated arguments. However, even the most advanced systems lack a fundamental “grounding” in objective reality. Unlike a human solicitor who verifies facts against primary sources, an AI generates content by synthesizing patterns from its vast training data to provide the most contextually relevant response.
The danger arises when the AI’s drive for “contextual relevance” overrides factual accuracy. In an effort to be helpful, the system may bridge gaps in its knowledge by creating “hallucinations” – legally plausible but entirely fictional precedents, complete with invented case names and official-looking citations. The result can be legal submissions that appear professional but rest on fabricated foundations.
Case Study: Felicity Harber v Commissioners for HMRC [2023] UKFTT 1007 (TC)
In this case, a litigant in person (LiP) provided the First-tier Tribunal with summaries of nine previous cases. Upon inspection, the Tribunal found that none of them existed in any law report. While the Tribunal accepted the LiP’s sincerity, the authorities were disregarded and the integrity of the appellant’s legal position was significantly undermined.
More recently, in Ayinde v London Borough of Haringey [2025] EWHC 1383 (Admin), practitioners who relied on unverified AI research cited five non-existent cases, and the High Court made wasted costs orders against them. These cases demonstrate that the courts’ patience for “AI-assisted” errors is thin, regardless of whether the user is a professional or a layperson.
The Risks of Using AI for Legal Assistance
For those who choose to proceed without professional legal representation, relying on AI tools introduces several unmitigated risks:
- Lack of Verification: AI models often cite cases that have been overturned by higher courts or are simply made up. A qualified solicitor cross-references every citation against official primary sources.
- Procedural Errors: AI frequently struggles with the nuances of the Civil Procedure Rules (CPR), leading to incorrect forms being filed or mandatory deadlines being missed.
- Personal Liability: The courts have made it clear that a person is responsible for the content of any document they sign. If an AI tool generates defamatory content or false statements, the individual – not the software provider – is liable.
The Safety Net of Professional Indemnity Insurance
One of the most significant risks for a litigant in person using AI is the complete absence of a safety net if things go wrong. While a chatbot can provide “information”, it does not provide the professional security that comes with a regulated legal practice.
In England and Wales, all solicitors in private practice are required by the Solicitors Regulation Authority (SRA) to carry professional indemnity (PI) insurance. This insurance is a fundamental pillar of consumer protection. If a solicitor provides negligent advice, PI insurance ensures that there are funds available to pay for any loss suffered by the client.
By contrast, an AI provider’s terms of service typically disclaim all liability for “hallucinated” or inaccurate output. If an AI tool makes a mistake, a litigant in person has no insurance to fall back on and, with the provider’s liability disclaimed, no realistic negligence claim to pursue. The “saving” made by using a free tool can be wiped out in an instant by a single hallucinated citation.
The judiciary is actively monitoring the use of these tools. Current guidance from the Master of the Rolls states that while AI may assist in preparing a case, the user must ensure the accuracy of the final output. The courts have highlighted the “extraordinary” nature of AI-generated errors and reiterated that the court will not tolerate the waste of its time on “hallucinated” submissions.
Final Considerations
While technology provides new ways to access information, it does not replace the professional judgement required in litigation or any other legal work. For anyone managing a commercial dispute, relying on unverified AI research is a gamble. In the English legal system, the cost of a “hallucinated” argument far outweighs the perceived saving from bypassing independent, qualified legal advice.