Senior English Judge Warns That Lawyers Who Use AI Must Check Their Legal Citations Thoroughly – Or Face ‘Severe Sanction’

from the professional-and-ethical-obligations dept

One of the legitimate criticisms of large language models, generative AI, and chatbots is that they produce hallucinations: output that is plausible but wrong. That’s a problem in all domains, but arguably it’s a particularly serious one in the field of law. Hallucinated citations undermine the entire edifice of common law, which is based on precedent, as expressed in previous court decisions. This isn’t a new problem: back in May 2023, Techdirt wrote about a lawyer who had submitted a brief in a personal injury case containing a number of made-up citations. Nor is it a problem that is going away. A recent case involved a lawyer representing the AI company Anthropic, who used an incorrect citation created by the company’s Claude AI chatbot in its ongoing legal battle with music publishers.

Similar cases have been cropping up in the UK, and a High Court judge there has had enough. In a recent ruling, Justice Victoria Sharp explores two cases involving hallucinated citations, makes some general observations about the use of AI by lawyers, and lays down lawyers’ responsibilities when they use such tools.

One case involved a filing with 45 citations, 18 of which did not exist; in the other, five non-existent cases were cited. The court’s judgment [pdf] provides full details of how the hallucinations came to light, and how the lawyers involved responded when they were confronted with the non-existent citations. There is also an appendix with other examples of legal hallucinations from around the world: five from the US, four from the UK, three from Canada, and one each from Australia and New Zealand. But more important is the judge’s discussion of the broader points raised. Sharp begins by pointing out that AI tools can certainly be useful, and are likely to become an important tool for the legal profession:

Artificial intelligence is a powerful technology. It can be a useful tool in litigation, both civil and criminal. It is used for example to assist in the management of large disclosure exercises in the Business and Property Courts. A recent report into disclosure in cases of fraud before the criminal courts has recommended the creation of a cross-agency protocol covering the ethical and appropriate use of artificial intelligence in the analysis and disclosure of investigative material. Artificial intelligence is likely to have a continuing and important role in the conduct of litigation in the future.

But that positive view comes with an important proviso:

Its use must take place therefore with an appropriate degree of oversight, and within a regulatory framework that ensures compliance with well-established professional and ethical standards if public confidence in the administration of justice is to be maintained.

This should not be read as a vague call to do better. Sharp wants to see action from the UK’s legal profession that goes beyond the existing guidance from regulatory bodies (which she also discusses):

There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused. In those circumstances, practical and effective measures must now be taken by those within the legal profession with individual leadership responsibilities (such as heads of chambers [groups of barristers] and managing partners) and by those with the responsibility for regulating the provision of legal services. Those measures must ensure that every individual currently providing legal services within this jurisdiction (whenever and wherever they were qualified to do so) understands and complies with their professional and ethical obligations and their duties to the court if using artificial intelligence.

And for those who fail to do this, the court has a range of punishments at its disposal:

Where those duties are not complied with, the court’s powers include public admonition of the lawyer, the imposition of a costs order, the imposition of a wasted costs order, striking out a case, referral to a regulator, the initiation of contempt proceedings, and referral to the police.

In one of the two cases discussed in the ruling, the judge declined to hand down a serious punishment to a lawyer who had failed to check the citations, even though there were sufficient grounds for one. Sharp gave a number of reasons for this in her judgment, including:

our overarching concern is to ensure that lawyers clearly understand the consequences (if they did not before) of using artificial intelligence for legal research without checking that research by reference to authoritative sources. This court’s decision not to initiate contempt proceedings in respect of Ms Forey [the lawyer in question] is not a precedent. Lawyers who do not comply with their professional obligations in this respect risk severe sanction.

It will probably take a few “severe sanctions” being meted out to lawyers who use hallucinated precedents without checking them before the profession starts taking this problem seriously. But Sharp’s ruling is a clear indication that, while English courts are quite happy for lawyers to use AI in their work, they won’t tolerate the errors such systems can produce.

Follow me @glynmoody on Mastodon and on Bluesky.



Comments on “Senior English Judge Warns That Lawyers Who Use AI Must Check Their Legal Citations Thoroughly – Or Face ‘Severe Sanction’”

9 Comments
MathFox says:

Re:

“AI” is based on pattern recognition, and a well-trained AI is more often right than wrong. The problem is that the AI does not know when it is wrong, and that “fixing” a bad AI model may introduce problems elsewhere.

So using an AI in an area where there’s something at stake is risky. When a lawyer signs off on an AI-generated court filing, (s)he declares it to be a correct representation of his/her client’s position. Mistakes may cause loss of claims in the lawsuit. Hallucinations, when pointed out by the opposing party, reduce the judges’ trust. (And the opposing party has to waste time and paper to debunk the hallucinations.)

It’s extremely fair that courts uphold the legal standards for court filings. It should also be made clear that companies are liable for any bullshit their “AI agents” produce.

Anonymous Coward says:

Re: Re:

No, not really. I mean, it is possible they don’t understand how “AI” works (and depending on what sort you are talking about, no one knows how some models work), but that doesn’t make them wrong, because it is irrelevant, and your critique is a non sequitur.

It is still a huge tech industry lie. The marketing and evangelists make unevidenced claims worthy of any bog standard hallucination.

Anonymous Coward says:

Re:

What AI is exposing is how many shortcuts have traditionally been taken in the legal profession. Prior to AI, the shortcuts usually worked, and when they failed, could usually be hidden behind other excuses.

The truth is that while a barrister/lawyer is legally required to validate all citations, they rarely do a full review because it takes too much time.

B-Rex (profile) says:

On that High Court Judge

Dame Victoria Sharp (or Lady Justice Sharp, which is her official judicial title, as she is also a judge in the Court of Appeal) is the President, and therefore head, of the High Court of England and Wales.

Just pointing this out because this isn’t some random High Court judge. She took the case to make a point.

When she speaks, everybody listens.

