Will Clients Now Just Blame All Bad Lawyering On ChatGPT?

from the the-ai-made-me-do-it dept

We all remember the infamous case from earlier this year, in which lawyer Steven Schwartz had to admit he had used ChatGPT to help construct a brief (which was then signed by another lawyer, a partner at his firm), and that neither lawyer bothered to check whether the citations were made up (they were). Schwartz had to pay $5k in sanctions, and readily admits that the real punishment is that his reputation is now “the lawyer who used ChatGPT.”

That story went so viral that you would hope most lawyers heard about it and understood not to do that kind of thing. However, there was recently another story suggesting a similar situation. LAist has the details of an infamous eviction lawyer who was sanctioned for fabricated cases in a filing, citations that many people think were probably AI generated.

At first glance, the filing from April looks credible. It’s properly formatted. Block’s signature at the bottom lends a stamp of authority. Case citations are provided to bolster Block’s argument for why the tenant should be evicted.

But when L.A. Superior Court Judge Ian Fusselman took a closer look, he spotted a major problem. Two of the cases cited in the brief were not real. Others had nothing to do with eviction law, the judge said.

“This was an entire body of law that was fabricated,” Fusselman said during the sanction hearing. “It’s difficult to understand how that happened.”

Apparently a bunch of people believe that it was another Schwartz situation, with ChatGPT making up cases.

But an even more “lawyer uses GPT” story is shaping up in a different case. Former Fugees star Pras Michel was convicted on 10 felony counts earlier this year, but he has made a motion for a new trial, arguing, in part, that his previous lawyer used AI to write the closing argument in the case. The motion raises many claimed problems with the trial, and the ineffective-assistance-of-counsel arguments cover a bunch of ground as well.

But the AI claims… well… stand out. The argument is not just that Michel’s lawyers used AI to write the closing arguments, but that they were investors in the company that made the AI, which is why they used it:

It is now apparent that the reason Kenner decided to experiment at Michel’s trial with a never-before-used AI program to write the closing argument is because he and Israely appear to have had an undisclosed financial interest in the program, and they wanted to use Michel’s trial as a test case to promote the program and their financial interests. Indeed, the press release the AI company issued after the trial that quotes Kenner praising the AI program states that the company launched the program “with technology partner CaseFile Connect.” Zeidenberg Decl. ¶ 7 & Ex. C. The CaseFile Connect website does not identify its owners, but it lists its principal office address as 16633 Ventura Blvd., Suite 735, which the California Bar website indicates is the office address for Kenner’s law firm. Id., Ex. F. Open sources further indicate that the third office address CaseFile Connect’s website provides is associated with Kenner’s co-counsel and friend, Israely. Id., Ex. G. The reason they used the experimental program during Michel’s trial and then boasted about it in a press release is now clear: They wanted to promote the AI program because they appear to have had a financial interest in it. They did this even though this experiment adversely affect Michel’s defense at trial, creating an extraordinary conflict of interest.

To be honest, there’s not much more in the motion regarding the AI, how it was actually used, or whether it made bad arguments. There is a lot more detail regarding the judge telling Michel that it was possible his lawyer had conflict-of-interest issues, and how those needed to be resolved.

But this did make me wonder if we’re going to see more claims like this going forward, whenever someone loses a case and wants to argue ineffective assistance of counsel. Will they just throw in some random “and he used AI!” to make it sound more compelling?



Comments on “Will Clients Now Just Blame All Bad Lawyering On ChatGPT?”

Anonymous Coward says:

…many licensed lawyers have been lax or incompetent since long before AI or computers showed up.

Most legal work is routine but unnecessarily and deliberately made complex, and it screams for the use of modern data processing.
AI will fill that function well, but it is still in its very early development.

80% of Lawyers will eventually be replaced by AI; consumers will rejoice.

Of course, the current legal profession cartel will fight such AI progress tooth & nail every step of that path.

Anonymous Coward says:

Re:

“80% of Lawyers will eventually be replaced by AI; consumers will rejoice.”

Doubtful on both counts.

The only thing AI will replace is the people who write the grocery store tabloid articles about how the Incredible Frog Boy is on the loose again.

Consumers would not rejoice if lawyers were replaced. You do realize that lawyers would still cost a shitload, right?

“AI progress”

LOL

Benjamin Jay Barber says:

Re: Re:

GPT-4 gets better answers than 90% of lawyers on the bar exam, but the real problem is that the law itself is not amenable to computation, because 1) the decision-making process is often political rather than logical, and 2) there is no clear standard, just a bunch of opinions that often disagree and have to be knitted into an ad hoc standard.

Even Justice Roberts joked yesterday about how it is not clear what the test for strict scrutiny is in First Amendment cases pertaining to the application of trademark law to the facts of the “Trump Too Small” satirical t-shirts.

Lastly, the current AI systems have reduced the dimensionality of the problem from logical inference to next-token prediction, which just so happens to appear like logical inference. You could hypothetically do inference directly on all the text, but doing so in practice would require an exponential growth of computation as the size of the corpus grows.

However, it just so happens that people do the same thing to save effort: for the vast majority of people, logical inference is hard enough that they take shortcuts which amount to rationalization, and those rationalizations and their contradictions make their way into case law.
