The Sanctions Hearing For ChatGPT Using Lawyers… Did Not Go Well

from the did-they-get-chatgpt-to-testify? dept

By now you’ve heard of the lawyer who used ChatGPT for his legal research, only to have it make up fake cases. We’ll again remind you that Joshua Browder, the founder of DoNotPay, insisted that this same underlying technology was so sophisticated that he offered $1 million to any lawyer who would let it make arguments in front of the Supreme Court.

Anyway, some stuff has been happening in the underlying case: the judge ordered the two lawyers associated with it, Peter LoDuca and Steven Schwartz of the law firm Levidow, Levidow & Oberman, to explain what the fuck they were thinking.

First off, both lawyers (smartly) lawyered up with lawyers from outside their own firm. Second, on Wednesday, they filed their required response to Judge Kevin Castel’s order to show cause, which was effectively the two lawyers throwing themselves on the mercy of the court. The shortest summary of the 29-page document is “we may be ignorant and stupid, but we’re not malicious — and sanctions should only be imposed on malicious lawyers.” And also: “we’re super duper sorry, and everyone’s mocking us for it, so isn’t that punishment enough?”

However, even within that filing there were some issues. A legal brief normally includes a “Table of Authorities,” which lists citations to every case named in the document. In this case, though, that presented a particular challenge, because (as you’ll recall, though whoever created this document apparently did not) the whole reason we’re in this mess is that Schwartz (under LoDuca’s signature) included citations to made-up cases that ChatGPT had given him. And this response, in explaining what happened, mentions some of those cases, chief among them the entirely fictitious “Varghese v. China Southern Airlines.”

But, because the response includes a discussion of ChatGPT providing that cite (and others), those fake cases are mentioned in the brief… meaning whoever put together the Table of Authorities included the fake cases right alongside the real ones. Really. The highlighted cases here do not exist:

There was at least one other such fake case in the Table of Authorities as well. On Thursday morning, LoDuca and Schwartz filed an amended reply, and while the docket does not mention the reason for the amendment, I note that the three non-existent cases have disappeared from the Table of Authorities.

Also on Thursday, though, was the courtroom hearing about all this, and Inner City Press’s Matthew Russell Lee covered the hearing in detail. It did not go well for either LoDuca or Schwartz.

My larger takeaway from their response to the OSC was that I’m sure that they’re telling the truth that they were largely ignorant and stupid, rather than malicious, but it’s still horrifically bad lawyering. You can blame “generative AI” all you want, but the real mistake here, for which there is no excuse, was their failure to actually check on the cases they referenced.

There’s a process lawyers are expected to go through whenever they cite cases in a filing, called “Shepardizing” the citations, which is the process of checking whether the cases are still good citations: i.e., have there been more recent rulings that overrule or reverse the original rulings, or which make the older rulings no longer good law? That process alone should reveal if a case, say, doesn’t even exist. But these lawyers did not do it. And that’s 100% on them, not on ChatGPT or their lack of understanding of how ChatGPT works.

The lawyers’ excuse — that they used a tool called FastCase, but due to a billing snafu only had access to state court cases, not federal — is not particularly convincing. Because there are lots of other ways to find federal cases… including Google. And, on top of that, once they were notified that opposing counsel (and, it appeared, the judge) could not find the cited cases, pretty much anyone would search a little deeper than just going right back to ChatGPT.

And… more or less that’s what the judge was interested in discussing. First, the judge went after LoDuca, whose signature was on the docs, but who has now admitted that he was just signing everything that Schwartz put in front of him without really reviewing it (this was because LoDuca is admitted to practice in the federal district court, while Schwartz is not).

Already, this is questionable enough, in that LoDuca was signing stuff without really checking it over, but also the judge got him to admit that he had lied to the court regarding a vacation when asking for an extension of time (Schwartz was the one who was actually going on vacation). From Inner City Press’s transcription:

Judge Castel: Do you recall writing to me you were going on vacation? And the Court giving you until April 25?

LoDuca: Yes.

Judge Castel: Was it true you were going on vacation?

LoDuca: No, judge.

Lying to a judge is super bad. They don’t like that.

The judge also called out the many warning signs as to why LoDuca and Schwartz should have realized how things went wrong, and didn’t seem to be willing to just accept their “we were dumb, but not malicious” reasoning as acceptable.

But the real mess started when Schwartz was put under oath. The judge quizzed him about the tools he uses for research, and got him to admit that he has access to LexisNexis at the local bar association, but has never signed on to use it. He also admitted that when he used ChatGPT he wasn’t just asking it for legal research; he was asking it to effectively make his argument for him.

Judge: Did you prepare the March 1 memo?

Schwartz: Yes. I used Fast Case. But it did not have Federal cases that I needed to find. I tried Google. I had heard of Chat GPT…

Judge Castel: Alright – what did it produce for you?

Schwartz: I asked it questions

Judge Castel: Did you ask Chat GPT what the law was, or only for a case to support you? It wrote a case for you. Do you cite cases without reading them?

Schwartz: No.

Judge Castel: What caused your departure here?

Schwartz: I thought Chat GPT was a search engine

Judge Castel: Did you look for the Varghese case?

Schwartz: Yes. I couldn’t find it.

Judge Castel: And yet you cited it in your filing.

Schwartz: I had no idea Chat GPT made up cases. I was operating under a misperception.

I mean, all of that is problematic, but it’s basically the judge getting Schwartz on record that he didn’t do the most basic work that he should have been doing. So the judge pressed a little deeper, suggesting that any reasonable lawyer would do more to verify the cases:

Judge Castel: Mr. Schwartz, I think you are selling yourself short. You say your verify cases.

Schwartz: I, I, I thought there were cases that could not be found on Google.

Judge Castel: Six cases, none found on Google. This non existent case Varghese, the excerpt you had was inconsistent, even on the first page. Can we agree that’s legal jibberish?

Schwartz: I see that now. I just thought it was excerpts

Judge Castel: When Avianca said you cited non existent cases?

Schwartz: They said they couldn’t find them. I figured I’d go back – but I continued to be duped by Chat GPT. I wanted to be transparent to the court. I went to Chat GPT –

Judge: We’re not up to that.

And then things get really stupid. As the judge is exploring how a lawyer could get things this wrong, there is this ridiculously cringeworthy exchange:

Judge Castel: Avianca put your cases in quotations… You know what F.3d means, right?

Schwartz: Federal district, third department?

Judge Castel: Have you heard of the Federal Reporter?

Schwartz: Yes.

Judge Castel: That’s a book, right?

Schwartz: Correct.

Judge Castel: So how could you say you thought they were unpublished?

Schwartz: My unfamiliarity with Federal cases. The cite [site?] looked legitimate. I looked up the judge.

Judge Castel: Their reply was only 5 pages. What was your reaction?

Schwartz: The same

Okay, so Schwartz doesn’t practice in federal court, but, uh, it’s shocking that he appears unfamiliar with the Federal Reporter, or with the fact that “F.3d” means the third series of the Federal Reporter (also meaning that the case, if it existed, would in theory have been published in… the third series of the Federal Reporter; it is not the “3rd department of the federal district”).

I’m trying to think of a more common analogy to explain how wrong this is, and it’s kind of like… asking a web developer what HTML stands for, and having them say it’s Hidden Treasure Mapping Logic.

Just saying: it’s bad.

The judge, who isn’t missing much, also called out that Schwartz’s affidavit used different fonts, and wondered how it was prepared such that the fonts changed throughout the document.

After the lawyers, again, begged for mercy, and Avianca reminded the court they just want the case dismissed, the judge closed the hearing and will issue a decision sometime soon.

One other side note: on Wednesday, someone also tried to file an amicus brief, trying to argue that the judge should be careful if he issues sanctions not to do so in a way that cuts off actual beneficial uses of legal AI tech. It’s a reasonable enough request, but district courts tend to hate amicus briefs, and in this case the problem with the lawyers was not their use of AI, but their failure to do basic lawyering around checking their cites.

Companies: avianca, levidow


Comments

Anonmylous says:

Sheesh

Is there any way I can file something with this judge? It’s just a picture of every single screen on the ChatGPT website that allows you to interact with it that literally says:

“May occasionally generate incorrect information. May occasionally produce harmful instructions or biased content. Limited knowledge of world and events after 2021.”

Just so he knows these clowns are still lying to him.

bhull242 (profile) says:

Re:

It’s actually even worse than that. Schwartz actually sent screenshots of his conversation with ChatGPT to the judge, and it literally has a similar warning, only tailored specifically for legal contexts. Stuff like “cannot give legal advice” and “not connected to legal databases”. You know, basically the AI straight up saying, “Do not use me to generate legal documents.”

Anonymous Coward says:

Re: Re:

The problem is that no matter how many warnings ChatGPT may post, there’s a LOT of tech influencers who are heavily invested in this and really want to hype the tech, so there’s a lot of misinformation surrounding what AI can and can’t do, leading the general public to get a very inflated value of the text it generates. No matter how many warnings the software may give, they’re going in using it with vastly inflated expectations and may disregard those warnings or simply not fully understand the software’s limitations.

Frankly I’m not sure this kind of tech should be available in the public sphere because, as we’ve seen, it’s been consistently misused by the general public for writing essays for classes, for legal filings (lol), and so on. But Pandora’s Box has been opened, god help the lot of us.

Misanthrope_23 says:

Mismatched Fonts

Any US teacher can suggest a reason why there were different fonts in the submitted document: they simply copied the text of the source verbatim and pasted it directly into the document.
Alas, like many terrible students, they forgot to proofread it, or perhaps didn’t even look at the document before submitting it. Or maybe they didn’t think mismatched fonts mattered.
