Lawyers Who Used ChatGPT In Lieu Of Research Must Pay $5k, Alert Other Judges

from the chatgpt-didn't-write-this-sanctions-order dept

The lawyers who ridiculously did not check the citations ChatGPT gave them, and who, when called on it, doubled down by having ChatGPT make up the matching opinions, have now been sanctioned: they have to pay $5k. That may seem light, but once you dig into all the details, it kinda makes sense.

So, yes, by now you’ve heard about the lawyers who used ChatGPT for research, let it make up totally fake cases to support the (bad) legal argument they were making, and got called on it by the judge, at which point they had ChatGPT make up the fake opinions for those fake cases. The lawyers then had to appear before the judge at a hearing in which their main defense was that they were incredibly stupid, but not malicious. The hearing did not go well, and the judge did not seem happy. After all, the real “error” here was not so much the use of ChatGPT, but the failure to do the absolute most basic work of checking citations to see if they were still good law (which, you know, should have turned up the fact that those cases didn’t even exist).

Still, it’s pretty rare for judges to actually sanction lawyers, and these lawyers noted that their own reputations were already completely trashed by this fuckup. Many people expected a massive slap-down, but that’s rare unless there is a trail of repeated awful behavior.

Thus, it’s not all that surprising that the actual sanctions here, announced Thursday, were somewhat mild. Of course, the other order handed down was the one dismissing the underlying case: they lost. Turns out that when the best you can do is make up cases, because the case law doesn’t actually support the argument you’re making, you’re going to lose.

As for the sanctions, Judge Castel’s opinion was six times longer than the order dismissing the case. Clocking in at 43 pages, it makes plain that the judge had some things he wanted to say. Thankfully, he kicks it off by making clear that the problem here was not using technology. It was bad lawyering:

In researching and drafting court submissions, good lawyers appropriately obtain assistance from junior lawyers, law students, contract lawyers, legal encyclopedias and databases such as Westlaw and LexisNexis. Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance. But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings…. Peter LoDuca, Steven A. Schwartz and the law firm of Levidow, Levidow & Oberman P.C. (the “Levidow Firm”) (collectively, “Respondents”) abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question.

The judge notes that the submission of fake opinions can create real mischief, and even notes (in a footnote) the tidbit we called out in an earlier post: the initial filing in response to the order to show cause listed three fake cases in the “table of authorities,” because however that table was put together, the fake case names were simply recognized as case names and added.

But also, the judge finds the whole thing a complete mess:

Many harms flow from the submission of fake opinions. The opposing party wastes time and money in exposing the deception. The Court’s time is taken from other important endeavors. The client may be deprived of arguments based on authentic judicial precedents. There is potential harm to the reputation of judges and courts whose names are falsely invoked as authors of the bogus opinions and to the reputation of a party attributed with fictional conduct. It promotes cynicism about the legal profession and the American judicial system. And a future litigant may be tempted to defy a judicial ruling by disingenuously claiming doubt about its authenticity.

And even as LoDuca and Schwartz threw themselves on the mercy of the court, the judge notes that if they had just come clean when he first asked about the faked cases, there likely would not have been sanctions. It was the next filing, in which the faked opinions were submitted, that really sealed the deal and showed “bad faith” by the lawyers:

The narrative leading to sanctions against Respondents includes the filing of the March 1, 2023 submission that first cited the fake cases. But if the matter had ended with Respondents coming clean about their actions shortly after they received the defendant’s March 15 brief questioning the existence of the cases, or after they reviewed the Court’s Orders of April 11 and 12 requiring production of the cases, the record now would look quite different. Instead, the individual Respondents doubled down and did not begin to dribble out the truth until May 25, after the Court issued an Order to Show Cause why one of the individual Respondents ought not be sanctioned.

For reasons explained and considering the conduct of each individual Respondent separately, the Court finds bad faith on the part of the individual Respondents based upon acts of conscious avoidance and false and misleading statements to the Court.

Judge Castel notes that Schwartz admits he was unable to find any evidence of the cases after being asked, but still thought it was fine to submit the ChatGPT results. He also calls out LoDuca (as came out at the hearing) for lying about needing an extension, falsely claiming that he was on vacation (when it was actually Schwartz who was on vacation):

Mr. LoDuca’s statement was false and he knew it to be false at the time he made the statement. Under questioning by the Court at the sanctions hearing, Mr. LoDuca admitted that he was not out of the office on vacation. (Tr. 13-14, 19.) Mr. LoDuca testified that “[m]y intent of the letter was because Mr. Schwartz was away, but I was aware of what was in the letter when I signed it. . . . I just attempted to get Mr. Schwartz the additional time he needed because he was out of the office at the time.” (Tr. 44.) The Court finds that Mr. LoDuca made a knowingly false statement to the Court that he was “out of the office on vacation” in a successful effort to induce the Court to grant him an extension of time. (ECF 28.) The lie had the intended effect of concealing Mr. Schwartz’s role in preparing the March 1 Affirmation and the April 25 Affidavit and concealing Mr. LoDuca’s lack of meaningful role in confirming the truth of the statements in his affidavit. This is evidence of the subjective bad faith of Mr. LoDuca.

The court then goes through some of the fake decisions, highlighting the myriad red flags that should have caused skepticism in any lawyer who read them, but which apparently raised zero red flags for Schwartz (or for LoDuca, who admits to not having actually read them). That’s even though their skepticism should already have been high, given that neither opposing counsel nor the judge could find the cases, and Schwartz himself could not find them anywhere other than ChatGPT.

The next element of “bad faith” was that Schwartz again misrepresented himself to the court: his affidavit claimed he used ChatGPT to “supplement” his research, but under questioning Schwartz admitted that it was all of his research.

Mr. Schwartz’s statement in his May 25 affidavit that ChatGPT “supplemented” his research was a misleading attempt to mitigate his actions by creating the false impression that he had done other, meaningful research on the issue and did not rely exclusively on an AI chatbot, when, in truth and in fact, it was the only source of his substantive arguments. These misleading statements support the Court’s finding of subjective bad faith.

(Somewhat amusingly, here Judge Castel has a footnote quoting Lewis Carroll’s Alice’s Adventures in Wonderland).

The next bit in support of bad faith was Schwartz admitting that, after the first order to show cause, he began to have doubts about ChatGPT’s accuracy, even though in the affidavit, and elsewhere, he repeatedly claimed he could not fathom that ChatGPT would make up cases. That’s… contradictory:

These shifting and contradictory explanations, submitted even after the Court raised the possibility of Rule 11 sanctions, undermine the credibility of Mr. Schwartz and support a finding of subjective bad faith.

Oh yeah, also, the court points out multiple times that, even to this day, the two lawyers have not sought to withdraw the original document. Which, uh, yeah, they probably should have done.

In exploring the legal issues, there are a few eyebrow-raising bits, including a discussion of how forging a judge’s signature is a crime. And while the court says what these lawyers did does not quite reach that level, it “raises similar concerns.” Yikes!

In the end, the court notes that the purpose of sanctions is to deter future behavior, and while it’s clear that the two lawyers acted in bad faith, it appeared to mostly be about covering their own asses after making the first mistake, rather than something more nefarious. Thus the sanction may seem somewhat minimal, but the judge concludes that it should serve the purpose of deterrence.

The three parts of the sanction are basically: (1) inform their client about just how stupid his lawyers are and how they’ve been sanctioned, (2) alert all of the very real judges who were falsely named as authors of the fake cases about all of this, and (3) pay $5k (jointly & severally, meaning that a total of $5k must be paid between the two lawyers and their law firm, with each on the hook for the full amount until it’s paid).

Some will suggest this is too lenient, but it seems reasonable given everything else. And while the lawyers could, in theory, appeal, I’m guessing they’ll just pay the $5k and try to move on.
