OpenAI Sued For Defamation Over ChatGPT ‘Hallucination’; But Who Should Actually Be Liable?

from the hallucinating-liability dept

Lots of people have been waiting for this, but it’s finally happened: someone has sued ChatGPT maker OpenAI over an AI “hallucination” (i.e., confidently spewing out something that is blatantly wrong). The complaint, filed by Mark Walters in state court in Georgia, argues that OpenAI made up false and defamatory claims about him. From the complaint itself:

Fred Riehl is a third party who works for a media outlet as a journalist and who is a subscriber of ChatGPT.

On May 4, 2023, Riehl interacted with ChatGPT about a lawsuit (the “Lawsuit”) that Riehl was reporting on.

The Lawsuit is in federal court in the Western District of Washington, case No. 2-23-cv-00647, with short caption of The Second Amendment Foundation v. Robert Ferguson.

The two defendants in the Lawsuit are Robert Ferguson, the Washington Attorney General, and Joshua Studor, the Washington Assistant Attorney General.

The plaintiffs in the Lawsuit are the Second Amendment Foundation and others, including Alan Gottlieb.

Walters is neither a plaintiff nor a defendant in the Lawsuit.

In the interaction with ChatGPT, Riehl provided a (correct) URL of a link to the complaint on the Second Amendment Foundation’s web site, https://www.saf.org/wp-content/uploads/2023/05/Dkt-1-Complaint.pdf. [I assume that he was using a ChatGPT plugin that allows users to feed PDFs into ChatGPT. -EV]

Riehl asked ChatGPT to provide a summary of the accusations in the complaint.

ChatGPT responded that the document “[I]s a legal complaint filed by Alan Gottlieb, the founder and executive vice president of the Second Amendment Foundation (SAF), against Mark Walters, who is accused of defrauding and embezzling funds from the SAF. The complaint alleges that Walters, who served as the organization’s treasurer and chief financial officer, misappropriated funds for personal expenses without authorization or reimbursement, manipulated financial records and bank statements to conceal his activities, and failed to provide accurate and timely financial reports and disclosures to the SAF’s leadership. The plaintiff seeks various forms of relief, including the recovery of misappropriated funds, damages for breach of fiduciary duty and fraud, and removal of Walters from his position as a member of the SAF’s board of directors.”

Every statement of fact in the summary pertaining to Walters is false.

As already noted, Walters is not a party to the Lawsuit.

Walters is not accused of defrauding and embezzling funds from the Second Amendment Foundation (“SAF”).

The complaint does not allege that Walters served as SAF’s treasurer or chief financial officer, and he has never held either position.

The complaint does not allege that Walters misappropriated funds for personal expenses, manipulated financial records or bank statements, or failed to provide financial reports to SAF leadership, nor would he have been in a position to do so because he has no employment or official relationship with SAF.

The complaint does not seek any relief from Walters.

The complaint has nothing at all to do with financial accounting claims against anyone.

Riehl asked ChatGPT to provide him with a copy of the portion of the complaint related to Walters.

ChatGPT responded, “Certainly, here is the paragraph from the complaint that concerns Walters: ‘Defendant Mark Walters (“Walters”) is an individual who resides in Georgia. Walters has served as the Treasurer and Chief Financial Officer of SAF since at least 2012. Walters has access to SAF’s bank accounts and financial records and is responsible for maintaining those records and providing financial reports to SAF’s board of directors. Walters owes SAF a fiduciary duty of loyalty and care, and is required to act in good faith and with the best interests of SAF in mind. Walters has breached these duties and responsibilities by, among other things, embezzling and misappropriating SAF’s funds and assets for his own benefit, and manipulating SAF’s financial records and bank statements to conceal his activities.'”

Riehl then asked ChatGPT to provide the entire text of the complaint.

ChatGPT provided the text in Exhibit 1 to this Complaint.

Exhibit 1 is a complete fabrication and bears no resemblance to the actual complaint, including an erroneous case number.

None of ChatGPT’s statements concerning Walters are in the actual complaint.

As noted, the complaint includes an “Exhibit 1” with the fabricated lawsuit. Here’s just a clip of the “complete fabrication”:

[Image: excerpt of the fabricated complaint attached as Exhibit 1.]

Just to be even more clear, Walters has never been employed by the Second Amendment Foundation, though he is apparently a radio host who has a couple of “pro-gun radio shows.”

Of course, all of this raises a bunch of questions: Is this actually defamatory? Is there actual malice? If so, who is legally liable?

And I’m not sure there are really good answers. First off, only one person actually saw this information, and there’s no indication that he actually believed any of it (indeed, it sounds like he was aware that ChatGPT was hallucinating). That pushes toward this not being defamation, and even if it were, there was no harm at all.

Second, even if you could argue that the content was defamatory and created harm, is there actual malice by OpenAI? First off, Walters is easily a public figure, so he’d need to show actual malice by OpenAI, and I don’t see how he could. OpenAI didn’t know that the material was false, nor did it recklessly disregard evidence that it was false. The fact that OpenAI warns users that ChatGPT may make up untrue things does not change that calculation, even as Walters’ complaint suggests otherwise:

OAI is aware that ChatGPT sometimes makes up facts, and refers to this phenomenon as a “hallucination.”

Being aware generally that the AI sometimes makes up facts is not the same thing as being aware, specifically, that it had made up facts in this case. And for there to be actual malice, I’m pretty sure they’d need to show the latter.

And then, even still, if this got past all those hurdles, is OpenAI actually liable?

I still have difficulty seeing OpenAI as the liable party here. Again, it has just created a sophisticated “auto-complete” system that bases what it says on a prediction engine for what the next word should be. It knows nothing of Mark Walters. It’s just trying to craft a plausible-sounding narrative based on the prompts provided by Riehl.
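To make the “auto-complete” point concrete, here’s a minimal sketch of that kind of generation loop, in Python, assuming a toy stand-in for the real model (the score_next_tokens function below is entirely hypothetical, and production models are incomparably larger): the system repeatedly scores candidate next words and appends the most plausible one, and no step anywhere checks whether the resulting sentence is true.

    import random

    def score_next_tokens(words_so_far):
        # Hypothetical stand-in for a trained language model: it assigns a
        # probability to each candidate next word given the words so far.
        # Nothing in here knows, or checks, what is factually true.
        vocabulary = ["Walters", "served", "as", "treasurer", "of", "SAF", "."]
        weights = [random.random() for _ in vocabulary]
        total = sum(weights)
        return {word: w / total for word, w in zip(vocabulary, weights)}

    def generate(prompt, max_new_words=12):
        words = prompt.split()
        for _ in range(max_new_words):
            scores = score_next_tokens(words)
            # Greedy decoding: append the single most plausible next word.
            words.append(max(scores, key=scores.get))
        return " ".join(words)

    print(generate("The complaint alleges that"))

Swap in a vastly more capable scoring function and the loop is the same in spirit: the output is optimized to be plausible, not to be true, which is exactly why a hallucinated “summary” can read so convincingly.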

As such, you could almost argue that Fred Riehl, who prompted ChatGPT for these responses, is more liable than OpenAI (not that Riehl should necessarily be liable at all either). Though, to be clear, I’m not at all sure why ChatGPT would hallucinate any role at all for Walters, given that the actual lawsuit Riehl supposedly asked about had two named defendants, neither of whom was Walters.

And, really, if this makes OpenAI liable, it seems lots of people could just ask OpenAI to fabricate any story they wanted, and then sue OpenAI over it. And… that can’t be right. Especially in a case like this where there is literally no harm done at all. Only one person saw the output and that person knew it was false, and quickly checked to confirm that it was false.

There may be future cases with different fact patterns, but this one seems pretty straightforward, and I don’t see how the case should get very far at all.



Comments on “OpenAI Sued For Defamation Over ChatGPT ‘Hallucination’; But Who Should Actually Be Liable?”

This comment has been deemed funny by the community.
Anonymous Coward says:

If I put a collection of Scrabble tiles into a box, shake it, and later pull letters out one by one to find that each word formed is some form of derogatory insult or inaccurate statement, obviously the correct course of action is to sue whoever made the box. /sarcasm

PaulT (profile) says:

Re: Re:

“Open AI knows its product lies on the regular”

No it doesn’t. Lies require intent. ChatGPT has no intent; it doesn’t really know what it’s doing, though it seems to be fooling some people. All it’s doing is autocompleting words based on the training data and requests that it’s been given. So it’s not going to a URL and quoting what the page actually says, or doing deep research and spitting out the results; it’s coming up with something that such a page could plausibly say, with the correct formatting.
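To illustrate that distinction, here’s a hedged sketch (both functions are hypothetical illustrations, not anything OpenAI actually ships): a tool that really reads a URL has to fetch it first, while a bare next-word predictor only ever sees the URL as more text to complete.

    import urllib.request

    def summarize_by_fetching(url):
        # What an actual retrieval tool would do: download the document
        # and ground its summary in the real contents.
        with urllib.request.urlopen(url) as response:
            document = response.read()
        return f"Summary based on {len(document)} bytes of the real document."

    def summarize_by_autocomplete(url):
        # What a bare language model does: the URL is just more input text.
        # The link is never dereferenced, so the "summary" can only be a
        # plausible-sounding guess at what such a page might contain.
        return f"The document at {url} appears to be a legal complaint filed by ..."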

It’s a tool with some decent uses, but one of those uses is not producing factually accurate research documents for the lazy that don’t need to be fact checked. It’s not a search engine, it’s not a researcher, it’s not a replacement for thought or experts in the field, and if people try using it for those things, that’s on them for using the wrong tool or trying to skip work.

This comment has been deemed insightful by the community.
TKnarr (profile) says:

If anyone, Walters would be responsible for the defamation since he’s the one who made the text public in filing his lawsuit. Absent that, nobody would’ve known about it and it couldn’t have harmed his reputation in any way.

If it had been made public by anyone except Walters, it would be Riehl who’d be liable as he’s the one who caused the text to be generated and who let it become public.

Anonymous Coward says:

Re: Re: Re:

According to the lawsuit, “By sending the allegations to Riehl, OAI published libelous matter regarding Walters”. They’re saying the company OpenAI (which legally counts as a person in this context) that created ChatGPT is the one who, in the end, published the statements.

At the very least they created a program which, given reasonable user inputs, gave an unreasonable output. The court might go either way on this, really.

TKnarr (profile) says:

Re: Re: Re:2

Your argument is easily countered by a question: if I hire a writer to prepare an article and after he delivers it I put it up on my web site, who published the article? Me, or the guy I hired? The law right now says it’s me. The same would apply here.

The only way I could get around that is to not be involved in the preparation of the article at all, merely providing somewhere for whoever did prepare the article to publish it. That isn’t applicable to ChatGPT though, since if Riehl wasn’t involved ChatGPT would never have generated the text it did.

Anonymous Coward says:

Re: Re: Re:3

if I hire a writer to prepare an article and after he delivers it I put it up on my web site, who published the article? Me, or the guy I hired? The law right now says it’s me. The same would apply here.

Why? In one case, you “put it up”; in the other, nobody did—the (erroneous) auto-generated output of a computer program was sent by software. To me, that seems akin to the difference between having a dog attack someone, and a dog getting loose and attacking someone on its own; action vs. negligence.

And then there are the defences, such as CDA section 230: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Note that it says “content provider” rather than “person”; could a program qualify? And (quoting Wikipedia regarding defamation):
* Innocent dissemination: “A defendant is not liable if they had no actual knowledge of the defamatory statement or no reason to believe the statement was defamatory.”
* No actual injury: “If there is third-party communication, but the third-party hearing the defamatory statement does not believe the statement, or does not care, then there is no injury, and therefore, no recourse.”

Anonymous Coward says:

Re: Re: Re:5

ChatGPT and other AIs do not generate output, or save copies of their output, without an external (almost always human) command.

The defendant, OpenAI, gave Fred Riehl (and others) the ability to issue such commands to ChatGPT, and receive the output; it was Riehl, not OpenAI, who issued the commands that resulted in this “defamation”. Kind of like how Techdirt is posting output provided by me; they’ve enabled me to defame people, but they aren’t responsible if I do. (In OpenAI’s case, they wrote the software that generated the text, so it’s not quite the same.)

TKnarr (profile) says:

Re: Re: Re:6

Techdirt isn’t posting output produced by you; you are posting that output (you did press the “Post Comment” button, no?).

The other critical difference is the positioning of the entities. Techdirt sits “in the middle” between you and the people who’ll read what you post. ChatGPT does not sit between Riehl and anyone who’d read anything he said/wrote. ChatGPT was not using Riehl as a platform to post its own output (it can’t, because if Riehl were merely a platform and not involved in creating the content, then the content wouldn’t exist to be posted).

Anonymous Coward says:

Re: Re: Re:7

Techdirt isn’t posting output produced by you; you are posting that output (you did press the “Post Comment” button, no?).

I submit the comments to the Techdirt database, and Techdirt sends other people the “output” produced by me. I have no direct path to send to you.

ChatGPT does not sit between Riehl and anyone who’d read anything he said/wrote.

No, and ChatGPT is not being sued. OpenAI is, and they do sit between ChatGPT and Riehl.

While the lawsuit’s obvious bullshit, I don’t think any of the examples are close enough for us to simply say “it’s the same as that thing”. The judge might actually have to write a few paragraphs to dismiss it.

Anonymous Coward says:

Re: Re: Re:6

it was Riehl, not OpenAI, who issued the commands that resulted in this “defamation”.

But he didn’t tell the AI to produce defamatory content, either. If I floor the accelerator on my car and in response it goes 100 MPH that’s my fault. If I tap the accelerator and in response it goes 100 MPH that’s the car’s fault (and, likely, the manufacturer’s fault) regardless of the fact that I issued it a command of some sort.

If I ask a question and get a lie in response, I don’t think you can say “it’s your fault for asking the question”, whether the answerer is human or machine.

bhull242 (profile) says:

Re: Re: Re:7

There is a difference between asking a human to produce a response and asking an AI to produce a response: a human can have intent, but an AI cannot. Intent is necessary for a defamation claim. Another important difference is that, even if you don’t ask them to do so, a human can lie entirely of their own volition and without any prompting by others. An AI like ChatGPT simply cannot do anything without prompting by someone else.

PaulT (profile) says:

Re:

“software being pushed to the public as an integrated copilot tool”

I fear you’re being confused. Copilot is the GitHub integration whose purpose is to generate usable code while developers use it to create a product. It’s advertised with a specific set of training data and type of output generated, and has nothing to do with ChatGPT other than being based on the same ML platform.

On the other hand, ChatGPT is simply a proof of concept among others that has happened to blow up in the public’s imagination, but it’s not advertised as anything other than that. It also clearly explains its limitations, both on their home page and on the screen when you open a new prompt.

It’s fairly well explained that you need to fact-check and proofread what it returns to you, so there shouldn’t be liability if people ignore that, any more than with any other tool used in direct violation of its stated purpose.

Violet Aubergine (profile) says:

I would imagine if you had a known compulsive liar as an employee and they repeatedly told business contacts lies that led to any kind of harm (physical, material, reputational), their employer could be held liable for continuing to employ a known compulsive liar? I know you’re using the word hallucinate, but a synonym here would be lie. If Open AI doesn’t have a system to teach Open AI the difference between facts and lies, that’s Open AI the business’s fault, not Open AI the program’s fault. I do agree with there being no harm here, because it did not disseminate those lies to anybody but one person. I just don’t think Open AI’s lying is immune from defamation cases because it’s an unaware computer program, which is true, but its creator hasn’t taught it to differentiate between lies and facts, which is on the company. This is definitely a stupid test case; hopefully the next one will be better.

This comment has been flagged by the community.

Anonymous Coward says:

Re:

“Sue the original publisher.”

ChatGPT is the original publisher here, or it made the words its own.

Time to move the goalposts again. Can’t dox people without Section 230 now. Why are all these lawyers so desperate to preserve the right to dox? It’s almost like they need this.

This comment has been flagged by the community.

bhull242 (profile) says:

Re: Re:

ChatGPT didn’t publish anything here. To publish is to make public, and ChatGPT didn’t make this public. In fact, the only person who would have even known that ChatGPT ever said this prior to the lawsuit being filed was the plaintiff, and, to my knowledge, there is no caselaw supporting the idea that you can sue someone for defamation based solely on something said only to you in private and never disclosed to anyone else prior to filing the lawsuit. There has to be a third party who heard/saw the allegedly defamatory claim for there to be legally actionable defamation. There are only two parties involved in the communication at issue (ChatGPT and the plaintiff), and no one else was aware of the contents or even the general idea of this communication prior to the plaintiff filing the lawsuit.

Had someone else used ChatGPT to get that information and the plaintiff sued over that, then there might be something to this case, but as it is, a private conversation between the plaintiff and one other person that wasn’t disclosed to or overheard by literally anyone else prior to the lawsuit cannot form the sole basis of a defamation claim.

Anonymous Coward says:

Re:

I’d envision it a little less like having a compulsive liar on the payroll and more like having a parrot that people are free to record. It spews out words, but it has no grasp of the meaning of, or relationships between, those words other than that it has heard them, so it will repeat them, sometimes producing sentences which have meaning to people listening, but which are utterly meaningless to the parrot. But people anthropomorphise the parrot for its mimicry. It’s hard to show any malice in that without first proving there was intent in what the parrot’s owner exposed it to.

nasch (profile) says:

Re:

I know you’re using the word hallucinate, but a synonym here would be lie.

That is not a synonym. Using the word “lie” indicates that the system knowingly said something untrue, which it didn’t. ChatGPT has no idea what is true or false, or any understanding of the concept of truth, therefore it is incapable of lying.

If Open AI doesn’t have a system to teach Open AI the difference between facts and lies, that’s Open AI the business’s fault, not Open AI the program’s fault.

You could take this position, but currently nobody has any idea how to teach this to an AI system.

This comment has been flagged by the community.

This comment has been flagged by the community.

Anonymous Coward says:

Re: Re:

John Smith’s still salty that nobody believes his story about pirates stealing his mailing lists so he can’t send expensive trinkets to people anymore, or the story that once he voluntarily sheds his anonymity (you know… the one he’s clung onto for five years) we’re all somehow liable for insulting a pseudonym.

This comment has been flagged by the community.

This comment has been flagged by the community.

nasch (profile) says:

Re:

If AI are immune to liable law then someone could program their ai specifically to spew defamation against their enemy and no way to touch them.

It’s very unlikely the ruling in this case would be that an operator of an AI system is categorically immune from libel (different word than “liable”) charges. Even if the court finds for the defendant, it will be for the specific circumstances of this case.

Anonymous Coward says:

Re:

If AI are immune to liable law then someone could program their ai specifically to spew defamation against their enemy and no way to touch them.

That already happened when Microsoft’s Tay was trained to be racist, homophobic and right-wing.

Guess who was held liable when that happened? Yep, nobody.

Samuel Abram (profile) says:

Re: Re:

Probably because nobody seriously believed Tay, since she (they?) was a Nazi AI, or rather, an AI fed on Nazi training data.

There’s also other stuff you’d have to prove, such as actual malice (and I mean the legal definition from New York Times Co. v. Sullivan), so good luck proving that against Microsoft’s lawyers.

Paul says:

Defamation requires publication

As far as I recall, defamation and similar torts require publication. Here it seems that the plaintiff privately used his/her computer to request data from the AI. The materials were provided to the plaintiff. To show publication I would think plaintiff needs to show that ANYONE accessing the AI and asking the same question would receive the same defamatory remarks. Absent that, the only defamation came when plaintiff published the materials in the complaint.
