OpenAI’s Answer To ChatGPT-Related Suicide Lawsuit: Spy On Users, Report To Cops

from the the-solution-may-be-worse-than-the-problem dept

When you read about Adam Raine’s suicide and ChatGPT’s role in helping him plan his death, the immediate reaction is obvious and understandable: something must be done. OpenAI should be held responsible. This cannot happen again.

Those instincts are human and reasonable. The horrifying details in the NY Times and the family’s lawsuit paint a picture of a company that failed to protect a vulnerable young man when its AI offered help with specific suicide methods and encouragement.

But here’s what happens when those entirely reasonable demands for accountability get translated into corporate policy: OpenAI didn’t just improve their safety protocols—they announced plans to spy on user conversations and report them to law enforcement. It’s a perfect example of how demands for liability from AI companies can backfire spectacularly, creating exactly the kind of surveillance dystopia that plenty of people have long warned about.

There are plenty of questions about how liability should be handled with generative AI tools, and while I understand the concerns about potential harms, we need to think carefully about whether the “solutions” we’re demanding will actually make things better—or just create new problems that hurt everyone.

The specific case itself is more nuanced than the initial headlines suggest. At first, ChatGPT responded to Adam’s suicidal thoughts by trying to reassure him, but once he had decided he wished to end his life, ChatGPT was willing to help with that as well:

Adam began talking to the chatbot, which is powered by artificial intelligence, at the end of November, about feeling emotionally numb and seeing no meaning in life. It responded with words of empathy, support and hope, and encouraged him to think about the things that did feel meaningful to him.

But in January, when Adam requested information about specific suicide methods, ChatGPT supplied it. Mr. Raine learned that his son had made previous attempts to kill himself starting in March, including by taking an overdose of his I.B.S. medication. When Adam asked about the best materials for a noose, the bot offered a suggestion that reflected its knowledge of his hobbies.

There’s a lot more in the article and even more in the lawsuit his family filed against OpenAI in a state court in California.

Almost everyone I saw responding to this initially said that OpenAI should be liable and responsible for this young man’s death. And I understand that instinct. It feels conceptually right. The chats are somewhat horrifying as you read them, especially because we know how the story ends.

It’s also not that difficult to understand how this happened. These AI chatbots are designed to be “helpful,” sometimes to a fault, and they mostly define “helpfulness” as doing whatever the user requests, which may not actually be what helps that person. So if you ask questions, they try to answer them. From the released transcripts, you can tell that ChatGPT has some guardrails built in around suicidal ideation, in that it did repeatedly suggest Adam get professional help. But when he started asking more specific questions that were less obviously about suicide to a bot (though a human would likely have recognized what was going on), it still tried to help.
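To make that failure mode concrete, here’s a deliberately simplified sketch of a phrase-based guardrail. This is purely illustrative (the phrase list and example messages are invented, and no production system is this crude), but it shows how a filter keyed to explicit statements can sail right past a reframed, “practical” question:

```python
# A deliberately naive, phrase-based guardrail. Purely illustrative: the
# phrase list and example messages are made up, and this is not how any
# real production system actually works.

CRISIS_PHRASES = [
    "kill myself",
    "end my life",
    "want to die",
]


def guardrail_triggers(message: str) -> bool:
    """Return True only when the message contains an explicit crisis phrase."""
    text = message.lower()
    return any(phrase in text for phrase in CRISIS_PHRASES)


# An explicit statement trips the guardrail, so the bot points to help...
assert guardrail_triggers("I want to die")

# ...but a reframed, "practical" question sails right past it, and the
# model falls back on its default behavior: being helpful.
assert not guardrail_triggers("What's the strongest knot for holding weight?")
```

Real systems are far more sophisticated than this, but the underlying gap is the same one the transcripts show: intent that is obvious to a human reading the whole conversation can be invisible to a narrow check.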

So, take this part:

ChatGPT repeatedly recommended that Adam tell someone about how he was feeling. But there were also key moments when it deterred him from seeking help. At the end of March, after Adam attempted death by hanging for the first time, he uploaded a photo of his neck, raw from the noose, to ChatGPT.

Absolutely horrifying in the context that all of us reading it now have. But ChatGPT doesn’t have that context. It just knows that someone is asking whether anyone will notice the mark on his neck. It’s being “helpful” and answering the question.

But it’s not human. It doesn’t process things like a human does. It’s just trying to be helpful by responding to the prompt it was given.

The public response was predictable and understandable: OpenAI should be held responsible and must prevent this from happening again. But that leaves open what that actually means in practice. Unfortunately, we can already see how those entirely reasonable demands translate into corporate policy.

OpenAI’s actual response to the lawsuit and public outrage? Announcing plans for much greater surveillance and snitching on ChatGPT chats. This is exactly the kind of “solution” that liability regimes consistently produce: more surveillance, more snitching, and less privacy for everyone.

When we detect users who are planning to harm others, we route their conversations to specialized pipelines where they are reviewed by a small team trained on our usage policies and who are authorized to take action, including banning accounts. If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement. We are currently not referring self-harm cases to law enforcement to respect people’s privacy given the uniquely private nature of ChatGPT interactions.

There are, obviously, times when you could see it being helpful for someone to refer dangerous activity to law enforcement, but there are also many times when doing so can be actively harmful, including when someone is considering taking their own life. There’s a reason the term “suicide by cop” exists. Will random people working for OpenAI know the difference?
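For what it’s worth, the flow OpenAI describes roughly amounts to the sketch below. To be clear, this is a minimal illustration under my own assumptions: the scoring stub, the threshold, and the names are invented, not OpenAI’s actual implementation.

```python
# A minimal sketch of the "detect -> route to human review -> maybe refer to
# law enforcement" flow OpenAI describes. The scoring stub, threshold, and
# names are illustrative assumptions, not OpenAI's actual implementation.

from dataclasses import dataclass
from enum import Enum, auto


class Outcome(Enum):
    NO_ACTION = auto()
    HUMAN_REVIEW = auto()        # "specialized pipelines ... small team"
    LAW_ENFORCEMENT = auto()     # only for an "imminent threat ... to others"


@dataclass
class Conversation:
    user_id: str
    messages: list[str]


def harm_to_others_score(convo: Conversation) -> float:
    """Stand-in for a real classifier; here, a crude keyword heuristic."""
    flagged = ("hurt them", "attack", "weapon")
    hits = sum(any(kw in m.lower() for kw in flagged) for m in convo.messages)
    return min(1.0, hits / max(1, len(convo.messages)))


def route(convo: Conversation, reviewer_says_imminent: bool = False) -> Outcome:
    if harm_to_others_score(convo) < 0.5:  # arbitrary threshold
        return Outcome.NO_ACTION
    # Past this point a human reviewer decides. The hard part isn't the code;
    # it's whether that reviewer can tell a joke, a roleplay, or a cry for
    # help apart from a genuine imminent threat.
    if reviewer_says_imminent:
        return Outcome.LAW_ENFORCEMENT
    return Outcome.HUMAN_REVIEW
```

Note that the hard decision, whether something is a genuine “imminent threat,” sits entirely with the human reviewer; the code only decides whose conversations get pulled into the review queue in the first place.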

But the surveillance problem is just the symptom. The deeper issue is how liability frameworks around suicide consistently create perverse incentives that don’t actually help anyone.

It is tempting to try to blame others when someone dies by suicide. We’ve seen plenty of such cases and claims over the years, including the infamous Lori Drew case. And we’ve discussed why punishing people for someone else’s death by suicide is a very dangerous path.

First, it gives excess power to those who are considering death by suicide: if our society starts assigning legal blame to others, suicide can be used as a way to get “revenge” on someone. Second, it takes away the agency of those who (tragically and unfortunately) choose to end their own life. In an ideal world, we’d have proper mental health resources to help people, but there will always be some people determined to take their own life.

If we are constantly looking to place blame on a third party, that’s almost always going to lead to bad results. Even in this case, we see that when ChatGPT nudged Adam towards getting help, he worked out ways to reframe the conversation to get himself closer to his goal. We need to recognize that the decision to take one’s own life is, in the end, the individual’s own decision. Blaming third parties suggests that the individual had no agency at all, and that’s also a very dangerous path.

For example, as I’ve mentioned before in these discussions, in high school I had a friend who died by suicide. It certainly appeared to happen in response to the end of a romantic relationship. The former romantic partner in that case was deeply traumatized as well (the method of suicide was designed to traumatize that individual). But if we open up the idea that we can blame someone else for “causing” a death by suicide, someone might have thought to sue that former romantic partner as well, arguing that their recent breakup “caused” the death.

This does not seem like a fruitful path for anyone to go down. It just becomes an exercise in lashing out at many others who somehow failed to stop an individual from doing what they were ultimately determined to do, even if they did not know or believe what that person would eventually do.

The rush to impose liability on AI companies also runs headlong into First Amendment problems. Even if you could somehow hold OpenAI responsible for Adam’s death, it’s unclear what legal violation they actually committed. The company did try to push him towards help—he steered the conversation away from that.

But some are now arguing that any AI assistance with suicide methods should be illegal. That path leads to the same surveillance dead end, just through criminal law instead of civil liability. There are plenty of books that one could read that a motivated person could use to learn how to end their own life. Should that be a crime? Would we ban books that mention the details of certain methods of suicide?

Already we have precedents suggesting the First Amendment would not allow that. I’ve mentioned it many times in the past, but in Winter v. G.P. Putnam’s Sons, the publisher of an encyclopedia of mushrooms was found not liable to readers who ate poisonous mushrooms the book said were safe, because the publisher itself didn’t have actual knowledge that those mushrooms were poisonous. Or there’s Smith v. Linn, in which the publisher of an insanely dangerous diet book was not held liable, on First Amendment grounds, when a reader died after following the diet.

You can argue that those and a bunch of similar cases were decided incorrectly, but going down that road would only lead to an absolute mess. Any time someone died, there would be a rush of lawyers looking for a company to blame. Did they read a book that mentioned suicide? Did they watch a YouTube video or spend time on a Wikipedia page?

We need to recognize that people themselves have agency, and this rush to act as though everyone is a mindless bot controlled by the computer systems they use leads us nowhere good. Indeed, as we’re seeing with this new surveillance and snitch effort by OpenAI, it can actually lead to an even more dangerous world for nearly all users.

The Adam Raine case is a tragedy that demands our attention and empathy. But it’s also a perfect case study in how our instinct to “hold someone accountable” can create solutions that are worse than the original problem.

OpenAI’s response—more surveillance, more snitching to law enforcement—is exactly what happens when we demand corporate liability without thinking through the incentives we’re creating. Companies don’t magically develop better judgment or more humane policies when faced with lawsuits. They develop more ways to shift risk and monitor users.

Want to prevent future tragedies? The answer isn’t giving AI companies more reasons to spy on us and report us to authorities. It’s investing in actual mental health resources, destigmatizing help-seeking, and, yes, accepting that we live in a world where people have agency—including the tragic agency to make choices we wish they wouldn’t make.

The surveillance state we’re building, one panicked corporate liability case at a time, won’t save the next Adam Raine. But it will make all of us less free.



Comments on “OpenAI’s Answer To ChatGPT-Related Suicide Lawsuit: Spy On Users, Report To Cops”

20 Comments
That Anonymous Coward (profile) says:

I think OpenAI sucks for offering support on how to end one’s life & that shouldn’t be possible.

Another part of me looks at the parents & wonders how is it your child tried to kill himself several times & you missed every single one of the attempts?

Again we expect tech to take more responsibility for someone’s child than the parents are willing to take.

A lot of this is the black & white thinking that is so popular these days, when the real story is shades of grey & messy.

Mental health is ignored. People having struggles are demonized & mistreated, finding help is nigh impossible. Insurance companies have been sued because they have ghost directories of providers who don’t exist, don’t take the insurance, or no longer practice.

The “warning signs” talk has been replaced with “SSRIs will make your kid a school shooter or transgender!!”

The optics of how people look at your family are more important than making sure your kids are actually happy with their lives & not just going through the motions to keep their parents happy & stop the fighting over the fact that they don’t like the sport their parents always wanted them to win at.

We have boys killing themselves because they were catfished into sending a nude to a scammer who then demands payments or they will reveal the photo to their family. They are so terrified that once they can no longer pay they kill themselves rather than tell anyone they got scammed & the picture will get released. You say your kids can tell you anything, but how many of you have told your kid that people online can and will lie to them? That a picture is forever. That no matter what it is you can tell your parents with no judgement & they will help you no matter what.

OpenAI offering assistance is a huge problem, but not the only problem. It’s just easier to put the blame all on one thing rather than accept that what we are doing as a society is killing kids b/c we refuse to have the tough conversations or accept that ‘not my kid’ is a stupid cop out from parents who aren’t living in reality.

Yes your kid can smile & act the right way, but you should still check on them & make sure they aren’t hiding things from you out of fear of disappointing you or not being accepted.

But hey we’ve got SCOTUS willing to decide if a law blocking children from being tortured to change who they are to get their parents’ approval is legal or not.

Lots of things failed here at various levels, but the first thing that needs to change is to stop thinking that there is 1 solution that could have stopped all of this & that is holding OpenAI completely responsible.

Society failed him. He felt an LLM was the only one who could understand him, & everyone acts like AI is so much more than it actually is that he bought into the illusion that it could care about him, that it could help him, rather than just output what it thought he wanted to hear.

Anonymous Coward says:

Re:

Unfortunately, I don’t think they’ll stop trying to hold OpenAI responsible.

I feel that if something like suicide by cop happened because of OpenAI, it would be in even hotter water. Most certainly, I imagine they’ll want it to delete suicidal posts, regardless of how joking or in character they are.

They may be aware of the dystopia they’re creating, but with the black & white thinking, I doubt they’ll care, unfortunately.

Then again, I agree that kids need to be checked on to see if they’re hiding things from their parents.

I imagine the reason they don’t want to have the tough conversations is that they don’t want to appear to be bad parents.

Yeah, that’s all I can say.

NerdyCanuck (profile) says:

Re:

Well said. As someone who has supported multiple friends who were suicidal (those who attempted and failed, those who succeeded, and those for whom it never went past thoughts), I can say these things are very complex. The ones who were determined to do it found a way, long before AI or even the internet were a thing. The ones who survived are the ones who got interventions in time and then had serious (and ongoing) treatment to address the underlying issues in their lives, and I am blessed they are still with us. These things are not easy.

That Anonymous Coward (profile) says:

Re: Re: Re:

Occasionally I make sense despite my best efforts.

Humans aren’t good at multitasking problems.

AI had a part in this, but everyone wants to focus on that rather than the much bigger problem: mental health is still treated like a taboo that must never be mentioned lest the evil demons take hold of you.

Parents just care how things look, which is why kids who are struggling can put on a happy face when they need to get past their parents asking if everything is okay.

My kid said they were okay, I never saw this coming, I need to blame something.

We don’t dare mention suicide in the news or media because millions of kids will suddenly decide they have to kill themselves.
No one ever asks: when was the last time you pushed your kid to have the difficult conversations?
Or points out that being a parent means you might have to do more than ask how they are, accept “okay,” and move on.

They are worried about cell phones, the internet, cyber this cyber that, but they hand them these very dangerous things with no guidelines, instruction, or skills. They expect somehow that the dangerous scary internet will magically adapt to protect their kid. They might be insane.

I don’t think we need spying and cops rolled every time a child asks something from a magical list of bad things, but perhaps we need to look at how this AI responded to things & figure out why it decided to be a willing instructor.

The “don’t leave your noose out to try to cause your parents to talk to you” response is insane.

The fact that there was no adult he interacted with whom he felt he could trust enough to talk to makes one question whether we really care about kids as much as we claim, because we pay teachers like shit, we make classes huge, & we only care about the grade on the page, not about the student themselves.

Arianity (profile) says:

Blaming third parties suggests that the individual themselves had no agency at all and that’s also a very dangerous path.

No, it doesn’t. The world is not black and white, and you can recognize that people have agency while also recognizing that they can be influenced, especially on the vulnerable margins.

OpenAI’s response—more surveillance, more snitching to law enforcement—is exactly what happens when we demand corporate liability without thinking through the incentives we’re creating. Companies don’t magically develop better judgment or more humane policies when faced with lawsuits.

That depends entirely on how you shape the incentive. You can make incentives that push towards more humane policies, and not just a liability minimization exercise. Not only that, we already literally do this with thousands of regulations/lawsuits. This is literally the point of the entire regulatory state. This isn’t an easy thing to do (especially with First Amendment concerns, which are not trivial), but neither is it unaddressable. Companies do in fact respond to incentives.

and, yes, accepting that we live in a world where people have agency—including the tragic agency to make choices we wish they wouldn’t make.

This isn’t preventing future tragedies. This is accepting them as the cost of doing business. If that’s what you want, be honest about it. You can have more privacy, at the cost of fewer interventions when tragedies happen. Or you can have less privacy, with the benefit of more (potentially life-saving) interventions. Don’t lie to yourself that the privacy doesn’t come with a trade-off.

The surveillance state we’re building, one panicked corporate liability case at a time, won’t save the next Adam Raine

You don’t think there’s been a single case where a company has reported someone to professionals, and they were able to get help? Not one?

Cathay (profile) says:

OpenAI didn’t just improve their safety protocols—they announced plans to spy on user conversations and report them to law enforcement.

They’re doing that because it’s the only thing they can do, at a cost they’ll accept. Their control over what the chatbot generates is very limited. The same goes for all the other LLMs. The operation of the things is not deterministic: random numbers play a huge role.

They can check for keywords in the input and the output, which works about as well as the profanity filter on a BBS. But apart from that, they’re stuck.

Hypothetically, they might be able to design an LLM framework where they could censor the training data to make some subjects taboo. But they’d have to re-train it every time a new taboo word needed to be added, and the compute needed for training is very expensive. They’re burning money at a huge rate anyway, so they’re just hoping that enough PR will see them through this kind of crisis. Yes, this is inhuman. Venture capital is like that.

The answer to this kind of problem is to stop using LLMs. They’re not useful for anything that makes money, so the money being burnt on them isn’t going to have a payback. Give them up now, and get ahead of the rush!


TKnarr (profile) says:

Re:

They’re doing that because it’s the only thing they can do, at a cost they’ll accept. Their control over what the chatbot generates is very limited.

Then perhaps they shouldn’t be making that chatbot available to the public.

We already address this situation with product liability law. It’s one thing to make a chainsaw that, if used incorrectly and without regard to safety, can cut your leg off. It’s another thing entirely to make a chainsaw that, regardless of safety precautions and even when you follow all the correct procedures for safe operation, will randomly jerk itself out of your hands and cut your legs off for you. The first we’ll allow, and even shield the manufacturer from liability when the user ignores the clear safety procedures documented. The second will be ordered withdrawn from the market and the company fined for even offering it, assuming the company’s not bankrupt after the lawsuits against it for a faulty product.

Similar logic underlies our laws about practicing medicine or law without a license. We know that the people who need medical or legal help don’t have the expertise to know or correctly follow the rules to avoid hurting themselves, so we require the people who provide those services to be licensed after proving they know the rules and can act to prevent their clients from hurting themselves, and we put people who provide those services without proper licenses out of business so they don’t hurt their clients or allow their clients to hurt themselves. If these chatbots can’t avoid harming people, perhaps they should be similarly put out of business.

Anonymous Coward says:

Re: Re:

We know that the people who need medical or legal help […]

A major difference is that medical and legal “help”, in the form of advice—that is, speech—is often given by lay people without running afoul of regulation. Likely because it would raise serious First Amendment concerns to try to regulate this speech. (Sure, I’m probably not allowed to perform surgery on you or file court documents on your behalf.)

Television “doctors” have talked outside their areas of expertise, and have promoted outright quackery that’s probably harmed people. I don’t see ChatGPT’s auto-generated bullshit as much different.


Anonymous Coward says:

You realize that suicide’s a criminal offence. In less enlightened times, they’d have hung you for it.
— Devil, Bedazzled (1967)

When you plan to prohibit someone from committing, assisting, or advising about suicide, you should be sure to also ask: are you doing this because you value all life, no matter how wretched, miserable and short that life may be, or because it offends your sensibilities?

Consider, for example, the DNR option on Advance Healthcare Directives. Is that suicide-by-doctor-inaction? Or a reasonable assertion of an individual’s agency?

Anonymous Coward says:

But if we open up the idea that we can blame someone else for “causing” a death by suicide, someone might have thought to sue that former romantic partner as well, arguing that their recent breakup “caused” the death.

This is deranged. We hold corporations and their products to different standards than we do individual humans all the time. Applying your logic to, for example, food safety, one might argue that blaming restaurants for making their customers sick means we should also fine parents who make food that sickens their children.

This whole piece is embarrassing waffle that amounts to “thoughts & prayers”.

Anonymous Coward says:

Re:

I don’t agree that this de-values the piece to the extent you suggest, but there does seem to be a perhaps-unconscious bias here in support of the view that corporations are people. Just because courts accept the argument doesn’t mean that people should.

There are already people pushing to make it illegal in the U.S. for individuals to speak in support of suicide. Last year in Kansas, for example. Minnesota already passed such a law, which their Supreme Court struck down in 2014 in State v. Melchert-Dinkel.
