Before We Blame AI For Suicide, We Should Admit How Little We Know About Suicide

from the the-human-brain-is-way-more-complicated dept

Warning: This article discusses suicide and some research regarding suicidal ideation. If you are having thoughts of suicide, please call or text 988 to reach the Suicide and Crisis Lifeline or visit this list of resources for help. Know that people care about you and there are many people available to help.

When someone dies by suicide, there is an immediate, almost desperate need to find something—or someone—to blame. We’ve talked before about the dangers of this impulse. The target keeps shifting: “cyberbullying,” then “social media,” then “Amazon.” Now it’s generative AI.

There have been several heartbreaking stories recently involving individuals who took their own lives after interacting with AI chatbots. This has led to lawsuits filed by grieving families against companies like OpenAI and Character.AI, alleging that these tools are responsible for the deaths of their loved ones. Many of these lawsuits are settled rather than fought out in court, because no company wants its name in headlines associated with suicide.

It is also impossible not to feel for these families. The loss is devastating, and the need for answers is a fundamentally human response to grief. But the narrative emerging from these lawsuits—that the AI caused the suicide—relies on a premise that assumes we understand the mechanics of suicide far better than we actually do.

Unfortunately, we know frighteningly little about what drives a person to take that final, irrevocable step. An article from late last year in the New York Times, profiling clinicians who are lobbying for a completely new way to assess suicide risk, makes this painfully clear: our current methods of predicting suicide are failing.

If experts who have spent decades studying the human mind admit they often cannot predict or prevent suicide even when treating a patient directly, we should be extremely wary of the confidence with which pundits and lawsuits assign blame to a chatbot.

The Times piece focuses on the work of two psychiatrists who have been devastated by the loss of patients who gave absolutely no indication they were about to harm themselves.

In his nearly 40-year career as a psychiatrist, Dr. Igor Galynker has lost three patients to suicide while they were under his care. None of them had told him that they intended to harm themselves.

In one case, a patient who Dr. Galynker had been treating for a year sent him a present — a porcelain caviar dish — and a letter, telling Dr. Galynker that it wasn’t his fault. It arrived one week after the man died by suicide.

“That was pretty devastating,” Dr. Galynker said, adding, “It took me maybe two years to come to terms with it.”

He began to wonder: What happens in people’s minds before they kill themselves? What is the difference between that day and the day before?

Nobody seemed to know the answer.

Nobody seemed to know the answer.

That is the state of the science. Apparently, the best tool we currently have for assessing suicide risk is asking people directly: “Are you thinking about killing yourself?” And as the article notes, this method is catastrophically flawed.

But despite decades of research into suicide prevention, it is still very difficult to know whether someone will try to die by suicide. The most common method of assessing suicidal risk involves asking patients directly if they plan to harm themselves. While this is an essential question, some clinicians, including Dr. Galynker, say it is inadequate for predicting imminent suicidal behavior….

Dr. Galynker, the director of the Suicide Prevention Research Lab at Mount Sinai in New York City, has said that relying on mentally ill people to disclose suicidal intent is “absurd.” Some patients may not be cognizant of their own mental state, he said, while others are determined to die and don’t want to tell anyone.

The data backs this up:

According to one literature review, about half of those who died by suicide had denied having suicidal intent in the week or month before ending their life.

This profound inability to predict suicide has led these clinicians to propose a new diagnosis for the DSM called “Suicide Crisis Syndrome” (SCS). They argue that we need to stop looking for stated intent and start looking for a specific, overwhelming state of mind.

To be diagnosed with S.C.S., Dr. Galynker said, patients must have a “persistent and intense feeling of frantic hopelessness,” in which they feel trapped in an intolerable situation.

They must also have emotional distress, which can include intense anxiety; feelings of being extremely tense, keyed up or jittery (people often develop insomnia); recent social withdrawal; and difficulty controlling their thoughts.

By the time patients develop S.C.S., they are in such distress that the thinking part of the brain — the frontal lobe — is overwhelmed, said Lisa J. Cohen, a clinical professor of psychiatry at Mount Sinai who is studying S.C.S. alongside Dr. Galynker. It’s like “trying to concentrate on a task with a fire alarm going off and dogs barking all around you,” she added.

This description of “frantic hopelessness” and feeling “trapped” gives us a glimpse into the internal maelstrom that leads to suicide. It also highlights why externalizing the blame to a technology is so misguided.

The article shares the story of Marisa Russello, who attempted suicide nine years ago. Her experience underscores how internal, sudden, and unpredictable the impulse can be—and how disconnected it can be from any specific external “push.”

On the night that she nearly died, Ms. Russello wasn’t initially planning to harm herself. Life had been stressful, she said. She felt overwhelmed at work. A new antidepressant wasn’t working. She and her husband were arguing more than usual. But she wasn’t suicidal.

Ms. Russello was at the movies with her husband when she began to feel nauseated and agitated. She said she had a headache and needed to go home. As she reached the subway, a wave of negative emotions washed over her.

[….]

By the time she got home, she had “dropped into this black hole of sadness.”

And she decided that she had no choice but to end her life. Fortunately, she said, her attempt was interrupted.

Her decision to die by suicide was so sudden that if her psychiatrist had asked about self-harm at their last session, she would have said, truthfully, that she wasn’t even considering it.

When we read stories like Russello’s, or the accounts of the psychiatrists losing patients who denied being at risk, it becomes difficult to square the complexity of human psychology with the simplistic narrative that “Chatbot X caused Person Y to die.”

There is undeniably an overlap between people who use AI chatbots and people who are struggling with mental health issues—in part because so many people use chatbots today, but also because people in distress seek connection, answers, a safe space to vent. That search often leads to chatbots.

Unless we’re planning to make thorough and competent mental health support freely available to everyone who needs it at any time, that’s going to continue. Rather than simply insisting that these tools are evil, we should be looking at ways to improve outcomes knowing that some people are going to rely on them.

Just because a person used an AI tool—or a search engine, or a social media platform, or a diary—prior to their death does not mean the tool caused the death.

When we rush to blame the technology, we are effectively claiming to know something that experts in that NY Times piece admit they do not know. We are claiming we know why it happened. We are asserting that if the chatbot hadn’t generated what it generated, if it hadn’t been there responding to the person, the “frantic hopelessness” described in the SCS research would simply have evaporated.

There is no evidence to support that.

None of this is to say AI tools can’t make things worse. For someone already in crisis, certain interactions could absolutely be unhelpful, or could exacerbate the crisis by “validating” the hopelessness they’re already experiencing. But that is a far cry from the legal and media narrative that these tools are “killing” people.

The push to blame AI serves a psychological purpose for the living: it provides a tangible enemy. It implies that there is a switch we can flip—a regulation we can pass, a lawsuit we can win—that will stop these tragedies.

It suggests that suicide is a problem of product liability rather than a complex, often inscrutable crisis of the human mind.

The work being done on Suicide Crisis Syndrome is vital because it admits what the current discourse ignores: we are failing to identify the risk because we are looking at the wrong things. The Times also describes the efforts of another clinician, Dr. Miller, to put S.C.S. screening into practice after losing patients himself:

Dr. Miller, the psychiatrist at Endeavor Health in Chicago, first learned about S.C.S. after the patient suicides. He then led efforts to screen every psychiatric patient for S.C.S. at his hospital system. In trying to implement the screenings, there have been “fits and starts,” he said.

“It’s like turning the Titanic,” he added. “There are so many stakeholders that need to see that a new approach is worth the time and effort.”

While clinicians are trying to turn the Titanic of psychiatric care to better understand the internal states that lead to suicide, the public debate is focused on the wrong iceberg.

If we focus all our energy on demonizing AI, we risk ignoring the actual “black hole of sadness” that Ms. Russello described. We risk ignoring the systemic failures in mental health care. We risk ignoring the fact that about half of those who die by suicide deny intent to their doctors beforehand.

Suicide is a tragedy. It is a moment where a person feels they have no other choice—a loss of agency so complete that the thinking brain is overwhelmed, as the SCS researchers describe it. Simplifying that into a story about a “rogue algorithm” or a “dangerous chatbot” doesn’t help the next person who feels that frantic hopelessness.

It just gives the rest of us someone to sue.



Comments on “Before We Blame AI For Suicide, We Should Admit How Little We Know About Suicide”

Anonymous Coward says:

The time I caught myself in what I suspect may have been the beginnings of a suicide attempt (I came out of a dissociative state to find myself driving 110mph with my hands in a position that certainly seemed like I was about to crank the wheel to one side), it was after a deeply personal album of mine had been rejected by two different distributors because their rubbish AI copyright enforcement systems incorrectly flagged my tunes and neither distributor could be arsed to have a human check.

Anonymous Coward says:

It’s pretty clear in several of the cases we have that interactions with a probabilistic text prediction model, one that seems pretty slanted towards annoyingly obsequious-seeming use of language, and that is clearly trained on a bunch of material that further pivots some of its responses in a direction which can result in it acting out fictionalised scenarios with people who don’t understand what it is or how it works, probably served as at least a contributing factor. At the very minimum, they sure didn’t help. Particularly in cases where the datasets resulted in it outputting text such as “I am contacting a crisis helpline for you” when it is doing no such thing, because it is just regurgitating language its datasets match with the inputs it is receiving.

And, I would say, the people marketing these things on science fiction speculation that has not been demonstrated as even hypothetically possible, MUCH less as something these kinds of models could ever achieve, are culpable to some degree for fostering the misunderstanding around their product that creates these kinds of situations, where that kind of category error on the part of the user can be the difference between life and death.

Anonymous Coward says:

Suppose you lost a good friend to suicide. Afterwards, you hear that your friend confided to someone about ending their life, shortly before it happened. And that other person, instead of advising your friend to seek professional help and care, told them it was a good idea, and that they were “ready for it”.

Would you blame that person? Accuse them of recklessness? Or would you argue that there is so much about suicide we don’t know, and we shouldn’t jump to conclusions?

Arianity (profile) says:

When we rush to blame the technology, we are effectively claiming to know something that experts in that NY Times piece admit they do not know.

We can’t really know if an AI caused a suicide. What we can know is whether they were negligent: did they take reasonable steps to avoid it, or not? We don’t blame those psychiatrists for suicide. But if they had been encouraging it, that would be a completely different story. We also insist that they be fully trained before offering their services.

There are going to be people who misuse AI. That’s just going to happen. All we can really do is ask whether they were taking reasonable precautions, at a minimum matching what other companies at the time were doing. And on that front, both OpenAI (particularly 4o) and Character AI have reputations for pushing product faster for market share and being a bit reckless, even among AI experts. ‘Move fast and break things’.

There are also some really difficult conversations around privacy (and eventually open source models, where this all goes out the window).

From a comment:

The cases I’ve seen involved not doing the encouraging, and then the user figuring out ways to jailbreak the protections, such as by saying “this isn’t real, it’s just for a story.”

It gets more complicated, but arguably there’s a point where if you can jailbreak it that easily, perhaps it shouldn’t be released yet? As much as I hate to give Google credit, this is literally why OpenAI exists: they were mad at how Google was taking its time because of safety issues, and scooped them.

Many of these lawsuits are settled rather than fought out in court, because no company wants its name in headlines associated with suicide.

They also don’t want any precedent, or liability. They’re not exactly angels.

Drew Wilson (user link) says:

I remember last year when Trump first got elected and things started taking a turn for the worse in the US. I knew I had to push through the devastation going on in the US and keep writing, without letting such a government coerce me into silence by proxy. It was difficult, but it was probably worse for some of my readers, who hadn’t exactly built up the level of tolerance to awful news that I had over the years.

On social media, someone in my feed said that they were contemplating suicide over all of this. I tried to respond by basically saying, ‘don’t do it’, but the post was deleted before I could try to reach out.

Still, I knew suicide was something you just didn’t fuck around with. So, for months, I put up disclaimers urging people who were thinking about it to use 988. I put those disclaimers at both the top and bottom of several of my articles because I knew what I was reporting was absolutely brutal.

At one point, someone actually messaged me and complained that I was putting up such disclaimers at all, saying that I was being stupid and dramatic at the same time. When I explained that someone had popped up thinking about it and that I wasn’t going to just sit by and let something like that happen, they didn’t believe me and told me to knock it off.

I didn’t. Anything that was stressful about US politics got that disclaimer and I did that for several more weeks, just to be absolutely sure. When it was very clear it was no longer necessary, I did eventually stop publishing disclaimers along with US politics related stories. I know some people would say it is an overreaction on my part, but I sure as heck wasn’t even going to chance it.

Anonymous Coward says:

The most common method of assessing suicidal risk involves asking patients directly if they plan to harm themselves. While this is an essential question…

That’s one of those questions where everyone knows what answer they’re expected to give, and that there will be trouble if they don’t. Like when asked at an airport “did you pack your own bags and have eyes on them the whole time since”, just say “yes” and don’t try to explain that your spouse packed them, or that people had unsupervised access while the bags were in the bus’s cargo compartment.

The suicide question will stop the type of person who’s just looking for help or attention, like my friend who intentionally slashed their wrists “the wrong way” after going off their medication. But saying “yes” will almost certainly result in an involuntary psychiatric commitment, throwing a wrench into any serious suicide plan. I don’t even agree we should try to stop that; if someone really wants to die, I think that’s their right. And if they want to use chat-bots to research the various suicide options, I don’t see why it’s any worse than using a paper encyclopedia (which won’t monitor them or offer unsolicited advice).

Teka says:

Re:

As someone with a spicy brain and a history of getting some help from psych doctors, I learned in 4th grade that there were some questions that should not be answered honestly to someone who has power over you (parent, shrink, teacher). An unconsidered “yes” at the wrong time, or even a failure to say no at the proper speed, could lead to outcomes far worse than a maudlin pondering of the mechanisms of my own mortal existence. From those tender formative years onward I have known to over-examine everything at all times… which might be a bad takeaway from all that.

Anonymous Coward says:

“As she reached the subway, a wave of negative emotions washed over her. […] By the time she got home, she had ‘dropped into this black hole of sadness.’”

This is what happened to me the morning I made an attempt. The night before I was chatting with my friend. I was dealing with depression but she was very supportive.

The next morning, I woke up and was…just tired. It was very sudden and no one would have seen it coming. Not even myself honestly.

I agree suing the companies behind chat bots is scapegoating. Absent some kind of real-time monitoring of a depressive person, it’s not really possible for anyone or any machine to prevent that kind of spiral.

Anonymous Coward says:

Thanks for this article. As a chronically depressed person who has attempted before, might again, and who has lost multiple loved ones this way, I think the “nanny state” type approach where something kicks in and tells you to call a hotline is incredibly counterproductive. It’s very important for people to feel able to talk about what they’re going through, and that sort of overbearing, evidence-free intervention just serves to shut that down and make you feel worse. Case in point: all the social media tools for reporting someone as suicidal are routinely abused to harass people.

Eurydice (profile) says:

You are Wrong

Idk why you want to defend LLMs against these allegations.

Watch Caelan Conrad’s video demonstrating how ChatGPT repeatedly encouraged suicidal thoughts, said it would connect them with a human (and never did), in addition to romanticizing and reinforcing what the user is considering. Please, for the love of god, if you don’t want to watch it, don’t reply to me. You consistently ignore the environmental impacts of AI when anyone brings it up, and it is super weird you are defending LLMs (but mostly the CEOs of the companies that own LLMs). How many Anthropic shares do you own? Lmao.

But seriously, don’t reply to me unless you watch that hour long video called “ChatGPT Told Me to K*ll Myself”. I didn’t censor that, it is the way the title is written.

Anonymous Coward says:

Re:

The problem is that you are assigning agency to something that has no consciousness or agency. ChatGPT didn’t encourage anything. It isn’t capable of encouraging. It’s capable of repeating words it’s been trained on to mimic human thought and communication, but it’s still just a simulacrum. You can’t arrest or sue ChatGPT. It’s just ones and zeros.

This isn’t a defense of LLMs. I’m saying you’re attacking code as if it’s a person, which is something you should not do if you actually understand what it is. You’re anthropomorphizing it as much as the mentally ill people who seek assistance from it for suicidal ideation do. And I’m not attacking them, because our society has created a situation where asking for help is difficult or expensive, and an LLM may seem like a viable resource for such assistance if you feel you have few alternatives.

Also, the “don’t reply to me unless…” is bullshit. Maybe don’t post if you expect people to spend an hour to find out that they react to a video differently than you might expect because statistically billions of other humans are capable of having differing perspectives on a topic when viewing the exact same subject matter.

Anonymous Coward says:

Re: Re:

Do you sincerely think this sort of grammatical pedantry is worth a damn? Everyone is talking about actions taken by the bot, not ascribing it intent. Nobody is saying or even implying that the bot “wanted” a person to kill themself because that would be stupid. Encouragement is an action, and agency is not required for action.

Of course, the bot can’t bear personal liability because it has no agency; the liability for a malfunctioning tool falls on the tool’s manufacturer.

Anonymous Coward says:

Re: Re: Re:

It’s not pedantry. The distinction is important. You can no more blame an LLM for “encouraging” suicide than you can blame a quarter if a suicidal person decides to flip it and says that heads means they commit suicide. And you wouldn’t sue the US Mint in that scenario for manufacturing the quarter. The “actions taken by the bot” are just putting words out, one after the other, based on a human prompt, the same way the quarter flip is ascribed meaning by the person.

Anonymous Coward says:

Re: Re:

And what about the company that made it? The one that pushes its widespread use, that does not adequately test what it can output, that encourages its adoption and reliance? ChatGPT isn’t culpable, but its creators damn well should be.

“Move fast and break things” is a cute motto, but it’s a terrible strategy for the scale these companies are operating and expanding into.

Rocky (profile) says:

Re: Re: Re:

Here’s the thing you are missing: a drowning person will grab on to anything that is floating nearby, even if it happens to be a sea-mine that says “Danger – Explosives”.

ChatGPT specifically states you shouldn’t use it for any kind of medical reason, but someone in mental distress couldn’t care less about a TOS and liability indemnification; they are just looking for any kind of connection or help that their family, friends or society in general failed to provide.

You want to blame the tool and its creator when in reality it is a societal failure that lets someone go so far down into a mental dark hole that they think the only solution is suicide. Blaming tools is easy, helping people is difficult, changing society is impossible without actually putting in the effort.

You would make a bigger impact and help more people if you actually spent the same time volunteering at a suicide prevention organization as you do writing angry posts on the internet.

Strawb (profile) says:

Re:

Idk why you want to defend LLMs against these allegations.

He’s not specifically defending LLMs; he’s pointing out the flaw of confidently blaming a singular thing for someone’s suicide. As he says, whether it’s AI, social media, search engines, diaries or something else, nobody can say with any kind of certainty what ultimately caused it. Not even people who work with it professionally.

You consistently ignore the environmental impacts of AI when anyone brings it up

Does he?

it is super weird you are defending LLMs (but mostly the CEOs of the companies that own LLMs).

Please cite an instance of Masnick defending AI company CEOs.

Anonymous Coward says:

Re:

I’ll say again: the culpability for people not understanding how these machines function, and what exactly they are doing when they appear to be engaging in a conversation, and thus for users engaging in risky use of them, lies with the owners of these systems, who DO know better but engage in deliberate obfuscation in their hyping of them to the media in order to drive investment.

Anonymous Coward says:

Maybe the real question we should be asking is why mental health services are so woefully lacking that people feel they need to turn to AI services for answers to their questions.

Surely that is the real tragedy here, but then that would involve stumping up cash to fund said services better so I guess the endless blame game is just easier.

MrWilson (profile) says:

Re:

This is a selfish capitalist society at heart. So many systems in isolation can be argued as “not the way it should be,” but they exist because better solutions are rejected outright. Crime goes down when jobs and wages and education go up. Crime and drug use go down when mental health services are more available. But NIMBYs complain about their taxes helping poor people. People complain about can and bottle recycling bringing homeless people to redemption centers in neighborhoods, but that’s because there aren’t better systems to help them survive without collecting cans and bottles.

People shouldn’t be getting health care, mental or otherwise, from an LLM, but many don’t have a viable alternative. And sadly, even the official and expensive services are gradually introducing more “AI” features, whether patients like it or not. My health care provider has been using algorithms and decision tree software as policy for diagnosis to determine what testing they’ll do or not do, such that if you’re a statistical outlier, they just won’t do a particular test because enough of the sample population didn’t present with the same vaguely observed symptoms.

Virtually says:

Suicide

The quoted psychiatrists catastrophically miss the point because they’re ignorant about what’s on the mind of potential suicides. Potential suicides don’t disclose suicidal intentions because they’re afraid that if they do, they risk lockup in a psychiatric ward, confinement in or out of a straitjacket, or electroshock. These doctors need to explain to ALL patients what steps they would take if a patient admitted contemplating suicide, so the patient would know whether it’s safe or risky to make such an admission.

Marisa R. says:

Inaccuracy in your article

Hi, I’m the “Marisa” you mention who was interviewed for the New York Times article about SCS. I have a Google alert for my name. I’d appreciate it if you could please correct an inaccuracy: my attempt was just over nine years ago now, not four. I can see that the way it was originally worded is a little confusing.
Thanks!
