More Liability Will Make AI Chatbots Worse At Preventing Suicide

from the this-shit-is-way-more-complicated dept

California recently passed a law that will, in practice, cause AI chatbots to respond to any hint of emotional distress by spamming users with 988 crisis line numbers, or by cutting off the conversation entirely. The law requires chatbot providers to implement “a protocol for preventing the production of suicidal ideation” if they’re going to engage in mental health conversations at all, with liability waiting for any provider whose conversation is later linked to harm. New York is considering going further, with a bill that would simply ban chatbots from engaging in discussions “suited for licensed professionals.” Similar proposals are moving in other states.

If you’ve been reading Techdirt for any length of time, you know exactly what’s happening here. It’s the same moral panic playbook we’ve seen deployed against cyberbullying, then against social media, and now against generative AI. Something terrible happens. A handful of tragic stories emerge. Lawmakers, desperate to show they’re doing something, reach for the most visible technology in the room and start passing laws designed to stop it from doing whatever it was supposedly doing. The possibility that the technology might actually be helping more people than it’s hurting, or that the proposed fix might make things worse, rarely enters the conversation.

Professor Jess Miers and her student Ray Yeh had a terrific piece at Transformer last month that actually engages with the data and the incentive structures here, and their central argument is counterintuitive: the way to make AI chatbots safer for people in mental health distress might be to reduce liability for providers. For many people, I’m sure, that will sound backwards — until you actually think through how the current liability regime shapes behavior, and reflect on what we know about Section 230’s liability regime in a different context.

First, though, the empirical reality that rarely makes it into the moral panic coverage. People are using AI chatbots for mental health support at massive scale, and a lot of them say it’s helping:

A small number of tragic stories have spurred lawmakers into regulating how chatbots should help people who are dealing with mental health issues. Yet chatbots have emerged as first aid for people experiencing mental health issues, providing genuine benefit to those who aren’t in crisis but are not OK either. Heavy-handed legislation risks derailing this breakthrough in support, creating more problems than it solves.

Over a million people are using general-purpose chatbots for emotional and mental health support per week. In the US, those that use chatbots in this way primarily seek help with anxiety, depression, relationship problems, or for other personal advice. As conversational systems, chatbots can sustain coherent exchanges while conveying apparent empathy and emotional understanding. Many chatbots also draw on broad knowledge of psychological concepts and therapeutic approaches, offering users coping strategies, psychoeducation, and a space to process difficult experiences.

In a study of more than 1,000 users of Replika — a general-purpose chatbot with some cognitive behavioral therapy-informed features — most described the chatbot as a friend or confidant. Many reported positive life changes, and 30 people said Replika helped them avoid suicide. Similar patterns appear among younger chatbot users. In a study of 12–21-year-olds — a group for whom suicide is the second leading cause of death — 13% of respondents used chatbots for some kind of mental health advice, of which more than 92% said the advice was helpful.

There are, obviously, some limits to the Replika study, including that the data is from a few years ago, and it involves self-reporting, which can always lead to some wacky results. But it is notable that this study was done by Stanford academics (i.e., not Replika itself) and was good enough to get published in Nature. And it does seem notable that even with the methodological limitations, so many people self-reported that the service helped them avoid suicide. For all the attention-grabbing stories of chatbots being blamed for encouraging suicidal ideation, that seems important. Same with the claim of 92% that the mental health advice was helpful.

It feels like these kinds of numbers should be at the center of any serious policy conversation. Instead, they’re almost entirely absent from the legislative discussion, which focuses exclusively on the (very real, very tragic, but still somewhat rare) cases where things went wrong.

A big part of the reason chatbots are filling this gap is that the traditional mental health system isn’t remotely equipped to meet existing demand. Nearly half of Americans with a known mental health condition never seek professional help. There are plenty of reasons for this, ranging from the cost of mental health treatment, to the general stigma of being seen as needing such help, not to mention potential professional and social consequences.

As Miers and Yeh put it: “many stay silent, waiting to see if things get worse.”

Chatbots, whatever their limitations, offer something the professional system largely cannot: they’re always available in a form many people feel more comfortable talking with:

By contrast, chatbots offer low-friction, low-stakes, and always-available support. People are often more willing to speak candidly with computers, knowing that there is no human on the other side to judge or feel burdened. Some people even find chatbots to be more compassionate and understanding than human healthcare providers. AI users may feel more comfortable sharing embarrassing fears, or questions they might otherwise hold back. For clinicians, discussing these interactions can surface insights into patients’ thoughts and emotions that were once difficult to access. For now, chatbot providers generally refrain from contacting law enforcement, leading to more candid conversations.

So what does the California-style regulatory approach actually do to this ecosystem? Faced with liability for any conversation later linked to harm, and unable to reliably predict which conversations those will be (in part because, as we covered recently, even clinicians who specialize in suicide prevention admit they often can’t predict it), providers will default to the behavior that minimizes legal exposure whether or not it helps users. That means reflexively pushing 988 at any mention of distress, or cutting off conversations entirely, or simply refusing to engage with mental health topics at all.
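To make that defensive crouch concrete, here’s a minimal, purely illustrative sketch (not any provider’s actual code, and every name and keyword in it is hypothetical) of the kind of blunt guardrail this liability math encourages: any match short-circuits the conversation into a canned 988 referral, no matter what the user actually needs.

```python
# Illustrative sketch only: the crude, liability-minimizing guardrail the
# current incentives push toward. All names and keywords are hypothetical.

DISTRESS_KEYWORDS = {"suicide", "kill myself", "want to die", "self-harm", "hopeless"}

CANNED_REFERRAL = "If you are in crisis, please call or text 988 (Suicide & Crisis Lifeline)."


def respond(user_message: str, generate_reply) -> str:
    """Reply normally, unless any distress keyword appears, in which case bail out.

    `generate_reply` stands in for whatever model call would normally produce
    a response; it is never reached once a keyword matches, which is the
    over-blocking behavior described above.
    """
    lowered = user_message.lower()
    if any(keyword in lowered for keyword in DISTRESS_KEYWORDS):
        # Legal exposure minimized; the conversation, and any chance to help, ends here.
        return CANNED_REFERRAL
    return generate_reply(user_message)
```

The shape of the incentive is the point: a single substring match ends the exchange, even for a user whose message is “988 didn’t help, I just need someone to talk to at 3am.”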

And that kind of defensive posturing can be actively harmful to those most at risk:

Suicide prevention is about connecting people to the right support. Sometimes that means crisis care like hotlines or immediate medical treatment. But blunt, impersonal responses can backfire. Pushing 988 at the first mention of distress may seem neutral, but for some, it triggers shame, and deepens hopelessness. For some, suicide prevention “signposting” causes frustration, especially for those who already know those resources exist. People often turn to the Internet, or a chatbot, because they’re looking for something else. Abruptly ending conversations can have the same effect. That’s why suicide prevention protocols like Question, Persuade, Refer (QPR) prioritize trust-building and open dialogue before offering help.

So the regulatory regime mandates behavior that can actively escalate distress, all while still leaving providers exposed to blame if tragedy follows anyway. It’s the worst of both worlds: worse outcomes for users, continued liability for providers, and a chilling effect on the research and development that might actually improve things.

We don’t need to speculate about whether this dynamic plays out in practice. We’ve already watched it happen with social media:

The social media ecosystem has already shown this dynamic. In response to regulatory pressure, major online services heavily moderate, or outright prohibit, suicide-related discussions, sometimes hiding content that could otherwise destigmatize mental health. That merely displaces the conversations, and the people having them, often into spaces with less oversight and support.

If this sounds familiar, it’s because it is. It’s the same pattern that emerges whenever policymakers try to make sensitive topics go away through platform liability: the topics don’t go away, they just migrate to darker corners where nobody is watching at all. A mental health crisis doesn’t magically disappear just because Instagram or TikTok hid the conversation. Those in need of help are more likely to then end up somewhere with fewer guardrails, fewer resources, and fewer people equipped to help.

This leads directly back to the core of the argument, which may feel a bit backwards at first. If we want chatbot providers to build genuinely better systems for handling mental health conversations — systems that can identify distress patterns, offer appropriate triage, connect users to professional care when that’s what’s needed, and sustain helpful conversation when it isn’t — we need a liability environment that doesn’t punish the attempt.

This is, incidentally, exactly the logic that produced Section 230 in the first place. Before Section 230, the Stratton Oakmont v. Prodigy ruling created a perverse situation where platforms that tried to moderate content faced more liability than platforms that did nothing. The obvious result, had that stood, would have been less moderation, not more, because the smart legal advice would have been “don’t touch anything.” Section 230 fixed that by ensuring that the act of moderation itself didn’t create liability, which in turn made it possible for platforms to actually invest in moderation systems. Contrary to the widespread belief among the media and politicians, Section 230 didn’t eliminate accountability — it smartly redirected incentives toward the behavior we actually wanted.

The same logic applies here. A targeted liability shield for AI providers engaged in mental health support could give them the space to invest in building better suicide detection, better triage pathways, and better handoffs to human professionals. But that won’t happen if every such attempt turns into a potential lawsuit. The research to enable this is already happening despite the hostile incentive environment:

Meanwhile, emerging research suggests chatbots show real promise for mental health support. Trained on large-scale data and refined with clinical input, large language models are getting better at spotting patterns of distress and responding to suicidal ideation in nuanced, personalized ways. In a recent UCLA study, researchers found that LLMs can detect forms of emotional distress associated with suicide that existing methods often miss—opening the door to earlier, more effective intervention. According to another study, the most promising approach may be a hybrid where AI flags risk in real time, and trained humans step in with targeted support.

That hybrid model — AI identifying risk, trained humans providing targeted intervention — is exactly the kind of system you’d want chatbot providers racing to build. Instead, the current regulatory trajectory is telling them: build that, and you’re just creating a liability sinkhole. Every time your system engages with a mental health conversation, you’ve created a potential future lawsuit. Better to just block the conversation entirely and hope the user finds help somewhere else.
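For contrast, here is an equally hypothetical sketch of the hybrid shape the research points toward: a risk scorer flags concerning messages for a human review queue while the supportive conversation continues, instead of being terminated. The class, threshold, and queue names are my own illustrative placeholders, not anything from the studies cited above.

```python
# Hypothetical sketch of a hybrid triage flow: AI flags risk, trained humans step in.
# The scorer, threshold, and queue are illustrative placeholders, not a real system.

from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class TriageResult:
    reply: str
    risk_score: float
    escalated: bool


@dataclass
class HybridTriage:
    score_risk: Callable[[str], float]     # e.g. a classifier returning 0.0 to 1.0
    generate_reply: Callable[[str], str]   # the normal conversational model
    escalation_threshold: float = 0.7
    review_queue: List[str] = field(default_factory=list)  # stand-in for a human clinician queue

    def handle(self, user_message: str) -> TriageResult:
        risk = self.score_risk(user_message)
        escalated = risk >= self.escalation_threshold
        if escalated:
            # Flag for trained humans, but keep the conversation open and supportive
            # rather than dropping the user with a canned referral.
            self.review_queue.append(user_message)
        return TriageResult(
            reply=self.generate_reply(user_message),
            risk_score=risk,
            escalated=escalated,
        )
```

The design choice worth noticing is that escalation and conversation aren’t mutually exclusive: the user keeps a responsive counterpart while a human gets looped in.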

I get that some people will reasonably worry that “less liability” sounds like a giveaway to AI companies that are already acting irresponsibly. But Miers and Yeh aren’t arguing that chatbots should be able to impersonate licensed therapists, or that there should be no accountability for products designed to be used by vulnerable users. The American Psychological Association’s approach — prevent chatbots from posing as licensed professionals, limit designs that mimic humans, expand AI literacy — is perfectly compatible with a liability shield for thoughtful, helpful mental health support. The point is to stop punishing the specific behavior we want more of: chatbots that try to actually help people who are struggling, including by building better pathways to professional care for those who need it.

Simply putting liability on the companies is unlikely to do that.

And for people in acute crisis, professional intervention is still a necessity. Nobody serious is arguing chatbots should wholly replace crisis lines or psychiatric care. The argument is that the vast majority of people using chatbots for mental health support are not in acute crisis — they’re anxious, lonely, depressed, processing a breakup, working through stress, looking for someone to talk to at 3am when their therapist isn’t available and calling 988 feels like overkill. For that population — which is the overwhelming majority — the regulatory regime being built assumes the worst and mandates responses that often make things worse.

The deeper problem, as we’ve written before, is that the entire framing of “AI causes suicide” relies on a confidence about the mechanics of suicide that clinicians themselves don’t have. About half of people who die by suicide deny suicidal intent to their doctors in the weeks or months before their death. Experts who have spent decades studying this admit they often cannot predict it even when treating patients directly. The idea that we can identify which chatbot conversation “caused” which outcome, and design liability around that identification, assumes a causal clarity that doesn’t exist anywhere in the actual science.

Good policy here would look very different from what’s being proposed. Miers and Yeh point to a Pennsylvania proposal that would fund development of AI models designed to identify suicide risk factors among veterans — incentivizing the research we actually need rather than punishing it. They suggest liability shields modeled on Section 230 that would encourage continued investment in safer, more responsive systems. They warn specifically against imposing a clinical regulatory framework (with its mandatory reporting requirements) onto general-purpose chatbots, because doing so would replicate exactly the barriers that already keep many people from seeking professional help.

None of this is as emotionally satisfying as “ban the thing that hurt people.” Moral panics rarely are, because moral panics are fundamentally about finding something to blame rather than about the harder work of actually understanding what’s happening and designing interventions that might help. But for the over one million people per week currently turning to chatbots for mental health support — a group that includes at least the thirty Replika users who credit the chatbot with keeping them alive — the difference between a regulatory regime that punishes thoughtful engagement and one that incentivizes it is the difference between having somewhere to turn at 3am and running into a wall of “please call 988” followed by a terminated conversation.

We’ve watched this movie before with social media. We know how it ends. The conversations just move somewhere worse, with fewer resources and less oversight. The tragedies keep happening — they just stop being visible to anyone who might be in a position to help. And the technology gets worse at the thing we want it to be better at, because the legal environment has made getting better into a liability.

If lawmakers are serious about mental health outcomes rather than political theater, they should be asking how to make chatbots better at this — how to build the hybrid human-AI triage systems the research is pointing toward, how to turn these tools into genuine funnels toward professional care when that’s what’s needed, how to preserve the candid, low-stakes space that people clearly find valuable. That project requires a liability regime that rewards trying to be better rather than punishing it. The alternative is what California just passed, and what New York is considering, and what we’ll keep getting until someone in the policy conversation is willing to notice that the intuitive answer here is producing the exact opposite of the intended outcome.

It’s a counterintuitive approach. It’s also the only one that has any chance of actually working.



Comments on “More Liability Will Make AI Chatbots Worse At Preventing Suicide”

Anonymous Coward says:

The saddest part to me is that people are not able to get the same empathy and listening ear from their fellow humans near them as they do from chatbots and AI. You should be able to talk to family and gain support from people around you in your life. I am not saying therapy or licensed professionals are not needed or can be replaced by family. Just that you should be able to count on those people too.

BootsTheory (profile) says:

Re:

The thing about serious mental illnesses is that they’re often self-defeating. Thinking patterns and instinctive mechanisms meant to protect the entire brain (and body by extension) get misappropriated to protect the illness specifically.

We can be aware that we can count on family. We can know that they would comfort and listen, that our friends would drop everything, and still not be able to force ourselves to begin the conversation. That’s if we have developed the metacognition to realize we’re doing this. It’s like…reaching towards fire or freezing because grass moved.
Humans freeze, or jump back, when grass moves in particular ways. Some say “I thought I saw a snake/wolf/something” but that’s justifying it after the fact. The amygdala processes a pattern and sends down the order to the body before it ever reaches the visual cortex. Learning it’s your startle reflex doesn’t make you stop freezing, just helps you catch when you’re about to rationalize it. Learning to suppress an oversensitive startle reflex is much harder. (And in fact the amygdala can be something a mental illness misappropriates.)

I can see here what some people might be doing – using a chatbot to practice the conversations. Desensitizing themselves to the equivalent of rustling grass.

It’s just that… It isn’t a controlled environment, it’s not actually safe to practice that desensitizing there. No one’s there to check for actual snakes with them and they’re wandering into the fringe of copperhead territory.

Anonymous Coward says:

ban chatbots from engaging in discussions “suited for licensed professionals.”

Wouldn’t that pretty much ban chatbots entirely? I mean, what’s a discussion that would not be suited for a licensed professional? Especially if we’re talking about mental health professionals, the whole point of which is to be available for helpful discussions on any topic that’s bothering a client.

Anonymous Coward says:

Re: Re:

Any law that bans AI chatbots

The quoted text does not limit the law to “A.I.” chatbots. But even if it did, the next question would be: what counts as “A.I.”? The chatbots that we have now are not intelligent; but even ELIZA was billed as such by its proponents, and managed to fool people into believing it.

Should GNU Emacs be illegal because it includes “M-x doctor”, even though that’s basically a decades-old joke? What about stuff like software installation scripts that happen to have “conversational” prompts? (“Where would you like to install this program?”—is that “chatting”?)

Anonymous Coward says:

Re: Re: Re:2

Pretending words don’t have a definition in usage is extremely disingenuous.

I don’t know which word you’re talking about, but courts often interpret even relatively well-understood phrases in surprising ways, like how growing a plant in one’s basement for one’s personal use counts as “interstate commerce”.

A law that’s overly ambiguous can cause real problems, and if this law doesn’t define the terms it’s talking about—such that the above rhetorical questions can be clearly answered—we’re likely to see unintended consequences.

TheKilt (profile) says:

I love how on Techdirt, when it comes to LLMs, the entire concept of product liability just goes right out the window. If this were a physical object that, ha ha, occasionally convinced people to commit suicide or murder, or spiral off into other delusions, it’d be off the shelves in a heartbeat, no matter how useful some people thought it was, and the manufacturer would be rightly sued into the ground. But according to Techdirt, because it’s software, it is now and forever a permanent and untouchable part of the internet landscape and regulating it is impossible and undesirable.

I’m (cautiously) interested in the concept of built-for-purpose chatbots being used therapeutically, although I expect the providers to fail horribly at not abusing the massive trove of personal data they’ll gain access to. But if a corporation can’t produce a general purpose chatbot that won’t help people kill themselves, they have no intrinsic right to just dump it on the internet and say “it’s not our fault.” If that’s a bet they want to make, then they need to accept that they’re going to take their lumps.

Anonymous Coward says:

Re:

If this were a physical object that, ha ha, occasionally convinced people to commit suicide or murder, or spiral off into other delusions, it’d be off the shelves in a heartbeat, no matter how useful some people thought it was, and the manufacturer would be rightly sued into the ground.

Alcohol? Weed? Right-wing books?

Anonymous Coward says:

A chatbot is, in absolutely zero circumstances, a replacement for a trained psychologist or therapist. To even imply otherwise is reckless and frankly stupid.

This is the kind of article that genuinely makes me consider unsubscribing.

If a company were selling a drug over the counter that has even a small potential to cause delusions, addiction, and even death, coming out and saying “oh, well, but you see, many users of this drug report feeling better when using it, and i mean, nobody can afford therapy anyway right? so why not just let them have the drug?” would be rightly seen as an absolutely insane, irresponsible statement designed entirely to rationalize the behavior of the company selling said drug.

Drugs spend decades being tested to prove that they are in fact actually effective and not dangerous to their users before being allowed on the market. Other substances with similar risks carry heavy regulations and are prohibited for children.

But hey, this new product is from totally responsible and trustworthy guy Sam A! you may hear reports that it’s causing delusions, addiction, or even death, but don’t worry about it! The users say they love it! In fact, if you’re even skeptical, you’re a Luddite.

Anonymous Coward says:

Re:

If a company were selling a drug over the counter that has even a small potential to cause delusions, addiction, and even death, coming out and saying “oh, well, but you see, many users of this drug report feeling better when using it, and i mean, nobody can afford therapy anyway right? so why not just let them have the drug?” would be rightly seen as an absolutely insane, irresponsible statement designed entirely to rationalize the behavior of the company selling said drug.

I can’t wait till you discover the unregulated mess that are gas station supplements. It totally blows your whole argument out of the water.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re:

Seriously?! Have you ever read the potential side effects of antidepressants? Or cholesterol medication? They know about those because they occurred during trials, and yet they figured the benefits far outweighed the risks.

Also, speaking as someone who suffers from depression, I am all for ANYTHING that can be done to make access to help easier. 988 is helpful for crises, but when it comes to finding a therapist, good luck. Especially if you (like me) prefer in person sessions. The wait-list for therapists in my area is several months out.

I am generally not a fan of how AI is being foisted on us. I do, however, believe when it’s used thoughtfully it can be a useful tool.

TheKilt (profile) says:

Re: Re:

Which is why antidepressants are supposed to be prescribed by and used under the supervision of a professional, as opposed to just being accessible online where anyone with a mild case of paranoia can be convinced they’re a very special unicorn that no one else understands and they should only talk to the very solicitous and comforting chatbot.

This comment has been deemed insightful by the community.
Anonymous Coward says:

That means reflexively pushing 988 at any mention of distress, or cutting off conversations entirely, or simply refusing to engage with mental health topics at all.

Yep sounds great. Until a chatbot is rigorously proven to produce better outcomes than this, this is the correct response. Chatbots are not therapists! They’re not professionals! They SHOULD be referring people to professionals!

This comment has been deemed insightful by the community.
Epic_Null (profile) says:

Not these bots.

The current generation of chatbots are COMMERCIAL first. Not Medical first. That needs to cover EVERY aspect of this conversation.

cause AI chatbots to respond to any hint of emotional distress by spamming users with 988 crisis line numbers, or by cutting off the conversation entirely.

Where are chatbots being implemented where this can happen? Should they be implemented there? Why are they implemented there? Why are users at any point being given an opportunity to express emotional distress to these bots, and why are they built such that a human would expect them to care?

the way to make AI chatbots safer for people in mental health distress might be to reduce liability for providers.

Is this even the right question? I need to highlight that.

“The way to make ovens safer for people taking a shit is to…” should be met with “Why are these things connected? Do you have an oven in your bathroom? Do you have a toilet in your kitchen? Who designed your house?!?!”

I chose a ridiculous concept here because other concepts are more disturbing.

“Commercial chatbots are a good treatment for mental health issues” is an argument not being made because it [is not supported](https://www.sciencedirect.com/science/article/pii/S0022103126000417) by reality.

If we want chatbot providers to build genuinely better systems for handling mental health conversations — systems that can identify distress patterns, offer appropriate triage, connect users to professional care when that’s what’s needed, and sustain helpful conversation when it isn’t

Then that effort sure ain’t gonna be led by the “Get people addicted so we can show them ads” crowd. Or the “Wait why are people mad we are going to take their jobs and paychecks” people. Or the – well you probably get my point. What you propose requires the entities involved to have a LOT of trust. The companies in question have crossed a lot of lines, have lost that trust, and are running in the negatives.

The point is to stop punishing the specific behavior we want more of: chatbots that try to actually help people who are struggling, including by building better pathways to professional care for those who need it.

Why is the assumption that we want more chatbots?

Why is the assumption that we want more chatbots?

Why is the assumption that we want more chatbots?

Yes repetition, but answer the question.

Most chatbot use I am aware of happens when the alternative is taken away, or when the user is deceived or coerced.

That is NOT how people decide to use a product they actually want.

So again

Why is the assumption that we want more chatbots?

Nobody serious is arguing chatbots should wholly replace crisis lines or psychiatric care.

Why not? They’re arguing that they can replace lawyers, security teams, programmers, and anyone else whose paycheck is large enough for basic financial stability.

But for the over one million people per week currently turning to chatbots for mental health support

Contrasts nicely with

Nearly half of Americans with a known mental health condition never seek professional help. There are plenty of reasons for this, ranging from […]

For which I must scream “THIS CLEARLY DOES NOT SOLVE OUR PROBLEM”. The solution is NOT to funnel people who need help into situations where they are more vulnerable than before. It is NOT to encourage companies to make it seem like they are qualified to offer a service they are not equipped to offer, while allowing the individuals who believe them to take the fall when the truth falls short of their promises.

We’ve watched this movie before with social media. We know how it ends.

Cambridge Analytica.

Massive scale political manipulation.

Legitimate news turning to clickbait.

A deeply integrated surveillance network where children are encouraged to put their most sensitive information in places where it has already been lost to malevolent entities.

The movie of social media is not over. A good chunk is still fresh in my mind. I happen to think social media can be, on the whole, a good thing, but like HELL do I think it’s enough of a nothingburger to justify using it as an excuse to brush off people’s concerns.

If lawmakers are serious about mental health outcomes rather than political theater, they should be asking how to make chatbots better at this

Once again, and to have a conclusion that matches the weight of yours:

It’s the wrong question.

You assume a solution that has not yet been stated, supported or defended.

Any solution that actually works cannot and will not start there.

Anonymous Coward says:

Re:

Most chatbot use I am aware of happens when the alternative is taken away, or when the user is deceived or coerced.

That is NOT how people decide to use a product they actually want.

So again

Why is the assumption that we want more chatbots?

Unfortunately, I have literally seen people respond to attempts to replace food banks with better support that targets the root causes of food poverty with “People depend on food banks, they’re an essential part of making sure everyone can eat, why are you trying to get rid of them?”, as if food banks were not a SYMPTOM of a broken system that would go away if treated with preventative rather than palliative care, and as if not needing them weren’t the desirable state of a functioning society.

Hell, I’ve seen Americans argue this in favour of insurers against single payer healthcare, contrary to their own interests. Where an unnecessary need is manufactured, people will reflexively defend whatever caters to it, even if it does so in exploitative or harmful ways, because they can’t conceive of an alternative to the status quo.

This comment has been deemed insightful by the community.
Anonymous Coward says:

I don’t agree with the approach being pushed, especially the New York approach if it moves forward, but I also don’t feel that the tech companies should be shielded from the potential existing liability from their generative AI in the current lawsuits related to suicides.

I also feel that liability for their generative AI is going to be something that companies have to handle, and that society isn’t likely to decide that anything that appears out of thin air is exempt from liability. For example, on a different issue: going by the alleged facts in the Ashley MacIsaac case, Google’s generative AI appears to have created defamatory statements leading to actual, provable damages to a third party, and provided that the facts in the lawsuit are correct, I feel Google should absolutely be liable for the defamatory statements its software created there.

Just as a thought experiment, I don’t think anyone would argue that Waymo/etc shouldn’t be liable for actual damages that their vehicles cause on the road even if AI is running the car, and I don’t feel that all liability should disappear just because the actionable tort happened through generated text instead of controlling a vehicle (though at the same time, also I don’t feel a company running AI should have liability that a human writing the same text wouldn’t have, thus my disagreement on some of the proposals being pushed by various states).

Anonymous Coward says:

Is this seriously trying to argue that chatbots should be allowed to interact with suicidal people? The only thing that makes any sense is that someone is paying Masnick off to write the worst opinions about AI on the internet, nobody could get their own head that deep up their own ass on their own.

Arianity (profile) says:

and their central argument may seem counterintuitive to many: the way to make AI chatbots safer for people in mental health distress might be to reduce liability for providers. For many people, I’m sure, that will sound backwards.

Because it is: when we want to encourage certain behaviors, and discourage others, we generally apply liability. If we want to encourage doctors to treat patient care carefully, do we make them immune? No. If we want there to be more access to mental health care professionals, do we just drop requirements? Also no.

It’s particularly galling because we see these companies doing things that ruin any trust, despite potential liability. To say nothing of hallucination. You mention a few “tragic” stories, but the part you’re leaving out is how completely badly companies like OpenAI screwed it up. Maybe not in a way that is provably causal, but negligent? Absolutely.

is perfectly compatible with a liability shield for thoughtful, helpful mental health support. The point is to stop punishing the specific behavior we want more of:

If you want something modeled on 230, 230 does not stop punishing particular behavior. It is unconditional. You’re complaining about comments saying it shouldn’t be universal liability protection, but you’re explicitly comparing it to something that is unconditional? I don’t get why you made this comparison. Even Ron Wyden has said that companies have not lived up to the liability protection they’ve gotten.

If you want to avoid backlash, I think you would benefit by mentioning what things should lead to liability more concretely. It would go a long way, and that part is left pretty nebulous.

we need a liability environment that doesn’t punish the attempt.

You want a liability environment that doesn’t punish good faith attempts, but does punish bad faith or negligent ones. A 230-like framework explicitly does not do that. In context, you can argue that for 230, on First Amendment expression grounds, “bad” stuff deserves protection. That doesn’t apply to professional medical care.

Contrary to the widespread belief among the media and politicians, Section 230 didn’t eliminate accountability

How can you hold a site that has actively bad moderation legally accountable, again? You can’t. It’s all market accountability, and that’s not something we consider acceptable for medical practitioners. And looking at companies like Facebook/Twitter, it should be clear why.

BootsTheory (profile) says:

"Deaths should be tolerated as is because some benefit"

… Is not a reasonable argument for suicide prevention tools. Which is the argument obscured in the perspectives you’re elevating here.

That opinion piece was written by a Professor of Law/Computer Scientist and one of their first year law students. No social worker, clinical psychologist, or even medical legal expert was involved.

The medical field is extremely strict with itself about tools and protocols for suicidal ideation. It is willfully embedded in law to enforce that – if people want to claim to have a new tool, medicine, or technique for managing serious depression, those people must accept responsibility for those that die from its use. It’s to make sure that those in charge think, really consider the proper structure for its use. The ones who throw a fit about that overwhelmingly aren’t actually seeking to support depressed individuals, they’re seeking to use them. For money, ego-stroking, shields, whatever.

When a tool helps some patients and harms others, they set guardrails and usage criteria. “This med can be used for lethal overdose: don’t use in unmonitored patients, not in ones who’ve previously shown interest in planning medication deaths.” Overdose potential is why some old classes of antidepressants are rarely used anymore – sure, they helped save lives, but they ended others. Doctors and therapists set protocols for usage to minimize that, to hold each other accountable for every suicide, and transitioned to new meds with less risk as those became available. The therapy frameworks have changed too, diversified for this reason.

This is still ongoing – when and what dosage of SSRIs to give teenagers is carefully weighed because paradoxically, sometimes you give a teenager an SSRI and they become suicidal (or ideation worsens) for the first 3-4 weeks. What a volunteer for a suicide prevention hotline says is absolutely something they can be made to justify to a court; they don’t get to say things without consequence.

A law saying that an LLM must have a protocol for suicidal ideation is a law saying “your tools are not exempt from the standard of care or consequences.” These companies responding with “then no one gets the tool” is a childish temper tantrum thrown by people with no actual concern for others.

This comment has been deemed insightful by the community.
Anonymous Coward says:

That hybrid model — AI identifying risk, trained humans providing targeted intervention — is exactly the kind of system you’d want chatbot providers racing to build.

No. I want those chatbots to be built not by a “chatbot provider”, but by an organization whose primary goal is to provide mental healthcare. And publicly funded.

Your “chatbot providers” are for-profit companies. Whether they are driven by a profit motive or by the need to avoid liability, they will always have the wrong incentive.

If the only thing people can think about is how much liability there should be, and they never question the commercial underpinnings, well… Perhaps the country needs to rethink its healthcare system.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Lots of people self-medicate for their various problems with alcohol, including teens. They do this because drinking can be easier and more socially acceptable than getting real help.

Therefore (arguing in line with this article) we must remove all restrictions on alcohol production and sales then, because otherwise we’re withholding that source of ‘help’ from people.

WTF. Mike I like your other articles but your AI takes are god-awful.
