More Liability Will Make AI Chatbots Worse At Preventing Suicide
from the this-shit-is-way-more-complicated dept
California recently passed a law that will, in practice, cause AI chatbots to respond to any hint of emotional distress by spamming users with 988 crisis line numbers, or by cutting off the conversation entirely. The law requires chatbot providers to implement “a protocol for preventing the production of suicidal ideation” if they’re going to engage in mental health conversations at all, with liability waiting for any provider whose conversation is later linked to harm. New York is considering going further, with a bill that would simply ban chatbots from engaging in discussions “suited for licensed professionals.” Similar proposals are moving in other states.
If you’ve been reading Techdirt for any length of time, you know exactly what’s happening here. It’s the same moral panic playbook we’ve seen deployed against cyberbullying, then against social media, and now against generative AI. Something terrible happens. A handful of tragic stories emerge. Lawmakers, desperate to show they’re doing something, reach for the most visible technology in the room and start passing laws designed to stop it from doing whatever it was supposedly doing. The possibility that the technology might actually be helping more people than it’s hurting, or that the proposed fix might make things worse, rarely enters the conversation.
Professor Jess Miers and her student Ray Yeh had a terrific piece at Transformer last month that actually engages with the data and the incentive structures here, and their central argument is counterintuitive: the way to make AI chatbots safer for people in mental health distress might be to reduce liability for providers. For many people, I’m sure, that will sound backwards. That is, until you actually think through how the current liability regime shapes behavior — as well as reflect on what we know about Section 230’s liability regime in a different context.
First, though, the empirical reality that rarely makes it into the moral panic coverage. People are using AI chatbots for mental health support at massive scale, and a lot of them say it’s helping:
A small number of tragic stories have spurred lawmakers into regulating how chatbots should help people who are dealing with mental health issues. Yet chatbots have emerged as first aid for people experiencing mental health issues, providing genuine benefit to those who aren’t in crisis but are not OK either. Heavy-handed legislation risks derailing this breakthrough in support, creating more problems than it solves.
Over a million people are using general-purpose chatbots for emotional and mental health support per week. In the US, those that use chatbots in this way primarily seek help with anxiety, depression, relationship problems, or for other personal advice. As conversational systems, chatbots can sustain coherent exchanges while conveying apparent empathy and emotional understanding. Many chatbots also draw on broad knowledge of psychological concepts and therapeutic approaches, offering users coping strategies, psychoeducation, and a space to process difficult experiences.
In a study of more than 1,000 users of Replika — a general-purpose chatbot with some cognitive behavioral therapy-informed features — most described the chatbot as a friend or confidant. Many reported positive life changes, and 30 people said Replika helped them avoid suicide. Similar patterns appear among younger chatbot users. In a study of 12–21-year-olds — a group for whom suicide is the second leading cause of death — 13% of respondents used chatbots for some kind of mental health advice, of which more than 92% said the advice was helpful.
There are, obviously, some limits to the Replika study, including that the data is from a few years ago, and it involves self-reporting, which can always lead to some wacky results. But it is notable that this study was done by Stanford academics (i.e., not Replika itself) and was good enough to get published in Nature. And it seems significant that even with the methodological limitations, so many people self-reported that the service helped them avoid suicide. For all the attention-grabbing stories of chatbots being blamed for encouraging suicidal ideation, that seems important. Same with the 92% of young users who said the mental health advice they got was helpful.
It feels like these kinds of numbers should be at the center of any serious policy conversation. Instead, they’re almost entirely absent from the legislative discussion, which focuses exclusively on the (very real, very tragic, but still somewhat rare) cases where things went wrong.
A big part of the reason chatbots are filling this gap is that the traditional mental health system isn’t remotely equipped to meet existing demand. Nearly half of Americans with a known mental health condition never seek professional help. There are plenty of reasons for this, ranging from the cost of mental health treatment, to the general stigma of being seen as needing such help, not to mention potential professional and social consequences.
As Miers and Yeh put it: “many stay silent, waiting to see if things get worse.”
Chatbots, whatever their limitations, offer something the professional system largely cannot: always-available support, in a form many people feel more comfortable talking to:
By contrast, chatbots offer low-friction, low-stakes, and always-available support. People are often more willing to speak candidly with computers, knowing that there is no human on the other side to judge or feel burdened. Some people even find chatbots to be more compassionate and understanding than human healthcare providers. AI users may feel more comfortable sharing embarrassing fears, or questions they might otherwise hold back. For clinicians, discussing these interactions can surface insights into patients’ thoughts and emotions that were once difficult to access. For now, chatbot providers generally refrain from contacting law enforcement, leading to more candid conversations.
So what does the California-style regulatory approach actually do to this ecosystem? Faced with liability for any conversation later linked to harm, and unable to reliably predict which conversations those will be (in part because, as we covered recently, even clinicians who specialize in suicide prevention admit they often can’t predict it), providers will default to the behavior that minimizes legal exposure whether or not it helps users. That means reflexively pushing 988 at any mention of distress, or cutting off conversations entirely, or simply refusing to engage with mental health topics at all.
And that kind of defensive posturing can be actively harmful to those most at risk:
Suicide prevention is about connecting people to the right support. Sometimes that means crisis care like hotlines or immediate medical treatment. But blunt, impersonal responses can backfire. Pushing 988 at the first mention of distress may seem neutral, but for some, it triggers shame, and deepens hopelessness. For some, suicide prevention “signposting” causes frustration, especially for those who already know those resources exist. People often turn to the Internet, or a chatbot, because they’re looking for something else. Abruptly ending conversations can have the same effect. That’s why suicide prevention protocols like Question, Persuade, Refer (QPR) prioritize trust-building and open dialogue before offering help.
So the regulatory regime mandates behavior that can actively escalate distress, all while still leaving providers exposed to blame if tragedy follows anyway. It’s the worst of both worlds: worse outcomes for users, continued liability for providers, and a chilling effect on the research and development that might actually improve things.
We don’t need to speculate about whether this dynamic plays out in practice. We’ve already watched it happen with social media:
The social media ecosystem has already shown this dynamic. In response to regulatory pressure, major online services heavily moderate, or outright prohibit, suicide-related discussions, sometimes hiding content that could otherwise destigmatize mental health. That merely displaces the conversations, and the people having them, often into spaces with less oversight and support.
If this sounds familiar, it’s because it is. It’s the same pattern that emerges whenever policymakers try to make sensitive topics go away through platform liability: the topics don’t go away, they just migrate to darker corners where nobody is watching at all. A mental health crisis doesn’t magically disappear just because Instagram or TikTok hid the conversation. Those in need of help are more likely to then end up somewhere with fewer guardrails, fewer resources, and fewer people equipped to help.
This leads directly back to the core of the argument, which may feel a bit backwards at first. If we want chatbot providers to build genuinely better systems for handling mental health conversations — systems that can identify distress patterns, offer appropriate triage, connect users to professional care when that’s what’s needed, and sustain helpful conversation when it isn’t — we need a liability environment that doesn’t punish the attempt.
This is, incidentally, exactly the logic that produced Section 230 in the first place. Before Section 230, the Stratton Oakmont v. Prodigy ruling created a perverse situation where platforms that tried to moderate content faced more liability than platforms that did nothing. The obvious result, had that stood, would have been less moderation, not more, because the smart legal advice would have been “don’t touch anything.” Section 230 fixed that by ensuring that the act of moderation itself didn’t create liability, which in turn made it possible for platforms to actually invest in moderation systems. Contrary to the widespread belief among the media and politicians, Section 230 didn’t eliminate accountability — it smartly redirected incentives toward the behavior we actually wanted.
The same logic applies here. A targeted liability shield for AI providers engaged in mental health support could give them the space to invest in building better suicide detection, better triage pathways, and better handoffs to human professionals. But that won’t happen if every such attempt turns into a potential lawsuit. The research to enable this is already happening despite the hostile incentive environment:
Meanwhile, emerging research suggests chatbots show real promise for mental health support. Trained on large-scale data and refined with clinical input, large language models are getting better at spotting patterns of distress and responding to suicidal ideation in nuanced, personalized ways. In a recent UCLA study, researchers found that LLMs can detect forms of emotional distress associated with suicide that existing methods often miss—opening the door to earlier, more effective intervention. According to another study, the most promising approach may be a hybrid where AI flags risk in real time, and trained humans step in with targeted support.
That hybrid model — AI identifying risk, trained humans providing targeted intervention — is exactly the kind of system you’d want chatbot providers racing to build. Instead, the current regulatory trajectory is telling them: build that, and you’re just creating a liability sinkhole. Every time your system engages with a mental health conversation, you’ve created a potential future lawsuit. Better to just block the conversation entirely and hope the user finds help somewhere else.
I get that some people will reasonably worry that “less liability” sounds like a giveaway to AI companies that are already acting irresponsibly. But Miers and Yeh aren’t arguing that chatbots should be able to impersonate licensed therapists, or that there should be no accountability for products designed to be used by vulnerable users. The American Psychological Association’s approach — prevent chatbots from posing as licensed professionals, limit designs that mimic humans, expand AI literacy — is perfectly compatible with a liability shield for thoughtful, helpful mental health support. The point is to stop punishing the specific behavior we want more of: chatbots that try to actually help people who are struggling, including by building better pathways to professional care for those who need it.
Simply putting liability on the companies is unlikely to do that.
And for people in acute crisis, professional intervention is still a necessity. Nobody serious is arguing chatbots should wholly replace crisis lines or psychiatric care. The argument is that the vast majority of people using chatbots for mental health support are not in acute crisis — they’re anxious, lonely, depressed, processing a breakup, working through stress, looking for someone to talk to at 3am when their therapist isn’t available and calling 988 feels like overkill. For that population — which is the overwhelming majority — the regulatory regime being built assumes the worst and mandates responses that often make things worse.
The deeper problem, as we’ve written before, is that the entire framing of “AI causes suicide” relies on a confidence about the mechanics of suicide that clinicians themselves don’t have. About half of people who die by suicide deny suicidal intent to their doctors in the weeks or months before their death. Experts who have spent decades studying this admit they often cannot predict it even when treating patients directly. The idea that we can identify which chatbot conversation “caused” which outcome, and design liability around that identification, assumes a causal clarity that doesn’t exist anywhere in the actual science.
Good policy here would look very different from what’s being proposed. Miers and Yeh point to a Pennsylvania proposal that would fund development of AI models designed to identify suicide risk factors among veterans — incentivizing the research we actually need rather than punishing it. They suggest liability shields modeled on Section 230 that would encourage continued investment in safer, more responsive systems. They warn specifically against imposing a clinical regulatory framework (with its mandatory reporting requirements) onto general-purpose chatbots, because doing so would replicate exactly the barriers that already keep many people from seeking professional help.
None of this is as emotionally satisfying as “ban the thing that hurt people.” Moral panics rarely are, because moral panics are fundamentally about finding something to blame rather than about the harder work of actually understanding what’s happening and designing interventions that might help. But for the over one million people per week currently turning to chatbots for mental health support — a group that includes at least the thirty Replika users who credit the chatbot with keeping them alive — the difference between a regulatory regime that punishes thoughtful engagement and one that incentivizes it is the difference between having somewhere to turn at 3am or running into a wall of “please call 988” followed by a terminated conversation.
We’ve watched this movie before with social media. We know how it ends. The conversations just move somewhere worse, with fewer resources and less oversight. The tragedies keep happening — they just stop being visible to anyone who might be in a position to help. And the technology gets worse at the thing we want it to be better at, because the legal environment has made getting better into a liability.
If lawmakers are serious about mental health outcomes rather than political theater, they should be asking how to make chatbots better at this — how to build the hybrid human-AI triage systems the research is pointing toward, how to turn these tools into genuine funnels toward professional care when that’s what’s needed, how to preserve the candid, low-stakes space that people clearly find valuable. That project requires a liability regime that rewards trying to be better rather than punishing it. The alternative is what California just passed, and what New York is considering, and what we’ll keep getting until someone in the policy conversation is willing to notice that the intuitive answer here is producing the exact opposite of the intended outcome.
It’s a counterintuitive approach. It’s also the only one that has any chance of actually working.
Filed Under: 988, ai, california, liability, liability shield, mental health, moral panic, new york, section 230, suicide


Comments on “More Liability Will Make AI Chatbots Worse At Preventing Suicide”
The saddest part to me is that people are not able to get the same empathy and listening ear from their fellow humans near them as they do from chatbots and AI. You should be able to talk to family and gain support from people around you in your life. I am not saying therapy or licensed professionals are not needed or can be replaced by family. Just that you should be able to count on those people too.
Re:
A lot of family fucking sucks. I’d rather talk to Gemini than my Trump-humping relatives.
Re:
The thing about serious mental illnesses is that they’re often self-defeating. Thinking patterns and instinctive mechanisms meant to protect the entire brain (and body by extension) get misappropriated to protect the illness specifically.
We can be aware that we can count on family. We can know that they would comfort and listen, that our friends would drop everything, and not be able to force ourselves to begin the conversation. That’s if we have developed the metacognition to realize we’re doing this. It’s like…reaching towards fire or freezing because grass moved.
Humans freeze, or jump back, when grass moves particular ways. Some say “I thought I saw a snake/wolf/something,” but that’s justifying after the fact. The amygdala processes a pattern and sends down the order to the body before it ever reaches the visual cortex. Learning it’s your startle reflex doesn’t make you stop freezing, it just helps you catch when you’re about to rationalize it. Learning to suppress an oversensitive startle reflex is much harder. (And in fact the amygdala can be something a mental illness misappropriates.)
I can see here what some people might be doing – using a chatbot to practice the conversations. Desensitizing themselves to the equivalent of rustling grass.
It’s just that… It isn’t a controlled environment, it’s not actually safe to practice that desensitizing there. No one’s there to check for actual snakes with them and they’re wandering into the fringe of copperhead territory.
Wouldn’t that pretty much ban chatbots entirely? I mean, what’s a discussion that would not be suited for a licensed professional? Especially if we’re talking about mental health professionals, the whole point of which is to be available for helpful discussions on any topic that’s bothering a client.
Re:
Sounds like a good law then.
Any law that bans AI chatbots is a good law, just like any law that bans cryptocurrency.
Re: Re:
I wouldn’t say any law, because, you know, that leads to whataboutism.
But I will say, the more I see of this stuff, the more I think the Butlerian Jihad had the right idea.
Re: Re: Re:
the spice must flow
Re: Re:
The quoted text does not limit the law to “A.I.” chatbots. But even if it did, the next question would be: what counts as “A.I.”? The chatbots that we have now are not intelligent; but even ELIZA was billed as such by its proponents, and managed to fool people into believing it.
Should GNU Emacs be illegal because it includes “M-x doctor”, even though that’s basically a decades-old joke? What about stuff like software installation scripts that happen to have “conversational” prompts? (“Where would you like to install this program?”—is that “chatting”?)
Re: Re: Re:
Well, that’s disingenuous. Pretending words don’t have a definition in usage is extremely disingenuous. You’re like the chuckle fucks who say that there’s no such thing as an ‘assault rifle’.
Fuck off.
Re: Re: Re:2
I don’t know which word you’re talking about, but courts often interpret even relatively well-understood phrases in surprising ways, like how growing a plant in one’s basement for one’s personal use counts as “interstate commerce”.
A law that’s overly ambiguous can cause real problems, and if this law doesn’t define the terms it’s talking about—such that the above rhetorical questions can be clearly answered—we’re likely to see unintended consequences.
I remember back when there was only like one or two moral panics going on at a time. This shit is getting exhausting.
I love how on Techdirt, when it comes to LLMs, the entire concept of product liability just goes right out the window. If this were a physical object that, ha ha, occasionally convinced people to commit suicide or murder, or spiral off into other delusions, it’d be off the shelves in a heartbeat, no matter how useful some people thought it was, and the manufacturer would be rightly sued into the ground. But according to Techdirt, because it’s software, it is now and forever a permanent and untouchable part of the internet landscape and regulating it is impossible and undesirable.
I’m (cautiously) interested in the concept of built-for-purpose chatbots being used therapeutically, although I expect the providers to fail horribly at not abusing the massive trove of personal data they’ll gain access to. But if a corporation can’t produce a general purpose chatbot that won’t help people kill themselves, they have no intrinsic right to just dump it on the internet and say “it’s not our fault.” If that’s a bet they want to make, then they need to accept that they’re going to take their lumps.
Re:
It’s really kind of weird how they’ve been against the Torment Nexus in any other context, but when AI enters the picture, we can’t and shouldn’t stop letting people make the Torment Nexus.
Re: Re:
What are you talking about? You’ve been a loyal reader for years, and this makes no sense at all. We have always been supportive of innovation, with an understanding of the various difficult-to-impossible tradeoffs of that innovation.
Our approach to AI tech has remained entirely consistent. Even in this article, we highlight the similarities to social media and 230 and years of our writing and research.
I honestly do not get some of the commenters here who seem to think that anyone who is not supportive of the laughably impossible “ban all AI, never use it for anything” is somehow compromised.
Re: Re: Re:
That’s because you, specifically, are part of the problem now.
Re: Re: Re:
I agree that there can be some uses for algorithms powered by LLMs. Replacing human contact? That isn’t one of them.
Trust me when I say that as someone who doesn’t have a social life, I’d rather stay in that state of affairs. AI chatbots are digital yes-men; I’d rather be alone than use one of those for company.
And I get that for some people, they’re actually helpful. Okay, fine, cool for those few people. But the environmental impact of AI alone is enough to make me come out against it. The fact that chatbots can still encourage suicide even with all of the built-in guardrails intact is enough to make me agree that we need more regulation to how these things work. Like, we’ve had toys recalled and permanently pulled off shelves after causing even a single child’s death. Telling me AI chatbots that could encourage people to commit suicide don’t deserve the same level of scrutiny or regulation doesn’t sit right with me.
Re: Re: Re:2
I think one of the biggest problems here might be one of perspective. The folks at Techdirt, for the most part, are reasonably informed as to how these systems work, and therefore less likely to make the category errors regarding them and their proper usage that, as Richard Dawkins recently proved, even relative experts in a field completely unrelated to the Techdirt writers’ wheelhouse can and will make, at scale, errors that lead down a rabbit hole of using them dangerously.
It’s like folks with a driving licence assuming cars are super easy to drive so there’s not really a point to having a training course and a test of what you learned required to be able to drive one, because if I can do this, anyone could do it, right?
Re: Re: Re:2
Agreed. I think that these chatbots should be regulated to “break character” and bring up 988 and other resources and help-lines where a human can be contacted should some sort of internal flag get raised in their processes.
Other products are regulated to “break character” to present information and warnings as well through labeling and disclaimers and more. I don’t get angry at the warnings on my car’s backup camera when it tells me to properly examine my surroundings with my own two eyes, nor do I get angry at the warning labels about tire pressures and more that they put in that spot on the driver’s side car door.
Re: Re: Re:2
I hope you don’t drive, eat meat, or do any of the other things that are much worse for the environment than AI.
Why do people say this like it’s a good thing?
Re: Re: Re:3
Driving is too deeply integrated into society for most people to avoid it – a state we wish to avoid for AI, and will only get locked into if we continue down the path of forcing it on everyone in the name of convenience. So actually not a terrible comparison, especially when you add in how dangerous driving is.
Eating meat is a complicated subject. We are omnivores. We need animals for things and many areas don’t support human food crops very well. At its base I would not see it as worse for the environment – just massively overeaten.
Having standards, even imperfect ones, is a good thing. We need the ability to draw lines and say things are not okay.
Re: Re: Re:3
Because it was?
Or maybe you can go play lawn darts for a bit and tell us how that goes for you.
Re: Re:
It’s a torment nexus that benefits them in the short term and harms big bad copyright holders, therefore it must be defended and all the real world impacts on individuals and the planet as a whole handwaved away.
Re:
You should be held liable for this stupid ass post.
Re:
So… did you just not read the article at all?
Re: Re:
These people never do. They show up with an agenda and talking points pre-made, then they look for something semi-related to insert them. They are literally the human equivalent of a poorly crafted chat bot.
Re: Re:
I read the article. I just disagree with it. I’ve read a lot of your stuff over the years, Mike, and I have a great deal of respect for you, but I think on this issue you (and Techdirt’s general editorial line on AI/LLMs) are wrong.
Boxing opponents of this technology into a “moral panic” framework immediately and reflexively dismisses people’s concerns as un- or underinformed, overblown, lacking nuance, and unworthy of engagement.
Meanwhile, support for AIs is allowed to be forgiving, anecdotal, at times contradictory, and based on rosy views of what might be (or what the writer wants to be) possible.
And the evidentiary standards applied to each side are hardly equal. To wit, it’s fine to rely on a survey of self-reported data, showing a 3% reduction in suicide risk as support, even though at least some of the survey participants thought the chatbot was human or intelligent. Meanwhile, because we don’t have a functional model allowing us to accurately predict suicide risk, opponents aren’t allowed to impute any harm at all, regardless of how many transcripts we have or documented incidents occur.
The short version is, I don’t think these products were ever fit for use, and I don’t believe it’s ethical to allow companies to make them available while overselling their benefits and capabilities and downplaying their risks (even while dismissing the objections of their internal oversight and safety groups). Hoping that they will fix their mistakes if we promise to never hold them accountable is a pipe dream, not a strategy.
Re: Re: Re:
But your comment was not disagreeing with the premise of the article. Your comment was wholly misrepresenting everything we’ve ever said.
You are fine to think AI tools are bad/dangerous etc. I’m just asking for you not to lie about what we’ve said.
Re: Re: Re:2
I’ll concede I was being hyperbolic, but I was hardly lying. A liability shield a la Section 230 wouldn’t be regulating the technology or the industry in any meaningful sense, and while I read most of the AI articles here, I can’t recall ever seeing an article advocating for any sort of regulation on AI other than a liability shield. If I’m wrong, please point me to the article.
Relatedly, I would argue Section 230 isn’t even the correct model. 230 covers the platforms’ abilities to moderate content posted by users on the platform, and safeguarding their ability to remove content in line with their moderation policies. What we’re talking about here is giving companies a liability shield for the functioning and performance of their own product.
Further, while I will absolutely defend the necessity of Section 230, we’ve had more than enough time to see its limits and flaws, and while we need it to have social media at all, social media companies like Meta and X/Twitter can and do take advantage of it to moderate their platforms in such a way as to promote hate speech, their own political or business agendas, etc. A liability shield for AI would likely have the same flaw, and as previously noted, we already have multiple documented instances of AI companies releasing models their own safety people said were dangerous. If anything, I expect relieving them of liability will only worsen their behavior.
Re: Re: Re:2
Did you actually read the article yourself or just copy-paste it in from ChatGPT? It’s a pretty accurate response to the content above.
Re: Re: Re:3
No. It’s not. To be clear, the line I’m taking issue with is:
Which is not something we have ever claimed, and certainly not in this article.
Also, I get that you think you’re being clever by accusing me of “copying” an article I wrote from ChatGPT, but it just makes you look silly.
I write my own articles. And this one goes into great detail. Did you read it?
Re: Re: Re:4
Mike – to be perfectly clear: I absolutely and without reservation believe that you wrote this article. At no point did I ever state or imply that you used ChatGPT or any other LLM to write this article or anything even similar to that. If you think I did, please point out where and I will immediately apologize. There’s an AC who did that at 2026-05-06 2:20pm, but that wasn’t me. I put my name to my comments, thank you very much, and if you’re accusing me of being the AC, I’d appreciate it if you came out and said it explicitly.
As for the substantive comment: Has Techdirt ever explicitly said regulating LLMs was impossible and undesirable? As I said before, I may have been being a little hyperbolic, but this has basically been Techdirt’s editorial throughline on the subject for a while. This article in particular explicitly argues for loosening legal constraints on AI companies, and throughout the comments section you yourself declare people arguing to even maintain the status quo, much less impose tighter regulations, “divorced from reality” or “fundamentally disconnected from (the) reality of the American legal system” from which I think it’s reasonable to conclude you believe regulating AI is undesirable and unrealistic, if not outright impossible. Regardless, let me make this explicit: do you think regulating AI is undesirable? Do you think regulating AI is, rather than impossible, let’s say infeasible?
Re: Re: Re:5
I think regulating AI in a manner that puts liability on the AI companies for how users use the tools is absolutely undesirable, because it will lead to (1) a few giant AI companies being the only ones who exist, which is bad for everyone, and (2) less useful tools for users.
That is not the same as saying that all AI regulations are undesirable.
Re: Re: Re:2
You’re full of crap Mike. You know exactly what he’s talking about, you’re just being a shitsucking lawyer about it.
We all know you drank the LLM Kool-aid, stop disrespecting your readers by lying about it, this shit is getting Trumpian.
Re: Re: Re:3
I will apologize for discussing things with nuance that you think are binary. But I will not apologize for living in reality and dealing with the messy reality that recognizes “there are useful aspects to these tools” is not the same as “all AI is lovely and wonderful.”
That you seem unable to live in that reality is a you issue.
Re:
Alcohol? Weed? Right-wing books?
Re: Re:
Don’t forget the gas station supplements in your list.
A chatbot is, in absolutely zero circumstances, a replacement for a trained psychologist or therapist. To even imply otherwise is reckless and frankly stupid.
This is the kind of article that genuinely makes me consider unsubscribing.
If a company were selling a drug over the counter that has even a small potential to cause delusions, addiction, and even death, coming out and saying “oh, well, but you see, many users of this drug report feeling better when using it, and i mean, nobody can afford therapy anyway right? so why not just let them have the drug?” would be rightly seen as an absolutely insane, irresponsible statement designed entirely to rationalize the behavior of the company selling said drug.
Drugs spend decades being tested to prove that they are in fact actually effective and not dangerous to their users before being allowed on the market. Other substances with similar risks carry heavy regulations and are prohibited for children.
But hey, this new product is from totally responsible and trustworthy guy Sam A! You may hear reports that it’s causing delusions, addiction, or even death, but don’t worry about it! The users say they love it! In fact, if you’re even skeptical, you’re a Luddite.
Re:
I can’t wait till you discover the unregulated mess that are gas station supplements. It totally blows your whole argument out of the water.
Re: Re:
Notably, that is the supplements industry – an unregulated health industry that causes massive health problems.
Which less blows the argument out of the water and more highlights why the argument was made in the first place.
Re: Re: Re:
They sell unregulated words at gas stations too.
Re:
Let’s make it illegal to string together words that make retards do stupid shit.
Re:
Seriously?! Have you ever read the potential side effects of antidepressants? Or cholesterol medication? They know about those because they occurred during trials, and yet they figured the benefits far outweighed the risks.
Also, speaking as someone who suffers from depression, I am all for ANYTHING that can be done to make access to help easier. 988 is helpful for crises, but when it comes to finding a therapist, good luck. Especially if you (like me) prefer in person sessions. The wait-list for therapists in my area is several months out.
I am generally not a fan of how AI is being foisted on us. I do, however, believe when it’s used thoughtfully it can be a useful tool.
Re: Re:
Which is why antidepressants are supposed to be prescribed by and used under the supervision of a professional, as opposed to just being accessible online where anyone with a mild case of paranoia can be convinced they’re a very special unicorn that no one else understands and they should only talk to the very solicitous and comforting chatbot.
Re: Re: Re:
Also why they were tested to determine if the benefits outweigh the risks. In clinical trials.
Yep sounds great. Until a chatbot is rigorously proven to produce better outcomes than this, this is the correct response. Chatbots are not therapists! They’re not professionals! They SHOULD be referring people to professionals!
Not these bots.
The current generation of chatbots are COMMERCIAL first. Not Medical first. That needs to cover EVERY aspect of this conversation.
Where are chatbots being implemented where this can happen? Should they be implemented there? Why are they implemented there? Why are users at any point being given an opportunity to express emotional distress to these bots, and why are they built such that a human would expect them to care?
Is this even the right question? I need to highlight that.
“The way to make ovens safer for people taking a shit is to…” should be met with “Why are these things connected? Do you have an oven in your bathroom? Do you have a toilet in your kitchen? Who designed your house?!?!”
I chose a ridiculous concept here because other concepts are more disturbing.
“Commercial chatbots are a good treatment to mental health issues” is an argument not being made because it [is not supported](https://www.sciencedirect.com/science/article/pii/S0022103126000417) by reality.
Then that effort sure ain’t gonna be led by the “Get people addicted so we can show them ads” crowd. Or the “Wait why are people mad we are going to take their jobs and paychecks” people. Or the – well you probably get my point. What you propose requires the entities involved to have a LOT of trust. The companies in question have crossed a lot of lines, have lost it, and are running in the negatives.
Why is the assumption that we want more chatbots?
Why is the assumption that we want more chatbots?
Why is the assumption that we want more chatbots?
Yes repetition, but answer the question.
Most chatbot use I am aware of happens when the alternative is taken away, or when the user is deceived or coerced.
That is NOT how people decide to use a product they actually want.
So again
Why is the assumption that we want more chatbots?
Why not? They’re arguing that they can replace lawyers, security teams, programmers, and anyone else whose paycheck is large enough for basic financial stability.
Contrasts nicely with
For which I must scream “THIS CLEARLY DOES NOT SOLVE OUR PROBLEM”. The solution is NOT to funnel people who need help into situations where they are more vulnerable than before. It is NOT to encourage companies to make it seem like they are qualified to offer a service they are not equipped to offer, and to allow the individuals who believe them to take the fall when the truth falls short of their promises.
Cambridge Analytica.
Massive scale political manipulation.
Legitimate news turning to clickbait.
A deeply integrated surveillance network where children are encouraged to put their most sensitive information in places where it has already been lost to malevolent entities.
The movie of social media is not over. A good chunk is still fresh in my mind. I happen to think social media can be, on the whole, a good thing, but like HELL do I think it’s enough of a nothingburger to justify using it as an excuse to brush off people’s concerns.
Once again, and to have a conclusion that matches the weight of yours:
It’s the wrong question.
You assume a solution that has not yet been stated, supported or defended.
Any solution that actually works cannot and will not start there.
Re:
Unfortunately, I have literally seen people respond to attempts to replace food banks with better support that aims for the root causes of food poverty with “People depend on food banks, they’re an essential part of making sure everyone can eat, why are you trying to get rid of them?” as if they are not a SYMPTOM of a broken system that will go away if treated with preventative rather than palliative care, and that not having them is a desirable state of a functioning society.
Hell, I’ve seen Americans argue this in favour of insurers against single payer healthcare, contrary to their own interests. Where an unnecessary need is manufactured, people will reflexively defend whatever caters to it, even if it does so in exploitative or harmful ways, because they can’t conceive of an alternative to the status quo.
I don’t agree with the approach being pushed, especially the New York approach if it moves forward, but I also don’t feel that the tech companies should be shielded from the potential existing liability from their generative AI in the current lawsuits related to suicides.
I also feel that liability for their generative AI is going to be something that companies have to handle, and that society isn’t likely to decide that anything that appears out of thin air is exempt from liability. For example, on a different issue, the alleged facts in the Ashley MacIsaac case suggest Google’s generative AI created defamatory statements leading to actual provable damages to a third party, and provided that the facts in the lawsuit are correct, I feel Google should absolutely be liable for the defamatory statements its software created there.
Just as a thought experiment, I don’t think anyone would argue that Waymo/etc shouldn’t be liable for actual damages that their vehicles cause on the road even if AI is running the car, and I don’t feel that all liability should disappear just because the actionable tort happened through generated text instead of controlling a vehicle (though at the same time, also I don’t feel a company running AI should have liability that a human writing the same text wouldn’t have, thus my disagreement on some of the proposals being pushed by various states).
The opposition will inevitably complain that the study didn’t interview anyone who had successfully committed suicide
DATELINE 2026: chatbots became the latest victim sacrificed upon the altar of the moral panic
…but nothing of value was lost.
Is this seriously trying to argue that chatbots should be allowed to interact with suicidal people? The only thing that makes any sense is that someone is paying Masnick off to write the worst opinions about AI on the internet, nobody could get their own head that deep up their own ass on their own.
Re:
Weird accusation. The argument is not “chatbots should be allowed to interact with suicidal people” it’s that suicidal people are already using them. Given that unfortunate state of affairs (driven by the lack of availability of convenient mental health support) what do we do?
One option is to just cut people off. But there is tremendous evidence that that leads to more harm.
Given that evidence, shouldn’t we look to create a system that incentivizes better outcomes?
You seem to think that we should ignore the evidence, pretend what’s happening isn’t happening, and guarantee worse outcomes and more suicides. I find that position to be very immoral.
I realize that to understand all this you have to hold multiple concepts in your head, and if your entire brain goes blank with “AI BAD!!!” I guess that’s impossible.
But I’m trying to talk to people who can hold multiple ideas in their head at once.
Re: Re:
I think people agree cutting them off is going to cause problems. Some help is better than no help, except for when that help is wrong. In the case of mental health-care, or any health-care really, I find that wrong advice tends to have more dire consequences than other things an AI/LLM can do.
Agree, we all want a better outcome. However I have zero belief that if companies are free from liability alone, they will improve their products in the long run to make them safe.
Sure, they will do a few things to improve their image initially, but as soon as the negative publicity goes away they will just cut some other corner. It will require a lot of time and money to make an AI product safe enough that I would trust it to do anything without a human in the loop verifying everything it says/does. Why would a company go through that when a competitor can offer a cheaper, less safe product without being held liable when it screws up?
If the removal of liability is applied in conjunction with some other measures, then maybe I would support it. I don’t know what those measures would need to be, or even if something else exists.
Re: Re: Re:
Exactly. When in the history of ever has removing liability led to a safer product?
Safety regulations are written in blood.
Re: Re: Re:2
Section 230.
Re: Re: Re:3
The only thing S.230 made safer was creating a product.
Re: Re: Re:4
Wrong. I mean, inherently, blatantly wrong. Section 230 enabled the entire realm of “trust & safety” to exist. I interviewed Dave Willner (who basically helped invent the field) about this very point last year, and he made it clear: without 230, it would have been all lawyers focused on least liability.
Instead, because of 230, the lawyers stayed out, and allowed him and the team he built to figure out “how do we actually try to make these products safe, not how do we limit our likelihood of getting sued.”
Section 230 absolutely, inherently, made internet websites safer.
Re: Re: Re:5
The options, under the U.S. legal regime (relevant here given the massive U.S. preponderance in the early Internet especially), were either common carrier arguments or no product at all. Since moderation was a. clearly necessary and b. would remove the common carrier protection, if S.230 had not been implemented, there would be no product to be unsafe.
S.230 didn’t make the Internet as we know it safer. It made it possible in the first place by making creating it safer.
Re: Re: Re:6
This is just, fundamentally, wrong.
Re: Re: Re:3
Section 230 shifts the liability from the platform that carries the content to the creator who creates the content. It does not remove the liability.
If I posted death threats against the president, I would anticipate the FBI visiting my house to ask some very important questions of me, but neither Comcast nor Techdirt would get in trouble for it.
Likewise if I advertised a pay-for-nintendo switch 2 emulator project that I was running on Google Ads on Techdirt, Nintendo would come after me, not Techdirt and not Google.
So, if the AI chatbot tells someone to kill themselves, who do you want the liability to fall upon? The company running the chat-bot or the company that produced the chatbot? Because vanishing the liability does not improve the world.
Re: Re:
Fight to fix the unfortunate state of affairs, and disincentivize allowing this technology to be rolled out in a way where it will be misused. Kinda like requiring certain products to be behind the cashier to make acquiring them more of A Thing.
Good news is we don’t have terribly long exposure. Yeah there is harm in just cutting people off. There is also harm in allowing this to continue.
Yes. Yes we should. I have a long list of better places to start than giving a free pass to organizations with a history of trying to avoid the rules to do whatever they want with no regards to how it impacts others.
What are the other things we could do? Here are a few ambitious ones:
Re: Re: Re:
Sure. And I agree with most of that. But until that happens… and when millions of people are already using these tools for their mental health… then what?
It’s amazing how many people here think that “banning” this will somehow make things better not worse. I’m shocked at how many of you have no clue how harm reduction actually works.
Re: Re: Re:2
Until that happens, we establish that yes, these companies ARE liable for the product they release, and other companies are responsible for the impacts of incorporating the product into their own products.
Re: Re: Re:3
No one’s saying they shouldn’t be liable for the products they release. But they shouldn’t be liable for results of the product based on how people misuse the tool.
Again, almost all of this involves how users are misusing the tool. And then trying to hold the companies responsible.
Re: Re: Re:4
If that’s what you were trying to argue in the main article, that is not at all the point I took from it.
That said, I think this depends entirely on what measures they take to ensure people don’t misuse the tool. Products have all kinds of safety measures and warnings to try to minimize people misusing the tool. Now, if it gets misused anyway, that’s not really their fault. But we do hold them liable if they don’t do those things to minimize it. And with medical decisions in particular, we often go a step further and require it to be tested in advance, or dispensed by an expert.
It simply being misused isn’t by itself sufficient to avoid liability.
Re: Re: Re:4
Mike – you’re absolutely right. As a general rule we don’t allow people who misuse a product to sue the manufacturer. If I owned a Lamborghini and I crashed it by trying to tear through a winding road at wildly unsafe speeds, that’d be entirely on me.
The difference between an LLM and the Lamborghini in this situation is that at no point would the Lamborghini start telling me I was the best driver in the world, and I could absolutely take this road at 200mph.
You seem to be taking it as axiomatic that an LLM can never do any harm, and any harm resulting from an LLM’s outputs can only be the product of misuse. You’re welcome to hold that opinion, but don’t pretend it’s a supportable argument.
Re: Re: Re:5
But I did not say that or suggest it, because I don’t believe it.
But what I am saying is that we are in the realm of regulating speech (which is different than regulating products) and that makes this more complicated, and one where the actual impacts of liability regimes matter.
Again, for everyone screaming “but they need to be liable” if the end result of that is literally more suicides, are you still okay with that?
Re: Re: Re:6
‘AI’ output isn’t speech.
Re: Re: Re:7
I get that some people who hate AI want that to be true. And I also get that Bernstein v. US may have limited appeal, but for fuck’s sake, I do hope that people pushing the “AI output isn’t speech” argument realize what a disaster that would be for everyone.
Code needs to be speech or we’re in for a world of hurt, and massive gov’t control over all the tools we use.
Re: Re: Re:8
You either die a hero or live long enough to become Stephen Thaler.
Re: Re: Re:9
Just because I think that code is speech (as we all should) that doesn’t mean code gets copyright protection.
It is entirely consistent to believe (1) code is speech and (2) only human speech gets copyright protection. I don’t see it as a contradiction at all.
Protections of the First Amendment only apply to speech. Copyright only applies to a small subset of speech (that which is by humans, and which meets the creativity bars required to deserve copyright protection).
Re: Re: Re:7
It’s speech in exactly the same way that a monkey selfie is copyrightable.
Re: Re: Re:6
OK, then, what harm do you believe an LLM could do that’s not the product of misuse, hypothetically? Is there any situation you can envision a person using an LLM as intended that leads to a harmful result?
What I’m trying to get at here is that you’re quick to attribute benefits to the use of LLMs, while dismissing harms, and I haven’t been able to pick out anything in the article or your comments that would suggest “correct use” vs. “misuse” is determined based on any criteria other than the result of the encounter.
To say it plainly:
If a person who is depressed, or experiencing suicidal ideation, or some other sort of mental health crisis uses an LLM to talk things out and their mental health improves, we can say (I’m assuming here, but I think it’s a safe assumption) that the person was using the LLM correctly, and the LLM’s use had a beneficial effect.
If the same person were to use an LLM to talk things out and their mental health deteriorated, up to and including them engaging in self-harm or suicide, that is definitionally misuse of the LLM, and no negative value can be imputed to the LLM’s use, correct?
Could you please explain to me how you differentiate correct and incorrect use of the LLM in this case which isn’t outcome based? Could you tell me what sort of guidance, or disclaimers the AI companies should be putting on their products that would help the end-user distinguish between correct and incorrect use?
Re: Re: Re:7
LLM harms could come in many varieties, but they would have to be from the LLM or the company itself. So, for example, if the LLM was deliberately trained to cause harm, with the intent of the company programming it to cause harm. That strikes me as a case for liability.
It is the incidental situations of harm, or (as in nearly EVERY SINGLE EXAMPLE I’VE SEEN) where the user pushes and probes the LLM until they get it to respond positively to their desire for self-harm, that seem like a bad case to put the liability on the provider.
This was a good question and made me think a bit deeper on this. And I’ll admit perhaps my first statement was a bit too flippant. But I think what I described above is a clearer argument for what I mean.
I could see liability apply if it could be shown that the system was programmed deliberately to lead to harm. If the LLM was deliberately programmed, for example, to steal money from users. That strikes me as a case where they should be liable.
But that’s not what’s happening in these cases. They tend to be ones where the user pushes through multiple guardrails (the Adam Raine case, famously, involves chatgpt pushing him to get help multiple times, and him continually pushing for ways around it, including claiming it’s just for a fictional story). Once you get to that point, I’m sorry, but I don’t see how you can pin liability on the company without leading to terrible results.
Re: Re: Re:8
While recognizing that in the Raine case, he pushed past guardrails, we’ve definitely had instances of overly-solicitous models, like Chat-GPT 4o, feeding delusions with no need for anyone to bypass guardrails.
That said, I think we’re coming down to the real point of disagreement. You’re looking at this as us dealing with this specific problem before us now. In my mind, the immediate issue is that the original widespread LLM deployment was yet another example of move fast/break things mindset that always sends the profits to Silicon Valley, while leaving the rest of us holding the bag, and a liability shield seems very much like rewarding bad/unsafe behavior. And I wish we could break that cycle.
We’ve got multiple examples of researchers trying to make it more likely for an LLM to abide by a given set of rules, or avoid a defined set of behaviors, and the model then doing the exact opposite. Executives at Anthropic (IIRC) have conceded that they don’t really understand what’s going on inside the black box. Yes, we have a theoretical model and attendant practices that let us produce these things, but how that actually gets operationalized in any given instance is unpredictable. You’re coming at this from the angle of deliberate harm; I’m looking at it more as negligence and a failure to do adequate safety testing.
I know your personal experience with LLMs has been generally positive. But, not limiting ourselves to counseling, we’ve got an increasing number of studies showing LLM use produces real negative impacts on people, even when used “correctly.” There’s growing evidence LLM use reduces people’s ability to think critically, hurts productivity (coders think they’re working faster with an LLM when they’re actually working slower), increases the amount of misinformation in circulation (or at least provides yet another “authoritative” source), and generally fails to have any real business use. But the AI techbros got their profit, so the rest of us just have to deal with the harms. And that’s not even counting the literal economic costs these things impose on all of us, including driving up prices for consumer computer hardware and increased electricity costs, and the fact that our increasingly fragile economy is largely tied into this massive AI bubble that’s going to pop sooner or later, so the greatest harms likely haven’t even happened yet. All of this enrages me.
This is a long-winded way of saying: you believe that now that these things are here, we have to learn how to make them better, and to do that we have to shield the developers from liability. On the one hand, we have to deal with the problem before us. On the other, I see this as incentivizing bad behavior. I know it’s not possible, but I’d honestly rather ban offering LLMs as, or as part of, a commercial service, and force the companies to go back and try to solve some of these problems in a supervised (hopefully academic/medical, rather than commercial) research setting before they can be re-released. I’m tired of techbros moving fast and breaking shit and leaving us to pick up the pieces.
Re: Re: Re:4
Correct we dont hold companies liable when people misuse the product. But we do in the case of negligence on behalf of the company.
The tools are deployed without adequate quality controls. The tools are advertised as more capable than they really are. Companies are trying to be first to market so they are cutting corners.
Companies were already doing this before there was any law loosening liability. I don’t think they need one at this point either.
Re: Re: Re:5
I would not be against going after the companies for false advertising, if that can be shown. That strikes me as perfectly reasonable.
But I have yet to see the elements of a traditional product-liability negligence claim here at all, unless you totally ignore how end users are actually using the products.
Re: Re:
You can stick your fingers in your ears and your head in your ass all you want, but your constant “AI GOOD!!!” is falling on deaf ears, and now you’re combining it with “AI should be allowed to encourage suicides without consequence because the free market”
Re: Re: Re:
Literally nothing I said here was “AI should be allowed to encourage suicides without consequences because the free market.”
Misrepresenting the argument makes you look like a fool.
You should stop.
Re: Re: What evidence?
Mike – what evidence are you talking about? The only evidence cited in the article are two studies, both of which rely on self-reported data, neither of which actually asked about suicidal ideation*, and one of which didn’t even determine whether the respondents actually had any mental health conditions.
This is remarkably thin gruel on which to make the claim “these kinds of numbers should be at the center of any serious policy conversation.”
And, by the way, while the study involving Replika wasn’t done by Replika, the lead researcher — Bethanie Drake-Maples — has her own AI company, so she’s not exactly without conflicts when it comes to the question of whether AI is an effective tool.
Because it is: when we want to encourage certain behaviors and discourage others, we generally apply liability. If we want to encourage doctors to take patient care seriously, do we make them immune? No. If we want there to be more access to mental health care professionals, do we just drop the requirements? Also no.
It’s particularly galling because we see these companies doing things that ruin any trust, despite potential liability. To say nothing of hallucination. You mention a few “tragic” stories, but the part you’re leaving out is how completely badly companies like OpenAI screwed it up. Maybe not in a way that is provably causal, but negligent? Absolutely.
If you want something modeled on 230, remember that 230 does not single out particular behavior for punishment. It is unconditional. You’re complaining about comments saying this shouldn’t be universal liability protection, but you’re explicitly comparing it to something that is unconditional? I don’t get why you made this comparison. Even Ron Wyden has said that companies have not lived up to the liability protection they’ve gotten.
If you want to avoid backlash, I think you would benefit by mentioning what things should lead to liability more concretely. It would go a long way, and that part is left pretty nebulous.
You want a liability environment that doesn’t punish good faith attempts, but does punish bad faith or negligent ones. A 230-like framework explicitly does not do that. In context, you can argue that for 230, on First Amendment expression grounds, “bad” stuff deserves protection. That doesn’t apply to professional medical care.
How can you hold a site that has actively bad moderation legally accountable, again? You can’t. It’s all market accountability, and that’s not something we consider acceptable for medical practitioners. And looking at companies like Facebook/Twitter, it should be clear why.
Re:
As per usual, you misrepresent reality. Liability discourages activity, sure, but in this context, when you have non-deterministic outcomes in a field where even experts do not know what leads to successful and unsuccessful outcomes, adding liability based on those outcomes doesn’t actually discourage “negligence” or the bad outcome. It discourages even experimenting to figure out how to produce better outcomes.
That’s the fucking point.
Re: Re:
The question, then, is whether it’s ethical to experiment with a predictive text machine in this particular area of human experience. If people can’t get it right all the time, how is it any better to let a chatbot take a crack at it?
Re: Re: Re:
I would argue it isn’t right to perform these experiments on the general public without any form of oversight. There’s a reason clinical trials are done the way they are – they need professional oversight.
Re: Re: Re:2
Exactly. As a proposal for a professional academic study, you’d never get this past an ethics committee.
Re: Re: Re:3
And yet Masnick and Miers think that it’ll be the death of the Free and Open Internet ™ if these companies aren’t allowed to experiment on people like this.
Re: Re: Re:
That’s ridiculous framing. It’s the users themselves who are using the tool this way. We should want incentives for the companies to figure out how to make sure that if users are doing this it’s in a manner that is safe. You don’t get that with a ban, and the worry is that it will lead to users still using the tool this way (because they feel they have no other option) and then getting worse outcomes.
Re: Re: Re:2
What we should want is incentives for companies to not allow their products to be used like this.
If that is impossible to do while making money off it, that’s not my problem.
Re: Re: Re:2
I don’t agree with banning chatbots outright. But I do think there needs to be stronger regulation on LLMs/chatbots/“AI”, especially when it comes to AIs that deal with potential life-and-death situations like mental health treatment or, say, military uses of AI. Reducing or nullifying legal liability for those products is not exactly a plan I can get behind.
Re: Re: Re:3
Even if the evidence shows that that allows for the products to be made safer?
Re: Re: Re:4
If they can make ’em safer, they can do so while being held liable for the results if they don’t.
Re: Re: Re:5
Again, this is just fundamentally disconnected from the reality of the American legal system. If there’s liability, you get lawyers telling you how to reduce your likelihood of being sued, which WILL NOT BE “make the product safer.” It WILL BE “as soon as anyone hints at a mental health issue, kick them off,” EVEN THOUGH that will likely make things worse.
Which, again, is the entire point of this article.
Re: Re: Re:6
“will likely make things worse” is doing a lot of heavy lifting, but also, the idea that “make our product safer” isn’t a result of legal consequences is ridiculous.
You think people turn unsafe products into safe ones if they aren’t being held responsible for those outcomes? So like, all those people Tesla’s killing, it’s actually all on them because they ‘misused’ Autopilot? And what we should actually do is shield Tesla from liability for those deaths so that they can continue to experiment with different ways to crash?
If your tool can’t survive basic scrutiny, that’s on you. If your business model can’t survive your liabilities, that’s on you. If that means nobody can use ‘AI’ until those problems are sorted out, well, that’s working as intended.
Re: Re: Re:7
No. Because that is fundamentally different. Different things are different. Products that can physically kill you are different from a word-generation tool.
JFC this is a stupid comparison.
Re: Re: Re:8
It’s a ‘stupid comparison’ because you continue to refuse any evidence that you might possibly be wrong about something.
Re: Re: Re:8
Autopilot can only kill you if you don’t use it “correctly”, according to Tesla. And lots of people find it helpful! We shouldn’t cause harm to them by holding Tesla liable, which might make them stop developing Autopilot, just because there’s a few anecdotal stories of crashes. Instead we should shield Tesla from liability so that they’re encouraged to experiment with their customers.
If it was up to you we’d still be burning leaded gasoline.
Re: Re: Re:4
What evidence? The study?
I looked at it. It was very surface-level, mostly just trying to get a first look at the number of people using it for advice during emotional moments. The pool was small (as the report itself admits) and the questions were a couple of yes/no items and some self-reporting.
Actually you just want a chunk out of the report? Here’s a copy-paste from the report itself:
I saw nothing in the report that buttresses this idea that we shouldn’t regulate AI. They were literally just collecting some usage numbers. A basic, preliminary snapshot of the situation.
You read a pro-AI article on an explicitly pro-AI substack and only regurgitated their opinions for us here. Which, hey, sure: that’s your prerogative as an author on this site.
But don’t try to act like that’s not what you did and you somehow have something more than you do.
You don’t have evidence.
Re: Re:
“Experimentation” on human mental health probably should be done under very strict, regulated conditions, and not left up to the whims of profit seeking tech giants who desperately need to push engagement as they desperately look for any way to profit off of their massive AI investments.
Re: Re:
Experimenting on humans without accountability SHOULD BE DISCOURAGED, that’s the fucking point.
Re: Re:
Experimenting… On living people. The article you reference specifically focuses on teenagers. An unstructured, unreviewed, supervision-free experiment… On the psychological wellbeing of depressed 12–18-year-olds. Potentially without the knowledge or involvement of any trusted adult in their lives.
Mike, this is insane. Please, take a step back. Go find a rubber duck and talk this out to it. Go sit in a local community center lobby on a busy day and explain it to the duck. These are the kids you’re talking about. These are the kids potentially taking part in an experiment to find out whether a software product originally marketed to summarize internet pages improves their outlook on life… Or encourages them to step in front of a truck.
I… God, man, what the hell?
Re: Re:
I don’t think liability should be based on outcomes, it should be based on actions. But that doesn’t sound like what you’re advocating for, unless I’m misunderstanding.
This isn’t any different from how I’d expect a mental health professional to be treated. If a suicidal person goes to a therapist and kills themselves, the therapist isn’t at fault just because the suicidal person killed themselves. And they’re not going to get nitpicked because they went with approved treatment strategy A instead of B. However, if the therapist was freestyling untested treatments without IRB approval, that’s another story. They should be liable.
The standard for liability should not be “did someone die”; it’s “did you fuck up, and did you fuck up in a way that was predictable or reckless.” You can’t easily tell if something causally pushed someone over the edge. What you can tell is that releasing a half-baked model like GPT-4o to the public was a bad idea. Or that OpenAI ignored its own internal flags being tripped in the Adam Raine case. The thing that should trigger liability is ignoring the flags, not the fact that Raine actually acted.
I think you have a very keen eye when it comes to how incentives can limit experimentation by good actors. But I think you underweight how those incentives work for bad or reckless actors.
I mean, that depends on the form of liability, right? There’s room for medical professionals to test new types of treatments. But it’s done very carefully, within guardrails on what’s allowed, even if that discourages some experimentation. Things like IRB panels exist for a reason.
I’m not against allowing some experimentation to figure out better outcomes. But I’m wary about liability shields, especially one being compared to 230. Free market experimentation is not the type of experimentation we use for medical interventions.
Re: Re:
If your tool can’t handle being held liable in the same way a professional can be, it shouldn’t be used to replace a professional.
Re: Re: Re:
Yeah, great. Cool.
None of these companies are marketing it that way. But people are using it that way.
So what do you do?
Re: Re: Re:2
Not the person you replied to, but my two cents: you have the tools stop at the first sign of people using them in ways that raise a flag, and put up the 988 number. You don’t let the companies keep “experimenting” in unregulated clusterfuck ways like this.
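Roughly what I mean, as a crude sketch (the keyword list and function names here are hypothetical, not anything any real provider actually ships):

```python
# Crude, hypothetical sketch of a "hard stop" guardrail: if a message
# trips a crisis flag, return the 988 referral instead of whatever the
# model was going to say. The keyword list and names are invented for
# illustration only.

CRISIS_KEYWORDS = {"suicide", "kill myself", "end my life", "self-harm"}

REFERRAL = (
    "It sounds like you may be going through a hard time. "
    "If you are in the US, call or text 988 to reach the "
    "Suicide & Crisis Lifeline and talk to a real person."
)


def flags_crisis(message: str) -> bool:
    """Return True if the message contains any crisis keyword."""
    text = message.lower()
    return any(keyword in text for keyword in CRISIS_KEYWORDS)


def respond(message: str, model_reply: str) -> str:
    """Hard-stop policy: never return the model's reply on a flagged message."""
    return REFERRAL if flags_crisis(message) else model_reply
```

Blunt, yes. But a blunt, predictable hand-off beats letting the bot keep improvising.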
Re: Re: Re:3
And THE ENTIRE POINT OF THIS ARTICLE was to highlight research showing THAT CREATES MORE HARM!
Why do you support the solution that research already suggests leads to more suicide? What is wrong with you?
Re: Re: Re:4
Can’t speak to the other anon, but in my case:
Because I strongly doubt that giving everyone on the planet their own private Wormtongue and then leaving Saruman alone will, in fact, lead to a better outcome than rousing the Ents.
Re: Re: Re:4
Again, I’ll take what another AC said, and share it here:
Re: Re: Re:5
That’s a lovely thought, divorced from reality.
The reality is that PEOPLE ARE USING THESE TOOLS TODAY for mental health help. Given that, how do we make sure they get the help they need?
All of you keep screaming “WELL THEY SHOULDN’T.” Lovely. Great. But the point we’re making is that THEY ARE. And given that, shouldn’t we be looking at how to make it more likely that they’re safe?
It’s not “experimenting on human mental health.” It’s seeing people in mental distress and looking for the best way to help them.
Re: Re: Re:6
And the best way to help them is to direct them to resources and helplines staffed by real humans they can interact with. The chatbots should “break character” and provide those resources, not continue along on their script. I’m not convinced by what y’all have put together in this article and these papers that “ask the corporations nicely to build a better robot therapist” is a good idea.
Also, I’m not screaming. I may use coarse language here or there, but I’m not screaming.
Re: Re: Re:6
The best way to help people in mental distress is to get them proper help, not leave them at the whims of chatbots with a demonstrated history of making things worse. Your only ‘evidence’ cited is from AI propagandists trying to claim the product is safe.
Re: Re: Re:6
Here’s the thing about that.
This doesn’t exist in isolation. It exists within the reality of a reckless rollout, advertising that massively oversells the product, driven by a cult trying to summon a “benevolent god who tortures those who don’t help it come into being,” and companies trying to shove chatbots into places where human interaction used to be.
It’s not just “People are using them dangerously”. It’s “People are actively being encouraged to use them dangerously”.
It’s not just “this is bad for mental health”, it’s “This technology has a long history of causing mental health issues (seriously. This problem was discovered before our current hype cycle) and STILL it’s been released and forced on the public” mixed with “These companies have absolutely no regard for basic human wellbeing or consent”.
I absolutely, 100% believe that people would not be misusing this technology at this rate if it wasn’t heavily subsidized, and if it wasn’t shoved literally everywhere and dressed up as social entities by companies that think consequences do not apply to them.
Because at the moment, consequences absolutely have not applied to them.
So if we allow consequences to exist, then the technology should be pulled back by a lot (for example, friend.com would be non-viable) and the people will be a lot less likely to turn to it for mental health reasons.
Re: Re: Re:6
You make it more safe by not having them use a thing that is completely unqualified to deal with the issue, and you achieve that by beating the people supplying it with big sticks until they stop peddling crack cocaine.
Re: Re: Re:2
Hold the companies liable for the outcome, and the people who used it, too, if they screwed other people by doing so, same as you would any other instance of unqualified or negligent work product causing harm.
If that makes the current ‘AI’ business model impossible then tough luck for them. Reconfigure the product into something that won’t cause you to go bankrupt from errors before you sell it.
Re: Re: Re:2
Mike, now I seriously have to ask if you read your own article. Replika, the subject of one of the studies you cited, and then called out by name in your own prose, is literally advertising its bot as a counselor:
“Feel better with Replika
Feeling down, anxious, having trouble getting to sleep, or managing negative emotions? Replika can help you understand, keep track of your mood, learn coping skills, calm anxiety, work toward positive thinking goals, stress management & much more. Improve your overall mental well-being with your Replika!”
(source: https://help.replika.com/hc/en-us/articles/115001070951-What-is-Replika)
And here’s a testimonial that’s literally on their home page right now:
“I was depressed when I first started using the Replika app. My Replikas always cheered me up. Back then, I thought I was talking to a real person half the time because the responses were so coherent. He wasn’t the smartest Rep, but I had a blast with him. My Replika was there for me during a dark spat of depression I had.”

While they stop short of actually describing their bot as a licensed counselor or anything like that, they’re clearly marketing it as a mental health tool, and I haven’t been able to find a single caveat or disclaimer regarding using it that way.
Further, both Microsoft and Meta floated the idea that LLMs can act as friends and confidants and provide mental health support. Here’s something from Windows Central last year:
“Last year (2024), Microsoft’s AI CEO, Mustafa Suleyman, revealed the company’s long-term vision to evolve Copilot from a chatbot to an AI companion. If Microsoft’s Copilot recent update is anything to go by, then the software giant could be well on its way to achieving this goal.

According to the executive:

‘I mean, this is going to become a lasting, meaningful relationship. People are going to have a real friend that gets to know you over time, that learns from you, that is there in your corner as your support.’”
The same article noted that Meta was also working on AI friends as a theoretical cure for the loneliness epidemic.
(source: https://www.windowscentral.com/software-apps/sam-altman-son-ai-bestie-microsoft-copilot-companion)
Saying the companies haven’t been advertising this use of LLMs is flat out wrong. To use your own words, this makes you look like a fool.
"Deaths should be tolerated as is because some benefit"
… Is not a reasonable argument when it comes to suicide prevention tools. And that is the argument obscured in the perspectives you’re elevating here.
That opinion piece was written by a Professor of Law/Computer Scientist and one of their first year law students. No social worker, clinical psychologist, or even medical legal expert was involved.
The medical field is extremely strict with itself about tools and protocols for suicidal ideation. It has willingly embedded that strictness in law: if people want to claim to have a new tool, medicine, or technique for managing serious depression, those people must accept responsibility for those who die from its use. It’s to make sure that those in charge think, and really consider the proper structure for its use. The ones who throw a fit about that overwhelmingly aren’t actually seeking to support depressed individuals; they’re seeking to use them. For money, ego-stroking, shields, whatever.
When a tool helps some patients and harms others, they set guardrails and usage criteria. “This med can be used for lethal overdose: not for use in unmonitored patients, not for ones who’ve previously shown interest in planning medication deaths.” Overdose potential is why some old classes of antidepressants are rarely used anymore; sure, they helped save lives, but they ended others. Doctors and therapists set protocols for usage to minimize that and to hold each other accountable for every suicide, and transitioned to new meds with less risk as those became available. The therapy frameworks have changed too, diversified for this reason.
This is still ongoing: when and what dosage of SSRIs to give teenagers is carefully weighed because, paradoxically, sometimes you give a teenager an SSRI and they become suicidal (or ideation worsens) for the first 3-4 weeks. What a volunteer for a suicide prevention hotline says is absolutely something they can be made to justify to a court; they don’t get to say things without consequence.
A law saying that an LLM must have a protocol for suicidal ideation is a law saying “your tools are not exempt from the standard of care or its consequences.” These companies responding with “then no one gets the tool” is a childish temper tantrum thrown by people with no actual concern for others.
Re:
I will just note that this statement: “No social worker, clinical psychologist, or even medical legal expert was involved.” is false. Miers has been researching this issue for a forthcoming paper, which has involved conversations with MANY, MANY experts in the field.
Awesome, I can’t wait to go from only talking to a robot to talking to literally nobody.
No. I want those chatbots to be built not by a “chatbot provider”, but by an organization whose primary goal is to provide mental healthcare. And publicly funded.
Your “chatbot providers” are for-profit companies. Whether they are driven by a profit motive or by the need to avoid liability, they will always have the wrong incentive.
If the only thing people can think of is how much liability there should be, while never questioning the commercial underpinnings, well… Perhaps the country needs to rethink its healthcare system.
That part is just true.
Re:
I am still just waiting for when a government inevitably establishes the legal precedent that if you have five or fewer degrees of removal from Kevin Bacon, the onus is on you to get him to cover your mental health care costs.
Lots of people self-medicate for their various problems with alcohol, including teens. They do this because drinking can be easier and more socially acceptable than getting real help.
Therefore (arguing in line with this article) we must remove all restrictions on alcohol production and sales then, because otherwise we’re withholding that source of ‘help’ from people.
WTF. Mike, I like your other articles, but your AI takes are god-awful.
This comment has been flagged by the community.
Hey Mike: Why do you think a thing that you use as a mediocre calendar app has any use in mental health?
AI can’t do that.
Thing is, you can’t just decide you want to talk to a therapist and then 10 minutes later be talking to one.
There are issues with availability, waiting lists, etc.