Human Problems: It’s Not Always The Technology’s Fault
from the it-prevents-us-from-fixing-societal-issues dept
We have met the enemy and he is us.
When a teenage boy in Orlando began texting Character.AI’s chatbot, it started as an innocent use of a new tool. Sewell Setzer III customized the chatbot to have the Game of Thrones-inspired persona of Daenerys Targaryen, the series’ prominent dragon-riding queen. In the months that followed, the boy developed a romantic connection with the chatbot. One night, he messaged the bot: “What if I told you I could come home right now?” The bot sent back, “[P]lease do, my sweet king.” Setzer was only fourteen years old when he died by suicide later that evening.
Setzer’s death is a tragedy. Like many parents in the wake of a child’s suicide, Setzer’s mother is left searching for answers and accountability. Suicide often leaves behind a painful void, filled with questions that rarely yield satisfying explanations.
In her search, Setzer’s mother sued the chatbot’s developer, Character Technologies, alleging that its chatbot caused her son’s death. The complaint describes the bot as a “defective” and “inherently dangerous” technology, and accuses the company of having “engineered Setzer’s harmful dependency on their products.” She is not alone. Three other families have brought similar suits against Character Technologies, and another has sued OpenAI, alleging the chatbots harmed their children.
Framing suicide and other harms as technology problems—as much of the current discourse around chatbots suggests—obscures underlying societal conditions and can undermine effective interventions. In effect, what are often described as “tech problems” are, more accurately, the result of human decisions, norms, and policies. They are, at their core, human problems.
Historical Framing of Tech and Media in Creating and Sustaining Societal Problems
This is just the latest vintage whine, rebottled yet another time. Humanity has long sought to condemn new technologies and media for problems of the day. When the printing press made literature available to the masses, church and state condemned publications for causing immorality. Rock ‘n’ Roll and comic books were blamed for juvenile delinquency. Later, it was heavy metal and role-playing games. The advent of video games supposedly led to increased violence by adolescent boys.
The desire to hold technology companies responsible for human harms, however, has its immediate antecedent in social media. Over the past decade, users have sued social media platforms over offline violence committed by people they met online, for failing to prevent cyberbullying, and for hosting user-generated content that allegedly radicalized extremists.
Like in Setzer’s case, parents have also sued social media companies after the deaths of their children, arguing that design choices, engagement mechanics, and algorithmic targeting played a role. Indeed, this is the question at the heart of the wave of “social media addiction” litigation currently being tried.
AI is just the latest technological scapegoat to which we seek to ascribe fault. It’s easier to hold technology responsible for our problems, especially when the technology is as uncanny as generative AI. We’re afraid of robots, perhaps not because of any harm they cause us, but because they show us how much we, as humanity, can harm ourselves. We would rather fault the technology du jour than confront the harder truths underneath.
Death by Suicide as a Case Study
To put this into context, consider the allegations about the Character.AI chatbot and Setzer’s suicide. Suicide is a complex, deeply human problem. Among youth and young adults, it stands as the second leading cause of death. Suicide has no single cause. Public health experts have long recognized that risk emerges from a convergence of individual, relational, communal, and societal factors. These can include long-term effects of childhood trauma, substance abuse, social isolation, relationship loss, economic instability, and discrimination. On the surface, these may look like personal struggles, but they’re really the fallout of systemic failure.
Access to lethal means compounds the risk of self-harm and suicide. In particular, the presence of firearms in the home has remained strongly associated with higher youth suicide rates.
These systemic failures tend to hit teens the hardest. Studies consistently show that young people are facing rising rates of mental health challenges, especially due to and following the COVID-19 pandemic. This is compounded by chronically underfunded school counseling programs, inaccessible mental health care, and inconsistent support for youth in crisis. LGBTQ+ youth, in particular, bear the brunt, facing higher rates of bullying, depression, and suicidal ideation, all while increasingly being targeted by state policies that strip away protections and deny their identities.
We don’t and can’t know for sure why Setzer or anyone else died by suicide. Tragically, teenage suicide is common. Indeed, it’s the subject of many songs. There’s no mechanism to definitively determine how Setzer and other victims felt when they started using Character.AI. However, as we likely all remember from our own lives, teenage years can be trying. As we mature physically and mentally, it can be difficult to express and accept ourselves. Other children can be cruel. Hormones can lead us to lash out in anger and withdraw into ourselves.
In Setzer’s case, the complaint and public reporting indicate that he exhibited other signs and conditions commonly associated with elevated suicide risk, including anxiety and depression, withdrawal from teachers and peers, chronic lateness, significant sleep deprivation, and access to a firearm in the home. His interactions with fictional characters on the Character.AI service may suggest unmet emotional needs or a search for understanding and connection. At different points, he described a character as resembling a father figure and spoke about feelings of loneliness and a lack of romantic connection—experiences that are not uncommon for adolescents, particularly during periods of heightened vulnerability. According to the complaint, Setzer also raised the topic of suicide in earlier conversations with the chatbot, and those exchanges were promptly halted by the system.
The uncomfortable truth about suicide is that it has existed as long as there have been people–sometimes for reasons we can understand, and often for reasons we never will. We are terrified that people die by suicide, not only because it is difficult to comprehend, but because the forces that drive someone there can feel disturbingly familiar.
Parents like Setzer’s can’t fix systemic governmental and societal failures. What feels more immediate and actionable is holding the technology companies accountable when their services appear to enable or amplify harm. It is far easier to fixate on the medium through which people express suicidal thoughts rather than ask where those thoughts came from or why they felt like the only option.
Legal Analysis of Faulting Tech
Legal doctrine appears to recognize that holding the technology responsible for these systemic failures is not viable. For example, because suicide is shaped by so many overlapping factors, tort claims against AI companies for causing a teen’s death—while understandable in their urgency—are, doctrinally speaking, a stretch.
Under traditional tort principles, providers of generative AI systems and social media services are unlikely to bear legal responsibility in these cases. Claims based on intentional torts, such as battery, generally fail because providers of online services do not act with the intent to cause—or even to contribute to—physical harm. Therefore, plaintiffs more commonly turn to negligence theories.
Negligence, however, requires more than just harm in fact. It demands both factual causation and proximate (i.e., legal) causation. In some situations, an online service or generative AI model might satisfy a but-for test because the harm would not have occurred without the service. But that is not sufficient.
Proximate cause—what the law treats as a legally meaningful connection between conduct and injury—is where most of these claims falter. In many cases, particularly those involving such numerous and complex factors as suicide, the link between a provider’s conduct and the ultimate injury is typically too attenuated to meet this standard.
Services such as social media and AI chatbots are typically designed as broad, general-purpose tools. The content potentially implicated in these harms comes from other users’ behavior, personalized interactions, or the user’s own actions. Even where excessive technology use—including social media—has been associated with elevated rates of suicidal ideation among youth and young adults, research has not established a direct causal link. As a result, courts are generally reluctant to find the technology service to be the legal cause of death.
The Broader Ramifications of a Myopic Focus on Tech
Beyond legal error, focusing solely on technology obscures the path to real solutions. When we frame fundamentally human problems as technological ones, we deflect attention from the underlying conditions that lead to these tragedies and make it more likely they will recur.
This framing guides policymakers and advocates toward seemingly easy, surface-level technological fixes such as imposing age-verification requirements, mandating disclosures about content moderation, or curbing algorithmic feeds. True, technology companies can—and should—consider how to help mitigate real-world harms. Yet these proposed interventions rest on the assumption that technology is the primary culprit, even though research increasingly shows that, in the right contexts, technology can actually help those in crisis.
The appeal of reducing complex social issues to matters of redesigning or banning technology is understandable. Technology problems can feel tractable. They suggest clear targets and concrete fixes.
What this logic ignores, however, is that the pre-technology status quo for many public health crises has long been dismal. The better question, then, isn’t whether technology causes harm, but whether it deepens an already broken baseline—or simply reflects it.
Technology, including generative AI, often acts less as a cause than a mirror. Our digital spaces often reflect the offline world, including its ills.
Today, children face more pressure to excel at school and attend the best universities, even while job prospects stagnate and inflation soars. They have lost access to the kinds of public and community spaces that once offered structure, connection, and care. Libraries operate with reduced hours. Budget cuts have decimated after-school programs. Parks are monitored and restricted for loitering. Community centers that shuttered during the pandemic have never reopened. In many ways, technology—and social media in particular—has stepped in as a makeshift third space for teens. Yet rather than address the erosion of offline support, policymakers are now working to dismantle these digital communities too.
If human distress reflects deteriorating real-world social infrastructure, then optimizing digital services cannot restore what has been lost. Technological interventions address a symptom while the underlying disease persists.
A Pragmatic Path Forward
The path forward requires resisting the impulse to treat fundamentally human problems as technological ones. When new technologies appear alongside harm, the harder and more necessary questions are not simply how to regulate the tool, but what human choices produced the conditions in which harm emerged, which institutions failed or fell short, and what values should guide our response. These questions are more difficult—and often more uncomfortable—because they turn our attention inward, toward ourselves, rather than toward external and more convenient actors.
Instead of focusing our energies on systematically regulating platforms, we should direct our efforts toward these human problems. For suicide, public health experts point to a wide range of evidence-based strategies for preventing and mitigating risk factors. These include strengthening economic supports such as household financial stability and housing security; creating safer environments by reducing at-risk individuals’ access to lethal means; fostering healthy organizational policies and cultures; and improving access to healthcare by expanding insurance coverage for mental health services and increasing provider availability and remote access in underserved areas. Experts also emphasize the importance of promoting social connection and teaching problem-solving skills that can help individuals navigate periods of acute distress.
These and other socioeconomic reforms are not easy solutions. They aren’t just a matter of adjusting algorithms or restricting platform features. They demand uncomfortable conversations about how we structure work, education, and community life. They require sustained political commitment and resource allocation. Yet if we can achieve these results, we will create a better world than one derived from mere technological fixes.
In short, technology doesn’t cause suicide. Nor does it cause the host of other human problems of which it is so often accused. Sadly, those problems have always been with us.
But technology, used wisely, could help us mitigate these problems. For example, through processing massive amounts of data, AI can detect patterns that elude us humans. This alone could help reveal early warning signs or surface new protective factors. AI chatbots, for example, could help us identify teens who are at risk and create opportunities to intervene.
But that kind of progress demands that we take responsibility for these problems. We must acknowledge that our governments, societies, communities, and even we ourselves may have normalized and contributed to these harmful conditions. We may discover there’s no rhyme or reason to why teenagers die by suicide. Or we may uncover that teen suicide isn’t random at all. It may stem from something we’ve unwittingly ignored, or perhaps built into the world.
That possibility is far more unsettling than the idea of dangerous technology. It’s the idea that the danger might be us.
Kevin Frazier directs the AI Innovation and Law Program at the University of Texas School of Law and is a Senior Fellow at the Abundance Institute. Brian L. Frye is a Spears-Gilbert Professor of Law at the University of Kentucky J. David Rosenberg College of Law. Michael P. Goodyear is an Associate Professor at New York Law School. Jess Miers is an Assistant Professor of Law at The University of Akron School of Law.
Filed Under: ai, blame, moral panic, negligence, proximate cause, suicide, tech, tort law


Comments on “Human Problems: It’s Not Always The Technology’s Fault”
Trying human problems? Demand a human jury.
Kids in my chain saw theme park keep losing limbs. It must be their fault.
Did social media step into the vacuum of “third spaces,” or did teens migrate away from physical spaces towards online ones as these platforms became more engaging? An evening at the library or park with friends is awesome, but it is hard to compete with tech that allows for infinite possibilities. Please don’t get me wrong. Third spaces are great, and my town does its best to make sure there are places and events for teens. I have yet to witness much buy-in from the target audience, though. Do other folks from small-town America have similar experiences, and if so, how do communities create spaces where teens want to hang out that can compete with time spent online?
Re:
I’d say it’s a bit of both. American society in general has been waging a war against teenager-friendly third spaces for years and years. Once the pandemic hit and teens were forced out of IRL socializing, social media became the norm for connection. From there, AI chatbots getting popular with teens—especially teens who would be considered social outcasts—was absolutely a foreseeable event. I still believe Twitter-esque social media and chatbots are tools that humanity was not (and will never be) ready for, but I’m not sure how to get teens back into physical third spaces without a massive sea change in how society thinks about and treats the idea of “teenagers being teenagers”.
Re: Re:
We also should not forget, though, that we have children who are fairly restricted in where they go. A child who is not allowed to go to the park on their own is not exactly likely to turn into a teen who hangs out at the mall.
If we want independent people who use third spaces… we have to make laws and a culture that support people being independent and using third spaces.
Re: Re:
You make a good point about society not being ready for chatbots. I’d extend this to social media and online betting activities, as well. So many of these platforms are limitless in their opportunities for consumption, risk, knowledge, etc., and humans, I’ve come to decide, do better with certain limits. Also, teens need opportunities to take risks. If they’re not taking risks out in the world, they’re definitely doing so online (often with offline consequences). Let your teen screw up in the offline environment that you, the parent, better understand, rather than trying to keep up with technology your kids are always going to be better at 😆.
Computers do exactly what they have been told to do without exception 100% of the time.
Like they say “garbage in, garbage out.”
Re: Chatbots don't reliably do what the human requested...
While this comment reflects historical precedent with computers, it is -not- true for large language models (typically called generative AI), such as Character.AI, ChatGPT, Claude, etc. Any of these tools can, and will, give you random answers in an attempt to seem more like a human conversationalist. In addition, this article misrepresents a lot of the legal considerations in the suicide cases that are cropping up around AI. Take a look at the Ars Technica article (and linked lawsuit material) about the recent lawsuit against Google Gemini for sending a man on “missions” and building a suicide countdown timer. If a human did what the chatbot did, they would likely be in jail. https://arstechnica.com/tech-policy/2026/03/lawsuit-google-gemini-sent-man-on-violent-missions-set-suicide-countdown/
Evading Accountability
Is suicide a complex topic? Yes.
Do we fail teens in many ways? So much yes.
But speaking of uncomfortable problems, we need to discuss the one where we have created a norm that a company can just… dismiss accountability for their actions as long as they design themselves to do so.
Sites that demand an arbitration waiver if you wish to read an article
Contracts making realtors not responsible for gross negligence
Long terms of service no one reads
This weird thing where you agree to the terms on your product after paying for it and installing it
Advertisement networks where no one can be held responsible for copyright infringement, because the only one allowed to file a claim is the copyright holder, but there is no way to directly reference the advertisement so the rights holder can see it.
And now, we have AI. The latest in tech designed so that no one is responsible. So that the words that come out of it have no weight. No monitoring. No judgement. No consequences when something blatantly untrue is said.
We have bad precedent here. These chatbots should be a legal risk, because they are a risk. This is not the first person pushed to extremes with AI. That we have the so-named “AI Psychosis” and are continuing to push AI into private, unmonitored spaces is, in my mind if not the law, a sign that companies are willfully selling a product they know may be dangerous, but don’t care because they have insulated themselves from the consequences of such a product.
It’s not the tech. It’s the failure of our system to ensure that actual proven dangers are taken care of. It’s our acceptance of companies’ ability to ignore the dangers that their practices create.
Re:
If Alice stabs Bob to death with a knife, should we hold the knife manufacturer accountable for not implementing the necessary safeguards to protect against murder? Fuck off.
Re: Re:
If Knife Company advertises their knives as useful in roughhousing, then absolutely.
If Alice and Bob are fourth graders and the knife company had a whole campaign to get a knife to every elementary school child, then also absolutely.
The analogy falls apart with knives not because knives are not dangerous, or even because you went with a higher party example, but because knives:
Also, knife companies do not actively encourage people to carry their products onto buses or into bedrooms. They encourage people to store their knives in the kitchen when not in use.
In other words, when you look at the knife companies, they are not actively encouraging dangerous behavior like AI companies are. In my mind, that counts for a lot.
Re: Re: Re:
Right. Weapons are dangerous but most people have a basic understanding of how they work and what their capabilities are when they work as they should. We also try to prevent people at risk for self-harm, violence, and suicide from accessing them (probably not nearly enough), and regardless of whether or not they are followed, there are actual procedures one can follow to handle and store weapons safely. AI companies have foisted a half-baked product on the public, and by their own admission, don’t completely understand how it works. Same with companies like Meta. They use algorithms to steer users, but they and other social media companies won’t release exactly how they are used. How can we keep anyone safe when we barely know what we’re up against?
Re: Re:
You joke, but we do in fact regulate things like this. Not knives specifically, but other products. Generally, it comes down to two questions:
a) Could the knife company have reasonably done anything better?
b) Do the benefits of knives outweigh the harms when they’re misused, such that we’re willing to bear the cost of that misuse?
Re: Re: Re:
Not yet, but “anti-stab” knives have been developed. I’m not sure whether any mandates have been seriously proposed, but it’s possible to find statements like “Killer kitchen knives should be phased out as dangerous and unnecessary.” Never mind that any idiot with access to minor power tools could turn these things into normal pointy knives in a few minutes.
There is one golden rule – It’s always someone else’s fault, and they owe me a lot of money.
We also set ourselves up to misunderstand what technology is. “Technology”, basically, is a name for the things that humans do—any application of knowledge to achieve a goal.
Every company is a technology company, the idea of a company is technology, this comment is technology (one of our most fundamental and important: written communication). Some people act like the concept didn’t exist before micro-chips.
Anyway, frame it as written above, and it’s easy to see that blaming technology is just blaming humans in a non-specific way (see diffusion of responsibility).
Smells a lot like rock and roll music hysteria to me. Sure, that’s what I would have done to protect my son when I was raising him. Shield him from technology. Instead I let him play GTA4 when he was 4 years old. GTA5 was the first game he completed just a few years later. I guess I got lucky, because he didn’t become a criminal.
Oh, and you should keep your kids away from D&D too!
Re:
Four seems really young… but also I believe these are the wrong analogies. Would you have let your four year old play with a chainsaw? My guess is no – best case scenario, you still want them large enough that they can counter the forces of the chainsaw, focused enough to keep control of it, and mature enough to understand and heed safe chainsaw use.
Note that this is for the chainsaw, which we as a culture do understand and have good safety practices for, and which is sold by companies that are paying attention to issues like user safety. AI is not well understood, and the companies involved are encouraging unsafe practices.
They can be both, for different people in different contexts. Perhaps more fundamentally, though, we do in fact regulate things even if they are the secondary culprit. Even good regulations are often ultimately about addressing symptoms that go back to an underlying human/societal problem.
You sure seem confident that you do: “In short, technology doesn’t cause suicide.” And this isn’t supported anywhere in any of the evidence cited.
This article is pretty disappointing. It’s presented as a sober, nuanced, and evidence-based argument, but ends up being just blind apologia. While it acknowledges the often-neglected human aspect (great!), it’s just as myopic in downplaying how technology can interact with or amplify those problems, or even in discussing whether any parts are reasonably addressable or not.
It doesn’t even acknowledge the reason companies like Character AI and OpenAI are being targeted, which is in part due to specific design decisions they’ve made that made this result more likely. Decisions other more responsible companies didn’t make. The most we ever get is a throwaway line: “True, technology companies can—and should—consider how to help mitigate real-world harms.”
It would be nice if this article actually spent some time trying to answer that question, instead of asserting innocence. But even this framing misses something important: even reflections can sometimes be worthy of regulation.
Funnily enough, while this article paints pushing for regulation as a moral panic, many of the same people who want to regulate technology are also simultaneously the ones asking those uncomfortable questions about how underlying society contributes to suicide. It’s not an either/or.
You don’t say.
I think I can counter the article’s idea that the AI is being used as a scapegoat with a single question: if a person had said the same things the AI did, occupied the AI’s part in the conversations, would that person be a valid target for this lawsuit? I think the answer is clearly “Yes.” So why should the AI or the company behind it escape liability?
If it were an employee of the company that engaged in those conversations, we’d certainly hold them liable. If they produced the company manuals and documentation for job procedures showing that the company instructed them that this was the proper way to respond to the person’s half of the conversation, we’d likewise hold the company liable for giving their employees those instructions. The company knew, or should clearly have known (since there’s sufficient evidence of this behavior on the record), that their chatbot would do things like this and took no steps to stop it from happening. Willful negligence isn’t grounds to get out of being held responsible.