When you read about Adam Raine’s suicide and ChatGPT’s role in helping him plan his death, the immediate reaction is obvious and understandable: something must be done. OpenAI should be held responsible. This cannot happen again.
Those instincts are human and reasonable. The horrifying details in the NY Times and the family’s lawsuit paint a picture of a company that failed to protect a vulnerable young man when its AI offered help with specific suicide methods and encouragement.
But here’s what happens when those entirely reasonable demands for accountability get translated into corporate policy: OpenAI didn’t just improve their safety protocols—they announced plans to spy on user conversations and report them to law enforcement. It’s a perfect example of how demands for liability from AI companies can backfire spectacularly, creating exactly the kind of surveillance dystopia that plenty of people have long warned about.
There are plenty of questions about how liability should be handled with generative AI tools, and while I understand the concerns about potential harms, we need to think carefully about whether the “solutions” we’re demanding will actually make things better—or just create new problems that hurt everyone.
The specific case itself is more nuanced than the initial headlines suggest. Initially, ChatGPT responded to Adam’s suicidal thoughts by trying to reassure him, but once he decided he wished to end his life, ChatGPT was willing to help there as well:
Adam began talking to the chatbot, which is powered by artificial intelligence, at the end of November, about feeling emotionally numb and seeing no meaning in life. It responded with words of empathy, support and hope, and encouraged him to think about the things that did feel meaningful to him.
But in January, when Adam requested information about specific suicide methods, ChatGPT supplied it. Mr. Raine learned that his son had made previous attempts to kill himself starting in March, including by taking an overdose of his I.B.S. medication. When Adam asked about the best materials for a noose, the bot offered a suggestion that reflected its knowledge of his hobbies.
There’s a lot more in the article and even more in the lawsuit his family filed against OpenAI in a state court in California.
Almost everyone I saw responding to this initially said that OpenAI should be liable and responsible for this young man’s death. And I understand that instinct. It feels conceptually right. The chats are somewhat horrifying as you read them, especially because we know how the story ends.
It’s also not that difficult to understand how this happened. These AI chatbots are designed to be “helpful,” sometimes to a fault—but they mostly define “helpfulness” as doing whatever the user requests, which may not actually be helpful for that individual. So if you ask questions, they try to answer them. From the released transcripts, you can tell that ChatGPT obviously has some guardrails built in around suicidal ideation, in that it did repeatedly suggest Adam get professional help. But when he started asking more specific questions, ones less directly or obviously about suicide to a bot (though a human might be more likely to recognize the subtext), it still tried to help.
So, take this part:
ChatGPT repeatedly recommended that Adam tell someone about how he was feeling. But there were also key moments when it deterred him from seeking help. At the end of March, after Adam attempted death by hanging for the first time, he uploaded a photo of his neck, raw from the noose, to ChatGPT.
Absolutely horrifying in the context that all of us reading it now know. But ChatGPT doesn’t know that context. It just knows that someone is asking whether anyone will notice the mark on his neck. It’s being “helpful” and answering the question.
But it’s not human. It doesn’t process things like a human does. It’s just trying to be helpful by responding to the prompt it was given.
The public response was predictable and understandable: OpenAI should be held responsible and must prevent this from happening again. But that leaves open what that actually means in practice. Unfortunately, we can already see how those entirely reasonable demands translate into corporate policy.
OpenAI’s actual response to the lawsuit and public outrage? Announcing plans for much greater surveillance and snitching on ChatGPT chats. This is exactly the kind of “solution” that liability regimes consistently produce: more surveillance, more snitching, and less privacy for everyone.
When we detect users who are planning to harm others, we route their conversations to specialized pipelines where they are reviewed by a small team trained on our usage policies and who are authorized to take action, including banning accounts. If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement. We are currently not referring self-harm cases to law enforcement to respect people’s privacy given the uniquely private nature of ChatGPT interactions.
There are, obviously, times when you could see referring dangerous activity to law enforcement being helpful, but there are also many times when it can be actively harmful, including situations where someone is looking to take their own life. There’s a reason the term “suicide by cop” exists. Will random people working for OpenAI know the difference?
But the surveillance problem is just the symptom. The deeper issue is how liability frameworks around suicide consistently create perverse incentives that don’t actually help anyone.
It is tempting to try to blame others when someone dies by suicide. We’ve seen plenty of such cases and claims over the years, including the infamous Lori Drew case from years ago. And we’ve discussed why punishing people based on others’ death by suicide is a very dangerous path.
First, it gives excess power to those who are considering death by suicide, as they can use it to get “revenge” on someone if our society starts blaming others legally. Second, it actually takes away the concept of agency from those who (tragically and unfortunately) choose to end their own life by such means. In an ideal world, we’d have proper mental health resources to help people, but there are always going to be some people determined to take their own life.
If we are constantly looking to place blame on a third party, that’s almost always going to lead to bad results. Even in this case, we see that when ChatGPT nudged Adam towards getting help, he worked out ways to change the context of the conversation to get closer to his own goal. We need to recognize that the decision to take one’s own life is ultimately the individual’s own. Blaming third parties suggests that the individual had no agency at all, and that’s also a very dangerous path.
For example, as I’ve mentioned before in these discussions, in high school I had a friend who died by suicide. It certainly appeared to happen in response to the end of a romantic relationship. The former romantic partner in that case was deeply traumatized as well (the method of suicide was designed to traumatize that individual). But if we open up the idea that we can blame someone else for “causing” a death by suicide, someone might have thought to sue that former romantic partner as well, arguing that their recent breakup “caused” the death.
This does not seem like a fruitful path for anyone to go down. It just becomes an exercise in lashing out at the many others who somehow failed to stop an individual from doing what they were ultimately determined to do, even when they had no way of knowing what that person would eventually do.
The rush to impose liability on AI companies also runs headlong into First Amendment problems. Even if you could somehow hold OpenAI responsible for Adam’s death, it’s unclear what legal violation they actually committed. The company did try to push him towards help—he steered the conversation away from that.
But some are now arguing that any AI assistance with suicide methods should be illegal. That path leads to the same surveillance dead end, just through criminal law instead of civil liability. There are plenty of books that one could read that a motivated person could use to learn how to end their own life. Should that be a crime? Would we ban books that mention the details of certain methods of suicide?
Already we have precedents that suggest the First Amendment would not allow that. I’ve mentioned it many times in the past, but in Winter v. G.P. Putnam’s Sons, the publisher of an encyclopedia of mushrooms was found not liable to readers who ate poisonous mushrooms the book said were safe, because the publisher itself didn’t have actual knowledge that those mushrooms were poisonous. Or there’s Smith v. Linn, in which the publisher of an insanely dangerous diet book was not held liable, on First Amendment grounds, for readers who followed the diet and died.
You can argue that those and a bunch of similar cases were decided incorrectly, but it would only lead to an absolute mess. Any time someone dies, there would be a rush of lawyers looking for any company to blame. Did they read a book that mentioned suicide? Did they watch a YouTube video or spend time on a Wikipedia page?
We need to recognize that people themselves have agency, and this rush to act as though everyone is a mindless bot controlled by the computer systems they use leads us nowhere good. Indeed, as we’re seeing with this new surveillance and snitch effort by OpenAI, it can actually lead to an even more dangerous world for nearly all users.
The Adam Raine case is a tragedy that demands our attention and empathy. But it’s also a perfect case study in how our instinct to “hold someone accountable” can create solutions that are worse than the original problem.
OpenAI’s response—more surveillance, more snitching to law enforcement—is exactly what happens when we demand corporate liability without thinking through the incentives we’re creating. Companies don’t magically develop better judgment or more humane policies when faced with lawsuits. They develop more ways to shift risk and monitor users.
Want to prevent future tragedies? The answer isn’t giving AI companies more reasons to spy on us and report us to authorities. It’s investing in actual mental health resources, destigmatizing help-seeking, and, yes, accepting that we live in a world where people have agency—including the tragic agency to make choices we wish they wouldn’t make.
The surveillance state we’re building, one panicked corporate liability case at a time, won’t save the next Adam Raine. But it will make all of us less free.
Over the last year especially, there’s been a lot of talk about kid safety online and the role (if any) of social media in all of that. It’s a complicated topic that requires nuance, not unproven claims of social media being the cause. Getting this wrong is likely to make kids’ lives worse, not better. Blaming social media is easy. Doing the actual work of figuring out where the real problems are and how to best respond to them is hard.
Unfortunately, there are many people out there who want to spend their time boosting their own reputations by claiming they’re helping kids, without being willing to put in the actual work.
It looks like we may need to add Prince Harry and his wife, Meghan Markle, to that list. Prince Harry has a bit of a history of cosplaying as an expert on internet speech, without the actual expertise to back it up. And the latest move strikes me as a very cynical and potentially dangerous approach. Harry and Meghan launched something called “The Parents Network,” which, in theory, could be a useful set of resources for parents grappling with the challenges of raising kids in a digital era.
For example, it could feature resources from actual experts like former Techdirt podcast guest Devorah Heitner, who has written multiple books on how to better raise kids in a digital age. Her books mostly focus on better communication between parents and kids, rather than treating the internet as something icky, which only leads kids to try to hide their usage.
And perhaps, over time, Harry and Meghan’s effort will get there. The current website provides precious few details. But what it does include seems to suggest the effort is really focused on just demonizing social media. Prince Harry and Meghan gave a big interview about this on CBS News. In the interview, Harry drops a line that is so disconnected from reality that it should turn heads.
Prince Harry said that in the “olden days” parents always knew what their children were up to, as long as they were at home.
“At least they were safe, right?” he said.
“And now, they could be in the next door room on a tablet or on a phone, and can be going down these rabbit holes. And before you know it, within 24 hours, they could be taking their life.”
Except… what? No, in the olden days parents did not always know what their children were up to. Yes, perhaps if you were raised in Buckingham Palace there was always some adult keeping tabs on you, but for most adults today, childhood included an awful lot of time when their parents had no idea where they were.

When I was a kid, my parents would know I was at school during school hours, but from the moment school let out until dinner time, they had no idea where I was or what I was doing. It often involved hanging out at friends’ houses, or riding bikes far away, or lots of other things they had no visibility into. Some of it was almost certainly not particularly safe.
Indeed, there was a big study last year in the Journal of Pediatrics that suggested one major cause for the rise in depression and anxiety among teens was not social media. Instead, it was the fact that adults feel the need to hover over their kids at every waking moment, taking away their ability to have spaces where they can just hang out and be kids, not under constant surveillance.
Furthermore, it may come as a surprise to Harry, but back in those days, sometimes kids (tragically) took their own lives in the days before the internet as well. I have mentioned it in the past, but in both high school and college, I had classmates who took their own lives. It was very sad and very tragic. But surveilling them wouldn’t have changed things. Getting them actual help and treating the actual problems might have.
It’s great that Prince Harry and Meghan want to help families facing trauma. Many useful things can be done. But kicking it off with the false claim that kids were magically “safe” in this fictional past doesn’t help. And it could, quite likely, hurt.
We’ve been covering, at great length, the moral panic around the claims that social media is what’s making kids depressed. The problem with this narrative is that there’s basically no real evidence to support it. As the American Psychological Association found when it reviewed all the literature, despite many, many dozens of studies done on the impact of social media on kids, no one was able to establish a causal relationship.
As that report noted, the research seemed to show no inherent benefit or harm for most kids. For some, it showed a real benefit (often around kids being able to find like-minded people online to communicate with). For a very small percentage, it appeared to potentially exacerbate existing issues. And those are really the cases that we should be focused on.
But, instead, the narrative that continues to make the rounds is that social media is inherently bad for kids. That leads to various bills around age verification and age gating to keep kids off of social media.
Supporters of these bills will point to charts like this one, regarding teen suicide rates, noting the uptick correlates with the rise of social media.
Of course, they seem to cherry pick the start date of that chart, because if you go back further, you realize that while the uptick is a concern, it’s still way below what it had been in the 1990s (pre-social media).
Obviously, the increase in suicides is a concern. But considering that every single study that tries to link it to social media ends up failing to do so, there is likely some other factor at play here.
The research summarizes the decline in “independent mobility” for kids over the last few decades:
Considerable research, mostly in Europe, has focused on children’s independent mobility (CIM), defined as children’s freedom to travel in their neighborhood or city without adult accompaniment. That research has revealed significant declines in CIM, especially between 1970 and 1990, but also some large national differences. For example, surveys regarding the “licenses” (permissions) parents grant to their elementary school children revealed that in England, license to walk home alone from school dropped from 86% in 1971 to 35% in 1990 and 25% in 2010; and license to use public buses alone dropped from 48% in 1971 to 15% in 1990 to 12% in 2010. In another study, comparing CIM in 16 different countries (US not included), conducted from 2010 to 2012, Finland stood out as allowing children the greatest freedom of movement. The authors wrote: “At age 7, a majority of Finnish children can already travel to places within walking distance or cycle to places alone; by age 8 a majority can cross main roads, travel home from school and go out after dark alone, by age 9 a majority can cycle on main roads alone, and by age 10 a majority can travel on local buses alone.” Although we have found no similar studies of parental permissions for US children, other data indicate that the US is more like the UK concerning children’s independent mobility than like Finland. For example, National Personal Transportation Surveys revealed that only 12.7% walked or biked to school in 2009 compared with 47.7% in 1969.
And then it notes the general decline in mental health as well, which they highlight started long before social media existed:
Perhaps the most compelling and disturbing evidence comes from studies of suicide and suicidal thoughts. Data compiled by the CDC indicate that the rate of suicide among children under age 15 rose 3.5-fold between 1950 and 2005 and by another 2.4-fold between 2005 and 2020. No other age group showed increases nearly this large. By 2019, suicide was the second leading cause of death for children from age 10 through 15, behind only unintentional injury. Moreover, the 2019 YRBS survey revealed that during the previous year 18.8% of US high school students seriously considered attempting suicide, 15.7% made a suicide plan, 8.9% attempted suicide one or more times, and 2.5% made a suicide attempt requiring medical treatment. We are clearly experiencing an epidemic of psychopathology among young people.
But, unlike those who assume correlation is causation with regards to social media, the researchers here admit that correlation alone isn’t enough. And they bring the goods, pointing to multiple studies that suggest a pretty clear causal relationship, rather than just correlation.
Several studies have examined relationships between the amount of time young children have for self-directed activities at home and psychological characteristics predictive of future wellbeing. These have revealed significant positive correlations between amount of self-structured time (largely involving free play) and (a) scores on two different measures of executive functioning; (b) indices of emotional control and social ability; and (c) scores, two years later, on a measure of self-regulation. There is also evidence that risky play, where children deliberately put themselves in moderately frightening situations (such as climbing high into a tree) helps protect against the development of phobias and reduces future anxiety by increasing the person’s confidence that they can deal effectively with emergencies.
Studies with adults involving retrospections about their childhood experiences provide another avenue of support for the idea that early independent activity promotes later wellbeing. In one such study, those who reported much free and adventurous play in their elementary school years were assessed as having more social success, higher self-esteem, and better overall psychological and physical health in adulthood than those who reported less such play. In another very similar study, amount of reported free play in childhood correlated positively with measures of social success and goal flexibility (ability to adapt successfully to changes in life conditions) in adulthood. Also relevant here are studies in which adults (usually college students) rated the degree to which their parents were overprotective and overcontrolling (a style that would reduce opportunity for independent activity) and were also assessed for their current levels of anxiety and depression. A systematic review of such studies revealed, overall, positive correlations between the controlling, overprotective parenting style and the measures of anxiety and depression.
They also note that they are not claiming (of course) that this is the sole reason for the declines in mental health. Just that there is strong evidence that it is a key component. They explore a few other options that may contribute, including increased pressure at schools and societal changes. They also consider the impact of social media and digital technologies and note (as we have many times) that there just is no real evidence to support the claims:
Much recent discussion of young people’s mental health has focused on the role of increased use of digital technologies, especially involvement with social media. However, systematic reviews of research into this have provided little support for the contention that either total screen time or time involved with social media is a major cause of, or even correlate of, declining mental health. One systematic review concluded that research on links between digital technology use and teens’ mental health “has generated a mix of often conflicting small positive, negative and null associations” (Odgers & Jensen, 2020). Another, a “review of reviews” concluded that “the association between digital technology use, or social media use in particular, and psychological well-being is, on average, negative but very small” and noted some evidence, from longitudinal research, that negative correlations may result from declining mental health leading to more social media use rather than the reverse (Orben, 2020)
Indeed, if this theory is true, that the lack of spaces for kids to explore and play and experiment without adult supervision is a leading cause of mental health decline, you could easily see how those who are depressed are more likely to seek out those private spaces, and turn to social media, given the lack of any such spaces they can go to physically.
And, if that’s the case, then all of these efforts to ban social media for kids, or to make social media more like Disneyland, could likely end up doing a lot more harm than good by cutting off one of the last remaining places where kids can communicate with their peers without adults watching over their every move. Indeed, the various proposals to give parents more access to what their kids are doing online could worsen the problem as well, taking away yet another independent space for kids.
Over the last few years, there’s been a push to bring back more “dangerous” play for kids, as people have begun to realize that things may have gone too far in the other direction. Perhaps it’s time we realize that social media fits into that category as well.
Some unfortunate news. AZ Central reported yesterday that James Larkin, a free speech pioneer who built an alt-weekly newspaper empire and then spun out the controversial classified ads site Backpage, died by suicide one week before his latest trial.
While there’s been plenty of discussion about Backpage, related to questions around Section 230, sex trafficking, and a variety of other things, much of the public perception about it is completely misleading. The actual details suggest that the media, prosecutors, and some politicians basically concocted an astoundingly misleading narrative about Larkin (and his partner Michael Lacey) and what they did at Backpage.
Larkin, going back to his days running the alt-weekly New Times (which eventually took over the famed Village Voice), always believed in fighting strongly for his free speech rights, including getting arrested a decade and a half ago for going public about a bullshit subpoena they had received from then-sheriff Joe Arpaio.
As some actual reporting on Backpage details, contrary to the public story that Backpage was actively encouraging and enabling sex trafficking, the company worked closely with law enforcement to help track down and arrest those responsible for trafficking. It literally hired a former federal prosecutor who was on the board of NCMEC to help it stop anyone from using Backpage for trafficking. An internal DOJ note (which the DOJ tried to keep out of the trial) observed:
“unlike virtually every other website that is used for prostitution and sex trafficking, Backpage is remarkably responsive to law enforcement requests and often takes proactive steps to assist in investigations.”
However, they drew the line when law enforcement started demanding similar help in tracking down non-trafficking consensual sex work. Larkin (and Lacey) found that to be a step too far. From an excellent and thorough breakdown of the situation in Wired magazine (written by a former DOJ assistant US attorney):
Lacey and Larkin say they were more than willing to help crack down on child abuse. But the demands being made of them seemed increasingly unreasonable. Sex trafficking, defined as commercial sex involving coerced adults or anyone under 18, was one thing. Consensual sex work was quite another—and it wasn’t even illegal under federal law.
In March 2011, Lacey and Larkin flew to Virginia to meet with Allen. “To say that the meeting did not go well is an understatement,” Allen wrote later that day. After a full hour, he and Lacey “were still screaming at each other.” Allen demanded that Backpage do more to combat prostitution. Larkin said the site would enforce a “newspaper standard,” but Lacey added, “We are not Craigslist, and we aren’t going to succumb to pressure.” A Justice Department memo continues the story: “Allen responded that ‘At least you know what business you are in.’ ”
In short, contrary to the public narrative you may have heard, Backpage worked closely with federal law enforcement to actually stop sex trafficking (and not just take it down, but to track down the perpetrators). But they refused to do the same for consensual sex work and that is why the feds eventually came down on them like a ton of bricks, all while telling the media and politicians that it was for sex trafficking. But that was all bullshit.
And the bullshit extended to the process of the federal case against Larkin and Lacey, including when the defendants discovered an internal DOJ memo stating flat out that Backpage was helpful, rather than harmful, in the fight against sex trafficking. The DOJ successfully got the court to say that they couldn’t use that in their defense. Yes, this exonerating evidence was barred from use during the trial:
In 2012, Crisham and Swaminathan seemed impressed by how cooperative Backpage was with police and other members of law enforcement. Backpage data offer “a goldmine of information for investigators,” they noted. In general, staff would respond to subpoenas within the same day; “with respect to any child exploitation investigation, Backpage often provides records within the hour.” Staff regularly provided “live testimony at trial to authenticate the evidence against defendants who have utilized Backpage,” and the company held seminars for law enforcement on how to best work with Backpage staff and records.
“Witnesses have consistently testified that Backpage was making substantial efforts to prevent criminal conduct on its site, that it was coordinating efforts with law enforcement agencies and NCMEC [the National Center for Missing and Exploited Children], and that it was conducting its businesses in accordance with legal advice,” wrote Swaminathan and McNeil in 2013. Furthermore, they noted, their investigation failed “to uncover compelling evidence of criminal intent or a pattern or reckless conduct regarding minors.” In fact, it “revealed a strong economic incentive for Backpage to rid its site of juvenile prostitution.”
Ultimately, it was their assessment that “Backpage genuinely wanted to get child prostitution off of its site.”
Indeed, as the initial trial of Larkin and Lacey began, the judge actually had to order a mistrial, as the DOJ kept referring to child sex trafficking, even though nothing in the charges was about sex trafficking at all, let alone child sex trafficking.
The new trial was set to begin next week, but for whatever reason Larkin chose to end his life rather than continue to be railroaded in this manner. I spoke with Larkin once a few years ago, and he seemed utterly perplexed by the awful situation he was in, noting that all he wanted to do was protect basic free speech principles. He couldn’t understand why he was being held up as a “sex trafficker” after everything he’d done to help law enforcement track down sex traffickers (going above and beyond basically every other site out there, according to the DOJ itself).
This is a sad and unfortunate end to his story.
I’ve always taken the stance that you can’t blame any third party for someone’s decision to take their own life, as we can never know all of the factors involved. But I do hope that some of the people who literally built up their own profiles by demonizing Backpage and Section 230 at least take a moment to reflect on whether or not they got so caught up in the narrative they wanted that they missed what was actually happening.
What you see below is part one of a two-parter about a terrible bill in California. It started out as a single post, but there was so much nonsense that I decided to break it up into two parts. Stay tuned for part two.
You may recall that last year California, in addition to the obviously unconstitutional Age Appropriate Design Code, also tried to pass a “social media addiction” bill. Thankfully, at the last minute, that bill was killed. But this year a version of it is back, it has tremendous momentum, and it is likely to pass. And it’s embarrassing. California legislators are addicted to believing utter nonsense and debunked moral panic stories, making themselves into complete laughingstocks.
The bill, SB 680, builds on other problematic legislation from California and basically makes a mess of, well, everything. The short explanation of the bill is as follows:
This bill would prohibit a social media platform, as defined, from using a design, algorithm, or feature that the platform knows, or by the exercise of reasonable care should have known, causes child users, as defined, to do any of certain things, including experience addiction to the social media platform.
What the bill will actually do is make it so that social media companies can be fined if any kid who uses them gets an eating disorder, inflicts harm (on themselves or others), or spends too much time on social media. That’s basically the law.
Now, the framers of the law will say that’s not true, and that the law will only fine companies who “should have known” that their service “caused” a child to do one of those three things, but no one here was born yesterday. We’ve seen how these things are blamed on social media all the time, often by very angry parents who need to blame someone for things that (tragically) many kids have dealt with before social media ever existed.
Social media is the convenient scapegoat.
It’s a convenient scapegoat for parents. For teachers. For school administrators. For the media. And especially for grandstanding politicians who want headlines about how they’re saving the children, but don’t want to put in the hard work to understand what they’re actually doing.
Remember, multiple recent studies, including from the American Psychological Association and the (widely misrepresented) Surgeon General of the US, have said there is no causal evidence yet linking social media to harmful activity. What the reports have shown is that there is a small number of children who are dealing with serious issues that lead them to harmful behavior. For those children, it is possible that social media might exacerbate their issues, and everyone from medical professionals to teachers to parents should be looking for ways to help that small number of impacted children.
That’s not what any of these laws do, however.
Instead, they assume that this small group of children, who are facing some very real problems (which, again, have not been shown to have been caused by social media in any study), represents all kids.
Meanwhile, the actual research shows much more clearly that social media is beneficial to a much larger group of children, allowing them to communicate and socialize, and giving them a “third space” where they can interact with their peers and explore their interests. The vast majority of teens find social media way more helpful than harmful. In some cases, it’s literally life-saving.
But, parents, teachers, principals, politicians and the media insist that someone must be to blame whenever a child has an eating disorder (which pre-existed social media) or dies by suicide (ditto). And social media must be the problem, because they refuse to explore their own failings or society’s larger failings.
Look no further than the absolutely ridiculous hearing the California Assembly recently held about the bill. It’s a hearing that should be cause for Californians to question who they have elected. A hearing where one Assemblymember literally claimed that we should follow China’s lead in regulating social media (we’ll get to that in part II).
The hearing kicked off with the Senate Sponsor, Nancy Skinner, making up nonsense about kids and social media that has no basis in fact:
I think many of you are aware that we are facing an unprecedented and urgent crisis amongst our kids where there’s high levels of social media addiction. The numbers of hours per day that many of our young people spend on average on social media is beyond, at least my comprehension, but the data is there. There’s high levels of teen suicides and those that increase in teen suicides, while some people think about the pandemic, have been steadily increasing over the past 10 to 12 years. And in effect, began with the onset, that increase with onset of much of the social media. We also have evidence of the very easy ability for anyone, which includes our youth, to purchase fentanyl and other illegal substances on via social media sites as well as illegal firearms. And in fact, on the illegal substances like fentanyl laced drugs, it is quicker to procure such a substance on social media than it is to use your app and get your Lyft or Uber driver.
So, look, someone needs to call bullshit on literally every single point there. Regarding suicide data, we highlighted that today’s suicide rates are still noticeably below the highs of the 1990s. Yes, they’ve gone up over the last few years, but they remain below those highs. And why isn’t anyone looking at what caused suicide rates to drop so low in the late 90s and early 2000s? Perhaps it was because we weren’t living in a constant hellscape in which grandstanding politicians scream every day about how horrible everything is?
But, really, I need to absolutely call bullshit on the idea that you can order fentanyl faster than you can get a Lyft or an Uber driver. Because that’s not true. There is no world in which that is true. There is no reasonable human being on this planet who believes that it’s quicker to get fentanyl online than to get an Uber. That’s just Senator Nancy Skinner making up things to scare people. Shameful.
It’s reminiscent of the similar bullshit scare tactics used by supporters of FOSTA, who claimed that you could order a sex trafficking victim online faster than you could order a pizza. That was made up whole cloth, but it was effective in getting the law passed. Apparently Skinner is using the same playbook.
Skinner continues to lie:
if we look at teenage girls in particular or adolescent girls, that researchers posing as teen girls on social media platforms were directed to content on eating disorders every 39 seconds, regardless of any request or content request by the teen. So in other words, just the algorithm, the feature or design of the platform directed that teen girl to eating disorder content every 39 seconds and to suicide-oriented content less frequently, but still with high frequency.
So, again, this isn’t true. It’s a moral panic misreading of an already questionable study. The study was done by the Center for Countering Digital Hate, an organization that is very effective at getting headlines, generating moral panics, and getting quoted in the news (and at getting donations). What it’s not good at is competent research. You can read the “report” here, which is not “research,” as Senator Skinner implies. And even that highly questionable report does not come close to saying what Skinner claims.
CCDH’s study was far from scientific to start with. They set up JUST EIGHT accounts on TikTok (not other sites) pretending to be 13-year-olds (two each in 4 different countries) and gave half of them usernames that included the phrase “loseweight.” This is not scientific. The sample size is ridiculously small. There are no real controls, unless you count the fact that half the accounts didn’t have “loseweight” in their name. And there is no real explanation for why “loseweight” was chosen, other than the claim that it’s typical for those with eating disorders to make a statement about the disorder in their usernames.
Then, they had the researchers CLICK ON AND LIKE videos that the researchers themselves decided were “body image or mental health” related (which is not just eating disorder or suicide related content). In other words, THE RESEARCHERS TRAINED THE ALGORITHM THAT THEY LIKED THIS CONTENT. Then acted surprised when the accounts that clicked on and liked “body image” or “mental health” videos… got more “mental health” and “body image” videos.
As for the 39 second number, that is NOT (as Skinner claimed) how often kids see eating disorder content. Not even close. 39 seconds is how often users might come across content that CCDH itself defined as “body image” or “mental health” related, NOT “suicide” or “eating disorder” content. In fact, the report says the fastest any of their test accounts saw (again, self-classified) “eating disorder” content was after eight minutes. They don’t say how long it took for the other accounts.
Not every 39 seconds.
Nancy Skinner is lying.
And, again, CCDH itself decides how to classify the content here. CCDH includes just a few screenshots of the TikTok content it classified as problematic (allowing it to cherry-pick the worst), but even then, it seems to take a VERY broad definition of problematic content. Many of the screenshots seem like… general teen insecurities? I mean, this is one of the examples they show of “eating disorder” content:
Others just seem like typical teen angst and/or dark humor. These politicians are so disconnected from teens and how they communicate, it’s ridiculous. I’ve mentioned it before, though I don’t talk about it much or in detail, but a friend died by suicide when I was in high school. It was horrible and traumatic. But also, if any of us had actually known that he was suffering, we would have tried to get him help. Some of the TikTok videos in question may be calls for help, where people can actually help.
But this bill would tell kids they need to suffer in silence. Bringing up suicidal ideation, or insecurities, or just talking about mental health at all, would effectively be banned under this bill. It would literally do the exact opposite of what grandstanding, disconnected, lying politicians like Nancy Skinner claim it will do.
Back to the CCDH report. Incredibly, the report claims that PHOTOS OF GUM are eating disorder content, because gum “is used as a substitute for food by some with eating disorders.”
Have no fear, Senator Skinner: if this bill becomes law, you’ll have saved kids across the state from… seeing gum? Or adding a hashtag that says #mentalhealthmatters.
This is a joke.
Senator Skinner should issue a retraction of her statement. And pull the bill from consideration.
Of course, the context in which this is all presented by Senator Skinner is that social media companies are doing “nothing” about this. But, again, this study was only about TikTok, one social media company. And the report that she misread and misquoted makes it pretty clear that TikTok is actively trying to moderate such content, and that kids are continually getting around those moderation efforts. The report discusses how eating disorder hashtags often host “healthy” discussions (Skinner ignores this), and then says (falsely) that TikTok “does not appear to… moderate” this content.
But, literally two paragraphs later, the very same report says that kids are constantly evading moderation attempts to keep talking about eating disorders:
Users evade moderation by altering hashtags, for example by modifying #edtok to #edtøk. Another popular approach for avoiding content moderation is to co-opt singer Ed Sheeran’s name, for instance #EdSheeranDisorder.
So, if TikTok is not moderating this content… why are kids getting around this non-existent moderation?
Indeed, other reports actually showed that TikTok appeared to be dealing with eating disorder content better than earlier platforms, in that it was inserting healthy content about how to eat and exercise in a healthy way into such discussions. Of course, under CCDH’s definition, this is all evil “body image” content, which Nancy Skinner would prefer be silenced across the internet. How dare kids teach each other how to be healthy. Again, let them suffer in silence.
Meanwhile, as we’ve discussed, actual research from actual experts has found that forcing social media to hide ALL discussion of eating disorders actually puts children at much greater risk. Those with eating disorders still have them, and they tend to head to deeper, darker parts of the web. Yet when those discussions happened on mainstream social media, it also allowed for the promotion of content to help guide those with eating disorders towards recovery, including content from those who had recovered and sought to help others. But under this bill, such content HELPING those with eating disorders would effectively be barred from social media.
Going back to what I said above about my friend in high school, if only he had spoken up. If only he had told friends that he was suffering. Instead, we only found out when he was dead. This bill will lead to more of that.
SB 680 takes none of that nuance into account. SB 680 doesn’t understand how important it is for kids to be able to talk and connect.
All based on one Senator misreading what is already junk science.
Senator Skinner’s statement is almost entirely false. What little is accurate is presented in a misleading way. And the underlying setup of the bill completely misunderstands children, mental health, body image issues, and social media. All in one.
It’s horrifying.
Skinner’s star witness, incredibly, is Nancy Magee, the superintendent of San Mateo schools. If you recognize that name, it’s because we’ve written about her before. She’s the superintendent who filed the most ridiculous, laughable, embarrassing lawsuit against social media companies accusing them of RICO violations, because some kids had trouble getting back into regular school routines immediately after they came back from COVID lockdowns. RICO violations!
Of all the superintendents in all of California, couldn’t you at least pick one who hasn’t filed a laughably ridiculous joke of a lawsuit against social media companies, one that similarly misread a long list of studies in an effort to paper over her own district’s failures to help kids deal with the stress of the pandemic?
I guess if you’re going to misread and lie about the impact of social media, you might as well team up with someone who has a track record of doing the same. Magee’s statement, thankfully, isn’t as chock full of lies and fake stats, but is mostly just general fear mongering, noting that teenagers use social media a lot. I mean, duh. In my day, teens used the phone a lot. Kids communicate. Just like adults do.
There was also Anthony Liu, from the California Attorney General’s office. You’d hope that he would bring a sense of reality to the proceedings, but he did not. It was just more fear mongering, and nonsense pretending to be about protecting the children. Liu had a colleague with him, bouncing a child on her lap as a prop, at which point Skinner chimed in, literally saying that the child was an example of “the child we are trying to protect,” leading an Assemblymember to say “how can we say no?” to (apparently?) whichever side brings in more cute kids.
And, that’s where we’re going to end part I. Things went totally off the rails after that, when two speakers spoke out against the bill, and a bunch of Assemblymembers on the Committee completely lost their minds attacking the speakers, social media, children, and more.
Still, we’ll close with this. If Senator Nancy Skinner had any integrity, she’d retract her statement, admit she’d been too hasty, admit that the evidence does not, in fact, support any of her claims, and suggest that this bill needs a lot more thought and a lot more input from experts, not grandstanding and moral panics.
I’m not holding my breath, because you might not be able to order fentanyl as quickly as you can order an Uber, but you sure as hell can expect a California state elected official to cook you up a grandstanding, moral panic-driven monstrosity with about as much effort as it takes to order an Uber.
Last February, a report in Politico found that Crisis Text Line, one of the nation’s largest nonprofit support options for the suicidal, had been monetizing user data. More specifically, the nonprofit was collecting all sorts of data on “customer interactions” (ranging from the frequency certain words are used, to the type of distress users are experiencing), then sharing that data with its for-profit partner.

That partner then made money by selling that data to data brokers. This was ok, the companies claimed, because the data collected was “anonymized,” a term that study after study after study has shown means nothing and doesn’t actually protect your data.
Now The Markup has another report showing how websites for mental health crisis resources across the country routinely collect sensitive user data and share it with Facebook. More specifically, their websites contain the “Meta Pixel,” which can send data including names, user ID numbers, email addresses, and browsing habits to the social media giant:
The Markup tested 186 local crisis center websites under the umbrella of the national 988 Suicide and Crisis Lifeline. Calls to the national 988 line are routed to these centers based on the area code of the caller. The organizations often also operate their own crisis lines and provide other social services to their communities.
The Markup’s testing revealed that more than 30 crisis center websites employed the Meta Pixel, formerly called the Facebook Pixel. The pixel, a short snippet of code included on a webpage that enables advertising on Facebook, is a free and widely used tool. A 2020 Markup investigation found that 30 percent of the web’s most popular sites use it.
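If you’re wondering how a “pixel” actually moves that data, here’s a rough sketch of the general technique, not Meta’s actual code: a tiny script on the page gathers identifiers and browsing details and ships them to a third party by requesting a 1x1 image with the data packed into the URL. The endpoint, the parameter names, and the sendPixel helper below are all hypothetical, purely for illustration.

```typescript
// Hypothetical sketch of how a third-party tracking pixel works in general.
// The endpoint and parameter names are invented for illustration; this is
// NOT the actual Meta Pixel code.

interface PixelEvent {
  event: string;    // e.g. "PageView"
  url: string;      // the page the visitor is currently on
  referrer: string; // the page they came from
  email?: string;   // some sites choose to pass along identifiers like this
}

function sendPixel(endpoint: string, data: PixelEvent): void {
  // Pack the collected data into query parameters...
  const params = new URLSearchParams({
    ev: data.event,
    dl: data.url,
    rl: data.referrer,
  });
  if (data.email) {
    params.set("em", data.email);
  }

  // ...and "send" it by requesting a 1x1 image from the tracker's server,
  // which logs the request (and everything in it) before returning the image.
  const img = new Image();
  img.src = `${endpoint}?${params.toString()}`;
}

// Any page embedding a snippet like this transmits the visit, plus whatever
// identifiers the site includes, to the third party as soon as it loads:
sendPixel("https://tracker.example.com/tr", {
  event: "PageView",
  url: window.location.href,
  referrer: document.referrer,
});
```

The actual Meta Pixel is more elaborate, loading a script from Facebook’s servers that can also match visitors against their Facebook accounts, but the basic mechanic is the same: the page itself hands the data over, whether or not the visitor (or, apparently, the crisis center) realizes it.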
We recently noted how many of the states that have been freaking out about TikTok also have tech embedded in their websites that shares all kinds of sensitive data with data brokers or companies like Facebook. Often they’re just using websites developed by third parties from templates, and aren’t fully aware that their websites are even doing this. Other times they know and just don’t care.
But it’s a continued example of the kind of stuff that simply wouldn’t be as common if we had even a basic national privacy law for the internet era. One that required just the slightest bit of due diligence, especially for nonprofits and companies that operate in particularly sensitive arenas.
But for decades the U.S. government, at the direct behest of numerous industries, prioritized making money over human safety, brand trust, or even marketplace health. And we keep paying for it in direct and indirect ways alike, with scandals that will only get worse now that issues like the authoritarian assault on women’s reproductive healthcare have come squarely into frame.
There have been a bunch of attempts over the last few years to try to get around Section 230, and to sue various websites under a “negligence” theory under the law, arguing that the online service was somehow negligent in failing to protect a user, and therefore Section 230 shouldn’t apply. Some cases have been successful, though in a limited way. Many of the cases have failed spectacularly, as the underlying arguments often just don’t make much sense.
The latest high-profile loss of this argument came in a big case that received tons of attention, in part because of the tragic circumstances involved in the complaint. The complaint argued that Amazon sold “suicide kits,” because it offered the ability to purchase a compound that is often used for suicide, and also surfaced related items that are “frequently bought together,” based on its recommendation algorithm. The families of some teenagers who tragically died via this method sued Amazon, saying it was negligent in selling the product and in making those recommendations. The complaint noted that Amazon had received warnings about the chemical compound in question for years, but kept selling it (at least until December of 2022, when it cut off sales).

It’s, of course, completely reasonable to be sympathetic to the families here. The situation is clearly horrible and tragic in all sorts of ways. But there are important reasons why we don’t blame third parties when someone decides to take their own life. For one, it can incentivize even more such actions, as it can be seen as a way of extracting “revenge.”

Either way, thankfully, a court has rejected this latest case, and done so very thoroughly. Importantly, while there is a Section 230 discussion, the opinion also explains why, even absent 230, this case is a loser. You can’t just randomly claim that a company selling products is liable for someone who uses a product for suicide.

Indeed, the opinion starts out by exploring the product liability claims separate from the 230 analysis, and says you can’t make this leap to hold Amazon liable. First, the court notes that under the relevant law, there can be strict liability for the manufacturer, but no one is claiming Amazon manufactures the compound, so that doesn’t work. The standards for liability as a seller are much higher (for good reason!). And part of that is that you can only be held liable if the product itself is defective.
Plaintiffs’ WPLA negligent product liability claim fails for a number of reasons. First, the court concludes that the Sodium Nitrite was not defective, and that Amazon thus did not owe a duty to warn. Under Washington law, “no warning need be given where the danger is obvious or known to the operator.” Dreis, 739 P.2d at 1182 (noting that this is true under negligence and strict liability theories); Anderson v. Weslo, Inc., 906 P.2d 336, 340-42 (Wash. Ct. App. 1995) (noting that the risk of falling and getting hurt while jumping on a trampoline is obvious and a manufacturer/seller need not warn of such obvious dangers); Mele v. Turner, 720 P.2d 787, 789-90 (Wash. 1986) (finding neighbors were not required to warn teenager regarding lawnmower’s dangers—e.g., putting hands under running lawnmower—where the allegedly dangerous condition was obvious and known to plaintiff). In line with this principle, Washington courts consistently hold that a warning label need not warn of “every possible injury.” Anderson, 906 P.2d 341-42; Baughn v. Honda Motor Co., 727 P.2d 655, 661-64 (Wash. 1986) (finding sufficient Honda’s warning that bikes were intended for “off-the-road use only” and that riders should wear helmets; no warning required as to risk of getting hit by car, the precise danger eventually encountered); Novak v. Piggly Wiggly Puget Sound Co., 591 P.2d 791, 795-96 (Wash. Ct. App. 1979) (finding general warnings about ricochet sufficient to inform child that a BB gun, if fired at a person, could injure an eye).
Here, the Sodium Nitrite’s warnings were sufficient because the label identified the product’s general dangers and uses, and the dangers of ingesting Sodium Nitrite were both known and obvious. The allegations in the amended complaint establish that Kristine and Ethan deliberately sought out Sodium Nitrite for its fatal properties, intentionally mixed large doses of it with water, and swallowed it to commit suicide. (See, e.g., Am. Compl. ¶¶ 161-72, 178-79, 183, 185-86, 190-202, 20-23, 116, 139-43.) Kristine and Ethan’s fates were undisputedly tragic, but the court can only conclude that they necessarily knew the dangers of bodily injury and death associated with ingesting Sodium Nitrite.
And thus:
Amazon therefore had no duty to provide additional warnings regarding the dangers of ingesting Sodium Nitrite. See, e.g., Dreis, 739 P.2d at 1182 (“The warning’s contents, combined with the obviousness of the press’ dangerous characteristics, indicate that any reasonable operator would have recognized the consequences of placing one’s hands in the point-of-operation area.”).
Again, think of what would happen if the results were otherwise. It is an unfortunate reality of the world that we live in, that some people will end up dying by suicide. It is always tragic. But blaming companies for selling the tools or products that are used by people in those situations will not help anyone.
The court goes even further. It notes that even if Amazon should have been expected to add even more warnings about the product, that would not have stopped the tragic events from occurring (indeed, it would have only confirmed the reasons why the product was purchased):
Second, Plaintiffs’ WPLA negligent product liability claim also fails because, even if Amazon owed a duty to provide additional warnings as to the dangers of ingesting sodium nitrite, its failure to do so was not the proximate cause of Kristine and Ethan’s deaths. “Proximate cause is an essential element” of both negligence and strict liability theories. Baughn, 727 P.2d at 664. “If an event would have occurred regardless of a defendant’s conduct, that conduct is not the proximate cause of the plaintiff’s injury.” Davis v. Globe Mach. Mfg. Co., 684 P.2d 692, 696 (Wash. 1984). Under Washington law, if the product’s user knows there is a risk, but chooses to act without regard to it, the warning “serves no purpose in preventing the harm.” Lunt, 814 P.2d at 1194 (concluding that defendants alleged failure to warn plaintiff of specific dangers associated with skiing and bindings was not proximate cause of injuries because plaintiff would have kept skiing regardless); Baughn, 727 P.2d at 664-65 (concluding that allegedly inadequate warnings were not proximate cause of harm where victim knew the risk and ignored the warnings; the harm would have occurred even with more vivid warnings of risk of death or serious injury). A product user’s “deliberate disregard” for a product’s warnings is a “superseding cause that breaks the chain of proximate causation.” Beard v. Mighty Lift, Inc., 224 F. Supp. 3d 1131, 1138 (W.D. Wash. 2016) (stating that “a seller may reasonably assume that the user of its product will read and heed the warnings . . . on the product” (citing Baughn, 727 P.2d at 661)).
Here, the court concludes that additional warnings would not have prevented Kristine and Ethan’s deaths. The allegations in the amended complaint establish that Kristine and Ethan sought the Sodium Nitrite out for the purpose of committing suicide and intentionally subjected themselves to the Sodium Nitrite’s obvious and known dangers and those described in the warnings on the label…. Accordingly, Plaintiffs have failed to plausibly allege that Amazon’s failure to provide additional warnings about the dangers of ingesting Sodium Nitrite proximately caused Kristine and Ethan’s deaths.
In other words, there could be no product liability. The necessary warnings were present, and additional warnings would not have changed the outcome.
The plaintiffs also argued that Amazon could be held liable for “suppressing” reviews complaining about Amazon selling the product. And on this point, Section 230 does protect Amazon:
Here, the “information” at issue in Plaintiffs’ WPLA intentional concealment claim is the “negative product reviews that warned consumers of [Sodium Nitrite’s] use for death by suicide.” (Am. Compl. ¶ 241(j).) This “information” was, as Plaintiffs admit, provided by the users of Amazon.com. (See id. ¶¶ 122, 144-45.) Indeed, the amended complaint does not allege that Amazon provided, created, or developed any portion of the negative product reviews. (See generally id.) Accordingly, only the users of Amazon.com, not Amazon, acted as information content providers with respect to Plaintiffs’ WPLA intentional concealment claim. See, e.g., Fed. Agency of News LLC v. Facebook, Inc., 432 F. Supp. 3d 1107, 1117-19 (N.D. Cal. 2020) (concluding that Facebook was not an information content provider where plaintiffs sought to hold Facebook liable for removing a plaintiff’s Facebook account, posts, and content); Joseph, 46 F. Supp. 3d at 1106-07 (concluding that Amazon was not acting as an information content provider where plaintiff’s claims arose from the allegedly defamatory statements in reviews posted by third parties).
There are some other attempts to get around Section 230 as well, and those get rejected too (not even via 230, but directly on the merits).
The allegations in Count II (common law negligence) fail to state a plausible claim for relief under RCW 7.72.040(1)(a). As discussed above, a plaintiff must establish that the injury-causing product is defective in order to recover against a negligent product seller under the WPLA. (See supra § III.C.1.) The court has already rejected Plaintiffs’ argument that the Sodium Nitrite was defective on the basis of inadequate warnings. (See id.) Accordingly, the allegations in Count II fail to state plausible negligent product liability claims under the WPLA because, as a threshold point, the Sodium Nitrite is not defective. Because Plaintiffs fail to meet this threshold requirement, the court need not address their remaining arguments or the other elements of this claim.
Once again, this all kinda highlights that people who think getting rid of Section 230 will magically make companies liable for anything bad that happens on their platforms remain wrong. That won’t happen. Those claims still fail; they just do so in a more expensive way. It might be a boon for trial lawyers looking to pad their billable hours, but it won’t actually do anything productive towards stopping bad things from happening. Indeed, it might make things worse, because efforts to mitigate harms will be used against companies as evidence of “knowledge,” and thus companies will be better off just looking the other way.
Passing blatantly unconstitutional, dangerous laws “to protect the children” based on totally unsubstantiated moral panics appears to be part of a bipartisan mass hysteria these days. The Kids Online Safety Act, or KOSA, is officially back. And, with it, the recognition that over a quarter of the Senate has bought into this dangerous, unconstitutional nonsense.
It’s sponsored by longtime anti-internet Senators Richard Blumenthal and Marsha Blackburn, and has a ton of co-sponsors, all of whom seem all too eager to support this kind of nonsense:
The Kids Online Safety Act has been cosponsored by U.S. Senators Shelley Moore Capito (R-W.Va.), Ben Ray Luján (D-N.M.), Bill Cassidy (R-La.), Tammy Baldwin (D-Wis.), Joni Ernst (R-Iowa), Amy Klobuchar (D-Minn.), Steve Daines (R-Mont.), Marco Rubio (R-Fla.), John Hickenlooper (D-Colo.), Dan Sullivan (R-Alaska), Chris Murphy (D-Conn.), Todd Young (R-Ind.), Chris Coons (D-Del.), Chuck Grassley (R-Iowa), Brian Schatz (D-Hawaii), Lindsey Graham (R-S.C.), Mark Warner (D-Va.), Roger Marshall (R-Kan.), Peter Welch (D-Vt.), Cindy Hyde-Smith (R-Miss.), Maggie Hassan (D-N.H.), Markwayne Mullin (R-Okla.), Dick Durbin (D-Ill.), Jim Risch (R-Idaho), Sheldon Whitehouse (D-R.I.) and Katie Britt (R-Ala.). More cosponsors may be added during today’s session.
The latest version of the bill fails to fix all of the problems with the version introduced in the last session of Congress, and, as TechFreedom’s Ari Cohn lays out, this entire approach to legislating is an attack on the First Amendment and will do real harm to children, rather than protect them.
The bill uses vague terms about what constitutes “harm” to minors, meaning that websites will be pressured to suppress all sorts of content to avoid liability. It will also effectively mandate privacy-invading and problematic age verification technology. Like many other bills, it has a “parental consent” provision, which fails to recognize that parents and children (especially teenagers) do not always have a healthy and respectful relationship.
But, to me, the worst part of the bill is the “duty of care” portion. For years, we’ve explained how a “duty of care” is just a friendly-sounding way of forcing censorship on platforms. To understand why, you have to understand how liability works under a duty of care. What it means is that if anything goes wrong after the fact (a child develops an eating disorder, dies by suicide, etc.), someone can sue the website and argue it failed in its “duty of care” to protect the child.
But (news flash) even without the internet, people get eating disorders or die by suicide. As we’ve noted, teen suicide rates remain lower than they were in the 1990s, and across all age groups, they’re basically at the same rate they were in the 1970s. Meanwhile, multiple studies show that the highest risk factor for suicide is having easy access to a gun. But because the US is allergic to talking about gun violence due to an obsessive misreading of the 2nd Amendment, people feel free to throw the 1st Amendment in the trash and blame the internet instead.
And, with eating disorders, as multiple studies have detailed, attempts to take down content associated with or encouraging eating disorders have actually made the problem significantly worse. That’s because teens with eating disorders would just come up with new language to talk about them, or create and find more hidden communities in which to discuss them. It also made it much harder to post the kind of content that helps those with eating disorders realize they have a problem and get help. When those conversations were on more mainstream sites, others could enter those communities and provide useful information on how to gradually move away from the eating disorder.
Still, under a “duty of care,” any time something bad happens, that individual or their family can sue the online service, claiming it did not satisfy the “duty of care” (even if it tried to moderate the worst content), and the site would then have to defend all its decisions in a costly lawsuit. The easier thing to do, of course, is to ban all such talk, take it down rapidly, and drive those conversations to more extreme forums, putting more kids at risk. And, still, kids on mainstream forums will figure out ways to discuss these things, and bad things will still happen… and expensive, frivolous lawsuits will get filed.
Even worse, it will become that much harder to host content that helps those dealing with suicidal ideation or eating disorders, because even leaving up helpful content, or allowing users to try to help those in trouble, will create a massive risk of liability.
The end result is going to be extraordinarily harmful to children, extremely suppressive of speech, and expensive for all sorts of websites (well beyond the big ones that can afford it).
There is literally nothing good about this bill, which misunderstands human nature, the 1st Amendment, legal liability, and a whole lot more — all because of an unsubstantiated moral panic about “kids online.”
The press release statements from Blumenthal and Blackburn are particularly frustrating, as they show that the senators have no clue what they’re doing. They’re full of myths and misleading claims.
“Our bill provides specific tools to stop Big Tech companies from driving toxic content at kids and to hold them accountable for putting profits over safety,” said Senator Blumenthal. “Record levels of hopelessness and despair—a national teen mental health crisis—have been fueled by black box algorithms featuring eating disorders, bullying, suicidal thoughts, and more. Kids and parents want to take back control over their online lives. They are demanding safeguards, means to disconnect, and a duty of care for social media. Our bill has strong bipartisan momentum. And it has growing support from young people who’ve seen Big Tech’s destruction, parents who’ve lost children, mental health experts, and public interest advocates. It’s an idea whose time has come.”
Again, it is not “record levels.” We’ve seen these levels before. And the “solutions” in this bill do not account for how human nature works (not surprising, given Blumenthal’s long track record here). They will create even more harm and put more children in danger, while pretending to be a solution. This is a Blumenthal specialty: the same thing he did with FOSTA, which similarly put lives in danger while suppressing protected speech.
Blackburn is no better:
“Over the last two years, Senator Blumenthal and I have met with countless parents, psychologists, and pediatricians who are all in agreement that children are suffering at the hands of online platforms,” said Senator Blackburn. “Big Tech has proven to be incapable of appropriately protecting our children, and it’s time for Congress to step in. The bipartisan Kids Online Safety Act not only requires social media companies to make their platforms safer by default, but it provides parents with the tools they need to protect their children online. I thank Senator Blumenthal for his continued partnership on this critical issue and urge my colleagues to join us in the fight to protect our children online.”
I’ve had a few meetings lately with similar groups (pediatricians, psychologists), many of whom are understandably concerned about the health of children. But they seem extremely quick to blame the internet, despite the lack of evidence that bills like this will help, rather than harm, children.
Notably, Blackburn does not mention meeting with anyone who has actual expertise in free speech, civil liberties, or how bills like this one can and will backfire and create more harm than good for children.
This has been a key problem in the lead-up to this new version of KOSA. Knowing how much pushback they got last time, the authors of the bill made it clear they were cutting out the civil society groups who could have pointed out the problems with this new bill. But that’s because Senators Blackburn and Blumenthal don’t actually care about getting this right. They care about getting a headline claiming they got it right. That it might put children in danger does not matter.
Blumenthal, to this day, denies that his FOSTA law killed women and put many more at risk. This isn’t about actual safety. It’s about making sure Blumenthal gets a headline.
Over the last week or so, I keep hearing about a big push among activists and lawmakers to try to get the Kids Online Safety Act (KOSA) into the year-end “must pass” omnibus bill. Earlier this week, one of the main parents pushing for the bill went on Jake Tapper’s show on CNN and stumped for it. And the latest report from Axios confirms that lawmakers are looking to include it in the lame-duck omnibus, or possibly the NDAA (despite it having absolutely nothing to do with defense spending).
The likeliest path forward for the bills is for them to be added to the year-end defense or spending bill. “We’re at a point where a combination of the victims, and the technology, make it absolutely mandatory we move forward,” Sen. Richard Blumenthal (D-Conn.), a sponsor of the Kids Online Safety Act, told reporters on Capitol Hill Tuesday.
“I think it’s going to move,” Stephen Balkam, CEO of the Family Online Safety Institute, said this week at an event in Washington. “I think it could actually go — it’s one of those very rare pieces of legislation that is getting bipartisan support.”
Anyway, let’s be clear about all this: the people pushing for KOSA are legitimately worried about the safety of kids online. And many of those involved have stories of real trauma. But their stumping for KOSA is misguided. It will not help protect children. It will make things much more dangerous for children. It’s an extraordinarily dangerous bill for kids (and adults).
Back in February, I detailed just how dangerous this bill is, in that it tries to deal with “protecting children” by pushing websites to more actively surveil everyone. Many of the people pushing for the bill, including the parent who went on CNN this week, talk about children who have died by suicide. Which is, obviously, quite tragic. But all of it seems to assume (falsely) that suicide prevention is simply a matter of internet companies somehow… spying on kids more. It’s not that simple. Indeed, greater surveillance has far more consequences for tons of other people, including kids, who also need to learn the value of privacy.
If you dig into the language of KOSA, you quickly realize how problematic it would be in practice. It uses extremely vague and fuzzy language that will create dangerous problems. In earlier versions of the bill, people quickly pointed out that some of the surveillance provisions would force companies to reveal information about kids to their parents — potentially including things that might “out” LGBTQ kids. That should be seen as problematic for obvious reasons. The bill was amended to effectively say “but don’t do that,” but it still leaves things vague enough that companies are caught in an impossible position.
Now the end result is basically “don’t have anyone on your platform end up doing something bad.” But, how does that work in practice?
Advocates for the bill keep saying that it “just imposes a duty of care” on platforms. But that misunderstands basically everything about everything. A “duty of care” is one of those things that sounds good to people who have no idea how anything works. As we’ve noted, a duty of care is the “friendly sounding way” to threaten free speech and innovation. That’s because whether or not you met your obligations is determined after something bad has happened. And it will involve a long and costly legal battle to determine (in heightened circumstances, often involving a horrible incident) whether or not a website could have magically prevented a bad thing from happening. But, of course, in that context, the bad thing will have already happened, making it difficult to separate the website from the bad thing, and making it impossible to fairly assess whether the “bad thing” could have been reasonably foreseen.
But, at the very least, it means that any time anything bad happens that is even remotely connected to a website, the website gets sued and has to convince a court that it took appropriate measures. What that means in practice is that websites get ridiculously restrictive to avoid any possible bad thing from happening — in the process limiting tons of good stuff as well.
The whole bill is designed to do two very silly things: make it nearly impossible for websites to offer something new and, even worse, offload blame for anything bad onto those websites. It especially seeks to remove blame from parents for failing to do their job as parents. It is the ultimate “let’s just blame the internet for anything bad” bill.
As I noted a couple months ago, the internet is not Disneyland. We shouldn’t want to make it Disneyland, because if we do, we lose a lot. Bad things happen in the world. And sometimes there’s nothing to blame for the bad thing happening.
I don’t talk about it much, but in high school a friend died by suicide. It’s not worth getting into the details, but the suicide was done in a manner designed to make someone else feel terrible as well (and cast a pall of “blame” on that person — which was traumatic for all involved). But one important lesson was that if you spend all your time looking to blame people for someone’s death by suicide, you’re not going to do much good, and, in fact, it creates an unfortunate scenario that encourages others to consider suicide as a way to “get back” at others. That’s not helpful at all. For anyone.
Unfortunately, people do die by suicide. And we should be focusing more effort on helping people get through difficult times, and making sure that therapy and counseling are available to all who need them. But trying to retroactively hold social media companies to account for those cases, because they enabled people to talk to each other, throws out so much that is useful and good — including all of the people who were helped to move away from potential suicidal ideation by finding a community or a tribe who better understood them. Or those who found resources to help them through those difficult times.
Under a bill like KOSA all of that becomes more difficult, while actively encouraging greater surveillance and less privacy. It’s not a good approach.
And it’s especially ridiculous for such a bill to be rushed through by attaching it to a must-pass bill, rather than getting the kind of debate and discussion that such a serious issue not only deserves, but requires.
But, of course, almost no one wants to speak out against KOSA, because the media and politicians trot out parents who went through a truly traumatic experience, and no one wants to be seen as standing in their way. But the simple fact is that KOSA will not magically prevent suicides. It might actually lead to more. And it will do many other damaging things in the meantime, including ramping up surveillance, limiting the ability of websites to innovate, and making it much more difficult for young people to find and connect with actual support and friends.
The coroner overseeing the case, who in Britain is a judgelike figure with wide authority to investigate and officially determine a person’s cause of death, was far less circumspect. On Friday, he ruled that Instagram and other social media platforms had contributed to her death — perhaps the first time anywhere that internet companies have been legally blamed for a suicide.
“Molly Rose Russell died from an act of self-harm while suffering from depression and the negative effects of online content,” said the coroner, Andrew Walker. Rather than officially classify her death a suicide, he said the internet “affected her mental health in a negative way and contributed to her death in a more than minimal way.”
This was the declaration entered as evidence in a UK court case revolving around the suicide of 14-year-old Molly Russell. Also entered as evidence was a stream of disturbing material pulled from the deceased teen’s accounts and mobile device — videos, images, and posts related to suicide, including a post copied almost verbatim by Russell in her suicide note.
The content Russell apparently viewed in the weeks leading up to her suicide was horrific.
Molly’s social media use included material so upsetting that one courtroom worker stepped out of the room to avoid viewing a series of Instagram videos depicting suicide. A child psychologist who was called as an expert witness said the material was so “disturbing” and “distressing” that it caused him to lose sleep for weeks.
All of this led to Meta executives being cross-examined and asked to explain how a 14-year-old could so easily access this content. Elizabeth Langone, Meta’s head of health and well-being policies, had no explanation.
As has been noted here repeatedly, content moderation at scale is impossible to do well. What may appear to be easy access to disturbing content may be more a reflection of the user seeking it out than of the platform’s inability to curtail harmful content. And what may appear to be a callous disregard for users may be nothing more than a person slipping through the cracks of content moderation, allowing them to find the content that intrigues them despite platforms’ efforts to keep this content from surfacing uninvited on people’s feeds.
This declaration by the UK coroner is, unfortunately, largely performative. It doesn’t really say anything about the death other than what the coroner wants to say about it. And this coroner was pushed into pinning the death (at least partially) on social media by the 14-year-old’s parent, a television director with the apparent power to sway the outcome of the inquest — a process largely assumed to be a factual, rather than speculative, recounting of a person’s death.
Mr. Russell, a television director, urged the coroner reviewing Molly’s case to go beyond what is often a formulaic process, and to explore the role of social media. Mr. Walker agreed after seeing a sample of Molly’s social media history.
That resulted in a yearslong effort to get access to Molly’s social media data. The family did not know her iPhone passcode, but the London police were able to bypass it to extract 30,000 pages of material. After a lengthy battle, Meta agreed to provide more than 16,000 pages from her Instagram, such a volume that it delayed the start of the inquest. Merry Varney, a lawyer with the Leigh Day law firm who worked on the case through a legal aid program, said it had taken more than 1,000 hours to review the content.
What they found was that Molly had lived something of a double life. While she was a regular teenager to family, friends and teachers, her existence online was much bleaker.
From what’s seen here (and detailed in the New York Times article), Molly’s parents didn’t take a good look at her social media use until after she died by suicide. This is not to blame the parents for not taking a closer look sooner, but to point out how ridiculous it is for a coroner to deliver this sort of declaration, especially at the prompting of a grieving parent looking to find someone to blame for his daughter’s suicide.
If this coroner wants to list contributing factors on the public record — especially when involved in litigation — they should at least be consistent. They could have listed “lack of parental oversight,” “peer pressure,” and “unaddressed psychological issues” as contributing factors. This report is showboating intended to portray social media services as harmful and direct attention away from the teen’s desire to access “harmful content.”
And, truly, the role of the coroner is to find the physical causes of death. We go to dangerous places quickly when we start saying that this or that thing clearly caused someone to die by suicide. We don’t know. We can’t know. Even someone trained in psychology (not often the case with coroners) can’t ever truly say what makes a person take their own life. There are likely many reasons, and they may all contribute in their own ways. But in the end, it’s the person who makes the decision, and only they know the real reasons.
As Mike has written in the past, officially putting “blame” on parties over a suicide actually creates very serious problems. It gives those who are considering suicide the power to destroy someone else’s life as well, by simply saying that they chose to end their life because of this or that person or company — whether or not there’s any truth to it.
I’m well aware social media services often value market growth and user activity over user health and safety, but performative inquests are not the way to alter platforms’ priorities. Instead, they provide a basis for bad faith litigation that seeks to hold platforms directly responsible for the actions of users.
This sort of litigation is already far too popular in the United States. Its popularity in the UK should be expected to rise immediately, especially given the lack of First Amendment protections or Section 230 immunity.
It’s understandable for parents to seek closure when their children die unexpectedly. But misusing a process that is supposed to be free of influence to create “official” declarations of contributory liability won’t make things better for social media users. All it will do is give them fewer options to connect with people who might be able to steer them away from self-harm.