California Court, Ridiculously, Allows School Lawsuits Against Social Media To Move Forward
from the this-is-not-how-any-of-this-works dept
Over the last year, we’ve covered a whole bunch of truly ridiculous, vexatious, bullshit lawsuits filed by school districts against social media companies, blaming them for the fact that the school boards don’t know how to teach students (the one thing they’re supposed to specialize in!) how to use the internet properly. Instead of recognizing that the school boards ought to fire themselves, some greedy ambulance-chasing lawyers have convinced them that if courts force social media companies to pay up, they’ll never have a budget shortfall again. And school boards desperate for cash, and unwilling to admit their own failings as educators, have bought into the still unproven moral panic that social media is harming kids, despite widespread evidence that it’s just not true.
While there are a bunch of these lawsuits, some in federal court and some in state courts, some of the California state court ones were rolled up into a single case, and on Friday, California state Judge Carolyn Kuhl (ridiculously) said that the case can move forward, and that the social media companies’ 1st Amendment and Section 230 defenses don’t apply (first reported by Bloomberg Law).
There is so much wrong with this decision, it’s hard to know where to start, other than to note one hopes that a higher court takes some time to explain to Judge Kuhl how the 1st Amendment and Section 230 actually work. Because this is not it.
The court determines that Defendants’ social media platforms are not “products” for purpose of product liability claims, but that Plaintiffs have adequately pled a cause of action for negligence that is not barred by federal immunity or by the First Amendment. Plaintiffs also have adequately pled a claim of fraudulent concealment against Defendant Meta.
As noted in that paragraph, the product liability claims fail, as the court at least finds that social media apps don’t fit the classification of a “product” for product liability purposes.
Product liability doctrine is inappropriate for analyzing Defendants’ responsibility for Plaintiffs’ injuries for three reasons. First, Defendants’ platforms are not tangible products and are not analogous to tangible products within the framework of product liability. Second, the “risk-benefit” analysis at the heart of determining whether liability for a product defect can be imposed is illusive in the context of a social media site because the necessary functionality of the product is not easily defined. Third, the interaction between Defendants and their customers is better conceptualized as a course of conduct implemented by Defendants through computer algorithms.
However, it does say that the negligence claims can move forward and are not barred by 230 or the 1st Amendment. A number of cases have been brought using this theory over the last few years, and nearly all of them have failed. Just recently we wrote about one such case against Amazon that failed on Section 230 grounds (though the court also made clear that even without 230 it would have failed).
But… the negligence argument the judge adopts is… crazy. It starts out by saying that the lack of age verification can show negligence:
In addition to maintaining “unreasonably dangerous features and algorithms”, Defendants are alleged to have facilitated use of their platforms by youth under the age of 13 by adopting protocols that do not verify the age of users, and “facilitat[ed] unsupervised and/or hidden use of their respective platforms by youth” by allowing “youth users to create multiple and private accounts and by offering features that allow youth users to delete, hide, or mask their usage.”
This seems kinda crazy to say when it comes less than a month after a federal court in California literally said that requiring age verification is a clear 1st Amendment violation.
The court invents, pretty much out of thin air, a “duty of care” for internet services. There have been many laws that have tried to create such a duty of care, but as we’ve explained at great length over the years, a duty of care regarding speech on social media is unconstitutional, as it will easily lead to over-blocking out of fear of liability. And even though the court recognized that internet services are not products in the product liability sense (because that would make no sense), for the negligence claim it cited a case involving… electric scooters. Yup. Electric scooters.
In Hacala, the Court of Appeal held that defendant had a duty to use care when it made its products available for public use and one of those products harmed the plaintiff. The defendant provided electric motorized scooters that could be rented through a “downloadable app.” (Id. at p. 311.) The app allowed the defendant “to monitor and locate its scooters and to determine if its scooters were properly parked and out of the pedestrian right-of-way.” (Id., internal quotation marks and brackets omitted.) The defendant failed to locate and remove scooters that were parked in violation of the requirements set forth in the defendant’s city permit, including those parked within 25 feet of a single pedestrian ramp. (Id.) The defendant also knew that, because the defendant had failed to place proper lighting on the scooters, the scooters would not be visible to pedestrians at night. (Id. at p. 312.) The court found that these allegations were a sufficient basis on which to find that the defendant owed a duty to members of the public like the plaintiff, who tripped on the back wheel of one of the defendant’s scooters when walking “just after twilight.” (Id. at p. 300.)
Here, Plaintiffs seek to hold Defendants liable for the way that Defendants manage their property, that is, for the way in which Defendants designed and operated their platforms for users like Plaintiffs. Plaintiffs allege that they were directly injured by Defendants’ conduct in providing Plaintiffs with the use of Defendants’ platforms. Because all persons are required to use ordinary care to prevent others from being injured as the result of their conduct, Defendants had a duty not to harm the users of Defendants’ platforms through the design and/or operation of those platforms.
But, again, scooters are not speech. It is bizarre that the court refused to recognize that.
The social media companies also pointed out that the school districts’ claims that kids ended up suffering from depression, anxiety, eating disorders, and more because of social media can’t be directly traced back to the social media companies. As they note, if a student goes to a school and suffers from depression, she can’t sue the school for causing the depression. But, no, the judge says that there’s a “close connection” between social media and the suffering (based on WHAT?!? she does not say).
Here, as previously discussed, there is a close connection between Defendants’ management of their platforms and Plaintiffs’ injuries. The Master Complaint is clear in stating that the use of each of Defendants’ platforms leads to minors’ addiction to those products, which, in turn, leads to mental and physical harms. (See, e.g., Mast. Compl., ¶¶ 80-95.) These design features themselves are alleged to “cause or contribute to (and, with respect to Plaintiffs, have caused and contributed to) [specified] injuries in young people….” (Mast. Compl., ¶ 96, internal footnotes omitted; see also Mast. Compl., ¶ 102 [alleging that Defendants’ platforms “can have a detrimental effect on the psychological health of their users, including compulsive use, addiction, body dissatisfaction, anxiety, depression, and self-harming behaviors such as eating disorders”], internal quotation marks, brackets, and footnotes omitted.) Plaintiffs allege that the design features of each of the platforms at issue here cause these types of harms. (See, e.g., Mast. Compl., ¶¶ 268-337 (Meta); ¶¶ 484-487, 489-490 (Snap); ¶¶ 589-598 (ByteDance); ¶¶ 713-773, 803 (Google).) These allegations are sufficient under California’s liberal pleading standard to adequately plead causation.
The court also says that if the platforms dispute the level to which they caused these harms, that’s a matter of fact, to be dealt with by a jury.
Then we get to the Section 230 bit. The court bases much of its reasoning on Lemmon v. Snap. This is why we were yelling about the problems that Lemmon v. Snap would cause, even as we heard from many (including EFF?) who thought that the case was decided correctly. It’s now become a vector for abuse, and we’re seeing that here. If you just claim negligence, some courts, like this one, will let you get around Section 230.
As in Lemmon, Plaintiffs’ claims based on the interactive operational features of Defendants’ platforms do not seek to require that Defendants publish or de-publish third-party content that is posted on those platforms. The features themselves allegedly operate to addict and harm minor users of the platforms regardless of the particular third-party content viewed by the minor user. (See, e.g., Mast. Compl., ¶¶ 81, 84.) For example, the Master Complaint alleges that TikTok is designed with “continuous scrolling,” a feature of the platform that “makes it hard for users to disengage from the app,” (Mast. Compl., ¶ 567) and that minor users cannot disable the “auto-play function” so that a “flow-state” is induced in the minds of the minor users (Mast. Compl., ¶ 590). The Master Complaint also alleges that some Plaintiffs suffer sleep disturbances because “Defendants’ products, driven by IVR algorithms, deprive users of sleep by sending push notifications and emails at night, prompting children to re-engage with the apps when they should be sleeping.” (Mast. Compl., ¶ 107 [also noting that disturbed sleep increases the risk of major depression and is associated with “future suicidal behavior in adolescents”].)
Also similar to the allegations in Lemmon, the Master Complaint alleges harm from “filters” and “rewards” offered by Defendants. Plaintiffs allege, for example, that Defendants encourage minor users to create and post their own content using appearance-altering tools provided by Defendants that promote unhealthy “body image issues.” (Mast. Compl., ¶ 94.) The Master Complaint alleges that some minors spend hours editing photographs they have taken of themselves using Defendants’ tools. (See, e.g., Mast. Compl., ¶ 318.) The Master Complaint also alleges that Defendants use “rewards” to keep users checking the social media sites in ways that contribute to feelings of social pressure and anxiety. (See, e.g., Mast. Compl., ¶ 257 [social pressure not to lose or break a “Snap Streak”].)
There’s also the fact that kids “secretly” used these apps without their parents knowing, but… it’s not at all clear how that’s the social media companies’ fault. But the judge rolls with it.
Another aspect of Defendants’ alleged lack of due care in the operation of their platforms is their facilitation of unsupervised or secret use by allowing minor users to create multiple and private accounts and allowing minor users to mask their usage. (Mast. Compl., ¶ 929(d), (e), (f).) Plaintiffs J.S. and D.S., the parents of minor Plaintiff L.J.S., allege that L.J.S. was able to secretly use Facebook and Instagram, that they would not have allowed use of those sites, and that L.J.S. developed an addiction to those social media sites which led to “a steady decline in his mental health, including sleep deprivation, anxiety, depression, and related mental and physical health harms.” (J.S. SFC ¶¶ 7-8.)
Then, there’s a really weird discussion about how Section 230 was designed to enable users to have more control over their online experiences, and therefore, the fact that users felt out of control means 230 doesn’t apply? Along similar lines, the court notes that since the intent of 230 was “to remove disincentives” for creating tools for parents to filter the internet for their kids, the fact that parents couldn’t control their kids online somehow goes against 230?
Similarly, Congress made no secret of its intent regarding parental supervision of minors’ social media use. By enacting Section 230, Congress expressly sought “to remove disincentives for the development and utilization of blocking and filtering technologies that empower parents to restrict children’s access to objectionable or inappropriate online material.” (47 U.S.C. § 230, subd. (b)(4).) While in some instances there may be an “apparent tension between Congress’s goals of promoting free speech while at the same time giving parents the tools to limit the material their children can access over the Internet” (Barrett, supra, 40 Cal.4th at p. 56), where a plaintiff seeks to impose liability for a provider’s acts that diminish the effectiveness of parental supervision, and where the plaintiff does not challenge any act of the provider in publishing particular content, there is no tension between Congress’s goals.
But that wholly misunderstands both the nature of Section 230 and what’s going on here. Services shouldn’t lose 230 protections just because kids are using them behind their parents’ backs. That makes no sense. But, here, the judge seems to think it’s compelling.
The judge also claims (totally incorrectly, based on nearly all of the case law) that if, as the social media companies argue, any harms from social media are due to third-party content (which would mean Section 230 protections apply), that’s a question of fact for the jury.
Although Defendants argue they cannot be liable for their design features’ ability to addict minor users and cause near constant engagement with Defendants’ platforms because Defendants create such “engagement” “with user-generated content” (Defs’ Dem., at p. 42, internal italics omitted), this argument is best understood as taking issue with the facts as pleaded in the Master Complaint. It may very well be that a jury would find that Plaintiffs were addicted to Defendants’ platforms because of the third-party content posted thereon. But the Master Complaint nonetheless can be read to state the contrary-that is, that it was the design of Defendants’ platforms themselves that caused minor users to become addicted. To take another example, even though L.J.S. was viewing content of some kind on Facebook and Instagram, if he became addicted and lost sleep due to constant unsupervised use of the social media sites, and if Defendants facilitated L.J.S.’s addictive behavior and unsupervised use of their social media platforms (i.e., acted so as to maximize engagement to the point of addiction and to deter parental supervision), the negligence cause of action does not seek to impose liability for Defendants’ publication decisions, but rather for their conduct that was intended to achieve this frequency of use and deter parental supervision. Section 230 does not shield Defendants from liability for the way in which their platforms actually operated.
But if that’s the case, it completely wipes out the entire point of Section 230, which is to get these kinds of silly, vexatious cases dismissed early on, such that companies aren’t constantly under threat of liability if they don’t magically solve large societal problems.
From there, the court also rejects the 1st Amendment arguments. To get around those arguments, the court repeatedly keeps arguing that the issue is the way that social media designed its services, and not the content on those services. But that’s tap dancing around reality. When you dig into any of these claims, they’re all, at their heart, entirely about the content.
It’s not the “infinite scroll” that is keeping people up at night. It’s the content people see. It’s not the lack of age verification that is making someone depressed. Assuming it’s even related to the social media site, it’s from the content. Ditto for eating disorders. When you look at the supposed harm, it always comes back to the content, but the judge dismisses all of that and says that the users are addicted to the platform, not the content on the platform.
Because the allegations in the Master Complaint can be read to state that Defendants’ liability grows from the way their platforms functioned, the Demurrer cannot be sustained pursuant to the protections of the First Amendment. As Plaintiffs argue in their Opposition, the allegations can be read to state that Plaintiffs’ harms were caused by their addiction to Defendants’ platforms themselves, not simply to exposure to any particular content visible on those platforms. Therefore, Defendants here cannot be analogized to mere publishers of information. To put it another way, the design features of Defendants’ platforms can best be analogized to the physical material of a book containing Shakespeare’s sonnets, rather than to the sonnets themselves.
Defendants fail to demonstrate that the design features of Defendants’ applications must be understood at the pleadings stage to be protected speech or expression. Indeed, throughout their Demurrer, Defendants make clear their position that Plaintiffs’ claims are based on content created by third parties that was merely posted on Defendants’ platforms. (See, e.g., Defs’ Dem., at p. 49.) As discussed above, a trier of fact might find that Plaintiffs’ harms resulted from the content to which they were exposed, but Plaintiffs’ allegations to the contrary control at the pleading stage.
There are some other oddities in the ruling as well, including dismissing the citation to the NetChoice/CCIA victory in the 11th Circuit regarding Florida’s social media moderation law, because the judge says that ruling doesn’t apply here, since this lawsuit isn’t about content moderation. She seems to think that the features on social media have nothing to do with content moderation, but that’s just factually wrong.
There are a few more issues in the ruling, but those are basically the big parts of it. Now, it’s true that this is just based on the initial complaints, and at this stage of the proceedings the judge has to assume that everything pleaded by the plaintiffs is true, but the way it was done here almost entirely wipes out the point of Section 230 (not to mention the 1st Amendment).
Letting these cases move forward enables exactly what Section 230 was designed to prevent: massive liability and expensive litigation over choices about how a website publishes and presents content. The end result, if this is not overturned, is likely to be a large number of similar (and similarly vexatious) lawsuits that threaten to overwhelm websites with potential liability. If each one has to go to a jury before it’s decided, it’s going to be a total mess.
The whole point of Section 230 was to have judges dismiss these cases early on. And here, the judge has gotten almost all of it backwards.
Filed Under: 1st amendment, addiction, california, carolyn kuhl, depression, lawsuits, moral panic, schools, section 230, social media
Companies: facebook, instagram, meta, snap, tiktok, youtube


Comments on “California Court, Ridiculously, Allows School Lawsuits Against Social Media To Move Forward”
What do you call a politician wearing a dress?
A judge. You can tell in this case by the bullshit grandstanding tropes being used.
Seriously, wtf is going on with California lately with all of these dumb, bizarre, and flat-out awful assaults on the internet? They have Silicon Valley for Christ’s sake; they should know how these companies work fundamentally by literally asking them. This is just downright embarrassing.
This comment has been flagged by the community.
Re:
thing is tho the politician newsom is a ccp boot licker
Re:
The smart people are being priced out of living there.
Re:
Their state is dying and people keep voting for Newsom.
Re:
To paraphrase George Carlin:
Reducto ad whitespace
Very well. Replace all of the third party content on the Defendants’ platforms with white squares. Can you credibly state that minor users would still become addicted?
I didn’t think so.
Is this post tagged with “depression” on account of what decisions like these induce?
Why are the schools bringing a case, rather than parents? Are they usurping the parents’ responsibility for their kids?
Re:
This is California, so yes.
How would a school have standing?
All hail our AI overlords!
So, out of amusement and curiosity, I asked ChatGPT what it thought. Took a bit of back-and-forth for it to finally analyze just the quotations of this article, but I got it to do so. And…. Is it hilarious and ironic that it even knows that this decision is fucked? I used the browsing feature, gave it this article, then explicitly told it to ignore everything but the court quotations in the article, and asked it what it thought. I’ll give you the highlights (direct quotations from its analysis):
And this was its polite, normal tone that it uses that pleads for ethical, moral, just people and society. This wasn’t even remotely scathing, just a “what do you think?” kind of thing. How is it that an AI — which, really, has no understanding of what it’s talking about and just predicts the words that come next — can “see” (for lack of a better term) the harms of a decision like this while the judge couldn’t? Maybe our AI overlords would be better for the world. Lol
I literally just got done re-reading Watters v. TSR (the Dungeons & Dragons suicide case). It’s not from California, but it directly undoes every one of the plaintiff’s arguments here.
From acting as though the various ‘age verification’ laws proposed have been passed and are on the books, arguing that neither the first amendment nor 230 applies because the content of the sites isn’t what people are using them for, flatly and baselessly asserting that it must be social media that is making kids depressed…
I honestly can’t tell if this is a case of a judge deciding their ruling from the outset and working backwards to ‘justify’ it, or of a judge who is just completely ignorant ruling on a case they had no business hearing. Whatever the case, hopefully a higher court with judges who actually know what they’re doing will give this one a good raking over the coals and strike down such an absurd decision.
Slightly wrong there. School boards are elected by the local populace, not appointed by some secret cabal. In fact, nearly all of those who run for election to the Board are not educators at all, but citizens who are applying for the job of overseeing the local education system. In short, they are the conduit between parents and the school district administration, and they have, by state law, some discretion in permitting or dispermitting certain actions taken on either side of that coin.
As to standing, again they are the overseers of the district, and that’s where the standing comes from – as locally elected representatives.
Note that nowhere in my treatise do I mention the qualifications for being a member of a school board…. there’s a reason for that, and the current discussion is a good example of such.
Re:
are you hyman supporting dumbassery
Re:
cuase you do realize that the courts ruling is dumb
Re: Re:
Both of you AC’s:
Someone should punish your teachers, because you both failed Reading Comprehension, and badly at that.
TFS speaks to the Court’s ruling. I spoke to one sentence in TFS, the one I partially quoted. And that was simply to clarify how Boards are populated, not how Boards are stupid, nor how they manage to hire lawyers to bribe judges for them – that’s a point where you and I are in agreement.
And no, I’m not a school board member, nor would I make a good one – I’m far too ornery and disruptive!
Re: Re: Re:
thank you atleast your sane unlike the matt jhon and hymans that keeps spewing right wing bullshit
Re: standing comes from special injury
This is not normally the case. The school district would need to allege an injury special to it, which is to say, one not suffered by the public at large. One who suffers the same injury as everyone else, without more, generally lacks standing.
You cannot sue the evil that is your least favorite social media site unless that site has done something that injures you more than the other members of the public. That they are evil and run over puppies is insufficient; you need to allege that they ran over your dog.
Locally elected boards do not magically gain the power to fight all the evil of the world, or even to litigate toward that end.
This comment has been flagged by the community.
What’s ridiculous about allowing claims related to protecting children to be possibly heard and evaluated in court? Just curious.
This anti-child safeguarding line that weaves through so much TD coverage is somewhat disturbing.
Re:
hello troll
Re:
Hello Evil Bastard,
We assume that you’re a relative of Fat Bastard, sharing the same last name and all. In fact, you just might be Fat Bastard in disguise, because your head is far too fat to let facts seep in, seemingly at all.
Here’s your one-minute score sheet. Keep it in a safe place.
In communion with the old saw about “Every Accusation A Confession”, it’s also true that “Every cry of ‘Think Of The Children’ hides an agenda that is most likely not friendly to the general population.”
You need to understand, without reservation, that no one here wishes harm to children in general, let alone to their own kids. What we’re on about is the fact that the TooC crowd either has a hidden agenda, or is simply ignorant of what their intentions will do to the very fabric of society. So far in this century, we’ve seen exactly ZERO proposals that don’t carry large and harmful side effects that can easily be demonstrated by observing other countries (usually not our allies). What boggles our collective mind is that when presented with these observable facts, the TooC folk simply ignore them. Or worse, they accuse us of wanting to harm children – just like you did above.
Now I could be an asshole and ask you what’s your hidden agenda, but I don’t think you have one. And I could be insulting and say that you’re too stupid to have your own agenda, but I’m not gonna go down that road either. What’s left here is not for you to apologize, but to instead agree that, in principle, accusing us of wanting to harm kids is not the real problem here. The real problem is that we’re asking for an acceptable and workable solution to the problem of CSAM, one that does not create even bigger problems for everyone.
You got that? Good. Thank you for your time. See ya in the funny papers!
Re: Re:
matt is also into revenge porn btw
Re:
Perhaps it’s the fact that such laws should have never been passed in the first place. When a legislator takes an oath to serve their constituency, and to uphold and defend the laws of this country, etc., they are put on notice that their duty is not to re-write (IOW, try an end-run around) the Constitution. To go ahead and attempt to do so anyway is despicable. To use emotional language that often circumvents stable, rational thinking, that’s criminal. And you know what I mean, so don’t try to weasel out of it. We don’t want our kids harmed in any fashion, no doubt about it, but we also aren’t willing to drop our drawers and bend over for people hell-bent on destroying our Constitutional rights in the name of ‘save the children’.
Now, your real question is “heard and evaluated in court”. None of us have a problem there, that’s the normal course of events. What we’re upset about is that the judge aided and abetted the Legislature in attempting to nullify the First Amendment. That’s what’s ridiculous. Said judge also knows that she too is forbidden from attempting to “rectify” the Constitution’s failings – that job is reserved solely for The Supremes.
Re:
Would you like an Internet where every site is either child friendly or requires you to sign in? Because if that is what you want, you can start by creating an account here so that we know who you are.
Also imagine an Internet where following a link brought up an age validation page, because relying on another site’s validation would leave a site open to irresponsible behavior charges. A site that allowed underage visitors would also have to operate in a letters-to-the-editor mode if it allowed comments, as the risk of someone breaking the posting rules would be too high.
Re:
This anti-child safeguarding line that weaves through so much TD coverage is somewhat disturbing.
What should be more disturbing to you are the lazy piss-poor excuses for parents that can’t be bothered to control what their fucking kids look at. Same parents will probably shit themselves dry and start blubbering about ‘censorship’ when they can’t seig heil their fucking heads off on Twatter.
But when it comes to their kids, it’s always someone else’s job. I’ll give a shit about what their fucking chuds see online when I’m the one providing the access.
It’s not anti-child. They’re not my fucking kids and I never signed up to give a shit about what they see online.
This comment has been flagged by the community.
Re: Re:
Your heart is full of hate and you make me seem very kind and patient in comparison.
BTW: tell us you’re unlikely to ever create a loving family unit w/o telling us directly. Loser.
Re: Re: Re:
hi pathetic troll
Re: Re: Re:
Your heart is full of hate and you make me seem very kind and patient in comparison.
Tell me you’re one of those parents I’m talking about without directly saying it. Take responsibility for your own kids, asshole. You made them, now pay for them.
Pathetic welfare queen troll.
Re: Re: Re:
[Projects facts contrary to the evidence]
These are claims by personal injury plaintiffs, not school districts. (As the opinion is quite clear about.)
Re:
… and is exactly what the opinion got wrong.
Re:
Right, the school district case is in federal court in Northern California. This was a state court in Los Angeles, which I didn’t even know existed until yesterday.