When a school district sues social media companies claiming they can’t educate kids because Instagram filters exist, that district is announcing to the world that it has fundamentally failed at its core mission. That’s exactly what New York City just did with its latest lawsuit against Meta, TikTok, and other platforms.
The message is unmistakable: “We run the largest school system in America with nearly a million students, but we’re unable to teach children that filtered photos aren’t real or help them develop the critical thinking skills needed to navigate the modern world. So we’re suing someone else to fix our incompetence.”
This is what institutional failure looks like in 2025.
NYC first got taken in by this nonsense last year, when Mayor Adams declared all social media a health hazard and toxic waste. However, that lawsuit was rolled into the crazy, almost impossible to follow, consolidated case in California that currently has over 2,300 filings on the docket. So, apparently, NYC dropped that version and has now elected to sue, sue again. With the same damn law firm, Keller Rohrback, that kicked off this trend and is behind a big chunk of these lawsuits.
The actual complaint is bad, and everyone behind it should feel bad. It’s also 327 pages, and there’s no fucking way I’m going to waste my time going through all of it, watching my blood pressure rise as I have to keep yelling at my screen “that’s not how any of this works.”
The complaint leads with what should be Exhibit A for why NYC schools are failing their students—a detailed explanation of adolescent brain development that perfectly illustrates why education matters:
Children and adolescents are especially vulnerable to developing harmful behaviors because their prefrontal cortex is not fully developed. Indeed, it is one of the last regions of the brain to mature. In the images below, the blue color depicts brain development.
Because the prefrontal cortex develops later than other areas of the brain, children and adolescents, as compared with adults, have less impulse control and less ability to evaluate risks, regulate emotions and regulate their responses to social rewards.
Stop right there. NYC just laid out the neurological case for why education exists. Kids have underdeveloped prefrontal cortexes? They struggle with impulse control, risk evaluation, and emotional regulation? THAT’S LITERALLY WHY WE HAVE SCHOOLS.
The entire premise of public education is that we can help children develop these exact cognitive and social skills. We teach them math because their brains can learn mathematical reasoning. We teach them history so they can evaluate evidence and understand cause and effect. We teach them literature so they can develop empathy and critical thinking.
But apparently, when it comes to digital literacy—arguably one of the most important skills for navigating modern life—NYC throws up its hands and sues instead of teaches.
This lawsuit is a 327-page confession of educational malpractice.
The crux of the lawsuit is, effectively, “kids like social media, and teachers just can’t compete with that shit.”
In short, children find it particularly difficult to exercise the self-control required to regulate their use of Defendants’ platforms, given the stimuli and rewards embedded in those platforms, and as a foreseeable and probable consequence of Defendants’ design choices tend to engage in addictive and compulsive use. Defendants engaged in this conduct even though they knew or should have known that their design choices would have a detrimental effect on youth, including those in NYC Plaintiffs’ community, leading to serious problems in schools and the community.
By this logic, basically any product that children like is somehow a public nuisance.
This lawsuit is embarrassing to the lawyers who brought it and to the NYC school system.
Take the complaint’s hysterical reaction to Instagram filters, which perfectly captures the educational opportunity NYC is missing:
Defendants’ image-altering filters cause mental health harms in multiple ways. First, because of the popularity of these editing tools, many of the images teenagers see have been edited by filters, and it can be difficult for teenagers to remain cognizant of the use of filters. This creates a false reality wherein all other users on the platforms appear better looking than they actually are, often in an artificial way. As children and teens compare their actual appearances to the edited appearances of themselves and others online, their perception of their own physical features grows increasingly negative. Second, Defendants’ platforms tend to reward edited photos, through an increase in interaction and positive responses, causing young users to prefer the way they look using filters. Many young users believe they are only attractive when their images are edited, not as they appear naturally. Third, the specific changes filters make to individuals’ appearances can cause negative obsession or self-hatred surrounding particular aspects of their appearance. The filters alter specific facial features such as eyes, lips, jaw, face shape, and face slimness—features that often require medical intervention to alter in real life
Read that again. The complaint admits that “it can be difficult for teenagers to remain cognizant of the use of filters” and that kids struggle to distinguish between edited and authentic images.
That’s not a legal problem. That’s a curriculum problem.
A competent school system would read that paragraph and immediately start developing age-appropriate digital literacy programs. Media literacy classes. Critical thinking exercises about online authenticity. Discussions about self-image and social comparison that have been relevant since long before Instagram existed.
Instead, NYC read that paragraph and decided the solution is to sue the companies rather than teach the kids.
This is educational malpractice masquerading as child protection. If you run a million-student school system and your response to kids struggling with digital literacy is litigation rather than education, you should resign and let someone competent take over.
They’re also getting sued for… not providing certain features, like age verification. Even though, as we keep pointing out, age verification is (1) likely unconstitutional outside of the narrow realm of pornographic content, and (2) a privacy and security nightmare for kids.
The broader tragedy here extends beyond one terrible lawsuit. NYC is participating in a nationwide trend of school districts abandoning their educational mission in favor of legal buck-passing. These districts, often working with the same handful of contingency-fee law firms, have decided it’s easier to blame social media companies than to do the hard work of preparing students for digital citizenship.
This represents a fundamental misunderstanding of what schools are supposed to do. We don’t shut down the world to protect children from it—we prepare children to navigate the world as it exists. That means teaching them to think critically about online content, understand privacy and security, develop healthy relationships with technology, and build the cognitive skills to resist manipulation.
Every generation gets a moral panic or two, and apparently “social media is destroying kids’ brains” is ours. We’ve seen this movie before: the waltz would corrupt young women’s morals, chess would stop kids from going outdoors, novels would rot their brains with useless fiction, bicycles would cause moral decay, radio would destroy family conversation, pinball machines would turn kids into delinquents, television would make them violent, comic books would corrupt their minds, and Dungeons & Dragons would lead them to Satan worship.
Society eventually calmed down after each of those, and we now look back on those moral panics as silly, hysterical overreactions. You would hope that a modern education system would take note and recognize the opportunity to use these new forms of media as teaching tools.
But faced with social media, America’s school districts have largely given up on education and embraced litigation. That should terrify every parent more than any Instagram filter ever could.
The real scandal isn’t that social media exists. It’s that our schools have become so risk-averse and educationally bankrupt that they’ve forgotten their core purpose: preparing young people to be thoughtful, capable adults in the world they’ll actually inherit.
Have we considered giving Supreme Court justices their own blogs in which they can vent their ill-informed brain farts, rather than leaving them to use official Supreme Court order lists as a form of blog?
Justice Clarence Thomas has been the absolute worst on this front, using various denials of certiorari on other topics to add in a bunch of anti-free speech, anti-Section 230 commentary, on topics he clearly does not understand.
Thomas started this weird practice of Order List blogging in 2019, when he used the denial of cert on a defamation case to muse unbidden on why we should get rid of the (incredibly important) actual malice standard for defamation cases involving public figures.
Over the last few years, however, his main focus on these Order List brain farts has been to attack Section 230, each time demonstrating the many ways he doesn’t understand Section 230 or how it works (and showing why justices probably shouldn’t be musing randomly on culture war topics on which they haven’t actually been briefed by any parties).
He started his Section 230 crusade in 2020, when he again chose to write unbidden musings after the court decided not to hear a case that touched on Section 230. At that point, it became clear that he was doing this as a form of “please send me a case in which I can try to convince my fellow Justices to greatly limit the power of Section 230.”
Not having gotten what he wanted, he did it again in 2021, in a case that really didn’t touch on Section 230 at all, but where he started musing that maybe Section 230 itself was unconstitutional and violated the First Amendment.
He did it again a year later, citing his own previous blog posts.
Finally, later that year, the Supreme Court actually took on two cases that seemed to directly target what Thomas was asking for: the Gonzalez and Taamneh cases targeted internet companies over terrorist attacks based on claims that the terrorists made use of those websites, and therefore the sites could be held civilly liable, at least in part, for the attacks.
When those cases were finally heard, it became pretty obvious pretty damn quickly how ridiculous the premise was, and that the Supreme Court Justices seemed to regret the decision to even hear the cases. Indeed, when the rulings finally came out, it was something of a surprise that the main ruling, in Taamneh, was written by Thomas himself, explaining why the entire premise of suing tech companies for unrelated terrorist attacks made no sense, but refusing to address specifically the Section 230 issue.
However, as we noted at the time, Thomas’ ruling in Taamneh reads like a pretty clear support for Section 230 (or at least a law like Section 230) to quickly kick out cases this stupid and misdirected. I mean, in Taamneh, he wrote (wisely):
The mere creation of those platforms, however, is not culpable. To be sure, it might be that bad actors like ISIS are able to use platforms like defendants’ for illegal—and sometimes terrible—ends. But the same could be said of cell phones, email, or the internet generally. Yet, we generally do not think that internet or cell service providers incur culpability merely for providing their services to the public writ large. Nor do we think that such providers would normally be described as aiding and abetting, for example, illegal drug deals brokered over cell phones—even if the provider’s conference-call or video-call features made the sale easier.
And, I mean, that’s exactly why we have Section 230. To get cases that make these kinds of tenuous accusations into legal claims tossed out quickly.
But, it appears that Thomas has forgotten all of that. He’s forgotten how his own ruling in Taamneh explains why intermediary liability protections (of which 230 is the gold standard) are so important. And he’s forgotten how his lust for a “let’s kill Section 230” case resulted in the Court taking the utterly ridiculous Taamneh case in the first place.
So, now, when the Court rejected another absolutely ridiculous case, Thomas is blogging yet again about how bad 230 is and how he wishes the Court would hear a case that lets him strike it down.
This time, the case is Doe v. Snap, and it is beyond stupid. It may be even stupider than the Taamneh case. Eric Goldman had a brief description of the issues in this case:
A high school teacher allegedly used Snapchat to groom a sophomore student for a sexual relationship. (Atypically, the teacher was female and the victim was male, but the genders are irrelevant to this incident).
The teacher was sentenced to ten years in jail, so the legal system has already held the wrongdoer accountable. Nevertheless, the plaintiff has pursued additional defendants, including the school district (that lawsuit failed) and Snap.
We should be precise about Snap’s role in this tragedy. The teacher and student exchanged private messages on Snap. Snap typically is not legally entitled to read or monitor the contents of those messages. Thus, any case predicated on the message contents runs squarely into Snap’s limitations to know those contents. To get around this, the plaintiff said that Snap should have found a way to keep the teacher and student from connecting on Snap. But these users already knew each other offline; it’s not like some stranger-to-stranger connection. Further, Snap can keep these individuals from connecting on its network only if it engages in invasive user authentication, like age authentication (to segregate minors from adults). However, the First Amendment has said for decades that services cannot be legally compelled to do age authentication online. The plaintiff also claimed Snapchat’s “ephemeral” message functionality is a flawed design, but the Constitution doesn’t permit legislatures to force messaging services to maintain private messages indefinitely. Indeed, Snapchat’s ephemerality enhances socially important privacy considerations. In other words, this case doesn’t succeed however it’s framed: either it’s based on message contents Snap can’t read, or it’s based on site design choices that aren’t subject to review due to the Constitution.
See? It’s just as, if not more, stupid than the Taamneh case. It’s yet another “Steve Dallas” lawsuit, in which civil lawsuits are filed against large companies who are only tangentially related to the issues at play, solely because they have deep pockets.
The procedural posture of this case is also bizarre. The lower courts also recognized it was a dumb case, sorta. The district court rejected the case on 230 grounds. The 5th Circuit affirmed that decision but (bizarrely) suggested the plaintiff seek en banc review from the full contingent of Fifth Circuit judges. That happened, and while the Fifth Circuit refused to hear the case en banc, seven of the fifteen judges (just under half) wrote a “dissent,” citing Justice Thomas’s unbriefed musings and suggesting Section 230 should be destroyed.
Justice Thomas clearly noticed that. While the Supreme Court has now (thankfully) rejected the cert petition, Thomas has used the opportunity to renew his grievances regarding Section 230.
It’s as wrong and incoherent as his past musings, but somehow even worse, given what we had hoped he’d learned from the Taamneh mess. On top of that, it has a new bit of nuttery, which we’ll get to eventually.
First, he provides an explanation of what he believes happened that is far more generous to the plaintiff:
When petitioner John Doe was 15 years old, his science teacher groomed him for a sexual relationship. The abuse was exposed after Doe overdosed on prescription drugs provided by the teacher. The teacher initially seduced Doe by sending him explicit content on Snapchat, a social-media platform built around the feature of ephemeral, self-deleting messages. Snapchat is popular among teenagers. And, because messages sent on the platform are self-deleting, it is popular among sexual predators as well. Doe sued Snapchat for, among other things, negligent design under Texas law. He alleged that the platform’s design encourages minors to lie about their age to access the platform, and enables adults to prey upon them through the self-deleting message feature. See Pet. for Cert. 14–15. The courts below concluded that §230 of the Communications Decency Act of 1996 bars Doe’s claims.
Again, given his ruling in Taamneh, where he explicitly noted how silly it was to blame the tool for its misuse, you’d think he’d be aware that he’s literally describing the same scenario. Though, in this case it’s even worse, because as Goldman points out, Snap is prohibited by law from monitoring the private communications here.
Thomas then goes on to point out how there’s some sort of groundswell for reviewing Section 230… by pointing to each of his previous unasked-for, unbriefed musings as proof:
Notwithstanding the statute’s narrow focus, lower courts have interpreted §230 to “confer sweeping immunity” for a platform’s own actions. Malwarebytes, Inc. v. Enigma Software Group USA, LLC, 592 U. S. ___, ___ (2020) (statement of THOMAS, J., respecting denial of certiorari) (slip op., at 1). Courts have “extended §230 to protect companies from a broad array of traditional product-defect claims.” Id., at ___–___ (slip op., at 8–9) (collecting examples). Even when platforms have allegedly engaged in egregious, intentional acts—such as “deliberately structur[ing]” a website “to facilitate illegal human trafficking”—platforms have successfully wielded §230 as a shield against suit. Id., at ___ (slip op., at 8); see Doe v. Facebook, 595 U. S. ___, ___ (2022) (statement of THOMAS, J., respecting denial of certiorari) (slip op., at 2).
And it’s not like he’s forgotten the mess with Taamneh/Gonzalez, because he mentions it here, but somehow it doesn’t ever occur to him that this is the same sort of situation, or that his ruling in Taamneh is a perfect encapsulation of why 230 is so important. Instead, he bemoans that the Court didn’t have a chance to even get to the 230 issues in that case:
The question whether §230 immunizes platforms for their own conduct warrants the Court’s review. In fact, just last Term, the Court granted certiorari to consider whether and how §230 applied to claims that Google had violated the Antiterrorism Act by recommending ISIS videos to YouTube users. See Gonzalez v. Google LLC, 598 U. S. 617, 621 (2023). We were unable to reach §230’s scope, however, because the plaintiffs’ claims would have failed on the merits regardless. See id., at 622 (citing Twitter, Inc. v. Taamneh, 598 U. S. 471 (2023)). This petition presented the Court with an opportunity to do what it could not in Gonzalez and squarely address §230’s scope
Except no. If the Taamneh/Gonzalez cases didn’t let you get to the 230 issue because the cases “would have failed on the merits regardless,” the same is doubly true here, where there is no earthly reason why Snap should be held liable.
Then, hilariously, Thomas whines that SCOTUS is taking too long to address this issue with which he is infatuated, even though all it’s done so far is have really, really dumb cases sent to the Court:
Although the Court denies certiorari today, there will be other opportunities in the future. But, make no mistake about it—there is danger in delay. Social-media platforms have increasingly used §230 as a get-out-of-jail free card.
And that takes us to the “new bit of nuttery” I mentioned above. Thomas picks up on a point that Justice Gorsuch raised during oral arguments in the NetChoice cases, one I’ve now seen being pushed by grifters and nonsense peddlers. Specifically, that the posture NetChoice took in fighting state content moderation laws is in conflict with the arguments made by companies making use of Section 230.
Here, we’ll let Thomas explain his argument before picking it apart to show just how wrong it is, and how this demonstrates the risks of unbriefed musings by an ideological and outcomes-motivated Justice.
Many platforms claim that users’ content is their own First Amendment speech. Because platforms organize users’ content into newsfeeds or other compilations, the argument goes, platforms engage in constitutionally protected speech. See Moody v. NetChoice, 603 U. S. ___, ___ (2024). When it comes time for platforms to be held accountable for their websites, however, they argue the opposite. Platforms claim that since they are not speakers under §230, they cannot be subject to any suit implicating users’ content, even if the suit revolves around the platform’s alleged misconduct. See Doe, 595 U. S., at ___–___ (statement of THOMAS, J.) (slip op., at 1–2). In the platforms’ world, they are fully responsible for their websites when it results in constitutional protections, but the moment that responsibility could lead to liability, they can disclaim any obligations and enjoy greater protections from suit than nearly any other industry. The Court should consider if this state of affairs is what §230 demands.
So, the short answer is, yes, this is exactly the state of affairs that Section 230 demands, and the authors of Section 230, Chris Cox and Ron Wyden, have said so repeatedly.
Where Thomas gets tripped up is in misunderstanding whose speech we’re talking about in which scenario. Section 230 is quite clear that sites cannot be held liable for the violative nature of third-party expression (i.e., the content created by users). But the argument in Moody was about the editorial discretion of social media companies to express themselves in terms of what content they allow.
Two different things in two different scenarios. The platforms are not “arguing the opposite.” They are being specific and explicit where Thomas is being sloppy and confused.
Section 230 means no liability for third-party uses of the tool (which you’d think Thomas would understand given his opinion in Taamneh). But Moody isn’t about liability for third-party content. It was about whether the sites have the right to determine which content they host and which they won’t, and whether those choices (not the underlying content) are themselves expressive. The court answered (correctly) that they are.
But that doesn’t change the simple fact that the sites still should not be liable for any tort violation created by a user.
Thomas is right, certainly, that more such cases will be sent to the Supreme Court, given all the begging he’s been doing for them.
But he would be wise to actually learn a lesson or two from what happened with Taamneh and Gonzalez, and maybe recognize (1) that he shouldn’t spout off on topics that haven’t been fully briefed, (2) that there’s a reason why particularly stupid cases like this one and Taamneh are the ones that reach the Supreme Court, and (3) that what he said in Taamneh actually explains why Section 230 is so necessary.
And then we can start to work on why he’s conflating two different types of expression in trying to attack the (correct) position of the platforms with regards to their own editorial discretion and 230 protections.
It’s a point we have to make far more often than we should: trademark law is not designed to allow anyone or any company to simply lock up common language as their own. There are lots of ways the confusion around that expresses itself, but one of the most common concerns generic terms for goods and services. Yes, you can trademark Coca-Cola. No, you cannot trademark “soda.” Yes, you can trademark “Apple” for computers. No, you cannot trademark “apples” for your apple-farming company. See? Not too hard!
For us, at least. For the folks at Snap, however, the point seems to elude them. Snap has a line of augmented reality glasses and has unhelpfully decided to name the product “Spectacles.” When Snap applied for a trademark on the name of the product, the USPTO managed to actually get it right and denied the application over the generic nature of the term.
But rather than slinking away with a sly smile at the failed attempt to get one over on the USPTO, Snap has now sued the USPTO instead.
The USPTO rejected Snap’s trademark application for the name in 2020, finding it trademark-ineligible because it was either generic or descriptive. A USPTO tribunal affirmed the decision later that year. Snap asked the California court in 2022 to force the USPTO to grant the trademark, and said that potential buyers think of “Spectacles” as a Snap brand instead of a generic term for smart glasses.
The USPTO asked the court last year to grant it a win without a trial.
And the court just recently denied the USPTO’s request and is allowing the trial to move forward. Why? I have no real idea. The U.S. Magistrate Judge cited “competing evidence” that needed to be sorted out in an actual trial, but I truly can’t understand what in the world that competing evidence would be. The only specifics in the judge’s order reference surveys and expert testimony as to whether the public associates the term “spectacles” with glasses in general or with Snap’s product. And I suspect the court is allowing this to go to trial mostly as a procedural result, since at this stage the burden would be on the USPTO to demonstrate that the evidence is so one-sided that it is entitled to judgment without a trial.
And the judge apparently thinks it’s not one-sided enough. So now this goes to trial, where one would hope it ultimately becomes a win for the USPTO.
Well, this is not that much of a surprise, but in the leadup to the Senate “child safety” dog and pony show that will be happening in a few hours, Microsoft decided to twist the knife into some of its competitors. Microsoft’s Vice Chair and President, Brad Smith (who was formerly the company’s general counsel and absolutely understands the impact of what he’s doing), came out and endorsed the Kids Online Safety Act (KOSA).
In his statement, he says:
Technology can be a powerful tool for learning, creativity, communication, and social good, but can equally pose significant challenges and risks for young users. We must protect youth safety and privacy online and ensure that technology – including emerging technologies such as AI – serves as a positive force for the next generation.
The Kids Online Safety Act (KOSA) provides a reasonable, impactful approach to address this issue. It is a tailored, thoughtful measure that can support young people to engage safely online. Microsoft supports this legislation, encourages its passage, and applauds Senators Blumenthal and Blackburn for their leadership.
This is absolute bullshit on multiple levels. First off, weird plug for AI there, which has nothing to do with any of this, and which is likely to be Congress’ next target of “bad” tech.
Of course, that may very well be why Smith is doing this. He knows it’s an easy way to cozy up to Congress and pretend to support its agenda, while the downside risk to Microsoft is minimal. KOSA is going to cause real pain for more consumer-facing social media sites like Instagram and YouTube, but Microsoft-owned sites like LinkedIn and GitHub will most likely be spared. So, as a totally cynical approach, this saddles some of Microsoft’s largest competitors with a nonsense compliance headache, while letting Microsoft publicly claim it’s “protecting the kids” and collect kudos from Congress.
However, the claim that KOSA is “reasonable,” “impactful,” “tailored,” or “thoughtful” is just grade-A bullshit. The law is a total mess, and will do real harm to kids beyond just being obviously unconstitutional. As we’ve pointed out multiple times, GOP support for the bill exists because they know it will be used to censor LGBTQ content. The GOP’s leading “think tank,” the Heritage Foundation, has publicly supported the bill because it believes censoring LGBTQ content “is protecting kids.” Meanwhile, bill co-sponsor Marsha Blackburn (whom Smith thanks above for her “leadership”) has similarly admitted that Congress should pass KOSA to “protect minor children from the transgender in our culture.”
There is no excuse for Microsoft to take this stance, which Smith well knows will likely lead to kids dying, rather than being protected. But, Smith is a cynical political operator and knows exactly what he’s doing. He’s doing a favor for Congress while cynically kicking his rivals while they’re down. Between this and Snap’s similar capitulation, expect to see all sorts of nonsense at the hearing about how “some companies want to protect our children, why won’t you?” addressed to the other company representatives.
Microsoft, over the last decade or so, really rehabilitated its public image, shedding its reputation as the evil company of the ’90s that crushed competitors through any dirty trick imaginable. But that DNA is still in there, and it will miss no opportunity to kick competitors, even if it comes at the expense of children.
Over and over again, we see politicians browbeat companies until they agree to support terrible legislation. Back when FOSTA was being debated, the media and Congress put tremendous pressure on tech to support it, falsely claiming that without it the companies were enabling sex trafficking. Eventually, after a ton of pressure was put on the companies, Meta (then still Facebook) broke ranks with the rest of the industry and came out with full-throated support for the law. Congress used that support to claim that the tech industry was on board, and passed FOSTA.
But we’re seeing the same playbook run with KOSA, the Kids Online Safety Act, which has broad bipartisan support in Congress, even as Republicans have made it clear they view it as a tool to silence LGBTQ+ content.
There’s yet another Congressional moral panic hearing happening this week, where the CEOs of Meta, Discord, Snap, TikTok, and ExTwitter will go to DC to get yelled at by very clueless but grandstandingly angry Senators. The whole point of this dog and pony show is to pretend they’re “protecting the children” online, when it’s been shown time and time again that they don’t actually care about the harm they’re doing, or what’s really happening online.
But, because of this, all the companies are looking for ways to make some sort of public claim about how “safe” they keep kids. ExTwitter made some announcements late last week, but Snap decided to go all in and offer Facebook-style support for KOSA.
A Snap spokesperson told POLITICO about the company’s support of Kids Online Safety Act. The popular messaging service’s position breaks ranks with its trade group NetChoice, which has opposed KOSA. The bill directs platforms to prevent the recommendation of harmful content to children, like posts on eating disorders or suicide.
Snap has been in a rough spot lately for a variety of reasons, including some very dumb lawsuits. Apparently the company feels it needs to make a splash, even if laws like KOSA will do more to put kids in danger than to help them. But, of course, they felt the need to cave to Congressional pressure. Not surprisingly, the censors-in-chief are thrilled with their first scalp.
KOSA co-sponsors Sens. Richard Blumenthal (D-Conn.) and Marsha Blackburn (R-Tenn.) applauded Snap’s endorsement. “We are pleased that at least some of these companies are seemingly joining the cause to make social media safer for kids, but this is long overdue,” they told POLITICO. “We will continue fighting to pass the Kids Online Safety Act, building on its great momentum to ensure it becomes law.”
Of course these two would cheer about this. Blackburn was the one who told a reporter how KOSA would be useful for silencing “the transgender.” And Blumenthal simply hates the internet. He’s been pulling exactly this kind of shit since he was Attorney General of Connecticut, when he forced Craigslist to close certain sections by fundamentally misrepresenting what Craigslist did. And that closure of parts of Craigslist has since been shown to have literally resulted in the deaths of women.
But Blumenthal has never expressed any concern or any doubt about his bills, even as he leaves a bloody trail in the wake of his legislating. KOSA will lead to much more harm as well, but its supporters have arm-twisted Snap into supporting it so that they get spared the worst of the nonsense on Wednesday.
Well, this is dumb. As detailed by NBC News, Los Angeles Superior Court Judge Lawrence Riff has rejected a perfectly reasonable attempt by Snap to have a lawsuit thrown out on Section 230 grounds. The case involves family members of kids who overdosed on illegal drugs like fentanyl, who are suing Snap for allegedly providing the connection between the drug dealers and the kids.
And, if you think this is kinda like suing AT&T because a drug buyer used a phone to call a drug dealer, you’re not wrong.
But that’s not how Judge Riff seems to see things. In the ruling (which NBC did not post), Judge Riff… seems pretty confused. He buys into a recent, somewhat troubling line of cases holding that you can get around Section 230 by alleging “defective” or “negligent” design. Here, the “defective design” is… the fact that Snapchat has disappearing messages and doesn’t do age verification:
According to plaintiffs: Snapchat is specifically chosen by children, teens and young adults for drug distribution because of how Snap designs, markets, distributes, programs and operates Snapchat (SAC, ¶ 2); Snapchat’s many data-deletion features and functions made it foreseeable, if not intended, that Snapchat would become a haven for drug trafficking (Id. at ¶ 3); the combination of practices and multiple features Snap chose to build into its Snapchat product, such as ineffective age and identity verification, facilitating easy creation of multiple, fake accounts, connecting kids with strangers and drug dealers “in-app” through the “quick add” features and a live mapping feature, makes Snap an inherently dangerous product for young users (Id. at ¶ 13); Snap was on notice that Snapchat was facilitating an enormous number of drug deals (Id. at ¶ 14)…
Of course, this should be an easy Section 230 dismissal. The claims are entirely based on Snap not magically stopping drug deals, which is about user content. The fact that messages disappear is meaningless. The same is true for a phone call. The idea that Snap “intended” for its product to be used for drug deals is similarly bonkers.
But, Judge Riff appears to have opinions on Section 230 and he’s going to use this case to air them out. He notes that the Supreme Court has not yet ruled specifically on the bounds of 230, pointing to the Gonzalez ruling last year where the Court deliberately chose not to rule on Section 230.
Riff seems to mock the commentary around Section 230:
These are, it has been famously (and at this point monotonously) said, “the twenty-six words that created the internet.” At least dozens if not hundreds of courts, academics, and other commentators have by now explained that the provision was designed, in 1996, to protect then-fledgling internet companies from incurring liability when millions of users posted content and when the companies made moves to police that content.
He follows that up with a footnote mentioning an article by Jeff Kosseff (whose name he misspells as “Kossoff” and seems to mock), while also calling him “section 230’s preeminent historian.”
He then goes through an abbreviated (and slightly misleading) history of 230, and does so in a breezy, somewhat mocking tone.
If Congress had intended to immunize all interactive computer services from liabilities “based on” third-party content, there are straightforward elocutions to express that intention. But that is neither what Congress did nor what Congress could have done consistent with the policy statements in subdivision (b) of section 230. Instead, Congress chose to invoke words of art drawn from common law defamation-liability distinctions between “publishers” and “speakers,” on the one hand, and, apparently, “distributors” on the other.
Again, why those words and why in 1996?
At common law, including in New York state in 1996, publishers were held to a higher standard than distributors over defamatory or other illegal content on the theory they did, or at least reasonably could, exercise editorial control. Distributors, on the other hand, were liable only when they knew or should have known that the publication contained illegal content. It is universally accepted by knowledgeable persons, including the members of the California Supreme Court, that Congress’s decision to use the publisher/distributor distinction for section 230 was in response to a New York decision, Stratton Oakmont, Inc. v. Prodigy Services Co. (N.Y. Sup.Ct. 1995) 1995 WL 323710 (Stratton Oakmont), applying New York law. (Barrett v. Rosenthal (2006) 40 Cal.4th 33, 44 (Barrett).) An early Internet case, Stratton Oakmont held that because the defendant had exercised some editorial control – removing offensive content and automatically screening for offensive language – over the third-party content, it was properly treated as a publisher and not a mere distributor. Section 230(c)(1) overruled, as it were, the Stratton Oakmont decision by eliminating common law strict liability for acting like a publisher by posting, or removing some of, a third-party’s false statement.
An early federal appellate decision, Zeran v. America Online, Inc. (4th Cir. 1997) 129 F.3d 327, had an outsized influence on the interpretation of section 230. According to the California Supreme Court (among other courts), Zeran rejected the notion of any distinction between publisher and distributor liability, instead finding that Congress intended to broadly shield all providers from liability for publishing information received from third parties. (Barrett, supra, 40 Cal.4th at p. 53.) The Barrett court explained, “We agree with the Zeran court, and others considering the question, that subjecting Internet service providers and users to defamation liability would tend to chill online speech.” (Id. at p. 56; see also Hassell v. Bird (2018) 5 Cal.5th 522, 556-558 (conc. opn. of Kruger, J.) [Zeran’s broad reach did not, however, prevent the Ninth Circuit’s conclusion in Barnes, namely, that section 230 did not immunize Yahoo for alleged promissory estoppel because the claim did not seek to hold Yahoo liable as a publisher or speaker of third-party content].)
He then goes on to cite a few different judges who have recently called into question the way the courts view 230, including Justice Clarence Thomas, who famously went off on a rant about 230 and content moderation in a case that had nothing to do with that issue, and in which there had been no briefing or oral arguments on the issue. Oddly, Riff does not mention Thomas’s writing in the Taamneh ruling (which came out alongside the Court’s punt on actually ruling about 230 in Gonzalez), in which (after being briefed) Thomas seems to have a better, more complete understanding of why companies need to be free to make moderation decisions without fear of liability for the end results.
Either way, Judge Riff takes us on an extended tour of every case where a judge has ruled that 230 doesn’t apply, and seems to take that to mean that 230 shouldn’t apply in this case (or, rather, he says that 230 can apply to some of the claims, regarding moderation choices, but cannot be used against the claims around things like disappearing messages, which he argues could be seen as negligent design).
The allegations assert conduct beyond “incidental editorial functions” for which a publisher may still enjoy section 230 immunity. (See, Batzel v. Smith (9th Cir. 2003) 333 F.3d 1018, 1031.) Additionally, the court finds that the alleged attributes and features of Snapchat cross the line into “content” – as the Liapes and Lee courts found, too. The court rejects, as did the Ninth Circuit in Barnes, Snap’s assertion of “general immunity” under its “based on”/”flows from”/”but for” reading of the scope of section 230.
Basically, this ruling reads Section 230 as a near dead letter. It says that so long as you allege that any problematic content you find on social media is the result of “negligent design,” you can take away the 230 defense. And that basically kills Section 230. As we’ve explained repeatedly, the entire benefit of 230 is that it gets rid of these ridiculous cases quickly, rather than having them drag on in costly ways only to lose eventually anyway.
Here, this case has almost no chance of succeeding in the long run. But, the case must now move forward through a much more expensive process, because the judge is willing to let the plaintiffs plead around 230 by claiming negligent design.
There’s a separate discussion, outside the 230 issue, over whether or not Snap can be held liable for product liability since it offers a service, rather than a “tangible product,” but the judge doesn’t buy that distinction at all:
The tangible product versus (intangible) service test is a false dichotomy as applied to Snapchat, at least as Snapchat is described in the SAC. As noted, even the parties struggle to find language with which to categorize Snapchat, but neither “product” nor “service” are up to the job.
And thus, the case must continue to move forward:
The court’s answer is: not enough information yet to tell, and the question cannot be resolved on demurrer. Accordingly, the court overrules Snap’s demurrer to counts 1, 2, and 3 on the ground that Snapchat is not a tangible product. The court will permit the parties to create a factual record as [to] the characteristics, functionalities, distribution, and uses of Snapchat. The court has no doubt that it will revisit later whether California strict products liability law does or should apply in this case, but it will do so on a developed factual record.
Separately, Snap also had pointed out that even outside of 230, there’s nothing in the complaint that would constitute negligence, but again, Judge Riff punts, repeatedly saying there’s enough in the complaint to move the case forward, including on the absolutely insane “failure to warn” claim (arguing that Snap’s failure to warn people about the dangers of buying drugs is part of the negligence it engaged in):
Snap demurs to count (negligent failure to warn) on the basis that “the danger of buying illegal drugs online is obvious, so no warning is required.” (Demurrer, 2.) The SAC, however, alleges that the harm arose from a non-obvious danger, namely, the presence of fentanyl in the drugs purchased by the minors. The SAC does not allege an obvious danger for which no warning is required.
Again, that makes no sense at all, and is exactly why 230 should apply here. First of all, this is a complaint about the content, which should put it squarely back into 230’s purview, even according to Judge Riff’s own framing. The issue is whether or not Snap needs to warn about some of the content posted by users. That should easily be stopped by 230, but here it’s not.
On top of that, how is Snap (or any website) to know every potential “non-obvious danger” that might arise on their platform and effectively warn users that it might occur? That’s why we have laws like 230. To avoid these kinds of nonsense lawsuits.
Anyway, this case is far from over, but the implications of this ruling are shocking. It would enable lawsuits against any platform used for conversations that don’t record all content indefinitely, where the communications tool is used for anything that might lead to harm.
And, yes, things like Signal’s disappearing messages, or the telephone, or meeting in a park seem like they could qualify. Again, none of this means the plaintiffs will win. There’s still a decent chance that, as the case moves on (if it moves on), they will lose because the facts here are so silly. But just the fact that the judge is saying 230 doesn’t apply here is tremendously problematic and troubling, and it gives yet another way for plaintiffs and ambulance-chasing lawyers to tie up websites in ridiculous litigation.
Over the last year, we’ve covered a whole bunch of truly ridiculous, vexatious, bullshit lawsuits filed by school districts against social media companies, blaming them for the fact that the school boards don’t know how to teach students (the one thing they’re supposed to specialize in!) how to use the internet properly. Instead of realizing the school boards ought to fire themselves, some greedy ambulance-chasing lawyers have convinced them that if courts force social media companies to pay up, they’ll never have a budget shortfall again. And school boards desperate for cash, and unwilling to admit their own failings as educators, have bought into the still-unproven moral panic that social media is harming kids, despite widespread evidence that it’s just not true.
While there are a bunch of these lawsuits, some in federal court and some in state courts, some of the California state court ones were rolled up into a single case, and on Friday, California state Judge Carolyn Kuhl (ridiculously) said that the case can move forward, and that the social media companies’ 1st Amendment and Section 230 defenses don’t apply (first reported by Bloomberg Law).
There is so much wrong with this decision, it’s hard to know where to start, other than to note one hopes that a higher court takes some time to explain to Judge Kuhl how the 1st Amendment and Section 230 actually work. Because this is not it.
The court determines that Defendants’ social media platforms are not “products” for purpose of product liability claims, but that Plaintiffs have adequately pled a cause of action for negligence that is not barred by federal immunity or by the First Amendment. Plaintiffs also have adequately pled a claim of fraudulent concealment against Defendant Meta.
As noted in that paragraph, the product liability claims fail, as the court at least finds that social media apps don’t fit the classification of a “product” for product liability purposes.
Product liability doctrine is inappropriate for analyzing Defendants’ responsibility for Plaintiffs’ injuries for three reasons. First, Defendants’ platforms are not tangible products and are not analogous to tangible products within the framework of product liability. Second, the “risk-benefit” analysis at the heart of determining whether liability for a product defect can be imposed is illusive in the context of a social media site because the necessary functionality of the product is not easily defined. Third, the interaction between Defendants and their customers is better conceptualized as a course of conduct implemented by Defendants through computer algorithms.
However, it does say that the negligence claims can move forward and are not barred by 230 or the 1st Amendment. A number of cases have been brought using this theory over the last few years, and nearly all of them have failed. Just recently we wrote about one such case against Amazon that failed on Section 230 grounds (though the court also makes clear that even without 230 it would have failed).
But… the negligence argument the judge adopts is… crazy. It starts out by saying that the lack of age verification can show negligence:
In addition to maintaining “unreasonably dangerous features and algorithms”, Defendants are alleged to have facilitated use of their platforms by youth under the age of 13 by adopting protocols that do not verify the age of users, and “facilitat[ed] unsupervised and/or hidden use of their respective platforms by youth” by allowing “youth users to create multiple and private accounts and by offering features that allow youth users to delete, hide, or mask their usage.”
The court invents, pretty much out of thin air, a “duty of care” for internet services. There have been many laws that have tried to create such a duty of care, but as we’ve explained at great length over the years, a duty of care regarding speech on social media is unconstitutional, as it will easily lead to over-blocking out of fear of liability. And even though the court recognized that internet services are not a product in the product liability sense (because that would make no sense), for the negligence analysis it cited a case involving… electric scooters? Yup. Electric scooters.
In Hacala, the Court of Appeal held that defendant had a duty to use care when it made its products available for public use and one of those products harmed the plaintiff. The defendant provided electric motorized scooters that could be rented through a “downloadable app.” (Id. at p. 311.) The app allowed the defendant “to monitor and locate its scooters and to determine if its scooters were properly parked and out of the pedestrian right-of-way.” (Id., internal quotation marks and brackets omitted.) The defendant failed to locate and remove scooters that were parked in violation of the requirements set forth in the defendant’s city permit, including those parked within 25 feet of a single pedestrian ramp. (Id.) The defendant also knew that, because the defendant had failed to place proper lighting on the scooters, the scooters would not be visible to pedestrians at night. (Id. at p. 312.) The court found that these allegations were a sufficient basis on which to find that the defendant owed a duty to members of the public like the plaintiff, who tripped on the back wheel of one of the defendant’s scooters when walking “just after twilight.” (Id. at p. 300.)
Here, Plaintiffs seek to hold Defendants liable for the way that Defendants manage their property, that is, for the way in which Defendants designed and operated their platforms for users like Plaintiffs. Plaintiffs allege that they were directly injured by Defendants’ conduct in providing Plaintiffs with the use of Defendants’ platforms. Because all persons are required to use ordinary care to prevent others from being injured as the result of their conduct, Defendants had a duty not to harm the users of Defendants’ platforms through the design and/or operation of those platforms.
But, again, scooters are not speech. It is bizarre that the court refused to recognize that.
The social media companies also pointed out that the harms the school districts describe (kids saying they ended up suffering from depression, anxiety, eating disorders, and more because of social media) can’t be directly traced back to the social media companies. As the companies note, if a student goes to a school and suffers from depression, she can’t sue the school for causing that depression. But, no, the judge says that there’s a “close connection” between social media and the suffering (based on WHAT?!? she does not say).
Here, as previously discussed, there is a close connection between Defendants’ management of their platforms and Plaintiffs’ injuries. The Master Complaint is clear in stating that the use of each of Defendants’ platforms leads to minors’ addiction to those products, which, in turn, leads to mental and physical harms. (See, e.g., Mast. Compl., ¶¶ 80-95.) These design features themselves are alleged to “cause or contribute to (and, with respect to Plaintiffs, have caused and contributed to) [specified] injuries in young people….” (Mast. Compl., ¶ 96, internal footnotes omitted; see also Mast. Compl., ¶ 102 [alleging that Defendants’ platforms “can have a detrimental effect on the psychological health of their users, including compulsive use, addiction, body dissatisfaction, anxiety, depression, and self-harming behaviors such as eating disorders”], internal quotation marks, brackets, and footnotes omitted.) Plaintiffs allege that the design features of each of the platforms at issue here cause these types of harms. (See, e.g., Mast. Compl., ¶¶ 268-337 (Meta); ¶¶ 484-487, 489-490 (Snap); ¶¶ 589-598 (ByteDance); ¶¶ 713-773, 803 (Google).) These allegations are sufficient under California’s liberal pleading standard to adequately plead causation.
The court also says that if the platforms dispute the level to which they caused these harms, that’s a matter of fact, to be dealt with by a jury.
Then we get to the Section 230 bit. The court bases much of its reasoning on Lemmon v. Snap. This is why we were yelling about the problems that Lemmon v. Snap would cause, even as we heard from many (including EFF?) who thought that the case was decided correctly. It’s now become a vector for abuse, and we’re seeing that here. If you just claim negligence, some courts, like this one, will let you get around Section 230.
As in Lemmon, Plaintiffs’ claims based on the interactive operational features of Defendants’ platforms do not seek to require that Defendants publish or de-publish third-party content that is posted on those platforms. The features themselves allegedly operate to addict and harm minor users of the platforms regardless of the particular third-party content viewed by the minor user. (See, e.g., Mast. Compl., ¶¶ 81, 84.) For example, the Master Complaint alleges that TikTok is designed with “continuous scrolling,” a feature of the platform that “makes it hard for users to disengage from the app,” (Mast. Compl., ¶ 567) and that minor users cannot disable the “auto-play function” so that a “flow-state” is induced in the minds of the minor users (Mast. Compl., ¶ 590). The Master Complaint also alleges that some Plaintiffs suffer sleep disturbances because “Defendants’ products, driven by IVR algorithms, deprive users of sleep by sending push notifications and emails at night, prompting children to re-engage with the apps when they should be sleeping.” (Mast. Compl., ¶ 107 [also noting that disturbed sleep increases the risk of major depression and is associated with “future suicidal behavior in adolescents”].)
Also similar to the allegations in Lemmon, the Master Complaint alleges harm from “filters” and “rewards” offered by Defendants. Plaintiffs allege, for example, that Defendants encourage minor users to create and post their own content using appearance-altering tools provided by Defendants that promote unhealthy “body image issues.” (Mast. Compl., ¶ 94). The Master Complaint alleges that some minors spend hours editing photographs they have taken of themselves using Defendants’ tools. (See, e.g., Mast. Compl., ¶ 318.) The Master Complaint also alleges that Defendants use “rewards” to keep users checking the social media sites in ways that contribute to feelings of social pressure and anxiety. (See, e.g., Mast. Compl., ¶ 257 [social pressure not to lose or break a “Snap Streak”].)
There’s also the fact that kids “secretly” used these apps without their parents knowing, but… it’s not at all clear how that’s the social media companies’ fault. But the judge rolls with it.
Another aspect of Defendants’ alleged lack of due care in the operation of their platforms is their facilitation of unsupervised or secret use by allowing minor users to create multiple and private accounts and allowing minor users to mask their usage. (Mast. Compl., ¶ 929(d), (e), (f).) Plaintiffs J.S. and D.S., the parents of minor Plaintiff L.J.S., allege that L.J.S. was able to secretly use Facebook and Instagram, that they would not have allowed use of those sites, and that L.J.S. developed an addiction to those social media sites which led to “a steady decline in his mental health, including sleep deprivation, anxiety, depression, and related mental and physical health harms.” (J.S. SFC ¶¶ 7-8.)
Then, there’s a really weird discussion about how Section 230 was designed to enable users to have more control over their online experiences, and therefore, the fact that users felt out of control means 230 doesn’t apply? Along similar lines, the court notes that since the intent of 230 was “to remove disincentives” for creating tools for parents to filter the internet for their kids, the fact that parents couldn’t control their kids online somehow goes against 230?
Similarly, Congress made no secret of its intent regarding parental supervision of minors’ social media use. By enacting Section 230, Congress expressly sought “to remove disincentives for the development and utilization of blocking and filtering technologies that empower parents to restrict children’s access to objectionable or inappropriate online material.” (47 U.S.C. § 230, subd. (b)(4).) While in some instances there may be an “apparent tension between Congress’s goals of promoting free speech while at the same time giving parents the tools to limit the material their children can access over the Internet” (Barrett, supra, 40 Cal.4th at p. 56), where a plaintiff seeks to impose liability for a provider’s acts that diminish the effectiveness of parental supervision, and where the plaintiff does not challenge any act of the provider in publishing particular content, there is no tension between Congress’s goals.
But that’s wholly misunderstanding both the nature of Section 230 and what’s going on here. Services shouldn’t lose 230 protections just because kids are using services behind their parents’ backs. That makes no sense. But, here, the judge seems to think it’s compelling.
The judge also claims (totally incorrectly, based on nearly all of the case law) that whether the harms from social media are actually due to third-party content (which, as the social media companies argue, would mean Section 230 protections apply) is a question for the jury.
Although Defendants argue they cannot be liable for their design features’ ability to addict minor users and cause near constant engagement with Defendants’ platforms because Defendants create such “engagement” “with user-generated content” (Defs’ Dem., at p. 42, internal italics omitted), this argument is best understood as taking issue with the facts as pleaded in the Master Complaint. It may very well be that a jury would find that Plaintiffs were addicted to Defendants’ platforms because of the third-party content posted thereon. But the Master Complaint nonetheless can be read to state the contrary-that is, that it was the design of Defendants’ platforms themselves that caused minor users to become addicted. To take another example, even though L.J.S. was viewing content of some kind on Facebook and Instagram, if he became addicted and lost sleep due to constant unsupervised use of the social media sites, and if Defendants facilitated L.J.S.’s addictive behavior and unsupervised use of their social media platforms (i.e., acted so as to maximize engagement to the point of addiction and to deter parental supervision), the negligence cause of action does not seek to impose liability for Defendants’ publication decisions, but rather for their conduct that was intended to achieve this frequency of use and deter parental supervision. Section 230 does not shield Defendants from liability for the way in which their platforms actually operated.
But if that’s the case, it completely wipes out the entire point of Section 230, which is to get these kinds of silly, vexatious cases dismissed early on, such that companies aren’t constantly under threat of liability if they don’t magically solve large societal problems.
From there, the court also rejects the 1st Amendment arguments. To get around those arguments, the court repeatedly insists that the issue is the way social media companies designed their services, not the content on those services. But that’s tap dancing around reality. When you dig into any of these claims, they’re all, at their heart, entirely about the content.
It’s not the “infinite scroll” that is keeping people up at night. It’s the content people see. It’s not the lack of age verification that is making someone depressed. Assuming it’s even related to the social media site, it’s from the content. Ditto for eating disorders. When you look at the supposed harm, it always comes back to the content, but the judge dismisses all of that and says that the users are addicted to the platform, not the content on the platform.
Because the allegations in the Master Complaint can be read to state that Defendants’ liability grows from the way their platforms functioned, the Demurrer cannot be sustained pursuant to the protections of the First Amendment. As Plaintiffs argue in their Opposition, the allegations can be read to state that Plaintiffs’ harms were caused by their addiction to Defendants’ platforms themselves, not simply to exposure to any particular content visible on those platforms. Therefore, Defendants here cannot be analogized to mere publishers of information. To put it another way, the design features of Defendants’ platforms can best be analogized to the physical material of a book containing Shakespeare’s sonnets, rather than to the sonnets themselves.
Defendants fail to demonstrate that the design features of Defendants’ applications must be understood at the pleadings stage to be protected speech or expression. Indeed, throughout their Demurrer, Defendants make clear their position that Plaintiffs’ claims are based on content created by third parties that was merely posted on Defendants’ platforms. (See, e.g., Defs’ Dem., at p. 49.) As discussed above, a trier of fact might find that Plaintiffs’ harms resulted from the content to which they were exposed, but Plaintiffs’ allegations to the contrary control at the pleading stage.
There are some other oddities in the ruling as well, including dismissing the citation to the NetChoice/CCIA victory in the 11th Circuit regarding Florida’s social media moderation law, because the judge says that ruling doesn’t apply here, since this lawsuit isn’t about content moderation. She seems to think that the features on social media have nothing to do with content moderation, but that’s just factually wrong.
There are a few more issues in the ruling, but those are basically the big parts of it. Now, it’s true that this is just based on the initial complaints, and at this stage of the proceedings the judge has to rule assuming that everything pleaded by the plaintiffs is true. But the way it was done here almost entirely wipes out the point of Section 230 (not to mention the 1st Amendment).
Letting these cases move forward enables exactly what Section 230 was designed to prevent: creating massive liability and expensive litigation over choices regarding how a website publishes and presents content. The end result, if this is not overturned, is likely to be a large number of similar (and similarly vexatious) lawsuits that overwhelm websites with potential liability. If each one has to go to a jury before it’s decided, it’s going to be a total mess.
The whole point of Section 230 was to have judges dismiss these cases early on. And here, the judge has gotten almost all of it backwards.