from the pour-one-out-for-230 dept
We’ve been warning for a while now that Section 230 is dying by a thousand legal workarounds rather than a straightforward repeal, and the hits just keep coming. A few weeks ago, I wrote about how two jury verdicts against Meta in New Mexico and California should scare anyone who cares about the open internet, even if the instinct to cheer them on is understandable given how terrible Meta has been. Those verdicts adopted a legal theory that re-frames editorial decisions about how to present user-generated content as “product design” choices outside the scope of Section 230, functionally making the law irrelevant.
Now, the Massachusetts Supreme Judicial Court has gone even further. In a unanimous ruling in Commonwealth v. Meta Platforms, Inc., the state’s highest court has denied Meta’s motion to dismiss the state attorney general’s lawsuit, holding that Section 230 does not bar claims that Meta designed Instagram to be addictive to children, lied to the public about the platform’s safety, failed to properly age-gate underage users, and created a public nuisance. The court’s reasoning provides a clean, easily replicable template for any plaintiff anywhere to plead around Section 230, and it does so by mangling the statute’s text and ignoring key words while drawing a distinction between “content” and “content presentation” that collapses under even the slightest scrutiny.
Once again, since this always needs to be said in all of the articles about these rulings: Meta is a terrible company. It has spent years making terrible decisions. I don’t trust the company to make the right decision even if the only options presented to it were correct ones. Mark Zuckerberg deserves zero benefit of the doubt. But as I said last time, the legal theories being used to go after Meta here will not stay confined to Meta. They will be used against every website, every search engine, every forum, every email provider, and every small platform that makes any decision about how to present user-generated content. That’s what makes this ruling so dangerous.
Professor Eric Goldman, who has been tracking these cases more closely than perhaps anyone, put it bluntly:
This is not a good opinion for Section 230 on several dimensions.
First, as a state supreme court decision, it’s the final word for the Massachusetts state court system (unless the US Supreme Court intervenes). It provides a major beachhead for other courts to follow, both within Massachusetts and beyond.
Second, this court didn’t rely on the Lemmon “design defect” workaround. Instead, it said that the claim doesn’t relate to third-party content unless it’s based on the substance of the third-party content. This provides plaintiffs with another avenue to work around Section 230 in addition to the Lemmon/design defect workaround that other courts are accepting (even if they shouldn’t).
Third, as I explained, I don’t see any distinction between third-party content and the editorial choices about the manner of presenting that third-party content. By embracing that false dichotomy, the court invites plaintiffs to reframe their complaints to focus on content presentation instead of substance.
That last point is the most important part of the whole ruling. The court has now handed plaintiffs’ lawyers a magic formula: just say you’re suing about the presentation of content rather than the content itself, and Section 230 vanishes. Goldman lays out the playbook:
Here’s how a plaintiff’s argument could look: “I’m not suing about the third-party content, I’m suing about the design choices that elevated that third-party content over others.” These are literally the same thing in my mind. If this argument works, Section 230 is dead because plaintiffs will always embrace that workaround.
Looking at the court’s actual reasoning, things get messy fast.
Massachusetts’ complaint alleged that Meta “engaged in unfair business practices by designing the Instagram platform to induce compulsive use by children, engaged in deceptive business practices by deliberately misleading the public about the safety of the platform, and created a public nuisance by engaging in these unfair and deceptive practices.” Meta moved to dismiss on Section 230 grounds. The lower court denied the motion. Meta appealed.
The Massachusetts Supreme Judicial Court actually (correctly!) recognized that Section 230 provides immunity from being sued in the first place, not just a defense against paying up at the end. This matters procedurally, because immunity from suit means you get to appeal the denial of your motion to dismiss before trial — you don’t have to go through the whole expensive litigation process first and then appeal at the end. The court analyzed the language of Section 230(e)(3), and reached the right conclusion:
The plain meaning of “no cause of action may be brought” is that a suit may not be initiated in the first instance and the defendant cannot be forced to litigate the claim.
Great. The court got the procedural question right. Section 230 provides immunity from suit. Meta gets its interlocutory appeal. The whole point of Section 230, after all, has always been to get bad cases tossed early, before the ruinous expense of discovery and trial.
And then the court proceeded to deny the immunity anyway, meaning Meta now has to litigate the entire case on the merits despite supposedly having immunity from suit. The court gave Section 230 its proper procedural dignity with one hand and gutted it substantively with the other. Meta got to appeal early — and lost anyway. Now it faces full litigation on claims that Section 230 was designed to kill at the threshold. The outcome is a complete mess: the court has effectively turned “immunity from suit” into “the right to lose an appeal slightly faster.”
The heart of the court’s logic rests on a distinction between claims that impose liability based on the content of third-party information and claims that merely concern how that content is presented. To get there, the court engaged in a lengthy analysis of the phrase “treated as the publisher . . . of any information” in Section 230(c)(1), concluding that this phrase requires both a “dissemination element” and a “content element.” In other words, the court held that Section 230 only applies when a claim seeks to hold a platform liable for the substance of user-generated content it published — and that claims about design features like infinite scroll, autoplay, algorithmic recommendations, and notification systems target the how of publishing rather than the what, and therefore fall outside Section 230’s protection.
This ignores a long list of precedents — and the explicit statements of Section 230’s authors — establishing that the law was designed to protect platforms from being sued over any editorial decision-making, including decisions about how content is presented. To put this in perspective, it’s like saying someone could sue the evening news over where it placed a story in the broadcast (top of the show or the very end?) on the theory that the placement is somehow unrelated to the content itself. That makes no sense. But it’s how this court has interpreted 230.
The court found that with respect to the unfair business practices claim:
The challenged design features (e.g., infinite scroll, autoplay, IVR, and ephemeral content) concern how, whether, and for how long information is published, but the published information itself is not the source of the harm alleged. Instead, the claim alleges that the features themselves induce compulsive use independent of the content provided by third-party users.
Meta tried to point out the obvious problem with this: without user-generated content, these design features don’t do anything harmful. Nobody’s getting addicted to infinite scroll through a feed of nothing. The court waved this away:
But the fact that the features require some content to function is not controlling; instead…to satisfy the content element, we look to whether the claim seeks to hold Meta liable for harm stemming from third-party information that it published. Here, the unfair business practices claim does not; the Commonwealth alleges that the features themselves prolong users’ time on the platform, not that any information contained in third-party posts does so. In this sense, the claim is indifferent as to the content published.
“Indifferent as to the content published.” No matter how many times courts (or media or politicians) make this claim, it never gets any more accurate. As I noted in my earlier piece about the California and New Mexico verdicts: imagine Instagram, but every single post is a video of paint drying. Same infinite scroll. Same autoplay. Same algorithmic recommendations. Same notification systems. Is anyone addicted? Is anyone harmed? Is anyone suing? Of course not. Because infinite scroll does nothing without content that makes people want to keep scrolling. The features and the content are inseparable. Saying the claim is “indifferent as to the content published” is a legal fiction, and everyone involved knows it.
Goldman makes this point through a newspaper analogy that’s worth quoting at length:
I don’t see any distinction between third-party content and the editorial choices about the manner of presenting that third-party content. By embracing that false dichotomy, the court invites plaintiffs to reframe their complaints to focus on content presentation instead of substance. … As an analogy, consider a dead-trees newspaper’s decision to publish a story: it is equally part of the newspaper’s editorial prerogative and publication decisions to decide to publish the story at all and to decide if the story should appear on the A1 front page or some interior page; what size typeface to use for the story headline; whether the story runs all on the same page or continues on a later page; etc. As applied to Meta, the decision to vary the delivery timing of new third-party content items (as one example) is just as much of Meta’s publication decision-making process about publishing the third-party content as whether the item will be published at all.
The fallout here goes way beyond just Instagram. A search engine decides to rank certain results higher than others — that’s a “design choice” about content presentation, not about the content itself. A forum uses “newest first” sorting — design choice. An email provider’s spam filter decides what goes to your inbox — design choice. A blog allows comments and displays them in threaded format — design choice. Under this court’s reasoning, all of those are potentially outside Section 230’s protection, because they concern how content is presented rather than the content’s substance. Every editorial decision a website makes about the display, ordering, timing, or format of user-generated content is now potentially a “design” claim that evades Section 230.
That’s especially troubling given that the whole premise of these lawsuits is that these “design choices” are engineered to “addict” users — a claim that none of these cases has actually established as a clinical matter. What the evidence actually shows is companies trying to make their products engaging, so that users like them and use them more. Which is what basically every company does. It’s the nature of business. Should a state AG be able to sue a restaurant because its food was too delicious and people ate too much of it? TV shows end on cliffhangers. Books have page-turning chapter endings. Are those addictive design features subject to state AG enforcement?
There’s another serious error in the court’s statutory analysis that Goldman flagged, one that’s frankly embarrassing for any court to make, let alone a state supreme court. Section 230(c)(1) says: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” The court spent pages analyzing what “publisher” means, diving into common-law publisher liability, legislative history, and the Cubby/Stratton Oakmont backstory. But as Goldman observed:
Worse, the court extensively analyzes the word “publisher” but doesn’t say a word about the companion “speaker” term that appears two words later in the statute. This is another indicator of results-oriented decision-making. No matter what the court says “publisher” means, if the court disregards one of the other 26 words that has direct relevance to its meaning, the court is failing its #1 job of reading the damn statute. This omission is extremely embarrassing for the court, and it thoroughly undermines the credibility of the court’s recitation of precedent.
Whatever narrow common-law meaning you might ascribe to “publisher,” the word “speaker” is right there, broadening the scope. The court just… pretended it wasn’t. When a court conducting what it claims is a careful “plain meaning” analysis of a 26-word clause of the statute at the center of the case manages to ignore one of the operative words, that’s more than a tell. As Goldman noted:
When courts decide to review a 1996 statute from scratch in 2026, after over a thousand Section 230 cases have been decided, that’s usually an indicator that they are engaging in results-oriented decision-making, they don’t like the precedent, and they need another way to reach a different result.
Then there are the deception claims, which the court dispatched with even less effort. Massachusetts alleged that Meta lied to the public about Instagram being safe and not addictive. The court held that because these were Meta’s own statements, Section 230 obviously didn’t apply — the statute only protects against liability for third-party content, and Meta’s PR statements are first-party speech.
That much is technically defensible as a Section 230 matter. But the underlying theory has its own problems that the court didn’t bother grappling with. What does it mean for a company to “deceive” the public by saying its product is “safe”? Almost nothing is 100% safe. Cars aren’t perfectly safe. Food isn’t perfectly safe. Playgrounds aren’t perfectly safe. As we’ve written about before, the social media moral panic has systematically confused risks with harms. Something can carry risks without every user being harmed, and a company saying it takes safety seriously is not a guarantee that no bad outcome will ever occur to any user. If “we prioritize safety” plus “something bad happened to a user” equals fraud, then every tech company, car manufacturer, pharmaceutical firm, and food producer in the country is perpetually liable for “deception.”
Goldman noted that there are “obvious puffery/opinion defenses that could apply here” but weren’t addressed in the Section 230 analysis. That’s true. But the more fundamental problem is that the court’s framing of the deception claims, combined with its evisceration of Section 230’s applicability to the design claims, means all four counts now proceed to full litigation. The “public nuisance” claim got even less analysis — a single footnote saying that because the other claims survive Section 230, so does the nuisance claim that’s based on them. Goldman rightfully calls out how weak this is:
I’ve previously complained before about courts’ complete undertheorizing of how and why public nuisance claims can apply to social media, and this court doesn’t do any better. In a footnote, here is the court’s entire discussion about Section 230’s application to the public nuisance claim: “Because we conclude that § 230(c)(1) does not bar counts I to III, we also conclude that it does not bar the Commonwealth’s public nuisance claim, which is predicated on the same allegedly unfair and deceptive practices in counts I to III.”
Put it all together and the picture for Section 230 is bleak.
A few weeks ago, juries in New Mexico and California found Meta liable using the “design defect” workaround — arguing that features like infinite scroll and algorithmic recommendations are product design choices, not editorial decisions about third-party content. Those verdicts relied on the framework from Lemmon v. Snap, the somewhat problematic Ninth Circuit case that carved out a design-defect exception to Section 230, and which opened the floodgates to lawsuits like the ones we’re discussing here.
Somewhat oddly, the Massachusetts court explicitly declined to follow the Lemmon framework. It developed its own, different workaround: Section 230 only applies when a claim is based on the substance of third-party content, and claims about content presentation fall outside its scope. This is, as Goldman put it, “another avenue to work around Section 230 in addition to the Lemmon/design defect workaround that other courts are accepting.”
So we now have at least two distinct legal theories for pleading around Section 230, both blessed by courts, both available to any plaintiffs’ lawyer nationwide. And both accomplish the same thing: they take the editorial decisions that platforms make about user-generated content — the decisions that are the very heart of what Section 230 was designed to protect — and reclassify them as something else. “Design choices.” “Content presentation.” “Product features.” Call them whatever you want. The result is that Section 230 protects nothing that matters.
Goldman’s metaphor for all of this is apt:
Even if this opinion doesn’t outright eliminate Section 230 in Massachusetts, it’s a sign of how 230 workarounds keep proliferating, contributing to the swiss cheese-ification of Section 230. When the bubbles in the swiss cheese become too large, the cheese wedge lacks structural integrity and falls apart. That is where 230 is heading, if it’s not already there.
And this brings us to the thing that matters most, the thing that gets overlooked in every one of these cases: the procedural advantage of Section 230 was always the point. The whole reason Section 230 exists is to get bad cases thrown out early, before platforms have to spend millions in discovery and trial. Even if the First Amendment eventually protects many of the same editorial decisions, it does so at the end of expensive, protracted litigation. Section 230 was designed to get you out at the motion to dismiss stage.
And it wasn’t just the procedural advantage that mattered — it was the certainty. Platforms could make editorial decisions about how to present content knowing they were protected. That freedom meant editorial reasoning could lead, rather than legal risk-avoidance. A lawyer consulted before every design decision will never tell you to make the best call for users — only the least legally exposed one.
All of that has been thrown out the window. The certainty. The quick resolution. The ability for editorial reasoning to lead, rather than lawyerly concerns. These court rulings chip away at Section 230 bit by bit, and with it the ability for anyone to freely host content online without fear of getting sued.
The Massachusetts court’s ruling is the textbook example of how that benefit has been destroyed. The court correctly held that Section 230 provides immunity from suit — not just immunity from liability. It correctly allowed Meta to take an interlocutory appeal on exactly that basis. And then it ruled that the immunity doesn’t actually apply to any of the claims in the case. Meta exercised its right to an early appeal and got told it has to go litigate the whole thing anyway.
So what was the point? Meta got to go to the state supreme court, argue about immunity from suit, and then get sent right back to trial court to face all the same claims. Every future defendant in Massachusetts who raises a Section 230 defense will look at this ruling and know that the “immunity from suit” is a mirage. You get the appeal. You just don’t get the immunity, so long as the lawyers on the other side say the magic words. Which all of them will.
This is exactly the dynamic I warned about in my piece about the California and New Mexico verdicts. Even if these legal theories eventually get sorted out at the Supreme Court level, even if the First Amendment eventually provides some backstop, the practical reality is that Section 230’s core function — early dismissal of meritless cases — has been gutted. Every plaintiff’s lawyer now knows how to draft a complaint that survives a 230 motion to dismiss: just say “design” instead of “content.” Say “presentation” instead of “publication.” And you’re in. Discovery. Trial. Seven-figure legal bills. The whole show.
And smaller companies know this. Meaning they will either avoid hosting content altogether… or we’ll have the most powerful heckler’s veto in existence. Anyone who wants any third-party content removed just needs to threaten a lawsuit using the magic words, and the mere threat of legal bills will mean the “smart” move is to remove the content. All sorts of forums will suffer. Think about how Republican AGs will use this to argue that any site hosting LGBTQ+ content is causing harm. Think about the plaintiffs’ lawyers who will use any claimed “design” flaw as leverage for a shakedown settlement. If you thought copyright trolling was bad, just wait until an entire cottage industry of plaintiffs’ lawyers starts suing (or just threatening to sue while really seeking a settlement) any website they can claim made a “design choice” that led to harm.
That’s the ballgame for small platforms. For independent forums. For startups trying to compete with the giants. Meta can absorb this. A new social media competitor cannot. Congress doesn’t need to repeal Section 230. The courts are doing it for them, one cleverly worded ruling at a time.
Filed Under: addiction, content presentation, design features, editorial freedom, massachusetts, section 230
Companies: meta