A Trio Of Failed Lawsuits Trying To Sue Websites For Moderating Content

from the that's-not-how-any-of-this-works dept

Why do people still file these lawsuits? For years now, we’ve seen lawsuits filed against websites over their content moderation decisions, despite Section 230 barring them (and the platforms’ 1st Amendment rights backing that up). These lawsuits always fail.

Perhaps the reason we’re seeing a bunch more of these lately is that a ton of people completely misunderstood what happened with Twitter and Alex Berenson (helped along by a guy who I don’t think could fairly describe anything if he really tried). All of the 1st Amendment claims in Berenson’s lawsuit were thrown out easily. The only reason the case moved forward (and then settled) was that an executive at Twitter had made statements to Berenson suggesting that he wouldn’t have his account blocked, and that opened up the possibility (though it still would have been a long shot in court) of a Barnes-style “promissory estoppel” ruling.

But, because of how that case has been widely misrepresented to nonsense peddlers, they seemed to think it was open season on suing platforms. Anyway, all those cases are losing. Here are three recent examples, all covered by Professor Eric Goldman. I’m playing a bit of catchup by combining all three, but honestly, none of them represents anything groundbreaking or new. They’re just standard foolish lawsuits from people who falsely think you can sue websites for moderating their content.

First up, we have well-known nonsense peddler and pretend Presidential candidate RFK Jr. He’s been suing platforms for a while, and it hasn’t gone well at all. In this case, RFK argued that YouTube was a “state actor” in taking down some videos, but the court isn’t buying it at all, noting that the 9th Circuit has already said such arguments are nonsense.

The Ninth Circuit held that Twitter exercised its own independent judgment in adopting its content moderation policies and enforcing them. Id. at 1158. Additionally, the court held that the “private and state actors were generally aligned in their missions to limit the spread of misleading election information” and that “[s]uch alignment does not transform private conduct into state action.” Id. at 1156–57.

Similarly, here, under either test, Plaintiff has not shown that the government so “insinuated itself into a position of interdependence” with Google or that it “exercised coercive power or has provided such significant encouragement” to Google that give rise to state action. Since Plaintiff’s counsel, at oral argument, conceded that the evidence provided in support of his application does not show that the government coerced Google, the Court limits its inquiry to whether there is evidence suggesting that the government insinuated itself into a position of interdependence or provided significant encouragement. Regardless of which test is used, the analysis is “necessarily fact-bound ….” Lugar v. Edmondson Oil Co., 457 U.S. 922, 939 (1982).

No state actor, no 1st Amendment. This case is going nowhere.

Next up was a lawsuit against exTwitter from a pro se plaintiff, Taiming Zhang, who argued that his suspension violated his contract with Twitter. That is… not how any of this works, as the court explained.

Zhang’s case gets tossed on straightforward Section 230 grounds, as his attempt to get around 230 was to say “but the contract was breached!” and the court says… nope:

Plaintiff’s argument “CDA 230 carries no relevance” because Twitter breached their contract is unavailing. There is no exception under Section 230 for breach of contract claims. See 47 U.S.C. § 230(e). Courts routinely hold Section 230 immunizes platforms from contract claims, where, as here, they seek to impose liability for protected publishing activity. See, e.g., King v. Facebook, Inc., 845 F. App’x 691, 692 (9th Cir. 2021) (affirming dismissal of pro se plaintiff’s contract claim based on, among other things, Facebook’s suspension of her user account, because “‘any activity that can be boiled down to deciding whether to exclude material that third parties seek to post online is perforce immune under section 230’”) (quoting Roommates, 521 F.3d at 1170-71); Murphy v. Twitter, Inc., 60 Cal. App. 5th 12, 28 (2021) (“many [courts] have concluded that [contract] claims were barred [by Section 230] because the plaintiff’s cause of action sought to treat the defendant as a publisher or speaker of user generated content”) (collecting cases).

Finally, we have Joseph Mercola, a somewhat infamous purveyor of absolute nonsense regarding vaccines, who had his account taken down by YouTube. He sued. It didn’t go well. He also argued a contractual violation and, as Goldman notes, seemed to switch legal strategies midstream, going from originally suing over the content removals to arguing that he just wanted access to his content (as if he didn’t already have copies?).

Either way, that’s not how any of this works:

As set forth in the Statement, YouTube had no obligation to host or serve content. The main issue is that the plaintiffs want access to the content. But no provision of the Agreement provides a right to access that content under the circumstances here: termination for cause under the agreement. In a different context, there is an avenue to export content: if YouTube terminates a user’s access for service changes, it gives the user sufficient time to export content, where reasonably possible. But that provision on its face does not apply here. The plaintiffs thus do not plead contract or quasi-contract claims related to denial of access to their content.

Similarly, as set forth in the Statement, YouTube had the discretion to take down content that harmed its users. The content here violated the Community Guidelines. Modifications to the Community Guidelines — such as the modification here to elaborate on YouTube’s existing prohibitions on medical misinformation to add COVID-19 and vaccines — could be effective immediately, without notice. YouTube had the discretion to terminate channels without warning after a single case of severe abuse. Under the contract, this determination was discretionary: the contract said that “[i]f we reasonably believe that any Content is in breach of this agreement or may cause harm, . . . we may remove or take down that Content in our discretion.”

All three of these cases boil down to the same basic thing: a website decided that a crackpot violated its rules and took down their content, and the crackpot feels entitled to commandeer someone else’s private property to host their speech.

That’s not how it works. It’s not how it’s ever worked. But, somehow, I doubt these lawsuits are going away any time soon.

Companies: google, twitter, youtube


Comments on “A Trio Of Failed Lawsuits Trying To Sue Websites For Moderating Content”

14 Comments
This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

A general reminder:

The First Amendment protects your rights to speak freely and associate with whomever you want. It doesn’t give you the right to make others listen, make others give you access to an audience, and/or make a personal soapbox out of private property you don’t own. Nobody owes you a platform or an audience at their expense.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re:

An addendum: The First Amendment protects you against the government preventing your speech. It has no role in protecting you from the consequences of your speech: not from critics, not from the private parties (companies) conveying your speech, and not from your mother.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re:

Or to put it even more simply: if a private actor responds to your free speech, that is also free speech. Unless, in the act of expressing such a response, they commit a crime (e.g., by assaulting you), it’s their right to speak as much as it was your right to say the thing they’re responding to.

If one feels outnumbered because of their unpopular speech, there are various valid ways to fix that, but stopping others from exercising their free speech in response to you is not one of them.

That One Guy (profile) says:

Fractally wrong free speech

It’s amazing what sort of legal faceplants can result from people simultaneously thinking that free speech is shorthand for consequence-free speech (but only for them and theirs, of course) and that the right to speak includes the right to speak from whatever platform you choose, whether or not the owner of said platform actually wants you there.

This comment has been flagged by the community.

Anonymous Coward says:

Why do people still file these lawsuits? For years now, we’ve seen lawsuits filed against websites over their content moderation decisions, despite Section 230 barring them (and the platforms’ 1st Amendment rights backing that up). These lawsuits always fail.

Because straight white males are terrified that the consequences of their homophobia and impregnation (read: rape) culture are finally starting to catch up with them.

Rocky says:

Re:

IANAL, but here’s my take on it.

It’s not the company making a voluntary agreement; it’s the other way around. The company offers up the TOS, and the user either voluntarily accepts it or they don’t, which means they can’t use the service. Unless otherwise explicitly stated in the TOS or by the company, any issues about moderation and speech on the service are handled by Section 230.

In the case of promissory estoppel, if a current user of the service gets a promise regarding a moderation decision or the like, that is the company saying “in this particular situation we promised to do this specific something,” which means Section 230 doesn’t come into the picture at all, because it falls under “unless otherwise explicitly stated.”

I’m sure I missed some legal wrinkles in my explanation and I’m sure someone more knowledgeable will correct me.
