Third Circuit’s Section 230 TikTok Ruling Deliberately Ignores Precedent, Defies Logic
from the that's-not-how-any-of-this-works dept
Step aside, Fifth Circuit Court of Appeals: there’s a new contender in town for who will give us the most batshit crazy opinions regarding the internet. This week, a panel on the Third Circuit ruled that a lower court was mistaken in dismissing a case against TikTok on Section 230 grounds.
But, in order to do so, the court had to intentionally reject a very long list of prior caselaw on Section 230, misread some Supreme Court precedent, and (trifecta!) misread Section 230 itself. This may be one of the worst Circuit Court opinions I’ve read in a long time. It’s definitely way up the list.
The implications are staggering if this ruling stands. We just talked about some cases in the Ninth Circuit that poke some annoying and worrisome holes in Section 230, but this ruling takes a wrecking ball to 230. It basically upends the entire law.
At issue are the recommendations TikTok offers on its “For You Page” (FYP), which is the algorithmically recommended feed that a user sees. According to the plaintiff, the FYP recommended a “Blackout Challenge” video to a ten-year-old child, who mimicked what was shown and died. This is, of course, horrifying. But who is to blame?
We have some caselaw on this kind of thing even outside of the internet context. In Winter v. G.P. Putnam’s Sons, the Ninth Circuit found that the publisher of an encyclopedia of mushrooms was not liable to “mushroom enthusiasts who became severely ill from picking and eating mushrooms after relying on information” in the book. The information turned out to be wrong, but the court held that the publisher could not be held liable for those harms because it had no duty to carefully investigate each entry.
In many ways, Section 230 was designed to speed up this analysis in the internet era, by making it explicit that a website publisher has no liability for harms that come from content posted by others, even if the publisher engaged in traditional publishing functions. Indeed, the point of Section 230 was to encourage platforms to engage in traditional publishing functions.
There is a long list of cases that say that Section 230 should apply here. But the panel on the Third Circuit says it can ignore all of those. There’s a very long footnote (footnote 13) that literally stretches across three pages of the ruling listing out all of the cases that say this is wrong:
We recognize that this holding may be in tension with Green v. America Online (AOL), where we held that § 230 immunized an ICS from any liability for the platform’s failure to prevent certain users from “transmit[ing] harmful online messages” to other users. 318 F.3d 465, 468 (3d Cir. 2003). We reached this conclusion on the grounds that § 230 “bar[red] ‘lawsuits seeking to hold a service provider liable for . . . deciding whether to publish, withdraw, postpone, or alter content.’” Id. at 471 (quoting Zeran v. Am. Online, Inc., 129 F.3d 327, 330 (4th Cir. 1997)). Green, however, did not involve an ICS’s content recommendations via an algorithm and pre-dated NetChoice. Similarly, our holding may depart from the pre-NetChoice views of other circuits. See, e.g., Dyroff v. Ultimate Software Grp., 934 F.3d 1093, 1098 (9th Cir. 2019) (“[R]ecommendations and notifications . . . are not content in and of themselves.”); Force v. Facebook, Inc., 934 F.3d 53, 70 (2d Cir. 2019) (“Merely arranging and displaying others’ content to users . . . through [] algorithms—even if the content is not actively sought by those users—is not enough to hold [a defendant platform] responsible as the developer or creator of that content.” (internal quotation marks and citation omitted)); Jane Doe No. 1 v. Backpage.com, LLC, 817 F.3d 12, 21 (1st Cir. 2016) (concluding that § 230 immunity applied because the structure and operation of the website, notwithstanding that it effectively aided sex traffickers, reflected editorial choices related to traditional publisher functions); Jones v. Dirty World Ent. Recordings LLC, 755 F.3d 398, 407 (6th Cir. 2014) (adopting Zeran by noting that “traditional editorial functions” are immunized by § 230); Klayman v. Zuckerberg, 753 F.3d 1354, 1359 (D.C. Cir. 2014) (immunizing a platform’s “decision whether to print or retract a given piece of content”); Johnson v. Arden, 614 F.3d 785, 791-92 (8th Cir. 2010) (adopting Zeran); Doe v. 
MySpace, Inc., 528 F.3d 413, 420 (5th Cir. 2008) (rejecting an argument that § 230 immunity was defeated where the allegations went to the platform’s traditional editorial functions).
I may not be a judge (or even a lawyer), but even I might think that if you’re ruling on something and you have to spend a footnote that stretches across three pages listing all the rulings that disagree with you, at some point, you take a step back and ask:

As you might be able to tell from that awful footnote, the Court here seems to think that the ruling in Moody v. NetChoice has basically overturned those rulings and opened up a clean slate. This is… wrong. I mean, there’s no two ways about it. Nothing in Moody says this. But the panel here is somehow convinced that it does?
The reasoning here is absolutely stupid. It’s taking the obviously correct point that the First Amendment protects editorial decision-making, and saying that means that editorial decision-making is “first-party speech.” And then it’s making that argument even dumber. Remember, Section 230 protects an interactive computer service or user from being treated as the publisher (for liability purposes) of third party information. But, according to this very, very, very wrong analysis, algorithmic recommendations are magically “first-party speech” because they’re protected by the First Amendment:
Anderson asserts that TikTok’s algorithm “amalgamat[es] [] third-party videos,” which results in “an expressive product” that “communicates to users . . . that the curated stream of videos will be interesting to them[.]” ECF No. 50 at 5. The Supreme Court’s recent discussion about algorithms, albeit in the First Amendment context, supports this view. In Moody v. NetChoice, LLC, the Court considered whether state laws that “restrict the ability of social media platforms to control whether and how third-party posts are presented to other users” run afoul of the First Amendment. 144 S. Ct. 2383, 2393 (2024). The Court held that a platform’s algorithm that reflects “editorial judgments” about “compiling the third-party speech it wants in the way it wants” is the platform’s own “expressive product” and is therefore protected by the First Amendment….
Given the Supreme Court’s observations that platforms engage in protected first-party speech under the First Amendment when they curate compilations of others’ content via their expressive algorithms, id. at 2409, it follows that doing so amounts to first-party speech under § 230, too….
This is just flat out wrong. It is based on the false belief that any “expressive product” makes it “first-party speech.” That’s wrong on the law and it’s wrong on the precedent.
It’s a bastardization of an already wrong argument put forth by MAGA fools that Section 230 conflicts with the argument in Moody. The argument, as hinted at by Justices Thomas and Gorsuch, is that because NetChoice argues (correctly) that its editorial decision-making is protected by the First Amendment, it’s somehow in conflict with the idea that they have no legal liability for third-party speech.
But that’s only in conflict if you can’t read and/or don’t understand the First Amendment and Section 230 and how they interact. The First Amendment still protects any editorial actions taken by a platform. All Section 230 does is say that it can’t face liability for third party speech, even if it engaged in publishing that speech. The two things are in perfect harmony. Except to these judges in the Third Circuit.
The Supreme Court at no point says that editorial actions turn into first-party speech because they are protected by the First Amendment, contrary to what they say here. That’s never been true, as even the mushroom encyclopedia example shows above.
Indeed, reading Section 230 in this manner wipes out Section 230. It makes it the opposite of what the law was intended to do. Remember, the law was written in response to the ruling in Stratton Oakmont v. Prodigy, where a local judge found Prodigy liable for content it didn’t moderate, because it did moderate some content. As then Reps. Chris Cox and Ron Wyden recognized, that would encourage no moderation at all, which made no sense. So they passed 230 to overturn that decision and make it so that internet services could feel free to engage in all sorts of publishing activity without facing liability for the underlying content when that content was provided by a third party.
But here, the Third Circuit has flipped that on its head and said that the second you engage in First Amendment-protected publishing activity around content (such as recommending it), you lose Section 230 protections because the content becomes first-party content.
That’s… the same thing that the court ruled in Stratton Oakmont, and which 230 overturned. It’s beyond ridiculous for the Court to say that Section 230 basically enshrined Stratton Oakmont, and it’s only now realizing that 28 years after the law passed.
And yet, that seems to be the conclusion of the panel.
Incredibly, Judge Paul Matey (a FedSoc favorite Trump appointee) has a concurrence/dissent where he would go even further in destroying Section 230. He falsely claims that 230 only applies to “hosting” content, not recommending it. This is literally wrong. He also falsely claims that Section 230 is a form of a “common carriage regulation” which it is not.
So he argues that the first Section 230 case, the Fourth Circuit’s important Zeran ruling, was decided incorrectly. The Zeran ruling established that Section 230 protected internet services from all kinds of liability for third-party content. Zeran has been adopted by most other circuits (as noted in that footnote of “all the cases we’re going to ignore” above). So in Judge Matey’s world, he would roll back Section 230 to only protect hosting of content and that’s it.
But that’s not what the authors of the law meant (they’ve told us, repeatedly, that the Zeran ruling was correct).
Either way, every part of this ruling is bad. It basically overturns Section 230 for an awful lot of publisher activity. I would imagine (hope?) that TikTok will request an en banc rehearing across all judges on the circuit and that the entire Third Circuit agrees to do so. At the very least, that would provide a chance for amici to explain how utterly backwards and confused this ruling is.
If not, then you have to think the Supreme Court might take it up, given that (1) they still seem to be itching for direct Section 230 cases and (2) this ruling basically calls out in that one footnote that it’s going to disagree with most other Circuits.
Filed Under: 1st amendment, 1st party speech, 3rd circuit, 3rd party speech, algorithms, fyp, liability, recommendations, section 230
Companies: tiktok


Comments on “Third Circuit’s Section 230 TikTok Ruling Deliberately Ignores Precedent, Defies Logic”
Well this was a rather scary read. So is THIS what upends the entire internet, then? Cause right now this looks very, very bad.
Re:
This is just one ruling, and it’s going back to a lower court to reconsider. Very likely this will end up at the Supreme Court and be overturned.
Re: Re:
It’s also possible the Third Circuit will realize its mistake and overturn its own ruling.
Frankly, no one should have to deal with courts inexplicably pulling this sort of nonsense.
Re: Re: Re:
Let’s hope this ruling gets thrown out quickly instead of becoming a new, drawn out thing to worry about.
Re:
So, how likely is it we’ll be seeing the end of the open internet now, then?
Re: Re:
We’ll probably see the end of surveillance capitalist bullshit such as black-box recommendation algos and targeted advertising. The Internet and social media will live on and become better.
Re: Re:
Very unlikely, and it’s very likely this will end up at the Supreme Court and be overturned.
Re: Re:
I think I should consider checking techdirt less. As useful as it can be to stay informed, I find myself easily spiralling into a pit of anxiety when reading some of the articles.
Re: Re: Re:
If anything, at least it’s a reminder that the system works, if only barely.
Re: Re: Re:
I would consider going elsewhere to get alternate perspectives from people who aren’t neck-deep in policy wonkery like Stephen and Mike.
Re: Re: Re:2
Unfortunately this has been the best news source for me so far, aside from EFF and other orgs like them.
Re: Re: Re:3
Look up Matt Stoller. One of his books was even quoted in this ruling.
Re: Re: Re:3
yep… even one of my favorite outlets, Ars Technica, has some pretty garbage reporting on this matter. it seems people just don’t understand. the comments there are quite depressing. it’s like a bad reddit thread. what a nightmare
Re: Re: Re:4
Nah. That comment thread actually has people whose perspectives are grounded in reality and what could make the Internet a more sane place. I appreciate Ars Technica’s comment section and forums for having spaces that are open to discussion of more forward-thinking progressive Internet and tech regulation beyond just Net Neutrality and privacy legislation. It’s a breath of fresh air compared to here.
Re: Re: Re:5
I’m all for the internet becoming a more sane space. I’m just afraid of rulings like these indirectly wiping out large swathes of user-generated content: YouTube, comment sections, even art and fanfic sites, etc.
Re: Re: Re:5
…hallucinated nobody mentally competent, ever.
Re: Re: Re:4
Ars had trouble a while back where one of its authors was caught blindly repeating disinformation without any vetting whatsoever.
To placate its readers, they fired that author… and hired Ashley Belanger, 100% of whose work is blindly repeating what others say with no vetting or even a scintilla of expert factual analysis, especially if it has an anti-tech bent to it.
Re:
If the precedent holds, it will be internet-breaking. But I doubt TikTok will take this lying down. Its best bet would be an en banc rehearing or the Supreme Court, where TikTok has a better chance of challenging the ruling. But outside of that, there’s no sugarcoating it: this ruling is really bad.
Re: Re:
It’s bad but not doomsday and it conflicts hugely with other circuits.
Re: Re:
I’m not ready to lose contact to all the friends I’ve made online because of some stupid fucking court ruling from the other side of the planet
Re: Re: Re:
You are unlikely to see any effects for a while, if at all; this ruling will not stand.
Re: Re: Re:
Mike is catastrophizing a ruling he dislikes as if it’s the Death Of The Internet. You’ll be fine.
Re: Re: Re:2
I wouldn’t discount the seriousness that this ruling poses and concerns it raises. While this ruling isn’t an internet-breaking threat (YET), it has the potential to be if it isn’t challenged soon.
Re: Re: Re:3
Well, someone’s definitely gonna challenge the ruling if it’s as potentially dangerous as you say, I suppose.
Re: Re: Re:3
It only raises concerns for neoliberal busybodies and technolibertarians. Surveillance Capitalism can go rot.
Re: Re: Re:4
Oh, this is a problem for many more people beyond technolibertarians and surveillance capitalists. It would make a Mastodon instance liable for posts that appear in trending feeds, for example.
Re: Re: Re:5
I mean, a Federalist Society judge appointed by Donald Trump is not likely to be basing his decision upon his commitment to fully automated luxury gay space communism.
Re: Re: Re:4
Oh, wow. We’ve got ourselves an edgy anarchist here, 🙄. Listen, mate, everyone should be concerned here. Section 230 is not the enabler of the libertarian-dystopia fan fiction you’ve been cooking up. In fact, Section 230 does the opposite: it ensures not only that your voice is heard, but also that the platform/website you’re on doesn’t get immediately shut down. Even the smallest exception to Section 230 causes reverberations throughout the web, and this precedent threatens the integrity of that system.
Long story short: S230 lets you speak, edgelord.
Re:
Update: Still incredibly worried over this. How likely is it to be overturned? Would this ruling fail if challenged?
Grasping for any kind of comfort at this point, to be frank.
Re: Re:
Very likely it will be challenged and overturned.
Re: Re: Re:
I hope you’re right…
Maybe they rejected the prior caselaw on 230 because they recognized that the prior caselaw was bad and expanded 230 way out of the original scope of it to cover and defend things that it really shouldn’t?
TikTok owns the algorithm. They own the For You Page where the algorithm places media. They should be held liable for what’s published onto the For You Page.
Someone on another message board I frequent said it quite well, IMO:
Re:
Wouldn’t that then basically outlaw targeted advertising? Because the ads are based on an algorithm saying you might like this. A site recommending stories or videos is no different.
Re: Re:
Probably. Targeted advertising is bad and honestly I, and countless others, would be glad to see it gone. Nothing of value would be lost.
Re: Re:
It wouldn’t ban it, just make the company liable for the content they distribute so they’re less likely to continue with aggressive targeting.
Re: Re: Re:
They already are; the First Amendment just makes them immune from liability for content someone else posted, and Section 230 enables them to get any case based on content posted by others dismissed at the earliest stage possible.
Re: Re:
The difference being that you don’t have to sacrifice protecting sites against lawsuits for third party speech (Section 230) in order to ban targeted advertising; just create a law to protect personal data and a lot of targeted ads will fall as a result.
Re:
But that’s wrong. As Cox/Wyden have said, they absolutely intended 230 to shield internet services from liability for any publishing activity related to 3rd party content.
And “recommending” (or, not recommending) is absolutely a core publishing function.
At a separate level, as the mushroom case points out, it’s also protected First Amendment activity.
Re: Re:
Then maybe Wyden as the only part of that duo still in office should be at the helm of a rewrite of Section 230 that goes into greater detail about what they “absolutely intended”, instead of merely saying that 230 was totally meant to cover things that he and Cox could barely imagine back in 1996. Cause right now Wyden and Cox remind me of how JK Rowling said Dumbledore was totally gay, or that wizards poop on the ground and use magic to disappear the evidence.
Re: Re: Re:
I don’t see any reason why 230 needs a rewrite of any kind. The law is clear: Interactive web services have no liability for, and shouldn’t have any liability for, third-party speech. The position that they should based on how they display that speech to others is like saying Techdirt should have liability for comments based on whether someone looked at the comments in chronological order rather than threaded order (or vice versa).
Re: Re: Re:2
But all algorithms need to analyze third-party content, not just the metadata, to display something (more or less) relevant to the user.
So how do you define the boundary between the algorithm and the content, if the latter directly modifies the behavior of the former?
Remove the content from the whole site, and the algorithm will still carry that behavior, and so, in a sense, some trace of the data.
Of course, the algorithm would not produce any content by itself (unless it’s built with too much AI), but it no longer displays the same behavior as when it was conceived.
I guess the interpretation of Section 230 could be: if you feed a brand-new algorithm a lot of data and then remove all the data, does the algorithm’s behavior automatically revert to its initial state? If not, the algorithm is acting as a content provider.
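That “feed it data, then remove the data, does the behavior revert” test can be sketched in a few lines of code. This is a minimal illustration under assumed data shapes (posts as dicts with hypothetical `ts` and `topic` fields), not a description of any real platform: a stateless sorter is a pure function of its input, while a ranker that learns from content retains state even after the content itself is deleted.

```python
class StatelessRanker:
    """A pure function of its input: delete the posts and nothing
    of them remains anywhere in the ranker."""
    def rank(self, posts):
        return sorted(posts, key=lambda p: p["ts"], reverse=True)


class LearnedRanker:
    """Retains state derived from past content: even after the posts
    themselves are deleted, the learned weights still reflect them."""
    def __init__(self):
        self.topic_weights = {}  # hypothetical per-topic interest scores

    def observe(self, post):
        # learning step: the content leaves a trace in the weights
        t = post["topic"]
        self.topic_weights[t] = self.topic_weights.get(t, 0) + 1

    def rank(self, posts):
        # ranking now depends on content seen in the past, not just
        # on the posts currently being ranked
        return sorted(posts,
                      key=lambda p: self.topic_weights.get(p["topic"], 0),
                      reverse=True)
```

Under this framing, the `StatelessRanker` trivially passes the revert test, while the `LearnedRanker` fails it: deleting the observed posts does not reset `topic_weights`.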
Re: Re: Re:3
So what? Twitter has a “home timeline” where a user can view both their own posts and posts from the other users they follow in chronological order. That is an algorithm. If someone puts heinous content on that timeline, for what reason should Twitter be held liable for that content only and specifically because of the algorithm used to display it to other users?
Re: Re: Re:4
It’s absurd to compare an algo that deals mainly in third-party content and inputs such as the flow of time, when users decide to post, and whom users decide to follow, with an algo consisting of an opaque system designed, operated, and altered at the whims of a platform that then publishes content to a “For You” page where the platform “thinks” you’ll like something it, as a first party, serves up on that page.
Re: Re: Re:5
They’re both automated processes that interpret metadata (such as the time a post goes live) and display posts to an end user based on that user’s decisions about what they want to see. That one is “simpler” than the other is irrelevant; they’re still both algorithms in the end.
Re: Re: Re:5
How would you legally distinguish the two in a rigorous way? Maybe “sort by chronological order” doesn’t count, but does “sort by upvotes”? What about “sort by popularity”?
Re: Re: Re:6
Even a “sort by chronological order” algorithm will, in practice, depend upon other information beyond bare timestamps. For example, how does your chronological feed handle the case where Alice follows Bob and Charlie, and Bob shares a post by Charlie? Does Alice see it twice or only once?
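The share/dedup question above is a nice example of how even a “plain” chronological feed embeds design choices. A minimal sketch (post fields like `id` and `ts` are hypothetical): whether Alice sees Charlie’s post once or twice is a decision made in the merge, not something dictated by timestamps.

```python
from operator import itemgetter

def chronological_feed(posts, dedupe=True):
    """Merge followed users' posts into one newest-first feed.

    Each post is a dict with hypothetical 'id' and 'ts' fields; a share
    carries the original post's id. With dedupe=True, Alice sees
    Charlie's post only once even if Bob also shares it.
    """
    ordered = sorted(posts, key=itemgetter("ts"), reverse=True)
    if not dedupe:
        return ordered
    seen, feed = set(), []
    for post in ordered:
        if post["id"] not in seen:
            seen.add(post["id"])
            feed.append(post)
    return feed
```

Flipping the `dedupe` flag changes what Alice sees without touching any post, which is the point: “chronological” alone underspecifies the feed.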
Re: Re: Re:3
Algorithms are not magical black boxes, you dunce.
They’re lines of code that TAKE IN INPUT and SPIT OUT DESIRED OUTPUT.
A chronological feed sorts content by time posted, whether from earliest to latest or vice versa. The feed itself IS an algorithm.
And I have yet to see a chronological feed modify content. Let’s not even involve LLMs into this for now.
Most algorithms that take in user content as an input do NOT modify the content. And yes, I will include moderation tools here, since they’re technically algorithms as well. (I’ve never seen an IRC ban command modify a user’s hostmask, for one.)
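To make that concrete, here is what such a feed looks like as code: a minimal sketch (the post dicts and their fields are hypothetical) in which “chronological” and “sort by upvotes” are the same pure ranking function with a different key, and the content field passes through untouched.

```python
def sort_feed(posts, key="ts"):
    """Order posts by a field without modifying them: key='ts' gives a
    chronological feed, key='upvotes' a popularity feed. The posts'
    'content' field is never read, let alone rewritten."""
    return sorted(posts, key=lambda p: p[key], reverse=True)
```

The output is the same set of posts in a different order, which is all a sort-based feed ever produces.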
Re: Re: Re:
So you’re suggesting that TikTok should have no 1st Amendment protections?
Re: Re:
This decision is really trying to resolve a contradiction created in light of Netchoice (in a way that may be fishing for supreme court intervention):
–Netchoice found the recommendation algorithms of social media platforms to be expressive within the meaning of the first amendment (i.e., the act of recommending content is, in some form, content in itself, like how a mosaic can be a work of art unto itself even though it is created from other works of art, including where the other works of art are from other artists),
–meanwhile, section 230 says the content of another (the underlying art within the mosaic) is not the responsibility of the social media platform.
It would be contradictory to say that the recommendation of content is the platforms’ speech under the first amendment while simultaneously being the speech of another under 230. At some point you have to draw the line to distinguish between the recommendations as speech of the platform versus the underlying content as speech of the content creators. Put simply, when does the arrangement and presentation of the content of others become content unto itself?
I get that mere “publishing” should be protected (though I think this is not the word you want to use because 230 says the opposite – “NO PROVIDER or user of an interactive computer service shall be TREATED AS THE PUBLISHER…”), but even publishers can be held responsible for the content of their publications (e.g., defamation, incitements to violence, etc.). But again, the fact that the recommendations can be expressive implies that there is more than simply distribution of content, and section 230 only insulates the platform from the content of others, NOT from its own expressions.
Re: Re: Re:
Section 230 provides liability protection for moderation, including acts of moderation considered “expressive”.
Re: Re: Re:
Irrelevant, because nobody is arguing that.
Re: Re: Re:2
I mean it literally is what the court ruled and what Mike quoted and then said is “wrong”:
“Anderson asserts that TikTok’s algorithm “amalgamat[es] [] third-party videos,” which results in “an expressive product” that “communicates to users . . . that the curated stream of videos will be interesting to them[.]” ECF No. 50 at 5. The Supreme Court’s recent discussion about algorithms, albeit in the First Amendment context, supports this view. In Moody v. NetChoice, LLC, the Court considered whether state laws that “restrict the ability of social media platforms to control whether and how third-party posts are presented to other users” run afoul of the First Amendment. 144 S. Ct. 2383, 2393 (2024). The Court held that a platform’s algorithm that reflects “editorial judgments” about “compiling the third-party speech it wants in the way it wants” is the platform’s own “expressive product” and is therefore protected by the First Amendment….
Given the Supreme Court’s observations that platforms engage in protected first-party speech under the First Amendment when they curate compilations of others’ content via their expressive algorithms, id. at 2409, it follows that doing so amounts to first-party speech under § 230, too….”
This whole opinion seems to circle around this concept of “if the algorithm is expression of the platform, then it is not the expression of another, and therefore is not covered by 230”. So … very relevant.
Re: Re: Re:3
I get the feeling this ruling won’t be staying, it seems too contradictory.
Re: Re: Re:3
That’s a “contradiction” that the 3rd Circuit panel made up out of whole cloth. Editorial decisions by a platform are expressions of that platform, and Section 230 protects the platform for making them.
230 is a procedural shortcut for getting more quickly to the same place that a lengthy and expensive First Amendment case would arrive at. It’s completely illogical to say that if the First Amendment applies, then 230 cannot.
Re: Re: Re:4
I think what most people forget about 230, or have misconceptions about, is that any site that hosts 3rd-party content is a publisher but isn’t treated as such for that content, and that is true regardless of how a site presents that content, even if it’s using algorithms.
If we take the court’s opinion and apply it to content recommendations in general: if I recommend a video on a blog I have, and then someone goes and does something criminal because of watching that video, am I somehow guilty of that? In that example I know exactly what the content is; what if I only recommended the video without watching it, because it happens to be popular? The latter is what TikTok did.
Re: Re: Re:4
My interpretation of what the court said is not that “first amendment applies == section 230 cannot apply”; rather, it is about whom the expression is attributed to. The obvious examples are that TikTok is not liable for defamation if it hosts a post that lies about a person, but TikTok would be liable for defamation if TikTok itself posted a lie about the person from TikTok’s own account. TikTok is obviously liable in the latter case because it is TikTok’s expression that formed the lie.
Now if TikTok can tune its algorithms such that the algorithm is expressive of some view, then that view could also (potentially) be attributable to TikTok. There are plenty of cases that say no, but even in many of those, including in Force v. Facebook and others, there is often this concept of the algorithm being “neutral.” So even without the Netchoice first amendment view, there is the potential for an algorithm to be non-neutral such that 230 does not apply.
All that being said, I think the courts have actually already developed a test for this where the court is to analyze whether the platform “materially contributed” to the unlawfulness of the content, which is part of the reason why I kind of think this opinion is fishing for supreme court intervention, to force the issue and create a test or standard for how to understand the expressive relevance of the algorithm itself.
Re: Re: Re:5
Isn’t the whole “materially contributed” based on what they knew at the time and how they acted (or didn’t) on that information?
Re: Re: Re:
Except for Trump, apparently.
Re:
You’re explaining what you think should happen, but you’re simply wrong on the matter of the law. The content (information) recommended by the algorithm was provided not by TikTok but by a user. And Section 230 says that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
The algorithm itself is TikTok’s first-party speech. TikTok’s choice to use such and such an algorithm is TikTok’s first-party speech. But the content that the algorithm recommends is third-party speech, provided by users.
Re:
Here’s the Internet-destroying question: Why?
All social media services—which are wholly covered by 230, let’s not forget—use some form of algorithm to show users content. The “For You” algorithm on TikTok is little different from, say, a chronological feed of content from creators one follows on TikTok: they’re both ways to sort and display third-party content. One could certainly argue that TikTok’s “For You” algorithm has issues, sure. TikTok may even put its finger on the scales for what content it chooses to promote, much like how Elon puts his entire fist on the scales over on X. But even if we hold that to be true, that doesn’t mean TikTok published the speech being shown through those algorithms. It could just as easily have found a way to shift the scales against “blackout challenge” videos and still have found itself in this situation, because the user still found one such video on their own.
To place liability for third-party speech on an interactive web service because of how it chooses to display that speech would be to upend the Internet. Even if those services still decide to allow third-party speech, they would disable any method of displaying that speech through an algorithm—which means no more “for you” feeds, no more search feeds, and no more “home” feeds. Every social media service would effectively become unusable to most people if that were to ever happen.
Re: Re:
So what you are saying is that if a site like TikTok were to recommend only Nazi propaganda to all users between 13 and 20 on their platform, they would have no liability when the result of that is clearly going to be a bunch of young people with neo-Nazi views?
Re: Re: Re:
I don’t respond to otherwording. Address the point I actually made instead of a point I didn’t make despite your attempt to attribute it to me.
Re: Re: Re:
What if the nuclear bomb can only be defused by repealing Section 230 and saying the N-word?
This comment has been flagged by the community. Click here to show it.
Re: Re: Re:2
Being a theoretical physicist and following the word of a bunch of aging white-ass technolibertarians who act like only their opinions on tech policy and law are the correct ones sounds like a really depressing existence, Blake.
Re:
As courts have repeatedly noted:
Under the First Amendment, there is no actionable lawsuit against TikTok for saying “hey, here are some videos other users liked that we think you might like.” Set aside whether a 10-year-old having a TikTok account and using TikTok unsupervised is within the T&C (I can’t scroll a feed without subscribing), and whether TikTok could have known it was sharing the video with an impressionable minor rather than an adult responsible for their own actions. Because TikTok doesn’t pre-vet video content, TikTok cannot know the content of a video before it is uploaded. Under this regime, legal liability only comes in once you say “hey, this content is bad.” Logically, therefore, the claim to police the algorithm is an attempt to attach liability for content via claims against the algorithm: an end-run around the protections you claim exist.
For any content you see on social media, unless you clicked a direct link given to you by a creator you met somewhere outside the internet, algorithms played a part in your being shown that video. But unless you can show that TikTok knew what content was present, the reasonable conclusion is that TikTok shared something that was popular, or made by someone who is popular, or made by a creator you interact with, or about a topic you interact with, and it incurs no more liability than recommending the music of a band and your friend going groupie and ending up dead of an overdose.
Re:
Rather than provide quotes from the ruling where the court actually explains itself to support my alternative read, I’m just going to quote a rando from some random unnamed site and assume the court agrees with my random take that algorithms for search and discovery, otherwise known as “feeds,” without which any UGC website is functionally useless, are illegal, and therefore social media is illegal. – you.
I’m sure you know the exact algorithm that is super simple, can intuit when the content of videos is dangerous, objectively finds the correct order and priority to give everyone, and in fact isn’t an algorithm but a divine will that never has to interact with the CPU at all.
The person who told that child to intentionally try to black out is the person who harmed that child. If you want to just ban social media, just say it. Any time bad content appears, there will be someone who sees it who can reasonably claim the algorithm recommended it. There is no world where the existence of algorithms imposes liability and social media still exists. A message board’s content is still populated by an algorithm. The tools that allow moderators to engage in moderation algorithmically change the feed you see. They decide what you see. Bad stuff still on the board? Now the website can be found liable for not moderating well enough: they once took down a post about the blackout challenge, so now they have to perfectly and proactively ban every instance or lose the protection you claim they should have.
Re:
Obviously not.
Re:
230 was never “expanded”. The authors of the law have been very clear on this, and so has normal evidence-based logic.
I do think there are no protections surrounding the recommendation of content, but…
If you are suing for the content itself, and somehow not solely the recommendation, then you run afoul of Section 230.
That only brings up the question: what realm of legal liability could stem solely from a recommendation that wouldn’t immediately die at the hands of the First Amendment?
Ruling Deliberately Ignores Precedent, Defies Logic
So you admit it’s consistent with Trump v. United States.
This comment has been flagged by the community.
There’s a key part you’re leaving out here, which is an important factor. I had to go back into the decision to find it:
She alleges that TikTok: (1) was aware of the Blackout Challenge
so Anderson’s claims seeking to hold TikTok liable for continuing to host the Blackout Challenge videos knowing they were causing the death of children can proceed. So too for her claims seeking to hold TikTok liable for its targeted recommendations of videos it knew were harmful
As written, 230 (and even more so the precedent following Zeran) immunizes it. But the fact that 230 immunizes regardless of how involved they were is one of its biggest problems.
That’s not the same thing, as you said in your previous paragraph. Stratton treated things they didn’t moderate as first-party speech: a local judge found Prodigy liable for content it didn’t moderate because it did moderate some content.
There’s a big, big difference between liability for everything posted because you moderate some things, vs. liability specifically for things you’ve actively reviewed. Prodigy’s defense specifically relied on the fact that it didn’t know the material as a distributor.
To quote:
That such control is not complete and is enforced both as early as the notes arrive and as late as a complaint is made, does not minimize or eviscerate the simple fact that PRODIGY has uniquely arrogated to itself the role of determining what is proper for its members to post and read on its bulletin boards.
Re:
No, it isn’t. Unless the video was first-party speech, TikTok should be immunized from legal liability for that video under 230. That its “For You” algorithm found the video and displayed it to a user, or that TikTok knew about the “blackout challenge”, are irrelevant facts. At the bare minimum, this incident suggests that TikTok’s moderation practices are lacking in some areas (which is understandable, since moderation doesn’t scale well). But I see no reason to hold TikTok liable for this incident any more than I would see a reason to hold Twitter liable for, say, distribution of CSAM because its “home timeline” algorithm displayed CSAM to anyone who happened to be following Dom Lucre at the time he posted that shit.
This comment has been flagged by the community.
Re: Re:
So what did you want the parents of the kid who died to do instead? Go after the person who posted the video, who likely disappeared into the aether never to be found again? The finger-wagging “go after the poster” limitation Section 230 places on the ability of victims, or families of victims, to seek recompense or any sort of justice feels less like a flaw and more like a key feature that you, Mike, Wyden, and Cox are glad to have.
Re: Re: Re:
And yet, the Internet exists in the form it does now precisely because interactive web services can be free of legal liability for third-party speech. Hell, Techdirt itself benefits from 230 because without those protections, the comments section you’re in right now wouldn’t exist because Mike wouldn’t even think to open a comments section and risk liability for someone else’s speech.
Sometimes there is no “easy target” to go after in situations like this. Life is unfair in many ways. But to go after the deepest pockets because they’re the easiest target—to file a Steve Dallas lawsuit out of grief and anger and self-righteous fury—and effectively try to take down the rest of the Internet in the process is insane. That the Third Circuit aligned with the parents instead of existing caselaw is equally insane. This ruling shouldn’t hold up on appeal, and if it does, it will doom entire swaths of the Internet.
Re: Re: Re:
Even going after the poster, they can’t prove foreseeability and causation. What if TikTok, YouTube, or any other video site hosts skateboarding footage, and it is recommended to a kid with an interest in skateboarding? Would you hold the site (or the original poster) liable if the kid attempts a stunt outside his skill level and breaks his neck? In the context of TV and movies, courts have been loath to find that someone imitating something they saw was foreseeable and thus a proximate cause of the injury.
Re: Re: Re:2
Skateboarding and hanging yourself are way different activities.
Re: Re: Re:
Well, yes. In fact, Section 230 saves plaintiffs a bunch of money otherwise wasted in going after the wrong defendant.
Re:
There’s no knowledge standard in 230.
This comment has been flagged by the community.
Re: Re:
Yes, that’s my issue with it. The argument is that there should be, not that it currently has one.
Re: Re: Re:
That would create a perverse incentive for companies to know less about what is happening on their own platforms.
Re: Re: Re:2
If this ruling holds, it seems that might be what’ll happen?
Can’t be held liable for pushing bad content if you don’t moderate content, I guess.
Re: Re: Re:2
That depends entirely on how it’s structured. That perverse incentive happens because not knowing about it at all eliminates liability (which is how it worked pre-230), and there’s no standard for, e.g., negligence/safe harbour/etc. Yes, if you make it so that not knowing is a guaranteed way to have no liability, that becomes an easy shortcut. There are ways around that, in terms of things like safe harbour laws (although those come with other potential trade-offs).
A good counterexample is criminal charges. Those are already explicitly exempt from 230 protections, and yet companies manage to not be conveniently blind.
Re:
This is false. It does not immunize a company if it creates (in part, or in whole) the violative content. So it’s simply false to say that it immunizes “regardless of how involved.”
Separately “knowing” that the blackout challenge exists is very different from knowing that this video was about the blackout challenge. And that’s the issue. It’s impossible to know what every video is about. So you have to have specific knowledge for there to be any liability even under the First Amendment and there’s no duty to investigate (that’s in the GP Putnam case).
But no one has alleged “active” reviewing here. An algorithmic recommendation does not mean active knowledge of the content.
Re: Re:
It’s amazing. A decade after Viacom v. Google, we still have lawyers claiming confusion about the standard for red-flag knowledge.
Re: Re:
Sorry, you’re right, that’s bad wording on my part. I meant involved purely from a publishing standpoint, but “involved” could mean creating.
That’s true. I took “So too for her claims seeking to hold TikTok liable for its targeted recommendations of videos it knew were harmful” to mean it knew about specific videos, but it doesn’t give much detail.
Under 230, even if you have specific knowledge, there’s still no liability though, right? 230 doesn’t mention knowledge at all, it’s just a blanket protection, so it’s moot.
Separately though, having no duty to investigate seems problematic. If you know your algorithm is distributing content that is killing people, and you can avoid liability simply by not looking into it, that’s… bad? If a platform like TikTok knows there’s something like a Blackout Challenge, it seems reasonable for it to have a duty to look into it at some point. The fact that it can let it run rampant without liability seems pretty messed up, and not something that’s needed for the Internet to function more broadly.
The lawsuit says TikTok knew that: 1) “the deadly Blackout Challenge was spreading through its app,” 2) “its algorithm was specifically feeding the Blackout Challenge to children,” and 3) several children had died while attempting the Blackout Challenge after viewing videos of the Challenge on their For You Pages. App. 31–32. Yet TikTok “took no and/or completely inadequate action to extinguish and prevent the spread of the Blackout Challenge and specifically to prevent the Blackout Challenge from being shown to children on their [For You Pages].” That seems like a reasonable set of facts where you’d expect a company to do something, at some point. They might not necessarily catch everything, but doing nothing is negligent.
You’d have to be very very careful in how you set it up, but it seems like there should be a middle ground there. You can’t expect them to investigate everything (due to scale, moderation is hard, etc), but at some point it’s crossing into negligence if it’s blanket immunity. That seems like you’re running into the same problem 230 was designed to solve, companies are incentivized to put their heads in the sand to avoid liability.
Yeah, my point was just that Stratton went much further than that; it wasn’t reliant on active knowledge. It was way, way broader/worse.
I keep wanting to write a “Who’s on First… party speech” skit. But I dunno.
This looks like legislating from the bench.
Re:
Which has happened for centuries. It’s called “setting precedent.”
Eric Goldman’s take, for those curious:
Re:
That’s a very bleak take, makes me nervous.
This comment has been flagged by the community.
Re: Re:
Goldman needs to make it sound bleak for his posts to sound correct. He makes a living off of concern trolling any time a tech company gets a ruling that causes them to face consequences.
Re: Re: Re:
I’m just looking for evidence that the internet isn’t doomed.
This comment has been flagged by the community.
Re: Re: Re:2
Then you need to stop looking at Techdirt and Eric Goldman who constantly make it sound like the Internet is doomed whenever tech companies are made to face consequences or are able to be held liable for their actions.
Another tech lawyer, Mike Dunford over on Bluesky, talks about how cases such as these don’t mean the net is doomed. You could start there.
Re: Re: Re:3
Much as I’d like to extract some optimism, that’s not a very encouraging way to begin.
This bit seems opaque. Section 230 provides liability protection for moderation decisions, i.e., “the provider’s choice of what to present”.
Is it privileging a company to recognize that speech is not lawn darts? The chain of causation from a design decision to an actual harm is much more fuzzy.
But in Lemmon v Snap, the product design directly encouraged risky behavior. The design feature here seems to be more like “show a user videos that have already become popular with similar users”, rather than “promote videos because they portray dangerous behavior”. Indeed, the 9th Circuit held that Snap would not be liable for making available other user-generated content, like “Snaps of friends speeding dangerously” that could have “incentivized” risk-taking.
This seems to argue for liability protection only to apply when content is made less visible. Such a standard would run into problems, I think: Any website that delivers content using a search function would be in a legal gray area at best. And it would be a circuit split; for example, the 9th Circuit held in Dyroff v The Ultimate Software Group that “recommending user groups and sending email notifications” are “acting as a publisher of others’ content”. The DC Circuit held in Marshall’s Locksmith v Google that “the choice of presentation does not itself convert [a] search engine into an information content provider”, and indeed “were the display of this kind of information not immunized, nothing would be”. Making content more visible than it would otherwise be is one of “a publisher’s traditional editorial functions”, to use the Zeran wording; maybe it’s the most basic “editorial function” of them all.
Does “most of Earth” have the same litigation culture as the United States, the same standards for who bears the cost of a frivolous lawsuit, etc.? I could very well be wrong, but this seems like a hollow consolation.
Re: Re: Re:2
It’s not doomed.
Re: Re: Re:2
The internet will change. Maybe not due to this, but it always changes. As far as I’m concerned, “the internet” is long past doomed and already in hell. (That would be mostly due to corporate occupation of the nets and the explosion of advertising – not so much from law.)
You have to get used to it. Eventually whatever sites or services you love will change or die.
If the law goes that badly in the US, you have a lot more to worry about (in the US) than that particular law. And your internet services will all move to Iceland or something.
Re: Re: Re:
…hallucinated nobody mentally competent, ever.
Re: Re: Re:2
Said nobody with a modicum of empathy ever.
Re:
As much as I love and deeply respect Eric Goldman, he is often very doom-and-gloom and sometimes wrong about the outcomes of court cases. He very much believed Twitter, Inc. v. Taamneh and Gonzalez v. Google LLC would overturn 230 before the first hearing happened (he did become more optimistic after).
I deeply disagree with him saying that the end of Section 230, and of the Internet as we know it, is inevitable. It’s not impending and inevitable. That type of wording is not really helpful, seeing as this ruling is likely to be challenged fast.
Re: Re:
I can understand his sentiment, but you’re right. Defeatism doesn’t solve anything.
Re: Re: Re:
Again, NO disrespect to him, and I agree with 99% of his analysis of this case.
Don’t call it promoting
Call it algorithmic weighting, to be more accurate. ALL content on TikTok/YouTube/FB is algorithmically weighted plus or minus depending on whether they think you would want to see it. If weighting loses 230 protection, that means they are no longer protected from frivolous lawsuits, which is really what 230 is about.
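To make that concrete, here is a minimal, purely hypothetical sketch of what “algorithmic weighting” looks like. The signals, weights, and function names below are invented for illustration and are not TikTok’s actual system; the point is that every candidate video gets a score from generic popularity/interest signals, the feed is just a sort on that score, and nothing in the process knows what a video actually depicts:

```python
# Hypothetical feed weighting: every candidate video gets a score built
# from generic engagement signals, and the "for you" list is simply the
# candidates sorted by that score. Signals and weights are invented.

def score(video, user):
    s = 0.0
    s += 1.5 * video["popularity"]                    # raw, not-logged-in popularity
    s += 2.0 * len(user["topics"] & video["tags"])    # topics the user interacts with
    s += 3.0 * (video["creator"] in user["follows"])  # creators the user follows
    return s

def for_you_feed(videos, user, n=3):
    # Rank all candidates; no step here inspects what a video depicts.
    return sorted(videos, key=lambda v: score(v, user), reverse=True)[:n]

user = {"topics": {"skateboarding", "music"}, "follows": {"alice"}}
videos = [
    {"id": 1, "popularity": 0.9, "tags": {"dance"}, "creator": "bob"},
    {"id": 2, "popularity": 0.2, "tags": {"skateboarding"}, "creator": "alice"},
    {"id": 3, "popularity": 0.5, "tags": {"music"}, "creator": "carol"},
]
print([v["id"] for v in for_you_feed(videos, user)])  # → [2, 3, 1]
```

Note that the least popular video wins here purely because it matches the user’s topics and followed creators: the weighting is content-blind in exactly the way described above.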
The deadline for requesting an en banc rehearing is two weeks after the judgment, so I guess we’ll know pretty soon if TikTok is going that route.
Funny thing we never seem to hear about: how the algorithms chose a suggestion to list for an individual. How often is the offending suggestion due to the raw, not-logged-in popularity feed*, and how often is it a reflection of what people already view and search for?
*I used to like some services better logged out. But some of them, at some point, became utter monstrosities of idiocy without my history to inform the algorithms.
'Facts, law and legal precedent be damned we're ruling against that law!'
Yeah, when your ruling conflicts so wildly with previous ones that you feel the need to include a multi-page footnote listing all the other cases that went the other way and that you’re ignoring, it positively reeks of ‘We started this case intending to rule against 230, and damned if we weren’t going to end there.’
Would a ruling like this have an effect on messenger platforms like Discord?
Speech is the New Piracy
One of the things I’ve personally been arguing in recent years is that speech has become the new piracy. This is especially true with social media in general. The government wants to crack down on it, but, much like p2p file-sharing, it’ll ultimately become a game of whack-a-mole. Look at the history of file-sharing and at what we are seeing today with social media, and the parallels become disturbingly similar.
This comment has been flagged by the community.
Ignores precedent no doubt
Defies logic my arse.
They promoted a post that led to a child’s death. Trying to avoid that happening again is the rational response.
This is why you are the only country where school shootings are normal.
Re:
Kid sees ad on TV: “Our new vodka is the best ever!!”
Kid sees such a bottle in the parents’ liquor cabinet, proceeds to drink the whole bottle, and then dies of alcohol poisoning.
Is the broadcaster guilty?
Is the ad-company guilty?
Are the sellers of the vodka guilty?
Who is actually guilty?
There is only one logical choice, but I’m sure someone in their outrage will sue a third party. But hey, you are free to think feelz is a good basis for how cases should be decided instead of established law; you just won’t like where it takes you in the end.
This comment has been flagged by the community.
Re: Re:
We have to water the tree of liberty with the blood of dead kids. It’s the only way to keep our freedoms. Tech companies have to be immune from practically all consequence even when people get killed because they put profit above principle and school shootings are the necessary price we have to pay for our right to have assault rifles.
Re: Re: Re:
I guess generalizations work for you so you can feel good about whom to blame for what. It’s a bit like mob mentality, when a group of people starts blaming the nearest convenient scapegoat for any ills.
Re: Re: Re:
Your strawman is bad and you should feel bad.
Re: Re:
The parents who didn’t lock their liquor cabinet, I would have thought, but nobody can sue themselves for their own carelessness, can they?
Re: Re: Re:
Precisely, but other parties can bring suit against the parents for their negligence.
Daphne Keller’s take, for those interested:
Re:
Her take is very relatable.
I’m so tired of this circus, boss.
I just wanna have my online friend group, YouTube, etc. in peace.
This comment has been flagged by the community.
Re: Re:
I honestly think you need therapy if you keep acting like the words of every tech policy wonk that says the Internet will die is some sort of gospel truth.
Re: Re: Re:
They’re clearly very smart people with a lot of know-how of tech policy.
How could I not take what they say seriously, even if biased in one direction or the other?
This comment has been flagged by the community.
Re: Re: Re:2
Because their livelihoods depend on making people irrationally afraid that the Internet will die if tech corpos are made to face any shred of consequence for their actions.
Re: Re: Re:3
Anybody who unironically uses corpo seriously needs to touch grass.
Re: Re:
It’s very likely it will be overturned. Don’t lose hope!
This comment has been flagged by the community.
Re:
So cool, that’s a third technolibertarian policy wonk whose job depends on keeping people afraid. Thank you for pointing these folks out so I can ignore them, Blake. Should you be off doing some shit with theoretical physics instead of pretending to be a tech law expert?
Re: Re:
What are your credentials for offering opinions on other people’s opinions and how those connect with what they do for a living?
Personally, I think you sound a little whiny. You should do something about that.
Re: Re:
Feel free to leave Techdirt and take your mentally ill delusions with you.
Re:
On that thread, some fool called Gilad Edelman asked:
Since I can’t post my response there, I’ll post it here instead:
This comment has been flagged by the community.
BentFranklin had the courage to post something that went against the grain on Section 230 in the Techdirt Insider Hidey-Hole. That’s really rad. I hope that the cultists in there like John Roddy who are too scared to post in the comments don’t have too much of a conniption fit.
If I understand correctly, the court finds that TikTok is liable because what they did is protected by the First Amendment. Then surely the First Amendment protects them from being punished for it?
Re: you’re right, but there’s an issue with that
The First Amendment does protect you there, but you still lose money defending yourself. 230 was intended to prevent those frivolous lawsuits that would ultimately lose, by shutting them down right off.
Well, you know...
… If Republican-appointed Judges can pull this crap, why can’t Democratic-nominated Judges get in on the fun too?
Dear Trump judges,
You might want to rethink this, because what you are saying is that Trump’s rants can be held against platforms, and then you are gonna bitch that he is being silenced because no one wants to risk allowing him on their platform.
In closing: you all hate America & freedom. I hope your hemorrhoids burn extra & your side chick gets knocked up.
Trying to figure out if this will or won’t end badly is like screaming into a void at this point, with people giving conflicting answers.
I guess no one really knows.
Re:
Maybe I should just clock out and wait for the day when all of the social media platforms and websites no longer work.
This comment has been flagged by the community.
Re: Re:
You should probably clock out to a mental health facility with the sheer level of irrational fear you have over this. How old are you? Cause your panic is reminding me of the 12-year-olds who spammed Congress with calls when TikTok steered them to go harass their local lawmakers.
This comment has been flagged by the community.
Re: Re: Re:
Now you’re just being rude to me.
This comment has been flagged by the community.
Re: Re: Re:2
Adding onto my earlier comment: honestly, it disappoints me, because until you started acting rude, I actually found some mild comfort in what you said. Now I’m less inclined to believe anything you say.
Re: Re: Re:2
They may have phrased it rudely, but they’re not wrong. If reading articles like this is going to stress you out so much that you’re repeatedly going on long ‘are we all screwed’ comment chains, you likely would benefit from seeing a professional to help you deal with that stress and develop ways to handle it better.
Re: Re: Re:3
Hmm, you may have a point then, I guess.
I’ve been advised to simply stop paying attention to it. I can see the logic in that. I suppose this is neither the first nor the last time there’s been talk of the internet as we know it being in danger.
I find it difficult not to constantly try to keep myself informed on everything, as if waiting for some kind of “all clear” situation where there’s nothing in the pipeline to worry about, but that’s not exactly realistic or helpful for me, I’m starting to realize.
For lack of a better term, maybe I do need to go cold turkey on tech news.
Re: Re: Re:4
Speaking as a non-professional I’d suggest maybe either cutting back on articles like this or trying to go without for a week or so, see if that helps.
Keeping informed is all well and good but when it starts to affect your health and/or mental well being those really need to take priority.
Re: Re: Re:5
Sounds like a good idea. Thank you for the advice, and thank you for listening.
Re: Re: Re:6
No problem, hope it helps.
This might be the single most batshit crazy opinion, but the Fifth Circuit still has the most entries on the batshit crazy top-ten list.
If the mushroom encyclopedia was not crowd-sourced the way Wikipedia and some other online encyclopedias are, then this ruling is incorrect.
Re:
Agreed. This ruling misses the mark. If a mushroom encyclopedia misleads someone into picking and eating inedible mushrooms, it feels to me like that publisher should definitely be liable. A misfire of the court, giving companies more power and weight than the people those companies harm.
Re: Re:
The publisher is not the author. The publisher might publish countless books from countless authors on countless subjects and is unlikely to have such a great deal of knowledge as to the accuracy of everything said on every page of every book.
Re: Re: Re:
No, the publisher is not the author, but if the publisher pays the author and claims copyright of the work in return, then the publisher is legally the author for copyright purposes, and should therefore be the author for Section 230 purposes as well. The problem with all you brunchlords is that you want to have your cake and eat it too.
This comment has been flagged by the community.
Re: Re: Re:2
…said nobody mentally competent, ever.
Re: Re: Re:2
47 USC §230 (c)(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
And the streak of “there has never been an argument against Section 230’s protections that did not lie about it” remains in no danger of being broken.
So if TikTok doesn’t take further legal action, can someone else, like the EFF?