Third Circuit’s Section 230 TikTok Ruling Deliberately Ignores Precedent, Defies Logic

from the that's-not-how-any-of-this-works dept

Step aside, Fifth Circuit Court of Appeals: there’s a new contender in town for who will give us the most batshit crazy opinions regarding the internet. This week, a panel on the Third Circuit ruled that a lower court was mistaken in dismissing a case against TikTok on Section 230 grounds.

But, in order to do so, the court had to intentionally reject a very long list of prior caselaw on Section 230, misread some Supreme Court precedent, and (trifecta!) misread Section 230 itself. This may be one of the worst Circuit Court opinions I’ve read in a long time. It’s definitely way up the list.

The implications are staggering if this ruling stands. We just talked about some cases in the Ninth Circuit that poke some annoying and worrisome holes in Section 230, but this ruling takes a wrecking ball to 230. It basically upends the entire law.

At issue are the recommendations TikTok offers on its “For You Page” (FYP), which is the algorithmically recommended feed that a user sees. According to the plaintiff, the FYP recommended a “Blackout Challenge” video to a ten-year-old child, who mimicked what was shown and died. This is, of course, horrifying. But who is to blame?

We have some caselaw on this kind of thing even outside the internet context. In Winter v. G.P. Putnam’s Sons, the Ninth Circuit found that the publisher of an encyclopedia of mushrooms was not liable to “mushroom enthusiasts who became severely ill from picking and eating mushrooms after relying on information” in the book. The information turned out to be wrong, but the court held that the publisher could not be held liable for those harms because it had no duty to carefully investigate each entry.

In many ways, Section 230 was designed to speed up this analysis in the internet era, by making it explicit that a website publisher has no liability for harms that come from content posted by others, even if the publisher engaged in traditional publishing functions. Indeed, the point of Section 230 was to encourage platforms to engage in traditional publishing functions.

There is a long list of cases that say that Section 230 should apply here. But the panel on the Third Circuit says it can ignore all of those. There’s a very long footnote (footnote 13) that literally stretches across three pages of the ruling listing out all of the cases that say this is wrong:

We recognize that this holding may be in tension with Green v. America Online (AOL), where we held that § 230 immunized an ICS from any liability for the platform’s failure to prevent certain users from “transmit[ing] harmful online messages” to other users. 318 F.3d 465, 468 (3d Cir. 2003). We reached this conclusion on the grounds that § 230 “bar[red] ‘lawsuits seeking to hold a service provider liable for . . . deciding whether to publish, withdraw, postpone, or alter content.’” Id. at 471 (quoting Zeran v. Am. Online, Inc., 129 F.3d 327, 330 (4th Cir. 1997)). Green, however, did not involve an ICS’s content recommendations via an algorithm and pre-dated NetChoice. Similarly, our holding may depart from the pre-NetChoice views of other circuits. See, e.g., Dyroff v. Ultimate Software Grp., 934 F.3d 1093, 1098 (9th Cir. 2019) (“[R]ecommendations and notifications . . . are not content in and of themselves.”); Force v. Facebook, Inc., 934 F.3d 53, 70 (2d Cir. 2019) (“Merely arranging and displaying others’ content to users . . . through [] algorithms—even if the content is not actively sought by those users—is not enough to hold [a defendant platform] responsible as the developer or creator of that content.” (internal quotation marks and citation omitted)); Jane Doe No. 1 v. Backpage.com, LLC, 817 F.3d 12, 21 (1st Cir. 2016) (concluding that § 230 immunity applied because the structure and operation of the website, notwithstanding that it effectively aided sex traffickers, reflected editorial choices related to traditional publisher functions); Jones v. Dirty World Ent. Recordings LLC, 755 F.3d 398, 407 (6th Cir. 2014) (adopting Zeran by noting that “traditional editorial functions” are immunized by § 230); Klayman v. Zuckerburg, 753 F.3d 1354, 1359 (D.C. Cir. 2014) (immunizing a platform’s “decision whether to print or retract a given piece of content”); Johnson v. Arden, 614 F.3d 785, 791-92 (8th Cir. 2010) (adopting Zeran); Doe v. 
MySpace, Inc., 528 F.3d 413, 420 (5th Cir. 2008) (rejecting an argument that § 230 immunity was defeated where the allegations went to the platform’s traditional editorial functions).

I may not be a judge (or even a lawyer), but even I might think that if you’re ruling on something and you have to spend a footnote that stretches across three pages listing all the rulings that disagree with you, at some point, you take a step back and ask:

Principal Skinner meme. First frowning and looking down with hand stroking chin saying: "Am I so out of touch that if every other circuit court ruling disagrees with me, I should reconsider?" Second panel has him looking up and saying "No, it's the other courts who are wrong."

As you might be able to tell from that awful footnote, the Court here seems to think that the ruling in Moody v. NetChoice has basically overturned those rulings and opened up a clean slate. This is… wrong. I mean, there’s no two ways about it. Nothing in Moody says this. But the panel here is somehow convinced that it does?

The reasoning here is absolutely stupid. It’s taking the obviously correct point that the First Amendment protects editorial decision-making, and saying that means that editorial decision-making is “first-party speech.” And then it’s making that argument even dumber. Remember, Section 230 protects an interactive computer service or user from being treated as the publisher (for liability purposes) of third party information. But, according to this very, very, very wrong analysis, algorithmic recommendations are magically “first-party speech” because they’re protected by the First Amendment:

Anderson asserts that TikTok’s algorithm “amalgamat[es] [] third-party videos,” which results in “an expressive product” that “communicates to users . . . that the curated stream of videos will be interesting to them[.]” ECF No. 50 at 5. The Supreme Court’s recent discussion about algorithms, albeit in the First Amendment context, supports this view. In Moody v. NetChoice, LLC, the Court considered whether state laws that “restrict the ability of social media platforms to control whether and how third-party posts are presented to other users” run afoul of the First Amendment. 144 S. Ct. 2383, 2393 (2024). The Court held that a platform’s algorithm that reflects “editorial judgments” about “compiling the third-party speech it wants in the way it wants” is the platform’s own “expressive product” and is therefore protected by the First Amendment….

Given the Supreme Court’s observations that platforms engage in protected first-party speech under the First Amendment when they curate compilations of others’ content via their expressive algorithms, id. at 2409, it follows that doing so amounts to first-party speech under § 230, too….

This is just flat out wrong. It is based on the false belief that any “expressive product” makes it “first-party speech.” That’s wrong on the law and it’s wrong on the precedent.

It’s a bastardization of an already wrong argument put forth by MAGA fools that Section 230 conflicts with the argument in Moody. The argument, as hinted at by Justices Thomas and Gorsuch, is that because NetChoice argues (correctly) that its editorial decision-making is protected by the First Amendment, it’s somehow in conflict with the idea that they have no legal liability for third-party speech.

But that’s only in conflict if you can’t read and/or don’t understand the First Amendment and Section 230 and how they interact. The First Amendment still protects any editorial actions taken by a platform. All Section 230 does is say that it can’t face liability for third party speech, even if it engaged in publishing that speech. The two things are in perfect harmony. Except to these judges in the Third Circuit.

Contrary to what the panel says here, the Supreme Court at no point says that editorial actions turn into first-party speech because they are protected by the First Amendment. That has never been true, as even the mushroom encyclopedia example above shows.

Indeed, reading Section 230 in this manner wipes out Section 230. It makes it the opposite of what the law was intended to do. Remember, the law was written in response to the ruling in Stratton Oakmont v. Prodigy, where a local judge found Prodigy liable for content it didn’t moderate, because it did moderate some content. As then Reps. Chris Cox and Ron Wyden recognized, that would encourage no moderation at all, which made no sense. So they passed 230 to overturn that decision and make it so that internet services could feel free to engage in all sorts of publishing activity without facing liability for the underlying content when that content was provided by a third party.

But here, the Third Circuit has flipped that on its head and said that the second you engage in First Amendment-protected publishing activity around content (such as recommending it), you lose Section 230 protections because the content becomes first-party content.

That’s… the same thing that the court ruled in Stratton Oakmont, and which 230 overturned. It’s beyond ridiculous for the Court to say that Section 230 basically enshrined Stratton Oakmont, and it’s only now realizing that 28 years after the law passed.

And yet, that seems to be the conclusion of the panel.

Incredibly, Judge Paul Matey (a FedSoc favorite Trump appointee) has a concurrence/dissent where he would go even further in destroying Section 230. He falsely claims that 230 only applies to “hosting” content, not recommending it. This is literally wrong. He also falsely claims that Section 230 is a form of a “common carriage regulation” which it is not.

So he argues that the first Section 230 case, the Fourth Circuit’s important Zeran ruling, was decided incorrectly. The Zeran ruling established that Section 230 protected internet services from all kinds of liability for third-party content. Zeran has been adopted by most other circuits (as noted in that footnote of “all the cases we’re going to ignore” above). So in Judge Matey’s world, he would roll back Section 230 to only protect hosting of content and that’s it.

But that’s not what the authors of the law meant (they’ve told us, repeatedly, that the Zeran ruling was correct).

Either way, every part of this ruling is bad. It basically overturns Section 230 for an awful lot of publisher activity. I would imagine (hope?) that TikTok will request an en banc rehearing across all judges on the circuit and that the entire Third Circuit agrees to do so. At the very least, that would provide a chance for amici to explain how utterly backwards and confused this ruling is.

If not, then you have to think the Supreme Court might take it up, given that (1) they still seem to be itching for direct Section 230 cases and (2) this ruling basically calls out in that one footnote that it’s going to disagree with most other Circuits.

Companies: tiktok


Comments on “Third Circuit’s Section 230 TikTok Ruling Deliberately Ignores Precedent, Defies Logic”

149 Comments


Anonymous Coward says:

Re: Re: Re:4

Nah. That comment thread actually has people whose perspectives are grounded in reality and what could make the Internet a more sane place. I appreciate Ars Technica’s comment section and forums for having spaces that are open to discussion of more forward-thinking progressive Internet and tech regulation beyond just Net Neutrality and privacy legislation. It’s a breath of fresh air compared to here.

This comment has been deemed funny by the community.
Toom1275 (profile) says:

Re: Re: Re:4

Ars had trouble a while back when one of its authors was caught blindly repeating disinformation without any vetting whatsoever.

To placate its readers, they fired that author… and hired Ashley Belanger, 100% of whose work is blindly repeating what others say with no vetting or even a scintilla of expert factual analysis, especially if it has an anti-tech bent to it.


This comment has been deemed insightful by the community.
Cat_Daddy (profile) says:

Re: Re: Re:4

Oh, wow. We’ve got ourselves an edgy anarchist here, 🙄. Listen mate, everyone should be concerned here. Section 230 is not the enabler of the libertarian dystopia fan-fiction you’ve been cooking up. In fact, Section 230 does the opposite: it ensures not only that your voice is heard, but also that the platform/website you’re on doesn’t get immediately shut down. Even the smallest exception to Section 230 causes reverberations throughout the web, and this precedent threatens the integrity of that system.

Long story short: S230 lets you speak, edgelord.


Anonymous Coward says:

Maybe they rejected the prior caselaw on 230 because they recognized that the prior caselaw was bad and expanded 230 way beyond its original scope, to cover and defend things that it really shouldn’t?

TikTok owns the algorithm. They own the For You Page where the algorithm places media. They should be held liable for what’s published onto the For You Page.

Someone on another message board I frequent said it quite well, IMO:

They are still protected from consequences for hosting other people’s speech; they are still able to moderate and filter that speech with that same protection; what they can’t do is use an algorithm to then selectively promote that speech to specific users under their own programmed criteria while still having that same protection from consequences. That makes sense: the algorithm of promotion and engagement is their speech. They are responsible for that algorithm (and who else could be? They make and control it!).

Anonymous Coward says:

Re:

You’re explaining what you think should happen, but you’re simply wrong on the matter of the law. The content (information) recommended by the algorithm was provided not by TikTok but by a user. And Section 230 says,

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

The algorithm itself is TikTok’s first-party speech. TikTok’s choice to use such and such an algorithm is TikTok’s first-party speech. But the content that the algorithm recommends is third-party speech, provided by users.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re:

what they can’t do is use an algorithm to then selectively promote that speech to specific users under their own programmed criteria while still having that same protection from consequences

Here’s the Internet-destroying question: Why?

All social media services⁠—which are wholly covered by 230, let’s not forget⁠—use some form of algorithm to show users content. The “For You” algorithm on TikTok is little different than, say, a chronological feed of content from creators one follows on TikTok: They’re both ways to sort and display third-party content. One could certainly argue that TikTok’s “For You” algorithm has issues, sure. TikTok may even put its finger on the scales for what content it chooses to promote, much like how Elon puts his entire fist on the scales over on X. But even if we hold that to be true, that doesn’t mean TikTok published the speech being shown through those algorithms. It could just as easily have shifted the scales against “blackout challenge” videos and still found itself in this situation, because the user still found one such video on their own.

To place liability for third-party speech on an interactive web service because of how it chooses to display that speech would be to upend the Internet. Even if those services still decide to allow third-party speech, they would disable any method of displaying that speech through an algorithm⁠—which means no more “for you” feeds, no more search feeds, and no more “home” feeds. Every social media service would effectively become unusable to most people if that were to ever happen.
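The point above can be sketched in code. This is a minimal, entirely hypothetical data model (not TikTok’s actual system): a chronological “following” feed and a “For You”-style feed are both just orderings of the same pool of third-party items; the only thing that differs is the sorting rule, i.e., the editorial choice.

```python
from dataclasses import dataclass

@dataclass
class Video:
    creator: str               # third party who provided the content
    posted_at: int             # upload timestamp
    predicted_interest: float  # platform's guess at relevance to this user

# The same third-party content underlies both feeds.
videos = [
    Video("alice", posted_at=3, predicted_interest=0.2),
    Video("bob",   posted_at=1, predicted_interest=0.9),
    Video("carol", posted_at=2, predicted_interest=0.5),
]

# "Following" feed: newest first.
chronological = sorted(videos, key=lambda v: v.posted_at, reverse=True)

# "For You" feed: highest predicted interest first.
for_you = sorted(videos, key=lambda v: v.predicted_interest, reverse=True)

# Either way, every item shown was provided by a third party; only the
# ordering rule differs.
assert {v.creator for v in chronological} == {v.creator for v in for_you}
```

Under this framing, holding the “For You” ordering to a different liability standard than the chronological ordering means the liability turns on the sort key, not on who created the content.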


Anonymous Coward says:

Re:

As courts have repeatedly noted:

Under the First Amendment, there is no actionable lawsuit against TikTok for saying “hey, here are some videos other users liked that we think you might like.” Set aside questions of whether a 10-year-old having a TikTok account and using TikTok unsupervised is within the T&C (I can’t scroll a feed without subscribing), and whether TikTok could have known it was sharing the video with an impressionable minor rather than an adult responsible for their own actions. Because TikTok doesn’t pre-vet video content, TikTok cannot know the content of a video before it is uploaded. Under this regime, the legal liability only comes in once you say “hey, this content is bad.” Logically, therefore, the claim to police the algorithm is an attempt to attach liability for content via claims against the algorithm: an end-run around the protections you claim exist.

For any content seen on social media, unless you clicked a direct link given to you by the creator themselves, whom you met somewhere outside the internet, algorithms played a part in your being shown that video. But unless you can show TikTok knew what content was present, the reasonable conclusion is that TikTok shared something that was popular, or made by someone who is popular, or made by a creator you interact with, or on a topic you interact with, and it incurs no more liability than recommending a band’s music and having your friend go groupie and end up dead of an overdose.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re:

Rather than provide quotes from the ruling where the court actually explains itself to support my alternative read, I’m just going to quote a rando from some random unnamed site and assume the court agrees with my random take that algorithms for search and discovery, otherwise known as “feed(s),” without which any UGC website is functionally useless, are illegal, and therefore social media is illegal. – you.

I’m sure you know the exact algorithm that is super simple, can intuit when the content of videos is dangerous, objectively finds the correct order and priority to give everyone, and in fact isn’t an algorithm, but a divine will that fails to interact with the CPU at all.

The person who told that child to intentionally try to blackout is the person who harmed that child. If you want to just ban social media, just say it. Any time bad content appears, there will be someone who sees it who can reasonably claim the algorithm recommended it. There is no world where the existence of algorithms imposes liability and social media still exists. A message board’s content is still populated by an algorithm. The tools which allow moderators to engage in moderation algorithmically change the feed you see. They decide what you see. Bad stuff still on the board? Now the website can be found liable for not moderating well enough: they once took down a post about the blackout challenge, so now they have to perfectly and proactively ban every instance or lose the protection you claim they should have.

EricDraccip says:

I do think there are no protections surrounding the recommendation of content, but…

if you are suing for the content itself, and somehow not solely the recommendation, then you run afoul of Section 230.

That only brings up the question: what realm of legal liability could stem from solely a recommendation that wouldn’t immediately die from the First Amendment?


Arianity says:

At issue are the recommendations TikTok offers on its “For You Page” (FYP), which is the algorithmically recommended feed that a user sees. According to the plaintiff, the FYP recommended a “Blackout Challenge” video to a ten-year-old child, who mimicked what was shown and died. This is, of course, horrifying. But who is to blame?

There’s a key part you’re leaving out here, which is an important factor. I had to go back into the decision to find it:

She alleges that TikTok: (1) was aware of the Blackout Challenge

so Anderson’s claims seeking to hold TikTok liable for continuing to host the Blackout Challenge videos knowing they were causing the death of children can proceed. So too for her claims seeking to hold TikTok liable for its targeted recommendations of videos it knew were harmful

As written, 230 (and even more so the precedent following Zeran) immunizes it. But the fact that 230 immunizes regardless of how involved they were is one of its biggest problems.

But here, the Third Circuit has flipped that on its head and said that the second you engage in First Amendment-protected publishing activity around content (such as recommending it), you lose Section 230 protections because the content becomes first-party content.

That’s… the same thing that the court ruled in Stratton Oakmont, and which 230 overturned.

That’s not the same thing, as you said in your previous paragraph. Stratton treated things they didn’t moderate as first party: where a local judge found Prodigy liable for content it didn’t moderate, because it did moderate some content.

There’s a big, big difference between liability on everything posted because you moderate some things, vs. liability specifically on things you’ve actively reviewed. Prodigy’s defense specifically relied on the fact that, as a distributor, they didn’t know about the material.

To quote:

That such control is not complete and is enforced both as early as the notes arrive and as late as a complaint is made, does not minimize or eviscerate the simple fact that PRODIGY has uniquely arrogated to itself the role of determining what is proper for its members to post and read on its bulletin boards.

Stephen T. Stone (profile) says:

Re:

the fact that 230 immunizes regardless of how involved they were is one of it’s biggest problems

No, it isn’t. Unless the video was first-party speech, TikTok should be immunized from legal liability for that video under 230. That its “For You” algorithm found the video and displayed it to a user, or that TikTok knew about the “blackout challenge”, are irrelevant facts. At the bare minimum, this incident suggests that TikTok’s moderation practices are lacking in some areas (which is understandable, since moderation doesn’t scale well). But I see no reason to hold TikTok liable for this incident any more than I would see a reason to hold Twitter liable for, say, distribution of CSAM because its “home timeline” algorithm displayed CSAM to anyone who happened to be following Dom Lucre at the time he posted that shit.


Anonymous Coward says:

Re: Re:

So what did you want the parents of the kid that died to do instead? Go after the person who posted the video who likely disappeared into the aether never to be found again? The “Go after the poster” finger-wagging limitations of Section 230’s ability for victims or families of victims to seek recompense or any sort of justice feels less like a flaw and more like a key feature that you, Mike, Wyden, and Cox are glad to have.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Re: Re:

The “Go after the poster” finger-wagging limitations of Section 230’s ability for victims or families of victims to seek recompense or any sort of justice feels less like a flaw

And yet, the Internet exists in the form it does now precisely because interactive web services can be free of legal liability for third-party speech. Hell, Techdirt itself benefits from 230 because without those protections, the comments section you’re in right now wouldn’t exist because Mike wouldn’t even think to open a comments section and risk liability for someone else’s speech.

Sometimes there is no “easy target” to go after in situations like this. Life is unfair in many ways. But to go after the deepest pockets because they’re the easiest target⁠—to file a Steve Dallas lawsuit out of grief and anger and self-righteous fury⁠—and effectively try to take down the rest of the Internet in the process is insane. That the Third Circuit aligned with the parents instead of existing caselaw is equally insane. This ruling shouldn’t hold up on appeal, and if it does, it will doom entire swaths of the Internet.

This comment has been deemed insightful by the community.
Mr. Blond says:

Re: Re: Re:

Even going after the poster, they can’t prove foreseeability and causation. What if TikTok, YouTube, or any other video site hosts skateboarding footage, and that is recommended to a kid with an interest in skateboarding? Would you hold the site (or the original poster) liable if the kid attempts a stunt outside his skill level and breaks his neck? In the context of TV and movies, courts have been loath to find that someone imitating something they saw was foreseeable and thus a proximate cause of the injury.


Arianity says:

Re: Re: Re:2

That depends entirely on how it’s structured. That perverse incentive happens because not knowing about it at all eliminates liability (which is how it worked pre-230), and there’s no standard for e.g. negligence/safe harbour/etc. Yes, if you make it so that not knowing is a guaranteed way to have no liability, that becomes an easy shortcut. There are ways around that, in terms of things like safe harbour laws. (Although those come with other potential trade-offs.)

A good counterexample is criminal charges. Those are already explicitly exempt from 230 protections, and yet companies manage to not be conveniently blind.

This comment has been deemed insightful by the community.
blakestacey (profile) says:

Eric Goldman’s take, for those curious:

This opinion urgently needs en banc review. (Whether that happens or not, this case eventually will be appealed to the Supreme Court, and the express conflict with other circuits makes it a good cert candidate). Unless the 3rd Circuit en banc quickly and decisively rejects this opinion, it will be celebrated by other judges eager to blow up Section 230 (of which there are many). As a result, I expect this opinion provides another hard shove towards the impending and seemingly inevitable end of Section 230–and the Internet as we know it.


Anonymous Coward says:

Re: Re: Re:2

Then you need to stop looking at Techdirt and Eric Goldman who constantly make it sound like the Internet is doomed whenever tech companies are made to face consequences or are able to be held liable for their actions.

Another tech lawyer, Mike Dunford over on Bluesky, talks about how cases such as these don’t mean the net is doomed. You could start there.

This comment has been deemed insightful by the community.
blakestacey (profile) says:

Re: Re: Re:3

This is an area where I disagree with quite a few other tech lawyers.

Much as I’d like to extract some optimism, that’s not a very encouraging way to begin.

I’m unconvinced that there is a strong product liability case on these facts. But I don’t think 230 immunity is appropriate; the theory of liability is based on the provider’s choice of what to present, not the content itself.

This bit seems opaque. Section 230 provides liability protection for moderation decisions, i.e., “the provider’s choice of what to present”.

And, frankly, I don’t see compelling policy reasons to privilege these companies over other industries when it comes to product design liability law.

Is it privileging a company to recognize that speech is not lawn darts? The chain of causation from a design decision to an actual harm is much more fuzzy.

If you’re going to make a product where anyone with any experience of life in this century is going to take one look and go “oh shit, this is going to get people literally killed” (eg Snap speed filter), probably it’s good policy for involved companies to have to budget for litigation counsel.

But in Lemmon v Snap, the product design directly encouraged risky behavior. The design feature here seems to be more like “show a user videos that have already become popular with similar users”, rather than “promote videos because they portray dangerous behavior”. Indeed, the 9th Circuit held that Snap would not be liable for making available other user-generated content, like “Snaps of friends speeding dangerously” that could have “incentivized” risk-taking.

“We’re immune from claims based on user speech” should not excuse decisions about which user speech to affirmatively present to users.

This seems to argue for liability protection only to apply when content is made less visible. Such a standard would run into problems, I think: Any website that delivers content using a search function would be in a legal gray area at best. And it would be a circuit split; for example, the 9th Circuit held in Dyroff v The Ultimate Software Group that “recommending user groups and sending email notifications” are “acting as a publisher of others’ content”. The DC Circuit held in Marshall’s Locksmith v Google that “the choice of presentation does not itself convert [a] search engine into an information content provider”, and indeed “were the display of this kind of information not immunized, nothing would be”. Making content more visible than it would otherwise be is one of “a publisher’s traditional editorial functions”, to use the Zeran wording; maybe it’s the most basic “editorial function” of them all.

And as far as the parade of horribles arguments go – most of earth doesn’t have 230 protections, and most of the major platforms are globally available. So I’m very unconvinced that the platforms not having an immunity on these cases that they don’t have anywhere else is going to kill the web.

Does “most of Earth” have the same litigation culture as the United States, the same standards for who bears the cost of a frivolous lawsuit, etc.? I could very well be wrong, but this seems like a hollow consolation.

Anonymous Coward says:

Re: Re: Re:2

The internet will change. Maybe not due to this, but it always changes. As far as I’m concerned, “the internet” is long past doomed and already in hell. (That would be mostly due to corporate occupation of the nets, and the explosion of advertising – not so much from law.)

You have to get used to it. Eventually whatever sites or services you love will change or die.

If the law goes that badly in the US, you have a lot more to worry about (in the US) than that particular law. And your internet services will all move to Iceland or something.

Anonymous Coward says:

Re:

As much as I love and deeply respect Eric Goldman, he is often very doom and gloom and sometimes wrong about the outcomes of court cases. He very much believed Twitter, Inc. v. Taamneh and Gonzalez v. Google LLC would overturn 230 before the first hearing happened (he did become more optimistic after).

I deeply disagree with him saying that the end of Section 230, and of the Internet as we know it, is inevitable. It’s not impending and inevitable. That kind of wording is not really helpful, seeing as this ruling is likely to be challenged quickly.

Anonymous Coward says:

Funny thing we never seem to hear about: how the algorithms choose a suggestion to list for an individual. How often is the offending suggestion due to a raw, not-logged-in popularity feed*, and how often is it a reflection of what people already view and search for?

*I used to like some services better logged out. But some of them, at some point, became utter monstrosities of idiocy without my history to inform the algorithms.

This comment has been deemed insightful by the community.
That One Guy (profile) says:

'Facts, law and legal precedent be damned we're ruling against that law!'

Yeah, when your ruling conflicts so wildly with previous ones that you feel the need to include a multi-page footnote listing all the other cases that went the other way and that you’re ignoring, that positively reeks of ‘We started this case intending to rule against 230, and damned if we weren’t going to end there.’

This comment has been deemed insightful by the community.
Drew Wilson (user link) says:

Speech is the New Piracy

One of the things I’ve personally been arguing in recent years is that speech has become the new piracy. This is especially true with social media in general. The government wants to crack down on it, but, much like p2p file-sharing, it’ll ultimately become a game of whack-a-mole. Look at the history of file-sharing and look at what we are seeing today with social media, and the parallels become disturbingly similar.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re:

Kid sees ad on TV: “Our new vodka is the best ever!!”

Kid sees such a bottle in the parents’ liquor cabinet, proceeds to drink the whole bottle, and then dies of alcohol poisoning.

Is the broadcaster guilty?

Is the ad-company guilty?

Is the seller of the vodka guilty?

Who is actually guilty?

There is only one logical choice, but I’m sure someone in their outrage will sue a third party. But hey, you are free to think feelz is a good basis for how cases should be decided instead of established law, but you won’t like where it will take you in the end.

Anonymous Coward says:

Re: Re:

We have to water the tree of liberty with the blood of dead kids. It’s the only way to keep our freedoms. Tech companies have to be immune from practically all consequence even when people get killed because they put profit above principle and school shootings are the necessary price we have to pay for our right to have assault rifles.

This comment has been deemed insightful by the community.
blakestacey (profile) says:

Daphne Keller’s take, for those interested:

I can’t express how unutterably tired I feel after reading this absurd 3rd Cir ruling. It denies TikTok 230 immunity for a claim that is (very thinly) framed as liability based on algorithmic promotion, instead of liability based on user content. This issue was fully briefed to the Supreme Court, with approximately infinity amicus briefs covering every possible angle, a little over a year ago. The Court decided not to decide.

Now the 3rd Circuit is engaging in the absurd pretense that the Court actually decided this issue in Moody v. NetChoice, because it said algorithmic ranking that advances the platform’s content moderation goals is the platform’s 1st Am protected speech. Check out fn 13.

There is NO CONFLICT between moderation and ranking being (1) the platform’s speech and also (2) immunized by 230. The whole point of 230 was to encourage and immunize moderation. As the law’s drafters pointed out, that includes algorithmic ranking.

If Section 230 did not apply to platforms’ constitutionally protected exercise of editorial freedoms, then it would apply to basically nothing. It would be a nullity.

Seriously how can we still be having this conversation. How can the 3rd Circuit believe its own nonsense on this. How dare they create a circuit split and force an exhausted nation to go through this AGAIN.

By “exhausted nation” I mean me.

Please let this go away en banc. Please.


Anonymous Coward says:

Re:

On that thread, some fool called Gilad Edelman asked:

Would you think differently about the claim being made here if instead of TikTok, the case was about someone who walked over to a kid and said “Hey look at this cool video of a kid trying to choke themselves out”? In that admittedly outlandish hypothetical, it seems intuitive that the purported liability is for the person’s choice to show the kid the video — not for the content of the video itself.

Since I can’t post my response there, I’ll post it here instead:

In that admittedly outlandish hypothetical, Section 230 wouldn’t even apply since it protects against liability for third party content online.

That Anonymous Coward (profile) says:

Dear Trump judges,

You might want to rethink this, because what you are saying is that Trump’s rants can be held against platforms, and then you are gonna bitch that he is being silenced because no one wants to risk allowing him on their platform.

In closing you all hate America & freedom, I hope your hemorrhoids burn extra & your side chick gets knocked up.


That One Guy (profile) says:

Re: Re: Re:2

They may have phrased it rudely, but they’re not wrong: if reading articles like this is going to stress you out so much that you’re repeatedly going on long ‘are we all screwed’ comment chains, you’d likely benefit from seeing a professional to help you deal with that stress and develop ways to better handle it.

Anonymous Coward says:

Re: Re: Re:3

Hmm, you may have a point then, I guess.

I’ve been advised to simply stop paying attention to it. I can see the logic in it. I suppose this is neither the first nor the last time there’s been talk of the internet as we know it being in danger.

I find it difficult not to frequently try and keep myself informed on everything, as if waiting for some kind of “all clear” situation where there’s nothing in the pipeline to worry about, but that’s not exactly realistic nor helpful for myself, I’m starting to realize.

For lack of a better term, maybe I do need to go cold turkey on tech news.

Anonymous Coward says:

In Winter v. GP Putnam’s Sons, it was found that the publisher of an encyclopedia of mushrooms was not liable for “mushroom enthusiasts who became severely ill from picking and eating mushrooms after relying on information” in the book. The information turned out to be wrong, but the court held that the publisher could not be held liable for those harms because it had no duty to carefully investigate each entry.

If the mushroom encyclopedia was not crowdsourced the way Wikipedia and some other online encyclopedias are, then this ruling is incorrect.

Anonymous Coward says:

Re:

Agreed. This ruling misses the mark. If a mushroom encyclopedia misleads someone into picking and eating inedible mushrooms, it feels to me like that publisher should definitely be liable. A misfire of the court, giving companies more power and weight than the people those companies harm.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re:

The publisher is not the author. The publisher might publish countless books from countless authors on countless subjects and is unlikely to have such a great deal of knowledge as to the accuracy of everything said on every page of every book.

Anonymous Coward says:

Re: Re: Re:

No, the publisher is not the author, but if the publisher pays the author and claims copyright of the work in return, then the publisher is legally the author for copyright purposes, and should therefore be the author for Section 230 purposes as well. The problem with all you brunchlords is that you want to have your cake and eat it too.
