Yes, Section 230 Should Apply Equally To Algorithmic Recommendations

from the it-won't-do-what-you-think-if-you-remove-it dept

If you’ve spent any time in my Section 230 myth-debunking guide, you know that most bad takes on the law come from people who haven’t read it. But lately I keep running into a different kind of bad take—one that often comes from people who have read the law, understand the basics passably well, and still say: “Sure, keep 230 as is, but carve out algorithmically recommended content.”

Unlike the usual nonsense, this one is often (though not always) offered in good faith. That makes it worth engaging with seriously.

It’s still wrong.

Let’s start with the basics: as we’ve described at great length, the real benefit of Section 230 is its procedural protections, which get vexatious cases tossed out at the earliest (i.e., cheapest) stage. That makes it possible for sites that host third-party content to do so without getting sued out of existence every time someone complains about someone else’s content on the site. This distinction gets lost in almost every 230 debate, but it matters. If the lawsuits that removing 230 protections would enable would still eventually lose on First Amendment grounds, then the only thing removing those protections accomplishes is making lawsuits impossibly expensive for individuals and smaller providers, while doing no real damage to large companies, who can survive those lawsuits easily.

And that takes us to the key point: removing Section 230 for algorithmic recommendations would only lead to vexatious lawsuits that will fail.

But what about [specific bad thing]?

Before diving into the legal analysis, let’s engage with the strongest version of this argument. Proponents of carving out algorithmic recommendations typically aren’t imagining ordinary defamation suits. They’re worried about something more specific: cases where an algorithm itself arguably causes harm through its recommendation patterns—radicalization pipelines, engagement-driven amplification of dangerous content, recommendation systems that push vulnerable users toward self-harm.

The theory goes something like this: maybe the underlying content is protected speech, but the act of recommending it—especially when the algorithm was designed to maximize engagement and the company knew this could cause harm—should create liability, usually as some sort of “products liability” type complaint.

It’s a more sophisticated argument than “platforms are publishers.” But it still fails, for reasons I’ll explain below. The short version: a recommendation is an opinion, opinions are protected speech, and the First Amendment doesn’t carve out “opinions expressed via algorithm” as a special category.

A short history of algorithmic feeds

To understand why removing 230 from algorithmic recommendations would be such a mistake, it helps to remember the apparently forgotten history of how we got here. In the pre-social media 2000s, “information overload” was the panic of the moment. Much of the discussion centered on the “new” technology of RSS feeds, and there were plenty of articles decrying too much information flooding into our feed readers. People weren’t worried about algorithms—they were desperate for them. Articles breathlessly anticipated magical new filtering systems that might finally surface what you actually wanted to see.

The most prominent example was Netflix, back when it was still shipping DVDs. Because there were so many movies you could rent, Netflix built one of the first truly useful recommendation algorithms—one that would take your rental history and suggest things you might like. The entire internet now looks like that, but in the mid-2000s, this was revolutionary.

Netflix’s approach was so novel that they famously offered $1 million to anyone who could improve their algorithm by 10%. We followed that contest for years as it twisted and turned until a winner was finally announced in 2009. Incredibly, Netflix never actually implemented the winning algorithm—but the broader lesson was clear: recommendation algorithms were valuable, and people wanted them.
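To make the “recommendations are just filtered opinions” point concrete, here’s a minimal sketch of the user-similarity idea behind collaborative filtering, the general approach Netflix popularized. The data, names, and scoring are invented for illustration; this is not Netflix’s actual algorithm, just the basic “people with similar histories tend to like similar things” intuition.

```python
import math

# Toy ratings: user -> {movie: rating}. Purely illustrative data.
ratings = {
    "ann": {"Alien": 5, "Heat": 4, "Up": 1},
    "bob": {"Alien": 4, "Heat": 5, "Brazil": 4},
    "cat": {"Up": 5, "Coco": 4},
}

def cosine(a, b):
    """Cosine similarity between two sparse rating dicts."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[m] * b[m] for m in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

def recommend(user, k=1):
    """Suggest unseen movies, weighted by how similar each other user is."""
    seen = ratings[user]
    scores = {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        sim = cosine(seen, theirs)
        for movie, rating in theirs.items():
            if movie not in seen:
                scores[movie] = scores.get(movie, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("ann"))  # → ['Brazil'] (bob's tastes overlap ann's most)
```

A couple dozen lines gets you a crude “you might also like” engine; the Netflix Prize was about squeezing out the last few percent of accuracy at massive scale.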

As social media grew, the “information overload” panic of the blog+RSS era faded, precisely because platforms added recommendation algorithms to surface content users were most likely to enjoy. The algorithms weren’t imposed on users against their will—they were the answer to users’ prayers.

Public opinion only seemed to shift on “algorithms” after Donald Trump was elected in 2016. Many people wanted something to blame, and “social media algorithms” was a convenient excuse.

Algorithmic feeds: good or bad?

Many people claim they just want a chronological feed, but studies consistently show the vast majority of people prefer algorithmic recommendations, because they surface more of what users actually want, compared to chronological feeds.

That said, it’s not as simple as “algorithms good.” There’s evidence that algorithms optimized purely for engagement can push emotionally charged political content that users don’t actually want (something Elon Musk might take notice of). But there’s also evidence that chronological feeds expose users to more untrustworthy content, because algorithms often filter out garbage.
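The “depends what it’s optimized for” point is easy to see in code. Here’s a toy sketch, with invented posts and field names (no platform’s real schema), contrasting a chronological feed with a naive engagement-ranked one:

```python
# Hypothetical posts -- fields are illustrative, not any platform's schema.
posts = [
    {"text": "calm hobby post", "age_hrs": 1, "likes": 3,  "replies": 1},
    {"text": "outrage bait",    "age_hrs": 5, "likes": 40, "replies": 90},
    {"text": "friend's update", "age_hrs": 2, "likes": 8,  "replies": 2},
]

def chronological(feed):
    """Newest first: no judgment about the content at all."""
    return sorted(feed, key=lambda p: p["age_hrs"])

def engagement_ranked(feed):
    """Naive engagement score: replies (arguments included) weigh more
    than likes, so heated content tends to float to the top."""
    return sorted(feed, key=lambda p: p["likes"] + 3 * p["replies"],
                  reverse=True)

print(chronological(posts)[0]["text"])      # → calm hobby post
print(engagement_ranked(posts)[0]["text"])  # → outrage bait
```

Same three posts, two different “opinions” about what you should see first, which is exactly why what a ranking is optimized for matters far more than the mere presence of an algorithm.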

So, algorithms can be good or bad depending on what they’re optimized for and who controls them. That’s the real question: will any given regulatory approach give more power to users, to companies, or to the government?

Keep that frame in mind. Because removing 230 protections for algorithmic recommendations shifts power away from users and toward incumbents and litigants.

The First Amendment still exists

As mentioned up top, the real role of Section 230 is procedural: it gets vexatious lawsuits tossed well before, and at far lower cost than, they would get tossed anyway under the First Amendment. With Section 230, you can get a case dismissed for somewhere in the range of $50k to $100k (maybe up to $250k with appeals and such). If you have to rely on the First Amendment, the cost runs into the millions of dollars (probably $5 to $10 million).

And the crux of it is this: any online service sued over an algorithmic recommendation, even for something horrible, would almost certainly win on First Amendment grounds.

Because here’s the key point: a recommendation feed is a website’s opinion about what it thinks you want to see. And an opinion is protected speech, even if you think it’s a bad or dangerous opinion. That’s one thing US courts have been quite clear on.

Saying that an internet service can be held liable for giving its opinion on “what we think you’d like to see” would be earth-shatteringly problematic. As partly discussed above, the modern internet relies heavily on algorithms recommending stuff, i.e., giving opinions. Every search result is just that: an opinion.

This is why the “algorithms are different” argument fails. Yes, there’s a computer involved. Yes, the recommendation emerges from machine learning rather than a human editor’s conscious decision. But the output is still an expression of judgment: “Based on what we know, we think you’ll want to see this.” That’s an opinion. The First Amendment doesn’t distinguish between opinions formed by editorial meetings and opinions formed by trained models.

In the earlier internet era, companies sued Google because they didn’t like how their own sites appeared (or didn’t appear) in Google search results. The e-ventures Worldwide v. Google case is instructive here. Google determined that e-ventures’ “SEO” techniques were spammy and de-indexed all its sites. e-ventures sued. Google (rightly) raised a 230 defense, which (surprisingly!) a court rejected.

But the case went on, and after a lot more money was spent on lawyers, Google did prevail on First Amendment grounds.

This is exactly what we’re discussing here. Google search ranking is an algorithmic recommendation engine, and in this one case a court (initially) rejected a 230 defense, causing everyone to spend more money… to get to the same basic result in the long run. The First Amendment protects a website using algorithms to express an opinion over what it thinks you’ll want… or not want.

Who has agency?

This brings us back to the steelman argument I mentioned above: what about cases where an algorithm recommends something genuinely dangerous?

Our legal system has a clear answer, and it’s grounded in agency. A recommendation feed is not hypnotic. If an algorithm surfaces content suggesting you do something illegal or dangerous, you still have to make the choice to do the illegal or dangerous thing. The algorithm doesn’t control you. You have agency.

But there’s a stronger legal foundation here too. Courts have consistently found that recommending something dangerous is still protected by the First Amendment, particularly when the recommender lacks specific knowledge that what they’re recommending is harmful.

The Winter v. G.P. Putnam’s Sons case is instructive here. The publisher of a mushroom encyclopedia included recommendations to eat mushrooms that turned out to be poisonous—very dangerous! But the court found the publisher wasn’t liable because it didn’t have specific knowledge of the dangerous recommendation. And crucially, the court noted that the “gentle tug of the First Amendment” would block any “duty of care” that would require publishers to verify the safety of everything they publish:

The plaintiffs urge this court that the publisher had a duty to investigate the accuracy of The Encyclopedia of Mushrooms’ contents. We conclude that the defendants have no duty to investigate the accuracy of the contents of the books it publishes. A publisher may of course assume such a burden, but there is nothing inherent in the role of publisher or the surrounding legal doctrines to suggest that such a duty should be imposed on publishers. Indeed the cases uniformly refuse to impose such a duty. Were we tempted to create this duty, the gentle tug of the First Amendment and the values embodied therein would remind us of the social costs.

Now, I should acknowledge that Winter was a products liability case involving a physical book, not a defamation or tortious-speech case involving an algorithm. But almost all of the current cases challenging social media are self-styled as product liability cases in an attempt (usually unsuccessful) to avoid the First Amendment, and that’s all the algorithm cases would be as well.

The underlying principle remains the same whether you call it a products liability case or one officially about speech: the First Amendment bars requirements that publishing intermediaries must “investigate” whether everything they distribute is accurate or safe. The reason is obvious—such liability would prevent all sorts of things from getting published in the first place, putting a massive damper on speech.

Apply that principle to algorithmic recommendations, and the answer is clear. If a book publisher can’t be required to verify that every mushroom recommendation is safe, a platform can’t be required to verify that every algorithmically surfaced piece of content won’t lead someone to harm.

The end result?

So what would it mean if we somehow “removed 230 from algorithmic recommendations”?

Practically, it means that if companies have to rely on the First Amendment to win these cases, only the biggest companies can afford to do so. The Googles and Metas of the world can absorb $5-10 million in litigation costs. For smaller companies, those costs are existential. They’d either exit the market entirely or become hyper-aggressive about blocking content at the first hint of legal threat—not because the content is harmful, but because they can’t afford to find out in court.

The end result would be that the First Amendment still protects algorithmic recommendations—but only for the very biggest companies that can afford to defend that speech in court.

That means less competition. Fewer services that can recommend content at all. More consolidation of power in the hands of incumbents who already dominate the market.

Remember the frame from earlier: does this give more power to users, companies, or the government? Removing 230 from algorithmic recommendations doesn’t empower users. It doesn’t make platforms more “responsible.” It just makes it vastly harder for anyone other than the giant platforms to exist while also giving more power to governments, like the one currently run by Donald Trump, to define what things an algorithm can, and cannot, recommend.

Rather than diminishing the power of billionaires and incumbents, this would massively entrench it. The people pushing for this carve-out often think they’re fighting Big Tech. In reality, they’re fighting to build Big Tech a new moat.


Comments on “Yes, Section 230 Should Apply Equally To Algorithmic Recommendations”

37 Comments
Anonymous Coward says:

People, alas, do not understand what words mean

The system which allows me to tailor my Youtube recommendations based on my viewing preferences to keep out all the slop, pointless “top five” lists, text to speech wikipedia articles, and gym bros hawking “supplements” is an algorithm.

I honestly think the protest we need from all the big platforms is a single day where they disable ALL their curation algorithms, and see if people are still clamouring for bans on them.

Arianity (profile) says:

The theory goes something like this: maybe the underlying content is protected speech, but the act of recommending it—especially when the algorithm was designed to maximize engagement and the company knew this could cause harm—should create liability, usually as some sort of “products liability” type complaint.

There’s a trickier argument you’re kind of missing. Simply: are algorithms first party speech, or are they editorial in nature? Because 230 doesn’t apply to first party speech. You actually kind of mention it here:

Because here’s the key point: a recommendation feed is a website’s opinion of what they think you want to see. And an opinion is protected speech. Even if you think it’s a bad or dangerous opinion.

First party speech still gets 1A protections, but not necessarily 230, as written. And importantly, 1A protections are limited (notably, for things like defamation) whereas they aren’t for 230. Most bad things are both 1A and 230 protected, but there’s a sliver that aren’t.

To be clear, this isn’t my argument, I’m just highlighting it as relevant. My stance is somewhere in the middle- it is probably technically first party speech, but it would have the same chilling effects, so it probably needs to be covered. 230 should probably be expanded to more explicitly cover it, though, instead of relying on precedent.

(There is also a separate argument about whether it should be 1A protected. A lot of people who don’t like algorithms of course don’t think they should be. You’re starting from the premise that it is 1A protected, which it currently is, but that’s not necessarily immutable. This also assumes 1A lawsuits have to stay expensive)

The Winter v. GP Putnam’s Sons case is instructive here.

You’re opening up a can of worms with this example. Putnam (and paper distributors in general) still have distributor liability, and would face liability if they had specific knowledge. Something 230 would not allow. At distributor liability, they can’t be forced to investigate everything, but liability can attach if they have scienter, i.e., if someone gives them proper notice. In terms of meta concerns, putting it at distributor liability would open up expensive lawsuits again (although the argument is more interesting than for user content, because a company can’t just not look at its own algorithms in the same way as UGC).

This would matter a lot in e.g. the TikTok Blackout Challenge case, where the plaintiffs argue that TikTok had direct knowledge. (You actually brought this up when writing about that exact case.)

Stephen T. Stone (profile) says:

Re:

Putnam (and paper distributors in general) still have distributor liability, and would face liability if they had specific knowledge.

Paper distributors generally know what content they’re publishing and distributing. They should have liability if they knowingly publish and distribute defamatory/illegal content. An algorithm is, in broadly applicable terms, built in a way that doesn’t give exact knowledge of the exact content being published by a user and distributed to other users. The algorithm may be built to recommend, but it isn’t fine-tined for an individual user’s tastes in the same way a service might perform broader tuning of the algorithm to manage broader swaths of content. And some content could still sift through the cracks anyway because no algorithm is perfect. I’m wary of attaching first-party liability to third-party speech recommended by an algorithm because those imperfections could be purely accidental instead of intentional.

Arianity (profile) says:

Re: Re:

Paper distributors generally know what content they’re publishing and distributing. They should have liability if they knowingly publish and distribute defamatory/illegal content.

Eh, not necessarily; as in Putnam, they didn’t know. That’s why they were protected in the first place- the reason distributor liability is lower than strict liability is because they won’t know everything they’re publishing/distributing, and aren’t expected to. The scale isn’t as bad as online, but it is large enough that they don’t know a lot of it. A bookstore isn’t going to have read every book they’re selling.

An algorithm is, in broadly applicable terms, built in a way that doesn’t give exact knowledge of the exact content being published by a user and distributed to other users. The algorithm may be built to recommend, but it isn’t fine-tined for an individual user’s tastes in the same way a service might perform broader tuning of the algorithm to manage broader swaths of content.

This is broadly true, with the caveat that you could tune it that way, if you wanted to. You could make your algorithm based on stuff e.g. Elon Musk personally likes and wants you to see. And we’ve seen some of that, with e.g. Twitter/Elon Musk (or more recently, concerns over TikTok’s new owners). Most don’t, because that’s not the goal, but it is possible.

And there’s also a grey area where you don’t tune it based on exact posts, but just broader themes. And hey, if something like defamation is more likely, well you still have plausible deniability. There’s an intent without exact knowledge.

I’m wary of attaching first-party liability to third-party speech recommended by an algorithm because those imperfections could be purely accidental instead of intentional.

Same, more or less. I’m kind of in the weird position of I think it’s technically first-party speech, but you shouldn’t attach first-party liability because the consequences would be unworkable. And for precisely that reason- even if it’s technically “first party”, it may not be intentional. It’s a weird spot because we use first party/intentionality as interchangeable, but algorithms are in a place where they can be arguably the former but not the latter. And really what you care about is the latter. It doesn’t make sense to hold them liable for something unintended and unavoidable.

Unfortunately, that does come at a cost. Cases where it is intentional will end up protected as a byproduct. But you can’t really fix that without opening the doors to expensive vexatious lawsuits, so I think we’re kind of stuck with it.

Stephen T. Stone (profile) says:

Re: Re: Re:

you could tune it that way, if you wanted to

When I talk fine-tuning (and I apologize for the misspelling of that term in my prior post), I’m referring to how a user’s decisions narrow an algorithm’s recommendations to fit that user’s tastes. It is possible to do that tuning for an individual user if a service really wanted to do that. But that kind of tuning doesn’t scale to millions of users because good lord, who would even have the time. That’s why…

there’s also a grey area where you don’t tune it based on exact posts, but just broader themes

…I mentioned that exact point in regards to tuning algorithms.

Unfortunately, that does come at a cost. Cases where it is intentional will end up protected as a byproduct. But you can’t really fix that without opening the doors to expensive vexatious lawsuits, so I think we’re kind of stuck with it.

That’s a lot like my position on speech and the First Amendment. I’d love to be able to say “Nazis shouldn’t have the right to spread their bullshit without being arrested”, but I can’t take that position without opening myself up to a similar attack on my right to speak that I couldn’t logically rebut.

Anonymous Coward says:

Re: Re:

How is that different than a human looking through things?

If a newspaper editor missed the words, well, that’s just human error; they can’t catch everything, so they shouldn’t be liable for it.

Code isn’t some fucking fae ritual that no one alive knows. It was written by humans, with a goal and purpose; it was tested, approved, and pushed out. In places like YouTube it sure as hell requires the stamp of approval from the legal team and others.

Stephen T. Stone (profile) says:

Re: Re: Re:

How is that different than a human looking through things?

Imagine creating an algorithm for a single user that caters specifically to their tastes. Imagine tweaking that one specific algorithm every day to further refine what content that user sees.

Now imagine one person trying to do that another three million times. (And that’s just for the Hellsing Ultimate Abridged reference.)

Algorithms are designed to handle broad strokes at first, then refine themselves based on user input. You literally can’t assign one person to refine one algorithm for one user every day and scale that up to millions of users without wasting a massive amount of resources. Services like TikTok tend to alter algorithms at “broad strokes” levels because doing so is more cost-effective. Therein lies the rub with trying to apply first-party liability to algorithms: How do you do that if an individual’s algorithm wasn’t directly tuned to a fine level by an employee of the service?

Anonymous Coward says:

Re: Re: Re:2

I fail to see how even if that were true, how it matters. Should a newspaper be allowed to print out anything tech related simply because it just matches the word tech and nothing more?

I think you are extremely underestimating the amount of controls, filters, and oversight that goes into these systems. YouTube doesn’t filter all of that just for the law; it does it to keep its advertisers happy as well. The law might be fine with Google recommending a nazi video, but blabla company might not like their ad appearing next to it or on it.

“Algorithms are designed to handle broad strokes at first…”

And we aren’t at first anymore. Google employs 190,000 people and had a profit of 34 billion. They likely have tens if not hundreds of people that are ultimately responsible for just YouTube recommendations by the time things go out the door.

And at this point, via the current laws around the world, no small business with 5 people is going to be running an international social media site with millions of users.

Stephen T. Stone (profile) says:

Re: Re: Re:3

Should a newspaper be allowed to print out anything tech related simply because it just matches the word tech and nothing more?

That’s not really what happens with user-tuned algorithms, at least to my understanding. Yes, asking to see more tech-related content will likely put such content in front of a user⁠—but it’s the kind of tech content where algorithms come in. One user might want to see content about retro tech, whereas another user might want to see content about current tech. A broad-strokes algorithm designed by the service could put both types of content in front of both users. The users themselves would tune that algorithm towards their own refined tastes and effectively create two distinct “sub-algorithms” without further input from the service.

I think you are extremely under estimating the amount of controls, filters, and oversight that goes into these systems.

Possibly. I’ll be the first to admit I’m a bit of a dumbass, and I’m well aware of my ignorance on a lot of things.

I’m also someone who is incredibly permissive of legally protected speech (e.g., I’ve literally said “Nazis should have the right to say their bullshit” on this site multiple times). Algorithms sit in a weird space where they’re not exclusively first- or third-party speech. How we resolve when and how (or even if) they become one or the other is tricky because of that Schrödinger’s Cat–like situation. I’d prefer to see that issue resolved in a way that lets services tune those algorithms at a broad/macro level without risking legal liability for what content might show up through an algorithm that has been fine-tuned by an individual user.

They likely have tens if not hundreds of people that are ultimately responsible for just youtube recommendations by the time things go out the door.

How many users does YouTube have? What I mean by that question is that I don’t doubt how YouTube can shape its algorithms, but on a user-to-user level, the fine-tuning is done by the user instead of the service. Even if YouTube employs hundreds of people to shape its algorithms, that can still only be done at a level where the changes are broad enough to keep such work cost-effective.

TKnarr (profile) says:

This is where it’d be nice to have a Federal anti-SLAPP law in place. Then you could simply separate claims about the content the platform selects for recommendation from claims about the user-generated content itself. The content is attributed to the user, Section 230 applies to trying to hold the platform liable for it. The recommendation is attributed to the platform, and the anti-SLAPP law would apply to any lawsuit over that. It’d be on the plaintiff to show that the recommendation falls into the handful of exceptions to First Amendment protection, and the judge can rule on that without involving the platform at all. That takes all the wind out of the sails of the people trying to get rid of Section 230.

Anonymous Coward says:

Re:

That’s about where my opinion lies.

Section 230 shouldn’t protect algorithms because, per e-ventures Worldwide v. Google which this very article cites, they’re the host’s *own* free speech, not the poster’s, and 230 is to prevent hosts from being held liable for speech that is not their own.

But yes, there is always the possibility of being sued frivolously for your speech, which is what anti-SLAPP laws are supposed to prevent. They seem a better solution for the problem of “removing Section 230 for algorithmic recommendations would only lead to vexatious lawsuits that will fail.”

So: algorithmic recommendations should be protected like any other opinion should be. But Section 230 (which isn’t meant to protect you from the consequences of hosting your opinion but someone else’s) is the wrong tool for the job, and using it for this job weakens it and opens it up to attack by people who don’t understand its intended purpose.

A federal anti-SLAPP law, on the other hand, is intended to be used for exactly what Mike is calling for here: to protect people from being sued for expressing their protected opinions. It is the right tool for the job.

Anonymous Coward says:

Can we find a way to separate openly malicious behavior from behavior that tries to serve the users?

This is an excellent and valuable analysis. I think where it misses is how we manage the edges, and where we define where that occurs.

An algorithm meant to surface things it thinks people want, in order to provide the best possible service, is different in impact and effect from one with less ethical goals. The challenge is: how do you find the line?

Imagine you’re a company that has staff writers producing articles as well as an open forum where anyone can post. Section 230 protection only applies to the open forum posts – not to the ones your staff write. (This is also an issue for DMCA for example.)

If 10 people on your site write articles about the joys of gathering mushrooms and how to identify the poison ones, you’re protected by Section 230 if some of them are actually wrong or dangerous.

If your staff writer collects those 10 articles and posts them as staff as “How to Mushroom” you’re liable.

So our question is, what if your algorithm surfaced those 10 articles and showed them to every user? It’s one thing if the algorithm is “show what’s popular” in a way that’s relatively agnostic about the content but if the algorithm for example highlights articles with comments that say “OMG this is wrong and could kill people” is that still Section 230 protected? And how would you litigate this or even cause the company to stop, if harms like this were being done?

Code is written by staff and it is part of the product, which could have liability.

Let me make this even thornier – what if an LLM writes that “How To Mushroom” article from the user content as a faux staff writer? Can they say, “oh, an algorithm did that, it wasn’t us” and be free and clear? Do they have any obligation to stop such behavior?

Anonymous Coward says:

Re:

So our question is, what if your algorithm surfaced those 10 articles and showed them to every user? It’s one thing if the algorithm is “show what’s popular” in a way that’s relatively agnostic about the content but if the algorithm for example highlights articles with comments that say “OMG this is wrong and could kill people” is that still Section 230 protected?

What if the only reason the algorithm keeps showing the “OMG this could kill people” articles is because they get the most comments? Is that suddenly okay where it wouldn’t be otherwise? Because now you’re back in the mid-2010s with ragebait being everything…

Lou Covey (user link) says:

230 changes

I agree with your position in this perspective, but I still think it needs adjusting.
When X creates a bot army (and it has) using AI to generate content, it is no longer acting in the realm of a telecommunications provider. It becomes a publisher and, in that respect, no longer qualifies for 230 protections.
Legacy media publishers can get sued if they don’t use the term “alleged” for a criminal suspect before conviction. A bot manufactured by a social media platform cannot. Under 230, even the platform, acting as a publisher, is exempt from lawsuits for promoting that accusation, even when its own technology produced it. That should be changed.

Strawb (profile) says:

Re:

Under 230, even the platform, acting as a publisher is exempt from lawsuits for promoting that accusation, even when it’s technology produced it. That should be changed.

Anthony v. Yahoo! Inc. already established precedent for something like this in 2006. Yahoo generated fake dating profiles to entice users to re-subscribe to the dating service. A user sued; Yahoo tried to have the suit dismissed on Section 230 grounds and was denied, because:

CDA § 230 does not apply to content that an interactive service provider “developed or created entirely by itself.”

So Section 230 doesn’t need to be changed to facilitate that scenario.

Anonymous Coward says:

Re: Re:

I only know of one. It was a case involving Yelp, where an injunction was granted against the poster, but Yelp appealed it being used against its site. The court found the injunction couldn’t be used against Yelp and that Yelp had Section 230 immunity. I may be confusing state and federal law; I don’t usually read entire opinions because it would take too much time. Hassell v. Bird (2018) is one example, but apparently there have been others.

Anonymous Coward says:

I would agree only on one basis.

That the user fully controlled it. That it is fully known. That the reason for the recommendation is clear. And that no for-profit, ad-generated content is allowed.

Because the real world use of these systems mixes user generated content, user picked content, with the forced and chosen for profit content of the platform. Ask any youtuber and they will tell you that the content that gets recommended is strongly filtered.

And the real world effect is that Google can profit from pushing content that kills someone.

At the end of the day, this content is content that has been chosen to be pushed to users by the platform.

By this kind of argument, if a newspaper used a random number generator and closed their eyes real hard to pick a user submitted story, then they should be free and clear.

David Karger (profile) says:

Boundary between recommendations and ads

A lot of the stuff in my algorithmic feed is “recommended” there because the publisher paid the platform to recommend it to me. It’s an ad. And in general commercial speech is more heavily regulated than political speech (see previous comments r.e. Winter v. Putnam). Does section 230 protect this as well? I don’t think it should.

Rocky (profile) says:

Re:

Does section 230 protect this as well? I don’t think it should.

Yes and no, depends on how they are doing it.

Generally speaking, if a company chooses what ads to put in your feed, they are the content provider. If, on the other hand, they use for example Google Ads, where some algorithm deep in Google’s datacenters decides what ad to place, it isn’t as straightforward anymore, because the company then has no prior knowledge of what the ad contains and isn’t really the content provider for it.

This comment has been flagged by the community.

Stephen T. Stone (profile) says:

Re:

What OldTwitter would do (shadowbanning; yes, they committed perjury, btw) was really bad.

Even if I were to agree with you on all of that: How does that violate 230 when 230 is designed to let services like Twitter decide how to moderate what would otherwise be legally permissible speech? And please remember that Twitter, both then and now, is a privately owned service that has no obligation⁠—legal or otherwise⁠—to host a given user (and their speech) or display a user’s speech to anyone else.
