No, Section 230 Doesn’t ‘Circumvent’ The First Amendment – But This Harvard Article Circumvents Reality

from the that's-not-how-any-of-this-works dept

When it comes to Section 230, we’ve seen a parade of embarrassingly wrong takes over the years, all sharing one consistent theme: the authors confidently opine on the law despite clearly not understanding it. Each time I think we’ve hit bottom with the most ridiculously wrong take, along comes another challenger.

This week’s is a doozy.

I don’t want to come off as harsh in critiquing these articles, but it’s exasperating. It’s entirely possible for the people writing these articles to educate themselves. And in this case in particular, at least two of the authors have published something similar before, have been called out for their factual errors, and have chosen to double down rather than correct course. So if the tone of this piece sounds angry, it’s exasperation that the authors are now deliberately choosing to misrepresent reality.

I’ve written twice about Professor Allison Stanger, both times regarding her extraordinarily confused misunderstandings about Section 230 and how it intersects with the First Amendment. It appears that she has not taken the opportunity in the interim to learn literally anything about the law. Instead, she is now (1) using an association with Harvard’s prestigious Kennedy School to push utter batshit nonsense disconnected from reality, and (2) sullying others’ reputations in the process.

I first wrote about her when she teamed up with infamous (and frequently wrong) curmudgeon Jaron Lanier to write a facts-optional screed against Section 230 in Wired magazine that got so much wrong it was embarrassing. The key claim from Stanger/Lanier was that Section 230 somehow gave the internet an ad-based business model, which is not even remotely close to true. Among other things, that article confused Section 230 with the DMCA (two wholly different laws) and then tossed in a bunch of word salad about “data dignity,” a meaningless phrase.

Even weirder, the beginning of that article seems to complain that not enough content is moderated (too much bad content!), but by the end they’re complaining that too much good content is moderated. Somehow, the article suggests, if we got rid of Section 230, exactly the right kinds of content would be moderated, and somehow advertising would no longer be bad and harassment would disappear. Then they say websites should only moderate based on the First Amendment which would forbid sites from moderating a bunch of the things the article said needed moderating. I dunno, man. It made no sense.

Somehow, Stanger leveraged that absolute nonsense into a chance to appear before a congressional committee, where she falsely claimed that decentralized social media apps were the same thing as decentralized autonomous organizations. They’re wholly different things. She also told the committee that Wikipedia wouldn’t be sued without Section 230 because “their editing is done by humans who have first amendment rights.”

Which is an incredibly confusing thing to say. Humans with First Amendment rights still get sued all the time.

Anyway, Stanger and Lanier are back with a new article, this time published at the Harvard Kennedy School’s Ash Center for Democratic Governance and Innovation. Once again, they are completely and totally getting Section 230 twisted around to make it unrecognizable from reality.

Unfortunately, this time, they’ve dragged along Audrey Tang as a co-author. I’ve met Tang and I have tremendous respect for her. As digital minister of Taiwan, she did some amazing things to use the internet for good in the world of civic tech. She’s also spoken about the importance of the internet to free speech in Taiwan, and the importance of the open World Wide Web to democracy in Taiwan. She’s very thoughtful about the intersection of technology, speech, and law.

But she is not an expert on Section 230 or the First Amendment, and it shows in this piece.

At least this article starts with a recognition of the First Amendment, but it even gets the very basics of that wrong:

The First Amendment is often misunderstood as permitting unlimited speech. In reality, it has never protected fraud, libel, or incitement to violence. Yet Section 230, in its current form, effectively shields these forms of harmful speech when amplified by algorithmic systems. It serves as both an unprecedented corporate liability shield and a license for technology companies to amplify certain voices while suppressing others. To truly uphold First Amendment freedoms, we must hold accountable the algorithms that drive harmful virality while protecting human expression.

Yes, some people misunderstand the First Amendment that way, but no, Section 230 does not shield “those forms of harmful speech.” Also, the “incitement to violence” standard comes from the Brandenburg test, and it is technically “incitement to imminent lawless action,” which is not the same thing as “incitement to violence.” To meet the Brandenburg test, the speech has to be “intended to incite or produce imminent lawless action, and likely to incite such action.”

This is an extremely high bar, and nearly all harassment does not cross that bar.

Also, this completely misunderstands Section 230, which does not actually “shield these forms of harmful speech.” If the speech is actually unprotected by the First Amendment, Section 230 does absolutely nothing to “shield” it. All 230 does is place the liability on the speaker. If the speech really does fall outside First Amendment protection (and, as we’ll get to, this piece plays fast and loose with how the First Amendment actually works), then 230 doesn’t stand in the way at all of holding the speaker liable.

Yet, this piece seems to argue that if we got rid of Section 230 and somehow forced websites to only moderate to the Brandenburg standard, it would somehow magically stop harassment.

The choice before us is not binary between unchecked viral harassment and heavy-handed censorship. A third path exists: one that curtails viral harassment while preserving the free exchange of ideas. This balanced approach requires careful definition but is achievable, just as we’ve defined limits on viral financial transactions to prevent Ponzi schemes. Current engagement-based optimization amplifies hate and misinformation while discouraging constructive dialogue.

To put it mildly, this is delusional. This “third path” is basically just advocating for dictatorial control over speech.

This is a common stance for people with literally zero experience with the challenges of trust & safety and content moderation. These people seem to think if only they were put in charge of writing the rules, it’s possible to write perfect rules that stop the bad stuff but leave the good stuff.

That’s not possible. And anyone with any experience in a trust & safety role would know that. Which is why it would be great if non-experts stopped cosplaying as if they understand this stuff.

There’s a reason that we created two separate trust & safety and content moderation games to help people like the authors of this piece understand that it’s not so simple. People are complicated. So many things involve subjective calls in murky gray areas that even experts in the field who have spent years adjudicating these things rarely agree on how best to handle different situations.

Our proposed “repeal and renew” approach would remove the liability shield for social media companies’ algorithmic amplification while protecting citizens’ direct speech. This reform distinguishes between fearless speech—which deserves constitutional protection—and reckless speech that causes demonstrable harm. The evidence of such harm is clear: from the documented mental health impacts of engagement-optimized content to the spread of child sexual abuse material (CSAM) through algorithm-driven networks.

Ah, so your problem is with the First Amendment, not Section 230. The idea that only “fearless speech” deserves constitutional protection is a lovely fantasy for law professors, but it’s not the law. And never has been. You would need to first completely dismantle over a century’s worth of First Amendment jurisprudence before we even get to the question of 230, which wouldn’t do what you want it to do in the first place.

Under the First Amendment, “reckless speech” remains protected, except in some very specific, well-delineated cases. And you can’t just wave your arms and pretend otherwise, even though that’s what Stanger, Lanier, and Tang do here.

That’s not how it works.

And, because the three of them seem to be coming up with simplistically wrong solutions to inherently complex problems, let’s dig in a bit more on the examples they have. First off, CSAM is already extremely illegal and not protected by either the First Amendment or Section 230. So it’s bizarre that it’s even mentioned here (unless you don’t understand how any of this works).

But how about “the documented mental health impacts of engagement-optimized content”? That’s… not actually proven? This has been discussed widely over the last few years, but the vast majority of research finds no such causal links. Yes, you have a few folks who claim it’s proven, but many of the leading researchers in the field, and multiple meta-analyses of the research, have found no actual evidence to support a causal link between social media and mental health.

So… then what?

Stanger, Lanier, and Tang seem to take it as given that such harm is there, even as the evidence has disagreed with that claim. Do we wave a magic wand and say “well, because these three non-experts insist that social media is harmful to mental health, we suddenly make such content… no longer protected under the First Amendment”?

That’s not how the First Amendment works, and it’s not how anything works.

Or, how about we take a more specific example, even though it’s not directly raised in the article. One area of content that many people are very concerned about is “eating disorder content.” Based on what’s in this article, I’m pretty sure that Stanger, Lanier, and Tang would argue that, obviously, eating disorder content should be deemed “harmful” and therefore unprotected under the First Amendment (again, this would require a massive change to the First Amendment, but let’s leave that fantasyland in place for a moment).

Okay, but now what?

Multiple studies have shown that (1) determining what actually is “eating disorder content” is way more difficult than most people think, because the language around it is so ever-changing, to the point that sometimes people argue that photos of gum are “eating disorder content” and (2) perhaps more importantly, simply removing eating disorder content has been shown to make eating disorder issues worse for some users!

Often, this is because eating disorder content is a demand-side issue, where people are looking for it, rather than being driven to eating disorders based on the content. Removing it often just drives those seeking it out into darker corners of the internet where, unlike in the mainstream areas of the internet, they’re less likely to see useful interventions and resources (including help from others who have recovered from eating disorders).

So, what should be done here? Under the Stanger/Lanier/Tang proposal, the answer is to make such content illegal and require websites to block it, even though that likely does even more harm to vulnerable people.

And that’s ignoring the whole First Amendment problem. Repeatedly throughout the article, Stanger/Lanier/Tang handwave around all this by suggesting that you can create a new law that concretely determines what content is allowed (and must be carried) and what content is not.

But that’s not how it works, in either direction. The law can no more compel websites to keep up speech they don’t want to host than it can force them to take down content the three authors think is “harmful” but that does not fall within the existing categories of speech unprotected by the First Amendment.

Given the article’s many problems with the authors’ understanding of speech, it will not surprise you that they trot out the “fire in a crowded theater” line, which is the screaming siren of “this is written by people unfamiliar with the First Amendment.”

Just as someone shouting “fire” in a crowded theater can be held liable for resulting harm, operators of algorithms that incentivize harassment for engagement should face accountability.

Earlier in the piece, they pointed (incorrectly) to the Brandenburg test on incitement to imminent lawless action. Given that, you might think that someone might have pointed out to them that Brandenburg effectively rejected Schenck, the case in which the “fire in a crowded theater” line was uttered as dicta (i.e., not controlling or meaningful). But, nope. They pretend it’s the law (it’s not), just like they pretend the Brandenburg standard can magically be extended to harassment (it cannot).

The piece concludes with even more nonsense:

Section 230 today inadvertently circumvents the First Amendment’s guarantees of free speech, assembly, and petition. It enables an ad-driven business model and algorithmic moderation that optimize for engagement at the expense of democratic discourse. Algorithmic amplification is a product, not a public service. By sunsetting Section 230 and implementing new legislation that holds proprietary algorithms accountable for demonstrable harm, we can finally extend First Amendment protections to the digital public square, something long overdue.

Literally every sentence of that paragraph is wrong. Harvard should be ashamed for publishing something that would flunk a first-year Harvard Law class. Section 230 does nothing to “circumvent” the First Amendment. The First Amendment does not guarantee free speech, assembly, and petition on private property. It simply limits the government from suppressing it. Private property owners still have the editorial discretion to do as they wish, which is supported by Section 230.

As for the claim that you can magically apply liability to “algorithmic amplification” and not have that violate the First Amendment, that’s also wrong. We discussed that just last week, so I’m not going to rehash the entire argument. But algorithmic amplification is literally speech as well, and it is very much protected under the First Amendment as an opinion on “we think you’d like this.” You can’t just magically move that outside of the First Amendment. That’s not how it works.

The point is that this piece is not serious. It does not grapple with the realities of the First Amendment. It does not grapple with the impossibilities of content moderation. It does not grapple with the messiness of societal level problems with no easy solution. It ignores the evidence on social media’s supposed harms.

It sets up a fantasyland First Amendment that does not exist, it misrepresents what Section 230 does, it mangles the concept of “harms” in the online speech context, and it punts on what the simple “rules” they think they can write to get around all of that would be.

It’s embarrassing how disconnected from reality the article is.

Yet, Harvard’s Kennedy School was happy to put it out. And that should be embarrassing for everyone involved.



Comments on “No, Section 230 Doesn’t ‘Circumvent’ The First Amendment – But This Harvard Article Circumvents Reality”

29 Comments
Anonymous Coward says:

She also told the committee that Wikipedia wouldn’t be sued without Section 230 because “their editing is done by humans who have first amendment rights.”

WOW. So what? Her argument was that all First Amendment cases in the US were over the speech of… monkeys and cats? And I guess defamation and libel only applied when animals said/wrote it (at least up until people could confuse themselves into suing a computer, program, or algorithm)?

That… that is one crazy world she’s been living in. Should we throw her a “welcome to our reality, here is a crash course on things you must know” party?

This comment has been deemed insightful by the community.
That One Guy (profile) says:

A right you can't afford to exercise is a right you do not have

Section 230 today inadvertently circumvents the First Amendment’s guarantees of free speech, assembly, and petition.

That’s not just wrong, it’s the complete opposite of reality.

230 protects the first amendment by making it so that platforms can afford to exercise their first amendment rights, short-circuiting lawsuits meant to punish first amendment activity in the form of moderation, and preventing platforms from being held liable for speech that isn’t theirs.

As the old article notes, their problem isn’t with 230, it’s with the first amendment, and specifically with the fact that people they don’t like or agree with are covered by it as well.

Anonymous Coward says:

At some point, I’m just lost on why the push for repealing Section 230 is a thing. I get that trying to push it through academic and ex-political voices is a way to legitimize the hostility, but genuinely, would anybody actually benefit from its repeal? The stuff that would be the likely factors (like copyright and anything to do with kids) are already exemptions, so that’s (probably) not it. Why is it even being pushed, and by who? What is the hypothetical end goal here, exactly?

Anonymous Coward says:

Re:

It’s a “do something” solution. It’s not about fixing anything. For those who genuinely want the repeal of Section 230 for altruistic reasons, it’s about the emotional toll and a need to hold someone responsible for terrible things that have happened. They are not engaging with the law; they are engaging with feelings and a desire for ‘fairness’ or ‘equity’.

Insofar as they have less than altruistic reasons, many will be misled by those who deliberately misrepresent the law for fiscal gain. Other people seek the power to abuse the legal system with frivolous lawsuits; they simply want to remove the laws that stop them from threatening, abusing, and harassing via the legal system.

Anonymous Coward says:

Re:

Some of it is no doubt driven by those who want to suppress speech they dislike, and (correctly) see Section 230 as a barrier to doing so.

But a lot of it is an instance of a particular political strategy. The strategy is this: choose any non-solution to a non-problem, and convince your supporters that it is a real solution to a real problem. Then a few things happen:

  • First, you can campaign unopposed on this issue. Since you are the only one talking publicly about it, everyone only gets their information from you, and you can undermine trust in other sources of information by pointing out how they are ignoring or covering up this important issue.
  • Eventually, journalists have to take notice and point out that this isn’t a real problem. Now they are dismissing your supporters’ concerns! And your supporters are primed to ignore any debunking because (as established in the previous step) they are part of the coverup.
  • Opposition politicians can’t be seen to dismiss a “problem” which is so important to people, but they also can’t argue against the “solution” without inviting attacks. It is politically safer to just let you pass a law about it.
  • In the best case, your law has little or no actual effect on the real world (because it’s a non-solution to a non-problem), and at worst it will make things bad for people who aren’t on your side (assuming your choice of non-problem and non-solution were wise), but your supporters probably like it when the right people get hurt, anyway. Now it’s a legislative achievement you can point to, maybe even a bipartisan one.
Anonymous Coward says:

Re:

Then again… this is far from the first time it’s been mischaracterized by journalists, and yet it’s still here, with most attempts to repeal it not getting anywhere in Congress so far.

Maybe there’s still hope. I’m just hoping articles like these won’t spur more politicians on to the idea of killing the internet, I suppose.

This comment has been deemed insightful by the community.
cashncarry (profile) says:

Magical thinking

I suspect the scat-for-brains reasoning is as simplistic as this: (1) I’ve seen something on the Internet which I regard as “bad”; (2) I’ve heard that Section 230 is “the 26 words that created the Internet”; therefore (3) Repealing Section 230 will magically uncreate the Internet and make all the “bad” things go away.

Anonymous Coward says:

Re: Re: Re:2

A good deal of social media and messenger platforms, as well as smaller websites, originate in the US.

And there are 17 that didn’t, including TikTok (China) and Telegram (United Arab Emirates), proving that the internet can exist without Section 230 because it’s international. Doomsaying like yours could help destroy Section 230 if it encourages the Republicans to do so on the basis that the whole world will be forced to hear only their messages. Whereas if we force them to recognize the internet as having a global reach, and therefore as something that does not need Section 230 everywhere, they may be put off destroying the shortcut to First Amendment protections that is Section 230, on the basis that there isn’t any point. Nice attempt at a strawman on your part, though.

MrWilson (profile) says:

However, these concerns presuppose that the First Amendment cannot protect unmediated online speech in a post-230 world. We believe it can, through carefully crafted legislation that distinguishes between service providers and engagement-optimizing platforms. This distinction is readily definable, as algorithmic curation for engagement is openly advertised and sold as a product.

First, did we just discover Arianity’s real identity?

Second, that supposed distinction is not readily definable, because Facebook is both, Google is both, TikTok is both, etc. The idea that algorithmic curation for engagement being advertised and sold as a product (which it isn’t; it’s sold as a service, a significant red flag on the terminology there) is all that is needed to define the distinction is, at best, naive hand-waving from people who can’t be bothered to understand how these services work, and at worst disingenuous hand-waving from people who don’t care about the difference but are writing a talking point to try to preemptively dodge very real criticism of their very bad ideas.

This comment has been flagged by the community.

Anonymous Coward says:

You know what I find exasperating? Self-indulgent invective, for one thing. But what really bugs me is when people like this author scream that someone else made “factual errors,” when what they’re really disagreeing about is a matter of characterization or opinion. This author is clearly smart. It’s unfortunate that he has given us this bombastic conflation of fact and opinion, utterly lacking in the nuance appropriate for a topic that a whole lot of other smart people seem to think is quite complicated. My takeaway is that the people he smugly attacks are probably worth a lot more of my attention than he is.
