The NY Times Got It Backwards: Section 230 Helps Limit The Spread Of Hate Speech Online

from the get-it-straight dept

A few weeks back, we wrote about the NY Times' absolutely terrible front-page Business Section headline that incorrectly blamed Section 230 for “hate speech” online, only for the paper to later edit the piece with a correction admitting that, actually, it’s the 1st Amendment that allows “hate speech” to exist online. Leaving aside the problematic nature of determining what is, and what is not, hate speech, and the fact that governments and autocrats around the globe regularly use “hate speech” laws to punish people they don’t like (often the marginalized and oppressed), the claim that Section 230 “enables” hate speech to remain online gets the law exactly backwards.

In a new piece, Carl Szabo reminds people about the second part of Section 230, the part that says websites aren’t held liable for their moderation choices when they try to get rid of “offensive” content. Everyone focuses on part (c)(1) of the law, the famous “26 words”:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

But section (c)(2) is also important, and part of what makes it possible for companies to clean up the internet:

No provider or user of an interactive computer service shall be held liable on account of—

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

That part was necessary to respond to (and directly overrule) the ruling in Stratton Oakmont v. Prodigy, in which a colorful NY judge ruled that because Prodigy moderated its forums to keep them “family friendly,” it was legally liable for all the content it didn’t moderate. The entire point of 230 was to create this balanced carrot and stick, in which companies would have an incentive both to allow third parties to post content and to make their own decisions and experiment with how to moderate.

As Szabo notes, it’s this part of (c)(2) that has kept the internet from getting overwhelmed by spam, garbage and hate speech.

Section 230(c)(2) enables Gmail to block spam without being sued by the spammers. It lets Facebook remove hate speech without being sued by the haters. And it allows Twitter to terminate extremist accounts without fear of being hauled into court. Section 230(c)(2) is what separates our mainstream social media platforms from the cesspools at the edge of the web.

[…]

While some vile user content is posted on mainstream websites, what is often unreported is how much of this content is removed. In just six months, Facebook, Twitter, and YouTube took action on 11 million accounts for terrorist or hate speech. They moderated against 55 million accounts for pornographic content. And took action against 15 million accounts to protect children.

All of these actions to moderate harmful content were empowered by Section 230(c)(2).

What isn’t mentioned is that, somewhat oddly, the courts have mostly ignored (c)(2). Even in cases where you’d think the actions of various internet platforms are protected under (c)(2), nearly every court notes that (c)(1)’s liability protections also cover the moderation aspect. To me, that’s always been a bit weird, and a little unfortunate. It gets people way too focused on (c)(1), without realizing that part of the genius in the law is the way it balances incentives with the combination of (c)(1) and (c)(2).

Either way, for those who keep arguing that Section 230 is why we have too much garbage online, the only proper response is that they’re wrong. Section 230 also encourages platforms to clean up the internet, and many take that role quite seriously (sometimes too seriously). It has resulted in widespread experimentation with content moderation that is powerful and useful. Taking away Section 230’s protections, or limiting them, will make that experimentation much more difficult.



Comments on “The NY Times Got It Backwards: Section 230 Helps Limit The Spread Of Hate Speech Online”

Toom1275 (profile) says:

Re: Re: Re: Re:

Note: "learn to code" is, as one should expect coming from Zof the Liar’s world of false narratives, not so innocent as he pretends.

In the real world, it’s become something vaguely similar to the mailing of five orange pips.

"Learn to code" is meant as a thinly vailed threat aimed at a journalist it’s sent to; that they should now be afraid that if they say something they alt-right doesn’t like, they might find themselves in the crosshairs of the alt-right’s next targeted harassment/smear campaign†

I.e.: "You should learn to code, because you won’t have your job as as a journalist much longer."

† see also: Sarah Jeong, James Gunn

Anonymous Coward says:

Re: Re: You folks do know that...

Funny as hell that would be; Zof is miles more coherent than blue.

Which makes Zof’s recent attempts to try and out-blue blue all the more confusing.

What dreams of chronic and sustained cruelty do you have to experience to make you decide to transition from "frantic alt-right spokesperson" to "ignorant motherfucker"?

Jim says:

Speech

Interesting. In the US, speech is going underground. No more free speech. How sad; now I will have to hunt for dissenting opinions. Lack of dissent means only the approved speech will be heard. That's called propaganda. In effect, lying to the people. That means conformity. Believe one way or else. Vote one way or else, fear all, or else. 1984 or else.
