The NY Times Got It Backwards: Section 230 Helps Limit The Spread Of Hate Speech Online
from the get-it-straight dept
A few weeks back, we wrote about the NY Times’ absolutely terrible front-page Business Section headline that incorrectly blamed Section 230 for “hate speech” online, only for the paper to later edit the piece with a correction saying that, actually, it’s the 1st Amendment that allows “hate speech” to exist online. Leaving aside the problematic nature of determining what is and is not hate speech — and the fact that governments and autocrats around the globe regularly use “hate speech” laws to punish people they don’t like (often the marginalized and oppressed) — the claim that Section 230 “enables” hate speech to remain online gets the law exactly backwards.
In a new piece, Carl Szabo reminds people about the second part of Section 230, which says that websites aren’t held liable for their moderation choices in trying to get rid of “offensive” content. Everyone focuses on part (c)(1) of the law, the famous “26 words” that note:
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
But section (c)(2) is also important, and part of what makes it possible for companies to clean up the internet:
No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).
That part was necessary to respond to (and directly overrule) the ruling in Stratton Oakmont v. Prodigy, in which a colorful NY judge ruled that because Prodigy moderated its forums to keep them “family friendly,” it was legally liable for all the content it didn’t moderate. The entire point of 230 was to create this balancing carrot and stick, giving companies an incentive both to allow third parties to post content and to make their own decisions and experiment with how to moderate.
As Szabo notes, it’s this (c)(2) protection that has kept the internet from being overwhelmed by spam, garbage, and hate speech:
Section 230(c)(2) enables Gmail to block spam without being sued by the spammers. It lets Facebook remove hate speech without being sued by the haters. And it allows Twitter to terminate extremist accounts without fear of being hauled into court. Section 230(c)(2) is what separates our mainstream social media platforms from the cesspools at the edge of the web.
While some vile user content is posted on mainstream websites, what is often unreported is how much of this content is removed. In just six months, Facebook, Twitter, and YouTube took action on 11 million accounts for terrorist or hate speech. They moderated against 55 million accounts for pornographic content. And took action against 15 million accounts to protect children.
All of these actions to moderate harmful content were empowered by Section 230(c)(2).
What isn’t mentioned is that, somewhat oddly, the courts have mostly ignored (c)(2). Even in cases where you’d think the actions of various internet platforms are protected under (c)(2), nearly every court notes that (c)(1)’s liability protections also cover the moderation aspect. To me, that’s always been a bit weird, and a little unfortunate. It gets people way too focused on (c)(1), without realizing that part of the genius in the law is the way it balances incentives with the combination of (c)(1) and (c)(2).
Either way, for those who keep arguing that Section 230 is why we have too much garbage online, the only proper response is that they’re wrong. Section 230 also encourages platforms to clean up the internet, and many take that role quite seriously (sometimes too seriously). It has resulted in widespread experimentation with content moderation that is powerful and useful. Taking away Section 230’s protections, or limiting them, will make that cleanup work much more difficult.