On Its 30th Birthday, Section 230 Remains The Linchpin For Users’ Speech
from the 230-is-free-speech dept
For thirty years, internet users have benefited from a key federal law that allows everyone to express themselves, find community, organize politically, and participate in society. Section 230, which protects internet users’ speech by protecting the online intermediaries we rely on, is the legal support that sustains the internet as we know it.
Yet as Section 230 turns 30 this week, there are bipartisan proposals in Congress to either repeal or sunset the law. These proposals seize upon legitimate concerns with the harmful and anti-competitive practices of the largest tech companies, but then misdirect that anger toward Section 230.
But rolling back or eliminating Section 230 will not stop invasive corporate surveillance that harms all internet users. Killing Section 230 won’t end the dominance of the current handful of large tech companies—it would cement their monopoly power.
The current proposals also ignore a crucial question: what legal standard should replace Section 230? The bills provide no answer, refusing to grapple with the tradeoffs inherent in making online intermediaries liable for users’ speech.
This glaring omission shows what these proposals really are: grievances masquerading as legislation, not serious policy. That is especially true because the speech problems with alternatives to Section 230’s immunity are readily apparent, both in the U.S. and around the world. Experience shows that those systems result in more censorship of internet users’ lawful speech.
Let’s be clear: EFF defends Section 230 because it is the best available system to protect users’ speech online. By immunizing intermediaries for their users’ speech, Section 230 benefits users. Services can distribute our speech without filters, pre-clearance, or the threat of dubious takedown requests. Section 230 also directly protects internet users when they distribute other people’s speech online, such as when they reshare another user’s post or host a comment section on their blog.
It was the danger of losing the internet as a forum for diverse political discourse and culture that led to the law in 1996. Congress created Section 230’s limited civil immunity because it recognized that promoting more user speech outweighed potential harms. Congress decided that when harmful speech occurs, it’s the speaker that should be held responsible—not the service that hosts the speech. The law also protects social platforms when they remove posts that are obscene or violate the services’ own standards. And Section 230 has limits: it does not immunize services if they violate federal criminal laws.
Section 230 Alternatives Would Protect Less Speech
With so much debate around the downsides of Section 230, it’s worth considering: What are some of the alternatives to immunity, and how would they shape the internet?
The least protective legal regime for online speech would be strict liability. Here, intermediaries always would be liable for their users’ speech—regardless of whether they contributed to the harm, or even knew about the harmful speech. It would likely end the widespread availability and openness of social media and web hosting services we’re used to. Instead, services would not let users speak without vetting the content first, via upload filters or other means. Small intermediaries with niche communities may simply disappear under the weight of such heavy liability.
Another alternative: Imposing legal duties on intermediaries, such as requiring that they act “reasonably” to limit harmful user content. This would likely result in platforms monitoring users’ speech before distributing it, and being extremely cautious about what they allow users to say. That inevitably would lead to the removal of lawful speech—probably on a large scale. Intermediaries would not be willing to defend their users’ speech in court, even if it is entirely lawful. In a world where any service could be easily sued over user speech, only the biggest services would survive. They’re the ones that would have the legal and technical resources to weather the flood of lawsuits.
Another option is a notice-and-takedown regime, like what exists under the Digital Millennium Copyright Act. That approach would also result in takedowns of legitimate speech. And there’s no doubt such a system would be abused. EFF has documented how the DMCA leads to widespread removal of lawful speech based on frivolous copyright infringement claims. Replacing Section 230 with a takedown system would invite similar behavior, and powerful figures and government officials would use it to silence their critics.
The closest alternative to Section 230’s immunity provides protections from liability until an impartial court has issued a full and final ruling that user-generated content is illegal, and ordered that it be removed. These systems ensure that intermediaries will not have to cave to frivolous claims. But they still leave open the potential for censorship because intermediaries are unlikely to fight every lawsuit that seeks to remove lawful speech. The cost of vindicating lawful speech in court may be too high for intermediaries to handle at scale.
By contrast, immunity takes the variable of whether an intermediary will stand up for their users’ speech out of the equation. That is why Section 230 maximizes the ability for users to speak online.
In some narrow situations, Section 230 may leave victims without a legal remedy. Proposals aimed at those gaps should be considered, though lawmakers should pay careful attention that in vindicating victims, they do not broadly censor users’ speech. But those legitimate concerns are not the criticisms that Congress is levying against Section 230.
EFF will continue to fight for Section 230, as it remains the best available system to protect everyone’s ability to speak online.
Reposted from EFF’s Deeplinks blog.
Filed Under: content moderation, free speech, section 230


Comments on “On Its 30th Birthday, Section 230 Remains The Linchpin For Users’ Speech”
There are ways to improve the DMCA’s takedown regime to make it more robust to misuse (like you know, literally any actually enforced penalty for frivolous claims). The flaws in the DMCA are not necessarily inherent to takedown notices, but rather the lack of protections for bad faith behavior. Even just as a model for notice-and-takedown, the DMCA is poorly written, and any comparison should factor that in.
Re:
No, they’re inherent to notice-and-takedown systems. Their whole point is to make content disappear without due process granted to the potential offender behind that content. Even if you were to give the system some teeth in re: punishing false claims beyond “we’re going to make you pay a fine”, the system would still “reward” such claims by taking down the content, even if only temporarily. If the DMCA can be abused that way—and it has been, numerous times, to which this site’s history of talking about such abuse can attest—a similar system for “defamatory” speech would be abused much the same way. And all that is before we get into the much broader issue of such systems working heavily in favor of powerful and wealthy entities such as corporations and the rich motherfuckers who run those corporations.
Re: Re:
And specifically, they give this power selectively to wealthy and powerful players.
If a big company issues a DMCA notice on indie content that doesn’t actually infringe (often as the result of LLM or automated searches), it’s often immediately removed and it’s an uphill battle to get it restored.
But if a small artist issues a DMCA notice on a large commercial project by a large company that is actually infringing, it’s not going to get taken down immediately. It’ll get referred to the company’s reps to respond to, regardless of what the law supposedly requires.
Re: Re:
Yeah, that is fair on the due process part, I meant just the abuse since that’s what they were calling out.
I dunno, we seem to have some laws with teeth, like anti-SLAPP, and they seem to work more or less fine? The important thing is that in order to actually be teeth, the cost has to be higher than the reward, even for the rich. An irrelevant fine isn’t actual teeth for a billionaire, unless you’re scaling it with wealth or something. It’s the cost of buying a new toy. But an important thing to remember is that it doesn’t need to just be monetary: there are a lot of things even the rich and powerful care about that aren’t money. Is someone really going to go to jail to try to temporarily remove speech? The DMCA had this idea with perjury, and then… completely biffed it.
You can also reduce the ‘reward’ by e.g. restoring the content as soon as a counternotice is filed, and requiring the content to stay up while it’s being adjudicated. Under the DMCA, it can’t be restored in under 10 days, which ends up favoring the censor.
The DMCA has no teeth, so I don’t think you can extrapolate to something with teeth. If the “punishment” is ineffectual, then yeah we can expect abuse.
It’s not a hill I’d die on, but I suspect there’s a lot you could do if you’re actually willing to get creative about making it hurt. Like, how many abusive DMCA claims is Nintendo going to file if it risks losing its copyright over a frivolous claim? Throw the book at them.
Re: Re: Re:
I mean, the same problem exists even when someone isn’t out to abuse the DMCA. The mechanism is there. Whether it won’t be abused and whether it can’t be abused are two different ideas.
Those laws work with due process. Anti-SLAPP laws require a finding of fact in a court of law that a SLAPP action has occurred before any penalties kick in. With the DMCA, the penalty (the takedown of content) occurs before any finding of fact in a court of law. Notice-and-takedown systems like the DMCA have that route-around because that’s how they were always intended to be used.
See, now you’re getting it. Fines for the wealthy absolutely should scale and should be far, far, far larger than those for people whose bank accounts are five digits on a good day. That includes corporations.
The punishment for abusing a notice-and-takedown system is irrelevant to the abuse being possible. The abusable part of the system would still be there even if the punishment for abusing the system were to make that abuse less likely. Fixing that part of the system would be preferable to punishing abuse of it, because fixing it would lessen the chances of that abuse happening. And any “we’re stopping the abuse” change to a notice-and-takedown system would fundamentally require getting rid of the notice-and-takedown system.