Section 230 Turns 30; Both Parties Want It Gone—For Contradictory Reasons
from the they're-both-wrong dept
Here’s what’s strange about Section 230 of the Communications Decency Act, the law that made the open internet possible: Both sides of the traditional political spectrum hate it. But for opposite reasons. That alone should suggest that something is wrong with their analysis.
Republicans hate it because they say it lets websites censor conservative speech. Democrats hate it because they say it lets websites host dangerous disinformation.
Read those two sentences again.
One side is furious that platforms can moderate. The other side is furious that platforms don’t have to moderate. Both sides are attacking the same 26-word provision of a 30-year-old law—and if you understand why their complaints are contradictory, you understand what Section 230 actually does.
This weekend marked the 30th anniversary of the Telecommunications Act of 1996, which contained the mostly unconstitutional Communications Decency Act, which inexplicably contained Section 230. (If you want the full history, I hosted a podcast series about it last year.) And after three decades, there’s now a concerted, bipartisan effort to kill it—by people who either don’t understand what the law does, or understand perfectly well and see its destruction as a path to controlling the flow of information online.
Years back I wrote a piece debunking many of the myths about 230. The myths have only multiplied since.
Both critiques, stripped of their partisan framing, are about the same thing: who gets to control what speech appears where. And Section 230’s answer to both sides is the same: pound sand.
That’s what the law actually does. It doesn’t mandate or prohibit “censorship.” It doesn’t require neutrality (that’s a myth that won’t die). It simply says: if you have a problem with content online, take it up with the person who created it, not the service hosting it. Platforms can moderate however they see fit—aggressively, lightly, inconsistently, politically—and they won’t face ruinous liability for those choices. They also won’t face liability for what they don’t remove.
This is what makes an open internet possible. Without that protection, no service would risk hosting user content at all. Or if they did, every moderation decision would require a lawyer’s sign-off, optimizing for liability reduction rather than healthy communities. The people who actually understand how to build good online spaces—trust and safety professionals, community managers—would be overruled by legal departments playing defense.
Almost all criticism of Section 230 is not actually about Section 230. It’s about one of two things: (1) not liking something in society that manifests online, and incorrectly believing that changing the law will somehow fix it, or (2) wanting control over what content platforms host.
So what happens if critics get their way? There’s a lobbying campaign right now claiming that reforming or repealing 230 will lead to “greater responsibility from tech companies.”
This is exactly backwards.
Without 230’s protections, smaller platforms—the ones that might actually compete with the giants—get destroyed first. They can’t afford the vexatious lawsuits. They can’t afford buildings full of lawyers. The big players survive, and their market position gets locked in even harder.
And those surviving giants won’t become more responsible. They’ll become less. Any competent legal team will tell them: the less you know, the less liability you have. Don’t proactively look for harmful content. Don’t research how your platform causes harm—those findings would be exhibit A in every lawsuit. Just stick your head in the sand and let the lawyers handle the subpoenas.
This is how liability regimes work, and America’s exceptionally litigious legal culture makes these incentives even stronger. The critics either don’t understand this or don’t care, because their actual goal was never “responsibility.” It was control. That they’ve duped some tech critics into thinking it’s about “responsibility” or “safety” doesn’t change that. Because it won’t improve responsibility or safety. But it will give politicians tremendous power over online speech.
Thirty years ago, a 26-word provision buried in a mostly unconstitutional law kicked off the open internet. It let anyone build a platform, host a community, create something new—without needing permission from lawyers or regulators first. That era is now under direct attack by people who misrepresent what Section 230 does and misrepresent what killing it would mean.
The open web turned 30 this weekend. The bipartisan campaign to kill it was never about responsibility or safety, it was always about control. Whether the open web sees age 31 comes down to 26 words that tell both sides to pound sand.
Filed Under: control, free speech, intermediary liability, open internet, section 230


Comments on “Section 230 Turns 30; Both Parties Want It Gone—For Contradictory Reasons”
One side (Republicans) lies and the other side (Democrats) wants to stop the first side from lying. Free speech is a real bitch, ain’t it.
Re:
Arguably, some of the lies from Republicans aren’t protected by the 1st Amendment, but it’s just difficult to hold them accountable for defamation, fraud, and credible threats when they’ve stacked the judiciary in their favor and own the DOJ.
And if anyone needs an example of what content will go unmoderated in a situation like that, I give to you a single acronym: CSAM.
Re:
This is indeed what history teaches us. Platform holders will go RIGHT BACK to the situation before 230, wherein they were actively motivated to cover it up so as to avoid liability, which was a not insignificant component of the very scandals which eventually led to the realisation we needed 230 in the first place.
Re: Re:
It was less about “cover[ing] it up” and more about “ignoring it unless actively told about it”. The pre-230 situation was headed in a direction where a platform would be incentivized to ignore user-generated content unless it was informed of specific content that violated laws. 230 made it possible for a platform to moderate UGC and not get legally dunked if it didn’t moderate content about which the platform was unaware.
Re: Re: Re:
Oh, I definitely recall testimonies by people who when they brought stuff to their bosses with a post-it attached like, “hey, maybe we ought to forward this to the police”, were told in no uncertain terms, “You did not see this, you did not bring it to me, and please shred this document before you clock out tonight, the company has no knowledge of our services being used for such activity”
Re: Re: Re:
It’s actually bugging me now. I recall a big feature I read on it, but this was years and years ago, and I don’t recall all the specific names involved. Even if I could recall where I saw it, I can’t guarantee I could track down a copy, or that a digital version would even be on the Internet Archive. It was pretty broad, going into loads of early internet history and covering everything from niche stuff like early-90s online paedophile hunters to contemporary coverage in mostly print media outlets of the time, plus various hearings and whistleblowers… Maybe someone else here will have their memory jogged?
Re:
?
Why are you using an example that carries criminal liability, not just civil?
Re: Re:
Because much of the motivation behind the law’s structure was built on court rulings that said, essentially, “if you proactively look at anything, then you are likely liable for everything you didn’t look at.” Shifting back toward that structure will result in less looking for CSAM, because by merely looking for CSAM (even if you find and remove 100% of it), your liability for civil infractions increases dramatically.
Re: Re: Re:
Ahh, right. That’s not where my brain went with it at first read, but it’s a fair point.
Re: Re: Re:
Even back then, they knew of Masnick’s impossibility theorem, before it was given a name. And they responded in the only way they could to the perverse incentives of a system where if you take any form of action at all, it has to be 100% effective at correctly identifying and removing everything.
With inaction.
It’s a modern day Goldilocks tale…
Republicans: You moderate too hard.
Democrats: You moderate too soft.
If you are pissing off both sides for opposite reasons, perhaps you are doing something “just right…”
Both critiques, stripped of their partisan framing, are about the same thing: who gets to control what speech appears where. And Section 230’s answer to both sides is the same: pound sand.
Therein lies the rub: as long as the response is “pound sand,” with no recourse for either complaint, the law will invariably become a casualty of the collapse of the rule of law. Whether you agree or disagree with the law itself, it’s on a collision course with repeal, and that will make everything worse.
Ultimately, the only way to save the law is to get rid of the people trying to repeal it and then put the people trying to reform it on notice that the disinformation spreaders are gone for reasons other than their disinformation spreading.