Section 230 Didn’t Fail Rand Paul. He Just Doesn’t Like the Remedy That Worked.
from the do-you-even-1st-amendment,-bro? dept
Rand Paul is furious. That’s because someone posted a video falsely accusing the Kentucky senator of taking money from Venezuela’s Maduro regime.
Paul should know that the First Amendment sets a deliberately high bar for defamation of public officials like him. Under New York Times v. Sullivan, he must show not just falsity, but that the speaker knew the statement was false or had serious doubts about its truth and published it anyway. That demanding standard, known as "actual malice," exists for a reason: to ensure that fear of lawsuits does not silence criticism of those who hold power, even when the speech is offensive, wrong, or deeply unfair.
Instead of fighting this battle in court against the person who created this video, Paul has redirected his anger toward Section 230, the law often described as the 26 words that created the modern Internet. Although he once defended the law’s provisions that shield online platforms from liability for user speech, Paul now argues in a recent New York Post op-ed that the only solution is to tear it down.
At the heart of Paul’s argument is a simple demand: YouTube should have stepped in, judged the accusation against him to be false, and removed it. Once notified that the video was false, the platform should have been legally responsible for leaving it up. Section 230, he argues, prevents that from happening.
But who decides what is false? Who decides what is defamatory? And how quickly must those judgments be made — under threat of crushing lawsuits — by platforms hosting speech from millions of users around the world?
It’s surprising to see Senator Paul, who’s been vocal against government jawboning of speech, pledge to pursue legislation that would amend the law because a private platform failed to moderate speech the way he wanted.
Paul insists this distinction is hypocritical because platforms removed his COVID-era statements they deemed false while leaving up a lie about him. This argument collapses under its own weight. The Supreme Court has repeatedly held that private companies can make editorial decisions. They are allowed to be inconsistent, mistaken, biased, or wrong.
As the Court affirmed in Moody v. NetChoice, “it is no job for government to decide what counts as the right balance of private expression—to ‘un-bias’ what it thinks biased [ . . . ] That principle works for social-media platforms as it does for others.” In other words, the First Amendment protects editorial discretion precisely because the government cannot be trusted with it.
If Section 230 protections are rolled back, the consequences could be profound. Some platforms will over-moderate to avoid legal exposure, removing lawful but controversial content. Others will under-moderate, allowing harmful content to spread unchecked, since any moderation decision could open them up to liability.
Such a shift will not harm the powerful but the vulnerable, the dissenters, and the voices that depend on intermediaries to be heard. Smaller platforms and start-ups may shut down, avoid hosting speech, or change their business models altogether due to litigation risk.
Paul draws a comparison between platforms and newspapers, arguing that publishers historically avoided defamation through editorial judgment. But newspapers choose what they print before publication. Platforms host speech created entirely by others, at unimaginable scale. The New York Post itself relies on Section 230 to avoid liability for the comment sections on its online articles.
The real, speech-protective answer is defamation law. If Paul believes that a video contains lies about him, he could sue the creator for defamation and prove actual malice under the Sullivan standard.
But we cannot and should not dismantle the legal foundation of online speech because it failed to protect one powerful man. That sets a precedent that will harm millions of marginalized voices.
Ashkhen Kazaryan is a Senior Legal Fellow at The Future of Free Speech, a nonpartisan think tank at Vanderbilt University.
Filed Under: 1st amendment, defamation, free speech, nyt v. sullivan, rand paul, section 230


Comments on “Section 230 Didn’t Fail Rand Paul. He Just Doesn’t Like the Remedy That Worked.”
This comment has been flagged by the community.
NYT v Sullivan is made up nonsense and will probably be overturned soon.
Re:
By all means, tell us how dismantling Sullivan will be a net good for anyone but rich motherfuckers looking to silence critics with the legal equivalent of a “nice place you got here” extortion bit.
Re:
AC Esq., eh? Give us your legal arguments, then.
With cited precedence, of course.
They get to Mad-Lib their own “warrants.”
I think Rand’s opinion piece states the original video was taken down when the original author was threatened legally.
Still, it doesn't matter, because the video made its way somewhere other than YouTube.
I don't think Section 230 will survive, even if it deserves to or only needs minor reforms. But there are vast swaths of the Internet, quite loud ones, that don't give a damn about the presence or absence of 230 anyway, so I'm not sure what he hopes to accomplish without ditching his libertarian principles wholesale.
He did get the person to take it down: the individual who posted the video finally took it down under threat of legal penalty.
Something being hypocritical is different from something being illegal; those are two different arguments. (And of course, one of them being defamation muddies the waters, since defamation is one of the few types of wrongness that isn't covered by 1A editorial protections.)
However, this is kind of missing part of the point. A big part of the justification for 230 is that it gives room and incentives for companies to moderate, even as it doesn’t technically require it (as indeed, this article does, with ‘over/under’ moderation claims). If a company is choosing not to moderate, it’s not unreasonable to point out that part isn’t being delivered on as originally claimed.
Eh, for newspapers specifically yes, but distributors can also face liability for carrying other people’s speech. And at pretty large scale to boot, albeit still much smaller than the internet.
Paul's op-ed directly mentions why he finds that solution lacking: "Yet, the defamatory video still has a life of its own, circulated widely on the internet, and the damage done is difficult to reverse." (And that's assuming you can find the person to sue in the first place, afford it, and that they're subject to U.S. jurisdiction, etc.) Even if his is a bad solution, I do think you need to actually grapple with why he doesn't think that's an answer, not just ignore it. It's fine to make the argument that the benefit is worth the cost (or that the cost isn't so large), but you do have to actually make it.
What happened to that one powerful man could just as easily happen to someone marginalized (indeed, they would find it much harder to actually pay for a defamation suit). It is a bit hypocritical of Paul to only flip once it actually happened to himself, but it is not a situation that is unique to the powerful.
Re:
230 never promised to get companies to moderate. It promised to give them incentives to moderate by removing the threat of ruinous liability for any imperfection in moderation (see Stratton Oakmont, Inc. v. Prodigy Services Co.). It's the 1st Amendment that prevents forcing companies to moderate their platforms; no law can get around that.
We actually can. There’s a long-standing understanding that there are damages that the law simply can’t provide a remedy for. People like Rand Paul are rarely on the receiving end of that, but just ask anyone who’s had their house trashed by mistake by the cops how much help the law is in getting more than a small fraction of the damage paid for. Or anyone who’s had their name smeared in the press over bogus criminal charges and then been told after the charges get dismissed because there’s absolutely no evidence supporting them that they can’t sue the press for reporting what the cops said and not saying a word about the dismissal.
Re: Re:
Right, but I don't think a lot of people are going to see the distinction between those two sentences. The Senator clearly didn't. It's not trivially obvious, especially when what's missing from that conversation is what exactly those incentives are, and more importantly, their limitations. There's a lot of mushiness around what "give them incentives to moderate" actually entails, which leaves people reading into it. It sets someone up to feel tricked or surprised when they get the latter sentence, thinking they were signing onto the former.
Yes and no; you have to be a bit careful. The 1A prevents forcing companies to moderate in most cases, but it doesn't sever publisher liability for the small subset of speech that isn't 1A-protected (namely, defamation). That's why Prodigy lost pre-230 on a pure 1A defense, and why 230 was needed (separately from the cost savings of defending a suit). And even CompuServe was still ruled a distributor; it just lacked the requisite actual knowledge to be liable. (Prodigy got hit with publisher liability because it actively moderated; CompuServe got only distributor liability because it didn't.) In a case like Paul's, where he gave them notice, both would be a problem. The COVID stuff would be protected under the 1A; the defamation would not be.
We can, it’s just that the article doesn’t actually bother to. It’s a fine argument, but the article just kind of blows by it entirely for some reason.
Re: Re: Re:
And the Prodigy case was extensively discussed during and after the creation of Section 230. Traditionally everyone assumed the CompuServe case held true: that even if a platform moderated content, it couldn't be held liable for content it didn't actually know about (imperfect moderation). The Prodigy case raised the spectre that any moderation would make the platform liable for all occurrences, whether it actually knew about them or not.

That, as the authors of Section 230 pointed out, would force platforms to either a) pre-screen all content against the most stringent standards to ensure nothing got through that could even remotely leave them liable, regardless of how much valid content got swept up as well, or b) not moderate at all, so there would be no possibility of claiming they knew about content they hadn't seen. The first option isn't feasible at scale, and the second is undesirable.

Section 230 was written to foreclose the outcome foreseen in the Prodigy case, effectively making official the pre-Prodigy status quo. The incentive was simply knowing that, as a platform, you couldn't be held liable for things you hadn't actually seen and had a chance to act on, nor could you be forced into the role of judge and jury in an argument between some party and one of your users over what that user had posted, unless the content crossed some pretty distinct lines into "clearly illegal, nobody argues that" territory.
Nowhere in any of the discussion surrounding Section 230 was there any discussion of foreclosing the option of not moderating content. That’s why the law itself doesn’t speak of creating incentives, but of removing disincentives (the Prodigy case, in particular).
Re: Re: Re:2
The law itself doesn't discuss it, but in the discussion around it, it does come up. For instance, from Mike: "So there are many, many incentives for nearly all websites to moderate: namely to keep users happy, and (in many cases) to keep advertisers or other supporters happy…sites actually have a very strong incentive provided by 230 to moderate." See also, e.g., EFF: "The current law strongly incentivizes websites and apps, both large and small, to kick off their worst-behaving users, to remove offensive content," etc. Wyden himself described it as: "If we're going to really make sense for society, let's tell those platforms we'll give them the sword and they better use it, and they better use it to moderate content and get rid of the filth and the slime and all this misogyny that people regret." He's even more explicit here: "the big internet companies have utterly failed to live up to the responsibility they were handed two decades ago…In years of hiding behind their shields, the companies have left their swords to rust. The companies have become bloated, uninterested in the larger good, and feckless."
That’s not foreclosing the option of not moderating content, but there’s a clear intent there, and I don’t think it’s unfair for Paul to point at stuff like that, as long as he’s clear it’s implicit rather than explicit.
Re: Re: Re:
To be fair, conservatism's bread and butter for bad-faith legal and cultural debate (claiming to favor the First Amendment while insisting private companies are obligated, at least in spirit, to hew as closely as possible to the government's track record) IS not seeing a distinction between those two very sentences.
So the fact that Rand finally sees the problem but misses the forest for the trees barely counts as news, despite his status as a Senator. (I am not criticizing Techdirt so much as pointing out that libertarianism in all its flavors is a childish and dangerous intellectual understanding of the world.)
Re:
That’s not the point. The point is that if Paul is still mad that this happened, he should sue the actual responsible party instead of lambasting a law that is ultimately irrelevant to the matter at hand.
It’s fairly clear that Paul is calling it hypocritical as a criticism of Section 230 not being applied “consistently”.
Pointing out the consequence of removing Section 230 isn’t the same as claiming that the law requires companies to moderate.
That’s a consequence of the internet being the internet. Changing/repealing Section 230 would have no impact on that.
Not sure how you’re reading the article this way.
The overall point of it is to say "Paul's understanding/argument regarding Section 230 is erroneous, and if he wants to punish someone for defamation, he should go after the responsible party with the laws made for that."
That sounds like an issue moreso with the general cost of lawsuits, not Section 230.
Changing Section 230 certainly wouldn’t make a difference for marginalized people getting defamed.
Re: Re:
Libertarianism regardless of whether it’s left or right leaning is a childish and dangerous understanding of policy and politics so it’s no surprise he lashed out at Section 230 when he suddenly realized the Internet is basically impossible to moderate at scale in a satisfactory manner.
What he doesn't care about now is that repealing Section 230 won't prevent a determined enough user from setting up seven proxies, as the kids say, and making discovery too onerous to consider when maliciously spewing libel or slander.
The question then becomes not whether Section 230 should get repealed, but whether companies should be regulated not to grow beyond sizes that salaried (NOT contracted) human moderation teams can feasibly moderate, capping maximum user bases. That alone would kneecap every social media platform and YouTube.
You’d be right back to internet forums curating login registrations behind a portal wall exclusively to comment on news articles for instance.
Re: Re:
Yeah, the wording is just very strange, as if Paul hadn't already gone after the person and settled it to his satisfaction. That said, as for relevance, it's not irrelevant, because 230 is what is protecting YT from liability.
Yes, but Paul used the word implicit specifically to talk about the consistency. He’s explicitly (pun unintended) not making an argument about the legal requirements of the law.
He's erroneous, but it's clear how he's arriving at that wrong conclusion, and I think the article is missing the part where it explains why he's erroneous. It mentions concerns about overmoderation, undermoderation, government overreach, etc. But Paul is aware of those, and mentions them. So the final step to showing Paul incorrect would be the argument for why the over/undermoderation concerns trump Paul's concerns. And that's just missing; it's framed as if Paul hadn't mentioned things like overmoderation at all.
(Another way to argue why Paul is wrong would be assert that only the person who posted it is the responsible party, but the article doesn’t seem to be making that argument)
The internet will always be the internet, but being able to get it removed faster, and being able to go directly to sites to have it taken down would have an impact. Indeed, that’s part of what leads to concerns about overmoderation. It’s not worth the massive downsides it would unleash, but that’s different than no impact.
Yep. It’s not caused by 230, but it is something that would need to be addressed if defamation suits are going to be a reasonable answer.