An 18-Million-Subscriber YouTuber Just Explained Section 230 Better Than Every Politician In Washington
from the the-kids-are-alright dept
Over the years, we’ve written approximately one million words explaining why Section 230 of the Communications Decency Act is essential to how the internet functions. We’ve corrected politicians who lie about it. We’ve debunked myths spread by mainstream media outlets that should know better. We’ve explained, re-explained, and then explained again why gutting this law would be catastrophic for online speech.
And now I find myself in the somewhat surreal position of saying: you know who nailed this explanation better than most policy experts, pundits, and certainly better than any sitting member of Congress? A YouTuber named Cr1TiKaL.
If you’re not familiar with Charles “Cr1TiKaL” White Jr., he runs the penguinz0 YouTube channel with nearly 18 million subscribers and over 12 billion total views. He’s known for deadpan commentary on internet culture and video games. He’s not a policy wonk. He’s not a lawyer. He’s just a guy who apparently bothered to actually understand what Section 230 says and does—something that puts him leagues ahead of the United States Congress.
In this 13-minute video responding to actor Joseph Gordon-Levitt’s call to “sunset” Section 230, Cr1TiKaL laid out the case for why 230 matters with a clarity that most mainstream coverage hasn’t managed in a decade:
Dismantling section 230 would fundamentally change the internet as you know it. And that’s not an exaggeration to say it. Put it even more simply, section 230 allows goobers like me to post whatever they want, saying whatever they want, and the platform itself is not liable for whatever I’ve made or said.
That is on me personally.
The platform isn’t going to be, you know, fucking dragged through the streets with legs spread like a goddamn Thanksgiving turkey for it and getting blasted by lawsuits or whatever. Now, of course, there are limitations in place when it comes to illegal content, things that actually break the law. That is, of course, a very different set of circumstances. That’s a different can of worms, and that’s handled differently. But it should be obvious why section 230 is so important because if these platforms were held liable for every single thing people post on their platforms, they would get into a lot of hot water and they would just not allow people to post things. Full stop. Because it would be too dangerous to do so. They would need to micromanage and control every single thing that hits the platform in order to protect themselves. No matter how you spin it, this would ruin the internet. It’s a pile of dogshit. No matter how much perfume gets sprayed on it or how they want to repackage it, it still stinks.
Yes, the metaphors are colorful. But the underlying point is exactly correct. Section 230 places liability where it belongs: on the person who actually created the content. Not on the platform that hosts it. This is how the entire internet works. Every comment section, every social media post, every forum—all of it depends on this basic principle.
Also, he actually reads the 26 words in the video! This is something that so many other critics of 230 skip over, because then they can pretend it says things it doesn’t say.
And unlike the politicians who keep pretending this is some kind of special gift to “Big Tech,” Cr1TiKaL correctly notes that 230 protects everyone:
This would affect literally every platform that has anything user submitted in any capacity at all.
Every. Single. One. Your local newspaper’s comment section. The neighborhood Facebook group. The subreddit for your favorite hobby. The Discord server where you talk about video games. The email you forward. All of it.
He’s also refreshingly clear-eyed about why politicians from both parties keep attacking 230:
Since the advent of the internet, section 230 has been a target for people that want to control your speech and infringe on your First Amendment rights.
This observation tracks with what we’ve pointed out repeatedly: the bipartisan hatred of Section 230 is one of the most remarkable examples of political unity in modern American governance—and it’s driven largely by politicians who want platforms to moderate content in ways that favor their particular political preferences.
Democrats have attacked 230 claiming it enables “misinformation” and hate speech. Republicans have attacked it claiming it enables “censorship” of conservative voices. Both cannot simultaneously be true, and yet both parties have introduced legislation to gut the law. Cr1TiKaL captures this perfectly:
When Democrats were in charge, it caught a lot of scrutiny, claiming that it was enabling the spread of racism and harming children. With Republicans in power, they’re claiming that it’s spreading misinformation and anti-semitism. This is a bipartisan punching bag that they desperately want to just beat down.
The critics always trot out the same tired arguments about algorithms and echo chambers and extremism. As if removing 230 would somehow make speech better rather than making it disappear entirely or become heavily controlled by whoever has the most money and lawyers. Cr1TiKaL cuts right through this:
There are people that are paying a lot of money to try and plant this idea in your brain that section 230 is a bad thing. It only leads to things like extremism and conspiracy theories and demonization and that kind of thing. That’s not true.
Anyone who stops and thinks about this for even just a moment, firing on a few neurons, should be able to recognize how outrageous this proposal is. How would shutting down conversation and shutting down the ability to express thoughts and opinions somehow help combat the rise of extremism and conspiracies? That would only exacerbate the problem. Censorship doesn’t solve these issues. It makes them worse.
He even anticipates the point we’ve made countless times about what the internet would look like without 230:
Platforms would not allow just completely unfiltered usage of normal people expressing their thoughts because those thoughts might go against the official narrative from the curated source and then the curated source might go after the platform saying this is defamatory. These people have just said something hosted on your platform and we’re coming after you with lawsuits. So they just wouldn’t allow it.
This is a point we keep repeating, and one you never hear in the actual policy debates, because supporters of a 230 repeal have no answer for it beyond “nuh-uh.”
The people who most want to control online speech are exactly the people you’d expect: governments and powerful interests who don’t like being criticized. Section 230 is one of the things standing in their way.
And when critics inevitably dust off the “think of the children” argument, Cr1TiKaL delivers the response that shouldn’t be controversial but apparently is:
Be a parent. It is not the internet’s job to cater to your lack of parenting by just letting your kid online. Fucking lazy trash ass parents just sit a kid in front of a computer or an iPad and then are stunned when apparently they find bad shit. Be a parent. Be involved in your kids’ life. Raise your children. Don’t make it the internet’s job to do that for you.
Is this delivered with the diplomatic nuance of a congressional hearing? No. Is it correct? Absolutely. The “protect the children” argument for dismantling 230 has always been a dodge: a way to make the law’s defenders seem heartless while ignoring that Section 230 doesn’t protect illegal content, and that maybe, just maybe, the primary responsibility for what media children consume should rest with the adults responsible for those children.
We’ve been writing about Section 230 for years, trying to explain to policymakers and the general public why it matters. And most of the time, it feels like shouting into the void. Politicians keep lying about it. Journalists keep getting it wrong. The mythology around 230 persists no matter how many times it gets corrected.
And we’ve heard from plenty of younger people who now believe that 230 is bad. I recently guest taught a college class where students were split into two groups—one to argue in favor of 230 and one against—and I was genuinely dismayed when the group told to argue in favor of 230 instead argued that 230 “once made sense” but doesn’t anymore.
So there’s something genuinely hopeful about seeing a young creator with an audience of nearly 18 million people—an audience that skews young and is probably not spending a lot of time reading policy papers—get it right. Not just right in a general sense, but right in the specifics. He read the law. He understood what it does. He correctly identified why it matters and who benefits from dismantling it.
Maybe the generation that grew up on the internet actually understands what’s at stake when politicians threaten to fundamentally reshape how it works. Maybe they’re not buying the moral panic narratives that have been trotted out to justify every bad piece of tech legislation for the past decade.
Or maybe I’m being optimistic. Either way, Cr1TiKaL’s video is worth watching. It’s profane, it’s casual, and it’s more correct about Section 230 than anything you’ll hear from the halls of Congress.
Filed Under: cr1tikal, section 230


Comments on “An 18-Million-Subscriber YouTuber Just Explained Section 230 Better Than Every Politician In Washington”
Hear me out…
What if we get rid of section 230, but when a site gets sued, they just pay the government a protection fee to make the suit go away? That would work for EVERYONE!
Re:
That’s extortion.
Re: Re:
That’s the joke….
Re: Re: Re:
Yeah I realize that post-post. 🤡
Maybe they CAN both be right?
Democrats have attacked 230 claiming it enables “misinformation” and hate speech. Republicans have attacked it claiming it enables “censorship” of conservative voices. Both cannot simultaneously be true, and yet both parties have introduced legislation to gut the law. Cr1TiKaL captures this perfectly:
It seems like they CAN both be right, since it is so often the “conservative voices” that are responsible for the misinformation and hate speech that they seem so desperate to spread.
Re:
The above statement is false. Misinformation exists even in the presence of censorship.
Re:
They can both be true, in a myriad of different combinations. Mike was trying to riff on a line from 2 days ago and flubbed the wording a bit this time.
“colorful” carrying a lot of weight there…. 🙂
What AI Said About Section 230 Repeal
Obviously AI just scrapes sources, so I don’t know if anything you ask it is answered even faintly reliably, but Grok says the chances of any Section 230 repeal are quite low, as politicians may want to reform it, not scrap it completely. The current sunsetting act (while it has bipartisan backers) hasn’t progressed yet. It is more intended to bring stakeholders to the negotiating table (though it is a risky strategy if nothing is passed to succeed it). Reform is more likely than repeal, though we’ll see. Plus current lawsuits might be more significant in chipping bits off Section 230.
Re:
Look I’m worried about S230 too and maintain a contrary stance about the law’s chances from everyone else on this site. But please, for the foreseeable future, do not use Grok for answers. It’s a propaganda machine that is also ethically dubious and you will be laughed out of a room for admitting you used Grok.
Isn’t it about who you can sue?
Isn’t killing section 230 really about money and who you can sue? Why sue an individual when you could sue a Facebook or a Google? Section 230 stops that. It seems to me that freedom of speech/censorship is a cover story to keep people onside.
Re:
This is a silly talking point to begin with, but technically, yes it can. That’s just how editorial discretion works.
This is just an argument against secondary liability as a concept.
Re:
Making the platform liable for their users’ speech will just cause the platform to either censor all speech to fit what the current administration wants, or end all ability for the public to use it.
Basically become propaganda or be shut down. I’m sure that will work in the way you want it to.
Re: Re:
Depends entirely on the details of how you do it. If you were to fully repeal it? Yes, and that is why repeal is bad. Any sort of generic liability would have that problem.
But that isn’t true of all secondary liability. There are already notable exceptions like criminal activity (which is exempt from 230 protections), or DMCA copyright liability. Those are both examples where publishers can have secondary liability, under certain conditions. It doesn’t turn them into propaganda mills or end the public’s ability to use them. Also, if it were that simple and secondary liability automatically led to administration censorship, you would see every non-internet publisher that doesn’t have 230 protections becoming an unusable propaganda mill. They haven’t.
There are a lot of ways that secondary liability can screw over sites, but the details matter. It’s not a blanket thing you can say for any form of liability. If it were, secondary liability literally would not exist in the first place.
Where you say “Also, he actually reads the 26 words in the video!” you haven’t introduced what “the 26 words” are. Not gonna watch the video so I’ll just have to comfort myself with the fact that there are 26 words somewhere. You like them so that’s good enough for me.
But also, in case this is starting to happen, let’s not make rallying cries in the form of “the [number] words”.
Re:
He’s talking about §230(c)(1), which is 26 words:
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
That’s the core of Section 230.
Re:
The fact that you have somehow avoided the very short text of the law all this time is a you problem.
An Alternative View
I think section 230 should not apply to sites that use an algorithm to personalize what you see. It is akin to the editorial board of a newspaper deciding what to print. If you see the same front page as everyone else, the same posts in the same order for any subs you directly subscribe to, then section 230 is satisfied – you are seeing what others have posted, same as everyone else.
If the site selects only certain posts, and deprioritizes others, just for you, then it is editorializing (and creating silos). It should not be protected by section 230.
Re:
This is backwards:
You’re effectively arguing that those sites which do exercise editorial control shouldn’t be held responsible for the content.
Re:
All sites personalize what you see, every single one.
Re:
Right, right, right. So i train the algorithm to give me what i want, and the site is therefore responsible. Cool, cool.
You don’t know what an algorithm is, do you?
I would like to see responses to these articles from Tech Policy Press:
https://www.techpolicy.press/establishing-legal-incentives-to-hold-big-tech-accountable/
https://www.techpolicy.press/a-new-section-230-why-ai-preemption-would-let-tech-off-the-hook-again/
Re:
Tech Policy Press sucks and shouldn’t be treated seriously.
Hope you enjoy your answer. 🙂
Re:
The first article:
The cost of proposals like this is that many more cases survive the motion-to-dismiss stage, which is exactly the protective value 230 currently provides.
The article explicitly emphasizes getting cases to discovery. For Big Tech, discovery costs are negligible, but for forums, hobby communities, open-source projects, new entrants, etc., discovery costs can be existential.
There are better approaches.
“Think of the children” the talking heads regurgitate, even as the Trumpstein files have been punted across 2 decades of “leadership”. Sure.
i thought parenting was all about letting your child be exposed to whatever, and if you decide something in that exposure was bad, you get to sue someone.
i love our after-the-fact society.