In this special episode, Mike and Ben reflect on 100 episodes of the podcast, followed by an important announcement: we’re launching a Patreon and making some changes to Ctrl-Alt-Speech!
Starting on May 28th, Patreon members will get early access to extended weekly episodes with in-depth coverage of an extra major story. The free episodes will continue here on this feed, just slightly shorter and released one day later.
You can become a member now at one of two levels: Supporters get early access to the extended episodes, and for a limited time, Founders get that plus the opportunity to send us news stories you think we should cover each week. After the new episodes begin at the end of May, the Founder tier will become the Insider tier with all the same benefits at a slightly higher price, so act now if you don’t want to miss out (you’ll also get bragging rights as a founding member!).
We’re immensely grateful to the incredible audience we’ve found over these past 100 episodes, and this is our way of helping make the podcast sustainable for the next 100!
Back in October, Meta announced that its new Instagram Teen Accounts would feature content moderation “guided by the PG-13 rating.” On its face, this made a certain kind of sense as a communication strategy: parents know what PG-13 means (or at least think they do), and Meta was clearly trying to borrow that cultural familiarity to signal that it was taking teen safety seriously.
The Motion Picture Association, however, was not amused. Within hours of the announcement, MPA Chairman Charles Rivkin fired off a statement. Then came a cease-and-desist letter. Then a Washington Post op-ed whining about the threat to its precious brand. The MPA was very protective of its trademark, and very unhappy that Meta was freeloading off the supposed credibility of its widely mocked rating system.
And now, this week, the two sides have announced a formal resolution in which Meta has agreed to “substantially reduce” its references to PG-13 and include a rather remarkable disclaimer:
“There are lots of differences between social media and movies. We didn’t work with the MPA when updating our content settings, and they’re not rating any content on Instagram, and they’re not endorsing or approving our content settings in any way. Rather, we drew inspiration from the MPA’s public guidelines, which are already familiar to parents. Our content moderation systems are not the same as a movie ratings board, so the experience may not be exactly the same.”
In Meta’s official response, you can practically hear the PR team gritting their teeth:
“We’re pleased to have reached an agreement with the MPA. By taking inspiration from a framework families know, our goal was to help parents better understand our teen content policies. We rigorously reviewed those policies against 13+ movie ratings criteria and parent feedback, updated them, and applied them to Teen Accounts by default. While that’s not changing, we’ve taken the MPA’s feedback on how we talk about that work. We’ll keep working to support parents and provide age-appropriate experiences for teens,” said a Meta spokesperson.
Translation: we’re still doing the same thing, we’re just no longer allowed to call it what we were calling it.
There are several layers of nonsense worth unpacking here. First, there’s the MPA getting all high and mighty about its rating system. Let’s remember how the MPA’s film rating system came into existence in the first place: it was a voluntary self-regulation scheme created in the late 1960s specifically to head off government regulation after the government started making noises about the harm Hollywood was doing to children with the content it platformed. Sound familiar? The studios decided that if they rated their own content, maybe Congress would leave them alone. As the MPA explains in their own boilerplate:
For nearly 60 years, the MPA’s Classification and Rating Administration’s (CARA) voluntary film rating system has helped American parents make informed decisions about what movies their children can watch… CARA does not rate user-generated content. CARA-rated films are professionally produced and reviewed under a human-centered system, while user-generated posts on platforms like Instagram are not subject to the same rating process.
Sure, there’s a trademark issue here, but let’s be real: no one thought Instagram was letting a panel of Hollywood parents rate the latest influencer videos.
Next, the PG-13 analogy never actually made much sense for social media. As we discussed on Ctrl-Alt-Speech back when this whole thing started, the context and scale are just completely different. At the time, I pointed out that a system designed to rate a 90-minute professionally produced film — reviewed in its entirety by a panel of parents — is a wholly different beast than moderating hundreds of millions of short-form posts generated by individuals (and AI) every single day.
So, yes, calling the system “PG-13” was a marketing gimmick, meant to trade on a familiar brand while obscuring how differently social media actually works — but the idea that this somehow dilutes the MPA’s marks is still pretty silly.
Then there’s the rating system’s well-documented arbitrariness. The MPA’s ratings have been criticized for decades for their seemingly incoherent standards. On that same podcast, I noted that the rating system is famous for its selective prudishness — nudity gets you an R rating, but two hours of violence can skate by with a PG-13.
There was a whole documentary about this — This Film Is Not Yet Rated — that exposed just how subjective and inconsistent the whole process was. Meta was effectively borrowing credibility from a system that was itself created as a regulatory dodge, is famously inconsistent, and was designed for an entirely different medium. And the MPA’s response was essentially: “Hey, that’s our famously inconsistent regulatory dodge, and you can’t have it.”
The whole thing was silly. And now it’s been formally resolved with Meta agreeing to stop doing the thing it had already mostly stopped doing back in December. So even the resolution is anticlimactic.
But there’s a more substantive point buried under all this trademark squabbling: the whole approach reflects a flawed assumption that one company can set a universal standard for every teen on the planet.
As I argued on the podcast, the deeper issue is that the whole framework is wrong for the medium. Applying movie-rating logic to hundreds of millions of short-form posts generated by people across wildly different cultural contexts — a kid in rural Kansas, a teenager in Berlin, a twelve-year-old in Lagos — was never going to produce anything coherent. Different kids, different families, different communities have different standards, and no single company should be setting a universal threshold for all of them. The smarter approach is giving parents and users real controls with customizable defaults, rather than having Zuckerberg (or a Hollywood trade association) decide what counts as age-appropriate for every teenager on the planet.
This whole dispute was silly from start to finish.
On January 10th, 2025, Mark Zuckerberg sat down with Joe Rogan and put on quite a performance. He detailed how the Biden administration had apparently pressured Meta to take down content — how officials called and screamed and cursed — and how, going forward, he was a changed man. A champion of free expression, done forever with government demands to remove content. And a whole bunch of people (especially MAGA folks) cheered all this on. Zuckerberg was a protector of free speech against government suppression!
Twenty-four days later, he texted Elon Musk — a senior government official at the time — to volunteer to remove content the government wouldn’t like. Unprompted.
As I wrote at the time, the whole Rogan interview was an exercise in misdirection. The “pressure” Zuck kept describing was the kind of thing the Supreme Court explicitly found, in the Murthy case, was standard-issue government communication — the kind of thing Justice Kagan said happens “literally thousands of times a day in the federal government.” The Court called the lower court’s findings of “censorship” clearly erroneous. And Zuck himself kept admitting, over and over, that Meta’s response to the Biden administration was to tell them no. He said so explicitly:
And basically it just got to this point where we were like, no we’re not going to. We’re not going to take down things that are true. That’s ridiculous…
In other words, the Biden administration asked, Meta said “nah,” and that was that. The Supreme Court agreed this fell well short of coercion. Indeed, the only documented instance of the Biden administration making an actual specific takedown request to a social media platform was to flag an account impersonating one of Biden’s grandchildren. That was it. That was the “massive government censorship operation.”
But Zuck milked it beautifully on the podcast, and Rogan ate it up. The narrative was established: Zuckerberg, defender of free expression, standing tall against the censorial government, vowing to never again let officials dictate what stays up and what comes down on his platforms.
That was January 10th.
On February 3rd, Zuckerberg texted Elon Musk:
Looks like DOGE is making progress. I’ve got our teams on alert to take down content doxxing or threatening the people on your team. Let me know if there’s anything else I can do to help.
So the man who spent three hours performing righteous indignation about government censorship proactively reached out to a senior government official to let him know Meta was already taking action to remove content on behalf of that official’s government operation — including truthful information like the names of public servants working for the federal government.
“Let me know if there’s anything else I can do to help.”
Weeks after denouncing government censorship pressure on the biggest podcast in the world, he volunteered exactly that kind of service, unprompted, to the same government. Just with a different party in power.
The Biden administration’s alleged “coercion” amounted to strongly worded emails that Meta freely ignored, and its only documented specific takedown request was for an account literally pretending to be the president’s grandchild. Zuckerberg’s response to that: three hours on the world’s biggest podcast denouncing government censorship. His response to Musk’s DOGE operation: a proactive late-night text offering to suppress information identifying the federal employees doing the dismantling.
And Zuck’s framing of “doxxing” is doing a lot of work here. The DOGE staffers whose identities were being shared on social media were federal employees exercising enormous government power — canceling grants, accessing sensitive government databases, making decisions that affected millions of Americans. The administration went to great lengths to hide who these people were, precisely because what they were doing was controversial and, in many cases, potentially illegal. Identifying who is wielding government power on your behalf has a name, and that name is accountability, not “doxxing.”
Notably, the Zuckerberg text came the day after Wired started naming DOGE bros. Which is reporting, not doxxing. Doxxing is revealing private info, such as a home address. A federal employee’s name is not private info. It’s just journalism.
Also notice how Zuckerberg bundles “doxxing or threatening” — conflating two very different things. Removing credible threats of violence is something every platform already does; it’s in every terms of service. But by packaging the identification of public servants alongside actual threats, Zuck makes the whole thing sound like a routine trust-and-safety operation rather than what it actually was: volunteering to help the government hide its own employees from public scrutiny.
Compare the two scenarios directly. The Biden administration flagged a fake account impersonating a minor family member of the president — a clear-cut case of impersonation that every platform’s rules already cover. In other cases, they simply asked Facebook to explain its policies for dealing with potential health misinformation in the middle of a pandemic. Zuckerberg’s response, per his Rogan narrative, was to tell them to pound sand, and then go on a podcast to brag about it. Meanwhile, when it came to Musk and DOGE, it looks like Zuck didn’t wait to be asked. He texted Elon Musk at 10 PM on a Monday night to let him know the teams were already mobilized. He closed with “let me know if there’s anything else I can do to help,” which is really more “eager intern” energy than “principled defender of free expression” energy.
It’s also worth noting the broader context of the relationship here. These two were, at least publicly, supposed to be rivals. Remember the whole cage match fiasco? The very public trash-talking? And yet here’s Zuck texting Musk late at night, opening with flattery (“Looks like DOGE is making progress”), offering content suppression as a gift, and then — in literally the next breath in the text exchange — Musk pivots to asking Zuck if he wants to join a bid to buy OpenAI’s intellectual property.
“Are you open to the idea of bidding on the OpenAI IP with me and some others?” Musk asked. Zuck suggested they discuss it live. Just a couple of billionaires doing billionaire things at 10:30 PM after one of them volunteered censorship services to the other’s government operation.
We only know about any of this, by the way, because of Musk’s quixotic lawsuit against OpenAI. These texts were designated as a trial exhibit by OpenAI’s lawyers. Musk’s team is now trying to get them excluded from evidence. The motion seeking to suppress this evidence opens with one of the more entertaining paragraphs you’ll find in a legal filing:
President Trump. Burning Man. Rhino ketamine. These are all inflammatory and highly irrelevant topics that Defendants are trying to improperly make the subject of this litigation. Throughout fact discovery, Defendants have gratuitously probed these topics, and their trial evidence disclosures make clear that they intend to use the same scandalizing tactics at trial. Defendants should not be allowed to exploit Musk’s political involvement, social or recreational choices, or gratuitous details of his personal life at trial. As detailed below, Musk is the subject of daily, often-fabricated media scrutiny.
The filing goes on to argue that the Zuckerberg text exchange has “nothing to do with Musk’s claims” and amounts to an attempt to “stoke negative sentiments toward Musk because of his association with Zuckerberg.” Which is a fun way to describe a text message in which a tech CEO volunteers content moderation favors to a government official. Musk’s lawyers aren’t wrong that it’s embarrassing — just not for the reasons they think.
The hypocrisy, though, is almost beside the point. The entire Rogan performance was designed to establish a narrative: that the Biden administration engaged in some kind of unprecedented censorship campaign, and that Zuckerberg was bravely standing up to it. That narrative was then used to justify Meta’s decision to end its fact-checking programs and loosen its content policies — framed as a return to “free expression” principles.
But the Zuck-Musk texts show what those “free expression” principles actually look like in practice. Zuck is more than happy to suppress speech when he supports the person in the White House. It’s only when he doesn’t like the person in the White House that he gets to pretend he’s a free speech warrior.
This has nothing to do with free expression. It’s about power. Who has it, who Zuckerberg thinks he needs to stay on the right side of, and who he thinks he can safely perform outrage against. The Biden administration was on its way out the door when Zuck did the Rogan interview, making them a perfectly safe target for his “never again” act. Musk was ascendant, running a government operation backed by a president who had directly threatened to throw Zuckerberg in prison.
So the principled free speech stance lasted less than a month before Zuck was back to volunteering content suppression — this time without even being asked, for the people who actually had the power to hurt him. And that’s just the text message that surfaced in an unrelated lawsuit. The rest of the ledger isn’t public.