Here was a fun surprise last night. John Oliver just delivered what might be the most accessible and accurate mainstream takedown of content moderation myths we’ve seen yet. The latest episode of “Last Week Tonight” tackled content moderation head-on, while systematically dismantling Mark Zuckerberg’s increasingly dubious justifications for Meta’s policy changes. In this era where most mainstream coverage of content moderation is a total mess, Oliver somehow manages to both be hilarious and (surprisingly) get basically everything right about this impossibly thorny issue.
It’s worth watching, if only to see someone explain in 30 minutes what we’ve been trying to hammer home for years. (And no, I’m not just saying that because he mentions Masnick’s Impossibility Theorem — though that certainly doesn’t hurt.)
The segment hits on several key points:
First, there’s what you might call the fundamentals of content moderation (or “why the internet isn’t just porn and diet pills 101”):
Section 230 made it possible to moderate content online. Without it, websites would basically have two choices: let everything in (hello, spam!) or shut everything down. Neither is great for business, or users, or… well, anyone really.
Content moderation is an intractable issue. This isn’t just my opinion — it’s mathematics. Every platform that allows user content either moderates or dies trying. There’s no third option. (Unless you count “becoming a wasteland of porn and diet pill ads” as an option, which, fair enough, some do.)
The dirty secret is that social media companies have actually put a fair bit of effort into this problem. They’ve drawn lines, redrawn them, hired thousands of moderators, built AI systems, and… people still hate where those lines end up. Because of course they do. That’s the “impossible” part of my theorem.
Then, he debunks the false claims of political manipulation:
Oliver points out how MAGA Republicans’ insistence that content moderation is some sort of vast left-wing conspiracy targeting conservatives turns out to be complete nonsense.
He also does an excellent job debunking the misleading narrative around the “Hunter Biden laptop” story. As we’ve written, that story has been blown totally out of proportion. The narrative says it was suppressed. It wasn’t. The narrative says the details were damning. They weren’t either. What it was, mainly, was a masterclass in how to turn routine content moderation decisions into political theater. And Oliver shows that clearly.
Then there’s Zuck’s latest performance piece about how the Biden administration supposedly forced him to censor content. Oliver absolutely nails why this claim is ridiculous. (Pro tip: When the government “pressures” you to do something and you just… tell them no and nothing happens in response, that’s not exactly censorship.)
And then the kicker: Oliver highlights (as we have multiple times) that even the very conservative Supreme Court has said these claims are nonsense. Though I suppose when reality conflicts with your preferred narrative, you can always just pretend the Supreme Court doesn’t exist… or that Amy Coney Barrett is too woke.
And here’s where Oliver really sticks the landing, showing where all of this is heading:
Remember all those “simple fixes” politicians keep proposing for Section 230? Oliver explains how every single one would basically hand the government (and specifically, the Musk/Trump administration) a shiny new tool to silence speech they dislike. Because nothing says “free speech” quite like giving the government more power to control online speech, right?
Finally, Oliver exposes the Zuckerberg two-step: Zuck loves to brag about how he stood up to the Biden administration’s requests, but conveniently leaves out the part where he completely rolled over for Trump’s actual threats. (You know it’s bad when Trump himself is bragging about how effectively he bullied Zuck, which, as Oliver points out, shows that it doesn’t take a genius to realize what really happened.)
In the end, what Oliver has given us is basically a greatest hits album of Techdirt’s content moderation coverage from the last few years, except with better production values and more jokes about Mark Zuckerberg’s new look. And the finale? A pitch-perfect “advertisement” for Facebook’s new content moderation philosophy that can be summed up in two words: Fuck It.
At this point, we have built up enough stories on bad YouTube takedowns to fill a small library. Thanks to a combination of spectacularly imperfect automated systems, bad actors abusing the notice-and-takedown system to commit fraud, and a general deference toward the alleged copyright holder over the accused, we have so many stories about this sort of thing that Tim Cushing was able to write the following paragraph in a post tangentially related to all of this thirteen years ago.
Were we to try to duplicate that brilliant bit of writing and update it to include all of the bad takedowns we’ve written about in the intervening thirteen years, we might find ourselves having constructed a fifty-thousand-word paragraph. The point is that the problem of bad takedowns has continued and, if anything, has gotten worse. Add to that Google’s general lack of transparency in many of these cases and you have people reliant on the platform, with channels that appear to survive at the pleasure of a techno-politburo operating behind a silicon curtain.
Take what just happened to the operator of one channel, which was disappeared and then reinstated in days, all without a scintilla of detail as to what the hell happened.
Artemiy Pavlov, the founder of a small but mighty music software brand called Sinevibes, spent more than 15 years building a YouTube channel with all original content to promote his business’ products. Over all those years, he never had any issues with YouTube’s automated content removal system—until Monday, when YouTube, without issuing a single warning, abruptly deleted his entire channel.
“What a ‘nice’ way to start a week!” Pavlov posted on Bluesky. “Our channel on YouTube has been deleted due to ‘spam and deceptive policies.’ Which is the biggest WTF moment in our brand’s history on social platforms. We have only posted demos of our own original products, never anything else….”
There had been no warnings before the channel was shut down. There were no details presented to Pavlov as to what the channel had done to violate the policy described. Google might as well have said: “We’re shutting your channel down because we can.” No chance at corrective action. Just, poof, your channel is gone.
Then, as too often happens with this sort of thing, journalists reached out and, like magic, the channel was reinstated.
Ars saw Pavlov’s post and reached out to YouTube to find out why the channel was targeted for takedown. About three hours later, the channel was suddenly restored. That’s remarkably fast, as YouTube can sometimes take days or weeks to review an appeal. A YouTube spokesperson later confirmed that the Sinevibes channel was reinstated due to the regular appeals process, indicating perhaps that YouTube could see that Sinevibes’ removal was an obvious mistake.
In the email sent to Pavlov notifying him of his channel ban, YouTube admits that it sometimes makes mistakes, while apologizing for the “very upsetting news.” Similarly, in the email confirming his channel had been reinstated, YouTube would only explain that in trying to make YouTube a safe space, “sometimes we make mistakes trying to get it right. We’re sorry for any frustration our mistake caused you.”
That’s simply not good enough. There is a lot of power in the hands of a platform like YouTube, particularly as it relates to small businesses that incorporate the platform into their corporate strategies. Sinevibes is one such company and Pavlov is already pondering decoupling his business from any kind of reliance on YouTube, given the site’s recent demonstration of its own unreliability.
Will Pavlov ever know what actually happened here? Unlikely, I would say, when even outfits like Ars Technica can’t get a straight answer.
YouTube’s spokesperson, Boot Bullwinkle, did not respond when Ars asked if it was possible to know what content triggered the mistaken channel ban or confirm that Sinevibes had no strikes on record prior to the ban. Bullwinkle would only confirm that YouTube considered this case resolved, then stopped responding.
This isn’t a freaking witch-hunt, people. Were YouTube to disclose what actually happened, it would give the rest of the platform’s community some confidence that a problem had been identified and that work would be done to keep it from recurring. It would also give Pavlov and everyone else an opportunity to understand what occurred and, potentially, take actions that would protect against it happening again. And if that sounds like I’m just spit-balling in the dark, well, what the fuck other choice do I have, given Google’s obfuscation here?
Transparency and good communication go a long way. The silicon curtain approach, on the other hand, will only breed distrust, confusion, and anger.
In what looks increasingly like a protection racket, Meta has agreed to pay Donald Trump $25 million to settle a lawsuit that multiple courts had already indicated was completely meritless. The settlement, which directs $22 million toward Trump’s presidential library, comes after a dinner at Mar-a-Lago where Trump reportedly told Zuckerberg this needed to be resolved before the Meta CEO could be “brought into the tent.”
And this was all being negotiated at the same time Zuckerberg made a public appearance on Joe Rogan to complain about how unfair it was that Joe Biden was mean to him. At the very same time that Trump was literally demanding money from him.
The story behind this shakedown begins four years ago, when major internet platforms banned Trump following January 6th, citing clear violations of their policies against inciting violence. Most platforms eventually reinstated him, with Meta bringing him back in 2023 as his GOP nomination became inevitable.
Rather than accept that private companies have every right to moderate their platforms, Trump responded in 2021 with what can only be described as legal performance art: suing Meta (and Mark Zuckerberg), Twitter (and Jack Dorsey), and Google (and Sundar Pichai), claiming that their moderation decisions violated the First Amendment. As we pointed out at the time, everything about the case was backwards. The First Amendment only restricts the government (which at the time of the supposed violation was run by Trump himself), not private companies.
In the lawsuit, Trump tried to blame the Biden administration (which did not exist at the time of the banning!) for stripping his rights, even though it was not the government at the time and had nothing to do with the decisions of the private companies.
The lawsuits did not go well. After being transferred out of Florida (where Trump filed them) to California, the case against Twitter/Dorsey moved forward the fastest, and the judge absolutely trashed it as frivolous.
Plaintiffs’ main claim is that defendants have “censor[ed]” plaintiffs’ Twitter accounts in violation of their right to free speech under the First Amendment to the United States Constitution… Plaintiffs are not starting from a position of strength. Twitter is a private company, and “the First Amendment applies only to governmental abridgements of speech, and not to alleged abridgements by private companies.”
That case was appealed to the Ninth Circuit, which held oral arguments (which did not go well for Trump). But before the Ninth Circuit could rule, there was that flurry of internet content moderation cases that went to the Supreme Court last year (including Murthy and Moody), so the Ninth Circuit decided to wait until those cases were ruled on, and then asked the parties for additional briefing in light of those rulings.
As for the two other cases, against Google and Meta, those were put on hold while the Twitter appeal played out on the (reasonable) assumption that how the Ninth Circuit ruled would impact those cases.
Then came an interesting development that initially flew under the radar: just two weeks after the election, ExTwitter quietly filed a notice with the appeals court, suggesting they were about to reach a settlement.
We represent the appellants and appellees in the above-captioned appeal, in which the Court held argument on October 4, 2023. In accordance with Ninth Circuit rules, we write to advise the Court that the parties are actively discussing a potential settlement. See Ninth Cir. R. p. xix. In light of those discussions, we respectfully suggest that the Court withdraw submission and stay this appeal.
Because, of course, in the interim between the lawsuit being filed and November, Elon Musk had purchased Twitter, renamed it X, and become Donald Trump’s superfan and biggest political backer. So it must have been awkward that the two of them were literally suing each other (and Musk was obviously going to win if the Ninth were allowed to decide).
Now the Wall Street Journal is reporting that when Zuckerberg flew to Mar-a-Lago to have dinner with Trump right after the election, Trump (who just months earlier had threatened to put Zuck in prison for life) apparently brought up the case unprompted during the dinner and said that for Zuck to make amends and be “brought into the tent,” he had to pay up:
Serious talks about the suit, which had seen little activity since the fall of 2023, began after Meta Chief Executive Mark Zuckerberg flew to Trump’s Mar-a-Lago club in Florida to dine with him in November, according to the people familiar with the discussions. The dinner was one of several efforts by Zuckerberg and Meta to soften the relationship with Trump and the incoming administration. Meta also donated $1 million to Trump’s inaugural fund. Last year, Trump warned that Zuckerberg could go to prison if he tried to rig the election against him.
Toward the end of the November dinner, Trump raised the matter of the lawsuit, the people said. The president signaled that the litigation had to be resolved before Zuckerberg could be “brought into the tent,” one of the people said.
Weeks later, in early January, Zuckerberg returned to Mar-a-Lago for a full day of mediation. Trump was present for part of the session, though he stepped out at one point to be sentenced—appearing virtually—for covering up hush money paid to a porn star, one of the people said. He also golfed, reappearing in golf clothes and talking about the round he had just played, the person said.
Let’s call this what it is: a protection racket that would make Tony Soprano proud. The playbook is classic: file a meritless lawsuit, make veiled threats (like suggesting prison time), then offer “protection” in exchange for payment. The only difference is that instead of a local restaurant owner paying to keep their windows intact, we’re watching a tech giant hand over $25 million to avoid future “problems.” The case was legally DOA – but that was never the point.
And Zuck is now using Meta’s money to fund what is effectively a $25 million gift to Trump.
President Trump has signed settlement papers that are expected to require Meta Platforms to pay roughly $25 million to resolve a 2021 lawsuit Trump brought after the company suspended his accounts following the attacks on the U.S. Capitol that year, according to people familiar with the agreement.
Of that, $22 million will go toward a fund for Trump’s presidential library, with the rest going to legal fees and the other plaintiffs who signed onto the case. Meta won’t admit wrongdoing, the people said. Trump signed the settlement agreement Wednesday in the Oval Office.
Some might draw parallels to ABC’s settlement in the Stephanopoulos case, but that comparison misses a key distinction: ABC faced at least plausible arguments about actual malice standards in defamation law. While it does look like ABC caved to a blatant threat over what was likely a winnable case, that case still would have been costly to litigate. Here, we’re talking about a case so devoid of legal merit that even Trump-appointed judges would have struggled to keep straight faces.
The cases against Meta, Twitter, and Google were losers from the start, and the courts seemed pretty clear on that. But both Meta and soon (if not already) ExTwitter will “settle” the cases, funneling many millions of dollars directly to Trump.
It’s hard to see this as anything other than a pathway to corruption. Presidents can just sue media properties for not handling things the way they want, and then the companies all “settle” the cases, funneling millions of dollars to the President.
This settlement doesn’t just erode trust — it weaponizes distrust. By framing platform moderation as political favors rather than policy decisions, it undermines the very concept of content governance. The real free speech threat here isn’t the initial ban, but the creation of a system where access to digital public squares depends on paying political tribute.
The implications here are staggering. Even if you charitably view this as mere appearance of corruption rather than the real thing, we’re watching the creation of a dangerous new playbook: Presidents can now use frivolous lawsuits as leverage to extract millions from tech companies, while those companies can effectively purchase political protection through “settlements.” The next time you hear Silicon Valley leaders talk about defending democratic institutions, remember that Meta just showed exactly how much those principles are worth: $25 million, paid directly to a presidential library fund.
And for other tech companies watching this unfold? The message is clear: better start saving up for your own “settlement” fund. The protection racket is going digital.
I know that Mark Zuckerberg no longer likes fact-checking, but it’s not going to stop me from continuing to fact-check him. I’m going to rate his claimed plan of moving trust & safety and content moderation teams away from California to Texas as not just an obnoxiously stupid political suck-up, but also something that increasingly appears to be just a flat-out lie.
As you may recall, as part of Mark Zuckerberg’s decision to do away with fact-checking, enable more hatred, and just generally suck up to the Trump administration, there was the weird promise that because California content moderation and trust & safety teams were too “biased,” they would be moved to Texas.
Texas is, apparently, famous for its unbiased, neutral residents, as compared to California, where it is constitutionally impossible to be unbiased. Or something.
Former Facebook employees say, however, that the move-to-Texas announcement rings hollow. That’s because Meta already has major content moderation and trust and safety operations in the state. They say the move is nothing more than a blatant appeal to Donald Trump. Facebook’s former head of content standards said he helped set up those teams in Texas more than a decade ago.
“They made a lot of hay of: ‘Oh, we’re worried about bias, we’re moving all these content moderation teams to other places,’” Dave Willner said during a Lawfare panel last week. “As far as I’ve been able to figure out, that is mostly fake.”
Three other former Facebook employees who worked on the trust and safety teams in Texas told the Guardian the same. One said many people across Meta’s various divisions did trust and safety work in the company’s Austin offices. Another said that many content moderators, including those allocated to the trust and safety teams, have been in Austin for a long time.
So many of the people were already in Texas. What about the folks in California who were told they’d have to move? According to Wired, most have been told the mandate doesn’t actually apply to them.
Last Thursday during a town hall call for Meta employees working under Guy Rosen, the company’s chief information security officer, executives said that no one in Rosen’s organization would have to move to Texas, according to two people in attendance. This exempts from relocation employees who work on Meta’s safety, operations, and integrity teams, which collectively help enforce the company’s content policies.
The changes also do not affect a portion of Meta’s US-based content policy team, which works under chief global affairs officer Joel Kaplan, because many members are already located outside of California, including in Washington, DC, New York City, and Austin, Texas, the employees say. That includes key decisionmakers such as Neil Potts, vice president of public policy. Many of the company’s content moderators are contractors based out of hubs beyond California such as San Antonio, Texas.
So it sure sounds like the big announcement of how content moderation and trust & safety were moving to Texas was a load of garbage. Many of those people are already there.
The whole thing, as expected, was about making a fake public concession to Donald Trump in an attempt to curry political favor.
While Zuckerberg’s motivations here seem transparently political, the broader implications remain concerning. It’s especially worrying given how a ton of people are going around falsely claiming Zuckerberg caved to pressure from Biden, while everyone seems to be ignoring the much more blatant act of him actually caving to Trump.
Moving critical trust & safety functions to appease partisan interests sets a troubling precedent. It’s a short-sighted move that prioritizes political expediency over principled policymaking. But that’s the world Mark Zuckerberg has chosen to embrace.
On December 14, James Harr, the owner of an online store called ComradeWorkwear, announced on social media that he planned to sell a deck of “Most Wanted CEO” playing cards, satirizing the infamous “Most-wanted Iraqi playing cards” introduced by the U.S. Defense Intelligence Agency in 2003. Per the ComradeWorkwear website, the Most Wanted CEO cards would offer “a critique of the capitalist machine that sacrifices people and planet for profit,” and “Unmask the oligarchs, CEOs, and profiteers who rule our world… From real estate moguls to weapons manufacturers.”
But within a day of posting his plans for the card deck to his combined 100,000 followers on Instagram and TikTok, the New York Post ran a front page story on Harr, calling the cards “disturbing.” Less than 5 hours later, officers from the New York City Police Department came to Harr’s door to interview him. They gave no indication he had done anything illegal or would receive any further scrutiny, but the next day the New York police commissioner held the New York Post story up during a press conference after announcing charges against Luigi Mangione, the alleged assassin of UnitedHealth Group CEO Brian Thompson. Shortly thereafter, platforms from TikTok to Shopify disabled both the company’s accounts and Harr’s personal accounts, simply because he used the moment to highlight what he saw as the harms that large corporations and their CEOs cause.
Harr was not alone. After the assassination, thousands of people took to social media to express their negative experiences with the healthcare industry, to speculate about who was behind the murder, and to show their sympathy for either the victim or the shooter—if social media platforms allowed them to do so. Many users reported having their accounts banned and content removed after sharing comments about Luigi Mangione, Thompson’s alleged assassin. TikTok, for example, reportedly removed comments that simply said, “Free Luigi.” Even seemingly benign content, such as a post about Mangione’s astrological sign or a video montage of him set to music, was deleted from Threads, according to users.
The Most Wanted CEO playing cards did not reference Mangione, and the cards—which have not been released—would not include personal information about any CEO. In his initial posts about the cards, Harr said he planned to include QR codes with more information about each company and, in his view, what dangers the companies present. Each suit would represent a different industry, and the back of each card would include a generic shooting-range style silhouette. As Harr put it in his now-removed video, the cards would include “the person, what they’re a part of, and a QR code that goes to dedicated pages that explain why they’re evil. So you could be like, ‘Why is the CEO of Walmart evil? Why is the CEO of Northrop Grumman evil?’”
A design for the Most Wanted CEO playing cards
Many have riffed on the military’s tradition of using playing cards to help troops learn about the enemy. You can currently find “Gaza’s Most Wanted” playing cards on Instagram, purportedly depicting “leaders and commanders of various groups such as the IRGC, Hezbollah, Hamas, Houthis, and numerous leaders within Iran-backed militias.” A Shopify store selling “Covid’s Most Wanted” playing cards, displaying figures like Bill Gates and Anthony Fauci, and including QR codes linking to a website “where all the crimes and evidence are listed,” is available as of this writing. Hero Decks, which sells novelty playing cards generally showing sports figures, even produced a deck of “Wall Street Most Wanted” cards in 2003 (popular enough to have a second edition).
As we’ve said many times, content moderation at scale, whether human or automated, is impossible to do perfectly and nearly impossible to do well. Companies often get it wrong and remove content or whole accounts that those affected by the content would agree do not violate the platform’s terms of service or community guidelines. Conversely, they allow speech that could arguably be seen to violate those terms and guidelines. That has been especially true for speech related to divisive topics and during heated national discussions. These mistakes often remove important voices, perspectives, and context, regularly impacting not just everyday users but journalists, human rights defenders, artists, sex worker advocacy groups, LGBTQ+ advocates, pro-Palestinian activists, and political groups. In some instances, this even harms people’s livelihoods.
Instagram disabled the ComradeWorkwear account for “not following community standards,” with no further information provided. Harr’s personal account was also banned. Meta has a policy against the “glorification” of dangerous organizations and people, which it defines as “legitimizing or defending the violent or hateful acts of a designated entity by claiming that those acts have a moral, political, logical or other justification that makes them acceptable or reasonable.” Meta’s Oversight Board has overturned multiple moderation decisions by the company regarding its application of this policy. While Harr had posted to Instagram that “the CEO must die” after Thompson’s assassination, he included an explanation that, “When we say the ceo must die, we mean the structure of capitalism must be broken.” (Compare this to a series of Instagram story posts from musician Ethel Cain, whose account is still available, which used the hashtag #KillMoreCEOs, for one of many examples of how moderation affects some people and not others.)
TikTok informed Harr that he had violated the platform’s community guidelines, with no additional information. The platform has a policy against “promoting (including any praise, celebration, or sharing of manifestos) or providing material support” to violent extremists or people who cause serial or mass violence. TikTok gave Harr no opportunity for appeal, and continued to remove additional accounts Harr created only to update his followers on his life. TikTok did not point to any specific piece of content that violated its guidelines.
On December 20, PayPal informed Harr it could no longer continue processing payments for ComradeWorkwear, with no information about why. Shopify informed Harr that his store was selling “offensive content,” and his Shopify and Apple Pay accounts would both be disabled. In a follow-up email, Shopify told Harr the decision to close his account “was made by our banking partners who power the payment gateway.”
Harr’s situation is not unique. Financial and social media platforms have an enormous amount of control over our online expression, and we’ve long been critical of their over-moderation, uneven enforcement, lack of transparency, and failure to offer reasonable appeals. This is why EFF co-created The Santa Clara Principles on transparency and accountability in content moderation, along with a broad coalition of organizations, advocates, and academic experts. These platforms have the resources to set the standard for content moderation, but clearly don’t apply their moderation evenly, and in many instances, aren’t even doing the basics—like offering clear notices and opportunities for appeal.
Harr was one of many who expressed frustration online with the growing power of corporations. These voices shouldn’t be silenced into submission simply for drawing attention to the influence those corporations wield. Indeed, these takedowns are exactly the kinds of actions Harr intended to highlight. If the Most Wanted CEO deck is ever released, it shouldn’t be a surprise for the CEOs of these platforms to find themselves in the lineup.
If you only remember two things about the government pressure campaign to influence Mark Zuckerberg’s content moderation decisions, make it these: Donald Trump directly threatened to throw Zuck in prison for the rest of his life, and just a couple months ago FCC Commissioner (soon to be FCC chair) Brendan Carr threatened Meta that if it kept on fact-checking stories in a way Carr didn’t like, he would try to remove Meta’s Section 230 protections in response.
Two months later — what do you know? — Zuckerberg ended all fact-checking on Meta. But when he went on Joe Rogan, rather than blaming those actual obvious threats, he instead blamed the Biden administration, because some admin officials sent angry emails… which Zuck repeatedly admits had zero impact on Meta’s actual policies.
Indeed, this very fact check may be a good example of what I talked about regarding Zuckerberg’s decision to end fact-checking: it’s not as straightforward as some people think. Layers of bullshit may be presented misleadingly around a kernel of truth, and peeling back those layers is important for understanding.
This is, in fact, my second attempt at writing this article. I killed the first version soon after it hit 10,000 words and I realized no one was going to read all that. So this is a more simplified version of what happened, which can be summarized as: the actual threats came from the GOP, and Zuckerberg quickly caved to them. The supposed threats from the Biden admin were overhyped, exaggerated, and misrepresented, and Zuck directly admits he was able to easily refuse those requests.
All the rest is noise.
I know that people who dislike Rogan dismiss him out of hand, but I actually think he’s often a good interviewer for certain kinds of conversations. He’s willing to speak to all sorts of people and even ask dumb questions, taking on the role of listeners/viewers. And that’s actually really useful (and enlightening) in certain circumstances.
Where it goes off the rails, as it does here, is when (1) nuance and detail matter, and (2) the person he is interviewing has an agenda to push, with a message he knows Rogan will eat up, knowing Rogan does not understand enough to pick apart what really happened.
This is not the first time that Zuckerberg has gone on Rogan and launched a narrative by saying things that are technically true in a manner that is misleading, likely knowing that Rogan and his fans wouldn’t understand the nuances, and would run with a misleading story.
Two and a half years ago, he went on Joe Rogan and said that the FBI had warned the company about the potential for hack and leak efforts put forth by the Russians, which Rogan and a whole bunch of people, including the mainstream media, falsely interpreted as “the FBI told us to block the Hunter Biden laptop story.”
Except that’s not what he said. He was asked about the NY Post story (which Facebook never actually blocked; it only — briefly — blocked it from “trending”), and Zuckerberg very carefully worded his answer to say something that was already known, but which people not listening carefully might think revealed something new:
The background here is that the FBI came to us – some folks on our team – and was like ‘hey, just so you know, you should be on high alert. We thought there was a lot of Russian propaganda in the 2016 election, we have it on notice that basically there’s about to be some kind of dump that’s similar to that’.
But the fact that the FBI had sent out a general warning to all of social media to be on the lookout for disinfo campaigns like that was widely known and reported on way earlier. The FBI did not comment specifically on the Hunter Biden laptop story, nor did they tell Facebook (or anyone) to take anything down.
Still, that turned into a big thing, and a bunch of folks thought it was a big revelation. In part because when Zuck told that story to Rogan, Rogan acted like it was a big reveal, because Rogan doesn’t know the background or the details or the fact that this had been widely reported. He also doesn’t realize there’s a huge difference between a general “be on the lookout” warning and a “hey, take this down!” demand, with the former being standard and the latter being likely unconstitutional.
In other words, Zuck has a history of using Rogan’s platform to spread dubious narratives, knowing that Rogan lacks the background knowledge to push back in the moment.
After that happened, I was at least open to the idea that Zuck just spoke in generalities and didn’t realize how Rogan and his audience would take what he said and run with it, believing a very misleading story. But now that he’s done it again, it seems quite likely that this is deliberate. When Zuckerberg wants to get a misleading story out to a MAGA-friendly audience, he can reliably dupe Rogan’s listeners.
Indeed, this interview was, in many ways, similar to what happened two years ago. He was relating things that were already widely known in a misleading way, and Rogan was reacting like something big was being revealed. And then the media runs with it because they don’t know the details and nuances either.
This time, Zuckerberg talks about the supposed pressure from the Biden administration as a reason for his problematic announcement last week:
Rogan: What do you think started the pathway towards increasing censorship? Because clearly we were going in that direction for the last few years. It seemed like uh we really found out about it when Elon bought Twitter and we got the Twitter Files and when you came on here and when you were explaining the relationship with FBI where they were trying to get you to take down certain things that were true and real and certain things they tried to get you to limit the exposure to them. So it’s these kind of conversations. Like when did all that start?
So first off, note the framing of this question. It’s not accurate at all. Social media websites have always had content moderation/content policy efforts. Indeed, Facebook was historically way more aggressive than most. If you don’t, your platform fills up with spam, scams, abuse, and porn.
That’s just how it works. And, indeed, Facebook in the early days was aggressively paternalistic about what was — and what was not — allowed on its site. Remember its famously prudish “no nudity” policy? Hell, there was an entire Radiolab podcast about how difficult that was to implement in practice.
So, first, calling it “censorship” is misleading, because it’s just how you handle violations of your rules, which is why moderation is always a better term for it. Rogan has never invited me on his podcast. Is that censorship? Of course not. He has rules (and standards!) for who he platforms. So does Meta. Rejecting some speech is not “censorship”; it’s just enforcing your own rules on your own private property.
Second, Rogan himself is already misrepresenting what Zuckerberg told him two years ago about the FBI. Zuck did not say that the FBI was trying to get Facebook to “take down certain things that were true and real” and “limit the exposure to them.” They only said to be on the lookout for potential attempts by foreign governments to interfere with an election, leaving it up to the platforms to decide how to handle that.
On top of that, the idea that the simple fact of how content moderation works only became public with the Twitter Files is false. The Twitter Files revealed… a whole bunch of nothing interesting that idiots have misinterpreted badly. Indeed we know this because (1) we paid attention, and (2) Elon’s own legal team admitted in court that what people were misleadingly claiming about the Twitter Files wasn’t what was actually said.
From there, Zuck starts his misleading but technically accurate-ish response:
Zuck: Yeah, well, look, I think going back to the beginning, or like I was saying, I think you start one of these if you care about giving people a voice, you know? I wasn’t too deep on our content policies for like the first 10 years of the company. It was just kind of well known across the company that, um, we were trying to give people the ability to share as much as possible.
And, issues would come up, practical issues, right? So if someone’s getting bullied, for example, we deal with that, right? We put in place systems to fight bullying, you know? If someone is saying hey um you know someone’s pirating copyrighted content on on the service, it’s like okay we’ll build controls to make it so we’ll find IP protected content.
But it was really in the last 10 years that people started pushing for like ideological-based censorship and I think it was two main events that really triggered this. In 2016 there was the election of President Trump, also coincided with basically Brexit in the EU and sort of the fragmentation of the EU. And then you know in 2020 there was COVID. And I think that those were basically these two events where for the first time we just faced this massive massive institutional pressure to basically start censoring content on ideological grounds….
So this part is fundamentally, sorta, kinda accurate, which sets up the kernel of truth around which much bullshit will be built. It’s true that Zuck didn’t pay much attention to content policies on the site early on, but it’s nonsense that it was about “giving people a voice.” That’s Zuck retconning the history of Facebook. Remember, they only added things like the Newsfeed (which was more about letting people talk) when Twitter came about and Zuck freaked out that Twitter would destroy Facebook.
Second, he then admits that the company has always moderated, though he’s wrong to portray it as purely reactive. From quite early on (as mentioned above) the company had decently strict content policies regarding how the site was moderated. And, really, much of that was based around wanting to make sure that users had a good experience on the site. So yes, things like bullying were blocked.
But what counts as bullying is a very subjective thing, and so much of content moderation is just teams trying to tell you to stop being such a jackass.
It is true that there was pressure on Facebook to take moderation challenges more seriously starting in 2016, and (perhaps?!?) if he had actually spent more time understanding trust & safety at that time, he would have a better understanding of the issues. But he didn’t, which meant that he made a mess of things, and then tried to “fix it” with weird programs like the Oversight Board.
But it also meant that he’s never, ever been good at explaining the inherent tradeoffs in trust & safety, and how some people are always going to dislike the choices you make. A good leader of a social network understands and can explain those tradeoffs. But that’s not Zuck.
Also, and this is important, Zuckerberg’s claims about pressure to moderate on “ideological” grounds are incredibly misleading. Yes, I’m sure some people were putting pressure on him around that, but it was far from mainstream and easy to ignore. People were asking him to stop potentially dangerous misinformation that was causing harm. For example, the genocide in Myanmar. Or information around COVID that was potentially legitimately dangerous.
In other words, it was really (like so much of trust & safety) an extension of the “no bullying” rule. The same was true of protecting marginalized groups like LGBTQ+ users or on issues like Black Lives Matter. The demands from users (not the government in those cases) were about protecting more marginalized communities from harassment and bullying.
I’m going to jump ahead because Zuck and Rogan say a lot of stupid shit here, but this article will get too long if I go through all of it. So let’s jump forward a couple of minutes, to where Zuckerberg really flubs his First Amendment 101 in embarrassing ways while trying to describe how Meta chose to handle moderation of COVID misinformation.
Zuckerberg: Covid was the other big one. Where that was also very tricky because you know at the beginning it was, you know, it’s like a legitimate “public health crisis,” you know, in the beginning.
And it’s… even people who are like the most ardent First Amendment defenders… that the Supreme Court has this clear precedent, that’s like all right you can’t yell fire in a crowded theater. There are times when if there’s an emergency your ability to speak can temporarily be curtailed in order to get an emergency under control.
So I was sympathetic to that at the beginning of Covid, it seemed like, okay you have this virus, seems like it’s killing a lot of people. I don’t know like we didn’t know at the time how dangerous it was going to be. So, at the beginning, it kind of seemed like okay we should give a little bit of deference to the government and the health authorities on how we should play this.
But when it went from, you know, two weeks to flatten the curve to… in like in the beginning it was like okay there aren’t enough masks, masks aren’t that important to, then, it’s like oh no you have to wear a mask. And you know all the, like everything, was shifting around. It just became very difficult to kind of follow.
In trying to defend Meta’s approach to COVID misinformation, Zuck manages to mangle First Amendment law in a way that’s both legally inaccurate and irrelevant to the actual issues at play.
There’s so much to unpack here. First off, he totally should have someone explain the First Amendment to him. He not only got it wrong, he even got it wrong in a way that is different than how most people get it wrong. We’ve covered the whole “fire in a crowded theater” thing so many times here on Techdirt, so we’ll do the abbreviated version:
It’s not a “clear precedent.” It’s not a precedent at all. It was an offhand comment (in legal terms: dicta, so not precedential) in a case about jailing someone for handing out anti-war literature (something most people today would recognize as pretty clearly a First Amendment problem).
The Justice who said it, Oliver Wendell Holmes, appeared to regret it almost immediately, and in a similar case very shortly thereafter changed his tune and became a much more “ardent First Amendment defender.”
Most courts and lawyers (though there are a few holdouts) insist that whatever precedent there was in Schenck (which again, did not include that line) was effectively overruled a half century later in a different case that rejected the test in Schenck and moved to the “incitement to imminent lawless action” test.
So, quoting “fire in a crowded theater” these days is generally used as a (very bad, misguided) defense of saying “well, there’s some speech that’s so bad it’s obviously unprotected,” but without being able to explain why this particular speech is unprotected.
But Zuck isn’t even using it in that way. He seems to have missed that the whole point of the Holmes dicta (again, not precedent) was to talk about falsely yelling fire. Zuck implies that the (not actual) test is “can we restrict speech if there’s an actual fire, an actual emergency.” And, that’s also wrong.
But, the wrongness goes one layer deeper as well, because the First Amendment only applies to restrictions the government can put on speakers, not what a private entity like Meta (or the Joe Rogan Experience) can do on their own private property.
And then, even once you get past that, Zuck isn’t wrong that there was a lot of confusion about COVID and health in the early days, including lots of false information that came under the imprimatur of “official” sources, but… dude, Meta deliberately made the decision to effectively let the CDC decide what was acceptable even after many people (us included!) pointed out how stupid it was for platforms to outsource their decisions on “COVID misinfo” to government agencies which almost certainly would get stuff wrong as the science was still unclear.
But it wasn’t the White House that pressured Zuck into following the CDC position. Meta (alone among the major tech platforms) publicly declared early in the pandemic (for what it’s worth, when Trump was still President) that its approach to handling COVID misinformation would be based on “guidance” from official authorities like the CDC and WHO. Many of us felt that this was actually Meta abdicating its role and giving way too much power to government entities in the midst of an unclear scientific environment.
But for him to now blame the Biden admin is just blatantly ahistorical.
And from there, it gets worse:
Zuckerberg: This really hit… the most extreme, I’d say, during it was during the Biden Administration, when they were trying to roll out um the vaccine program and… Now I’m generally, like, pretty pro rolling out vaccines. I think on balance the vaccines are more positive than negative.
But I think that while they’re trying to push that program, they also tried to censor anyone who was basically arguing against it. And they pushed us super hard to take down things that were honestly were true. Right, I mean they they basically pushed us and and said, you know, anything that says that vaccines might have side effects, you basically need to take down.
And I was just like, well we’re not going to do that. Like, we’re clearly not going to do that.
Rogan then jumps in here to ask “who is they” but this is where he’s showing his own ignorance. The key point is the last line. Zuckerberg says he told them “we’re not going to do that… we’re clearly not going to do that.”
That’s it. That’s the ballgame.
The case law on this issue is clear: the government is allowed to try to persuade companies to do something. That’s known as using the bully pulpit. What it cannot do is coerce a company into taking action on speech. And if Zuckerberg and Meta felt totally comfortable saying “we’re not going to do that, we’re clearly not going to do that,” then end of story. They didn’t feel coerced.
Indeed, this is partly what the Murthy case last year was about. And during oral arguments, Justices Kavanaugh and Kagan (both of whom had been lawyers in the White House in previous lives) completely laughed off the idea that White House officials couldn’t call up media entities and try to convince them to do stuff, even with mean language.
Here was Justice Kavanaugh:
JUSTICE KAVANAUGH: Do you think on the anger point, I guess I had assumed, thought, experienced government press people throughout the federal government who regularly call up the media and — and berate them. Is that — I mean, is that not —
MR. FLETCHER: I — I — I don’t want
JUSTICE KAVANAUGH: — your understanding? You said the anger here was unusual. I guess I wasn’t —
MR. FLETCHER: So that —
JUSTICE KAVANAUGH: — wasn’t entirely clear on that from my own experience.
Later on, he said more:
JUSTICE KAVANAUGH: You’re speaking on behalf of the United States. Again, my experience is the United States, in all its manifestations, has regular communications with the media to talk about things they don’t like or don’t want to see or are complaining about factual inaccuracies.
Justice Kagan felt similarly:
JUSTICE KAGAN: I mean, can I just understand because it seems like an extremely expansive argument, I must say, encouraging people basically to suppress their own speech. So, like Justice Kavanaugh, I’ve had some experience encouraging press to suppress their own speech.
You just wrote about editorial. Here are the five reasons you shouldn’t write another one. You just wrote a story that’s filled with factual errors. Here are the 10 reasons why you shouldn’t do that again.
I mean, this happens literally thousands of times a day in the federal government.
“Literally thousands of times a day in the federal government.” What happened was not even that interesting or unique. The only issue, and the only time it creates a potential First Amendment problem, is if there is coercion.
This is why the Supreme Court rejected the argument in the Murthy case that this kind of activity was coercive and violated the First Amendment. The opinion, written by Justice Coney Barrett, makes it pretty clear that the White House didn’t even apply that much pressure towards Facebook on COVID info beyond some public statements, and instead most of the communication was Facebook sending info to the government (both admin officials and the CDC) and asking for feedback.
The Supreme Court notes that Facebook changed its policies to restrict more COVID info before it had even spoken to people in the White House.
In fact, the platforms, acting independently, had strengthened their pre-existing content moderation policies before the Government defendants got involved. For instance, Facebook announced an expansion of its COVID–19 misinformation policies in early February 2021, before White House officials began communicating with the platform. And the platforms continued to exercise their independent judgment even after communications with the defendants began. For example, on several occasions, various platforms explained that White House officials had flagged content that did not violate company policy. Moreover, the platforms did not speak only with the defendants about content moderation; they also regularly consulted with outside experts.
All of this info is public. It was in the court case. It’s in the Supreme Court transcript of oral arguments. It’s in the ruling in the Supreme Court.
Yet Rogan acts like this is some giant bombshell story. And Zuckerberg just lets him run with it. And then, the media ran with it as well, even though it’s a total non-story. As Kagan said, attempts to persuade the media happen literally thousands of times a day.
It only violates the First Amendment if they move over into coercion, threatening retaliation for not listening. And the fact that Meta felt free to say no and didn’t change its policies makes it pretty clear this wasn’t coercion.
But, Zuckerberg now knows he’s got Rogan caught on his line and starts to play it up. Rogan first asks who was “telling you to take down things” and Zuckerberg then admits that he wasn’t actually involved in any of this:
Rogan: Who is they? Who’s telling you to take down things that talk about vaccine side effects?
Zuckerberg: It was people in the um in the Biden Administration I think it was um… you know I wasn’t involved in those conversations directly…
Ah, so you’re just relaying the information that was publicly available all along and which we already know about.
Rogan then does a pretty good job of basically explaining my Impossibility Theorem (he doesn’t call it that, of course), noting the sheer scale of Meta’s properties, how most people can’t even comprehend that scale, and that mistakes are obviously going to happen. Honestly, it’s one of the better “mainstream” explanations of the impossibility of content moderation at scale.
Rogan: You’re moderating at scale that’s beyond the imagination. The number of human beings you’re moderating is fucking insane. Like what is… what’s Facebook… what how many people use it on a daily basis? Forget about how many overall. Like how many people use it regularly?
Zuck: It’s 3.2 billion people use one of our services every day
Rogan: (rolls around) That’s…!
Zuck: Yeah, it’s, no, it’s wild
Rogan: That’s more than a third of the planet! That’s so crazy and it’s almost half of Earth!
Zuck: Well on a monthly basis it is probably.
Rogan: UGGH!
But just I want I want to say that though for there’s a lot of like hypercritical people that are conspiracy theorists and think that everybody is a part of some cabal to control them. I want you to understand that, whether it’s YouTube or all these and whatever place that you think is doing something that’s awful, it’s good that you speak because this is how things get changed and this is how people find out that people are upset about content moderation and and censorship.
But moderating at scale is insane. It’s insane. What we were talking the other day about the number of videos that go up every hour on YouTube and it’s banana. It’s bananas. That’s like to try to get a human being that is reasonable, logical and objective, that’s going to analyze every video? It’s virtually impossible. It’s not possible. So you got to use a bunch of tools. You got to get a bunch of things wrong.
And you have also people reporting things. And how how much is that going to affect things there. You could have mass reporting because you have bad actors. You have some corporation that decides we’re going to attack this video cuz it’s bad for us. Get it taken down.
There’s so much going on. I just want to put that in people’s heads before we go on. Like understand the kind of numbers that we’re talking about here.
Like… that’s a decent enough explanation of the impossibility of moderating content at scale. If Zuckerberg wanted to lean into that, and point out that this impossibility and the tradeoffs it creates makes all of this a subjective guessing game, where mistakes often get made and everyone has opinions, that would have been interesting.
But he’s tossed out the line where he wants to blame the Biden administration (even though the evidence on this has already been deemed unproblematic by the Supreme Court just months ago) and he’s going to feed Rogan some more chum to create a misleading picture:
Zuckerberg: So I mean like you’re saying I mean this is… it’s so complicated this system that I could spend every minute of all of my time doing this and not actually focused on building any of the things that we’re trying to do. AI glasses, like the future of social media, all that stuff.
So I get involved in this stuff, but in general we we have a policy team. There are people who I trust there. The people are kind of working on this on a day-to-day basis. And the interactions that um that I was just referring to, I mean a lot of this is documented… I mean because uh you know Jim Jordan and the the House had this whole investigation and committee into into the the kind of government censorship around stuff like this and we produced all these documents and it’s all in the public domain…
I mean basically these people from the Biden Administration would call up our team and like scream at them and curse. And it’s like these documents are… it’s all kind of out there!
Rogan: Gah! Did you record any of those phone calls? God!
Zuckerberg: I don’t no… I don’t think… I don’t think we… but but… I think… I want listen… I mean, there are emails. The emails are published. It’s all… it’s all kind of out there and um and they’re like… and basically it just got to this point where we were like, no we’re not going to. We’re not going to take down things that are true. That’s ridiculous…
Parsing what he’s saying here is important. Again, we already established above a few important facts that Rogan doesn’t understand, and either Zuck doesn’t understand or is deliberately being coy in his explanation: (1) government actors are constantly trying to persuade media companies regarding their editorial discretion and that’s not against the law in any way, unless it crosses the line into coercion, and Zuck is (once again) admitting there was no coercion and they had no problem saying no. (2) He’s basing this not on actual firsthand knowledge but on stuff that is “all kind of out there” because “the emails are published” and “it’s all in the public domain.”
Now, because I’m not that busy creating AI glasses (though I am perhaps working on the future of social media), I actually did pay pretty close attention to what happened with those published emails and the documents in the public domain, and Zuckerberg is misrepresenting things, either on purpose or because the false narrative filtered back to him.
The reason I followed it closely is that I was worried the Biden administration might cross the First Amendment line. This is not a case of me being a fan of the Biden administration, whose tech policies I thought were pretty bad almost across the board. The public statements that the White House made, whether from then-press secretary Jen Psaki or Joe Biden himself, struck me as stupid things to say, but they did not appear to cross the First Amendment line, though they came uncomfortably close.
So I followed this case closely, in part, because if there was evidence that they crossed the line, I would be screaming from the Techdirt rooftops about it.
But, over and over again, it became clear that while they may have walked up to the line, they didn’t seem to cross it. That’s also what the Supreme Court found in the Murthy case.
So when Zuckerberg says that there are published emails, referencing the “screaming and cursing,” I know exactly what he’s talking about. Because it was a highlight of the district court ruling that claimed the White House had violated the First Amendment (which was later overturned by the Supreme Court).
Indeed, in my write-up of that District Court ruling, I even called out the “cursing” email as an example that struck me as one of the only things that might actually be a pretty clear violation of the First Amendment. Here’s what I wrote two years ago when that ruling came out:
Most of the worst emails seemed to come from one guy, Rob Flaherty, the former “Director of Digital Strategy,” who seemed to believe his job in the White House made it fine for him to be a total jackass to the companies, constantly berating them for moderation choices he disliked.
I mean, this is just totally inappropriate for a government official to say to a private company:
Things apparently became tense between the White House and Facebook after that, culminating in Flaherty’s July 15, 2021 email to Facebook, in which Flaherty stated: “Are you guys fucking serious? I want an answer on what happened here and I want it today.”
But then I dug deeper and saw the filing where that quote actually comes from, realizing that the judge in the district court was taking it totally out of context. The ruling made it sound like Flaherty’s cursing outburst was in response to Facebook/Zuck refusing to go along with a content moderation demand.
If that were actually the case, then that would absolutely violate the First Amendment. The problem is that it’s not what happened. It was still inappropriate in general, but not an unconstitutional attack on speech.
What had happened was that Instagram had a bug that prevented the Biden account from getting more followers, and the White House was annoyed by that. Someone from Meta responded to a query, saying basically “oops, it was a bug, our bad, but it’s fixed now” and that response was forwarded to Flaherty, who acted like a total power-mad jackass with the “Are you guys fucking serious? I want an answer on what happened here and I want it today” response.
So here’s the key thing: that heated exchange had absolutely nothing to do with pressuring Facebook on its content moderation policies. That “public domain” “cursing” email is entirely about a bug that prevented the Biden account from getting more followers, and Rob throwing a bit of a shit fit about it.
As Zuck says (but notably no one on the Rogan team actually looks up), this is all “out there” in “the public domain.” Rogan didn’t look it up. It’s unclear if Zuckerberg looked it up.
But I did.
We can still find that response wholly inappropriate and asshole-ish. But it’s not because Facebook refused to take down information on vaccine side effects, as is clearly implied (and how Rogan takes it).
Indeed, Zuckerberg (again!) points out that the company’s response to requests to remove anti-vax memes was to tell the White House no:
Zuck: They wanted us to take down this meme of Leonardo DiCaprio looking at a TV talking about how 10 years from now or something um you know you’re going to see an ad that says okay if you took a Covid vaccine you’re um eligible you you know like uh for for this kind of payment like this sort of like class action lawsuit type meme.
And they're like, "No, you have to take that down." We just said, "No, we're not going to take down humor and satire. We're not going to take down things that are true."
He then does talk about the stupid Biden “they’re killing people” comment, but leaves out the fact that Biden walked that back days later, admitting “Facebook isn’t killing people” and instead blaming people on the platform spreading misinformation and saying “that’s what I meant.”
But it didn’t change the fact that Facebook refused to take action on those accounts.
So even after he’s said multiple times that Facebook’s response to whatever comments came in from the White House was to tell them “no,” which is exactly what the Supreme Court made clear showed there was no coercion, Rogan goes on a rant as if Zuckerberg had just told him that they did, in fact, suppress the content the White House requested (something Zuck directly denied to Rogan multiple times, even right before this rant):
Rogan: Wow. [sigh] Yeah, it’s just a massive overstepping. Also, you weren’t killing people. This is the thing about all of this. It’s like they suppressed so much information about things that people should be doing regardless of whether or not you believe in the vaccine, regardless… put that aside. Metabolic health is of the utmost importance in your everyday life whether there’s a pandemic or there’s not and there’s a lot of things that you can do that can help you recover from illness.
It prevents illnesses. It makes your body more robust and healthy. It strengthens your immune system. And they were suppressing all that information and that’s just crazy. You can’t say you’re one of the good guys if you’re suppressing information that would help people recover from all kinds of diseases. Not just Covid. The flu, common cold, all sorts of different things. High doses of Vitamin C, D3 with K2 and magnesium. They were suppressing this stuff because they didn’t want people to think that you could get away with not taking a vaccine.
Dude, Zuck literally told you over and over again that they said no to the White House and didn’t suppress that content.
But Zuck doesn't step in to correct Rogan's misrepresentations, because he's not here for that. He's here to get this narrative out, and Rogan is biting hard on the narrative. Hilariously, Rogan then treats the very thing Zuck just said didn't happen (but which he's chortling along about as if it did) as proof of the evils of "distortion of facts" and… where the hell is my irony font?
Rogan: This is a crazy overstep, but scared the shit out of a lot of people… redpilled as it were. A lot of people, because they realized like, oh, 1984 is like an instruction manual…
Zuck: Yeah, yeah.
Rogan: It's like this is it shows you how things can go that way with wrong speak and with bizarre distortion of facts.
I mean, you would know, wouldn’t you, Joe?
From there, they pivot to a different discussion, though again, it’s Zuckerberg feeding Rogan lines about how the US ought to “protect” the US tech industry from foreign governments, rather than trying to regulate them.
A bit later on, there actually is a good discussion about the kinds of errors that are made in content moderation and why. Rogan (after spending so much time whining about the evils of censorship) suddenly turns around and says that, well, of course, Facebook should be blocking “misinformation” and “outright lies” and “propaganda”:
Rogan: But you do have to be careful about misinformation! And you have to be careful about just outright lies and propaganda complaints, or propaganda campaigns rather. And how do you differentiate?
Dude, like that’s the whole point of the challenge here. You yourself talked about the billions of people and how mistakes are made because so much of this is automated. But then you were misleadingly claiming that this info was taken down over demands from the government (which Zuckerberg clearly denied multiple times), and for you to then wrap back around to “but you gotta take down misinformation and lies and propaganda campaigns” is one hell of a swing.
But, as I said, it does lead to Zuck explaining how confidence levels matter, and how where you set those levels determines both how much "bad" content gets removed, how much is left up, and how much innocent content gets accidentally caught:
Zuck: Okay, you have some classifier that’s it’s trying to find say like drug content, right? People decide okay, it’s like the opioid epidemic is a big deal, we need to do a better job of cracking down on drugs and drug sales. Right, I don’t I don’t want people dealing drugs on our networks.
So we build a bunch of systems that basically go out and try to automate finding people who are who are dealing with dealing drugs. And then you basically have this question, which is how precise do you want to set the classifier? So do you want to make it so that the system needs to be 99% sure that someone is dealing drugs before taking them down? Do you want to to be 90% confident? 80% confident?
And then those correspond to amounts of… I guess the the statistics term would be “recall.” What percent of the bad stuff are you finding? So if you require 99% confidence then maybe you only actually end up taking down 20% of the bad content. Whereas if you reduce it and you say, okay, we’re only going to require 90% confidence now maybe you can take down 60% of the bad content.
But let’s say you say, no we really need to find everyone who’s doing this bad thing… and it doesn’t need to be as as severe as as dealing drugs. It could just be um I mean it could be any any kind of content of uh any kind of category of harmful content. You start getting to some of these classifiers might have you know 80, 85% Precision in order to get 90% of the bad stuff down.
But the problem is if you’re at, you know, 90% precision that means one out of 10 things that the classifier takes down is not actually problematic. And if you filter… if you if you kind of multiply that across the billions of people who use our services every day that is millions and millions of posts that are basically being taken down that are innocent.
And upon review we’re going to look at and be like this is ridiculous that this thing got taken down. Which, I mean, I think you’ve had that experience and we’ve talked about this for for a bunch of stuff over time.
But it really just comes down to this question of where do you want to set the classifiers so one of the things that we’re going to do is basically set them to… require more confidence. Which is this trade-off.
It’s going to mean that we will maybe take down a smaller amount of the harmful content. But it will also mean that we’ll dramatically reduce the amount of people who whose accounts were taken off for a mistake, which is just a terrible experience.
And that’s all a good and fascinating fundamental explanation of why the Masnick Impossibility Theorem remains in effect. There are always going to be different kinds of false positives and false negatives, and that’s going to always happen because of how you set the confidence levels of the classifiers.
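To put rough numbers on that tradeoff, here's a minimal sketch in Python (with entirely hypothetical scores and labels, not anything from Meta's actual systems) of what moving a takedown classifier's confidence threshold does to precision, recall, and the number of innocent posts swept up:

```python
# Minimal sketch of the precision/recall tradeoff described above.
# All scores and labels are made up for illustration.
# Each entry: (classifier confidence the post violates policy, whether it actually does).
posts = [
    (0.99, True), (0.97, True), (0.95, False), (0.92, True),
    (0.91, True), (0.88, False), (0.85, True), (0.83, True),
    (0.80, False), (0.72, True), (0.65, False), (0.55, True),
]

def takedown_stats(threshold):
    """Simulate 'take down everything scored at or above the threshold'."""
    removed = [actually_bad for score, actually_bad in posts if score >= threshold]
    true_positives = sum(removed)                     # violating posts correctly removed
    false_positives = len(removed) - true_positives   # innocent posts removed by mistake
    total_violating = sum(actually_bad for _, actually_bad in posts)
    precision = true_positives / len(removed) if removed else 1.0
    recall = true_positives / total_violating
    return precision, recall, false_positives

for threshold in (0.99, 0.90, 0.80):
    precision, recall, collateral = takedown_stats(threshold)
    print(f"threshold={threshold:.2f}  precision={precision:.0%}  "
          f"recall={recall:.0%}  innocent posts removed={collateral}")
```

Run it and the pattern is exactly the one being described: demand 99% confidence and you take down almost none of the bad content (but make almost no mistakes); drop the bar to 80% and you catch most of the bad content while sweeping up a growing pile of innocent posts. Scale those mistakes across billions of users and you get the "millions and millions of posts" problem.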
Zuck could have explained that many of the other things Rogan was whining about regarding the "suppression" of content around COVID (which, again, everyone but Rogan has admitted was based on Facebook's own decision-making, not the US government) were quite often a similar sort of situation: the confidence levels on the classifiers may have caught information they shouldn't have, but the company (at the time) felt they had to be set at that level to make sure enough of the "bad" content (which Rogan himself says they should take down) got caught.
But there is no recognition of how this part of the conversation impacts the earlier conversation at all.
There’s more in there, but this post is already insanely long, so I’ll close out with this: as mentioned in my opening, Donald Trump directly threatened to throw Zuck in prison for the rest of his life if Facebook didn’t moderate the way he wanted. And just a couple months ago, FCC Commissioner (soon to be FCC chair) Brendan Carr threatened Meta that if it kept on fact-checking stories in a way Carr didn’t like, he would try to remove Meta’s Section 230 protections in response.
None of that came up in this discussion. The only “government pressure” that Zuck talks about is from the Biden admin with “cursing,” which he readily admits they weren’t intimidated by.
So we have Biden officials who were, perhaps, mean, but not so threatening that Meta felt the need to bow down to them. And then we have Trump himself and leading members of his incoming administration who sent direct and obvious threats, which Zuck almost immediately bowed down to and caved.
And yet Rogan (and much of the media covering this podcast) claims he “revealed” how the Biden admin violated the First Amendment. Hell, the NY Post even ran an editorial pretending that Zuck didn’t go far enough because he didn’t reveal all of this in time for the Murthy case. And that’s only because the author doesn’t realize he literally is talking about the documents in the Murthy case.
The real story here is that Zuckerberg caved to Trump's threats and felt fine pushing back on the Biden admin. Rogan at one point rants about how Trump will now protect Zuck because Trump "uniquely has felt the impact of not being able to have free speech." Given that real story, the claim seems particularly ironic.
Zuckerberg knew how this would play to Rogan and Rogan’s audience, and he got exactly what he needed out of it. But the reality is that all of this is Zuck caving to threats from Trump and Trump officials, while feeling no coercion from the Biden admin. As social media continues to grapple with content moderation challenges, it would be nice if leaders like Zuckerberg were actually transparent about the real pressures they face, rather than fueling misleading narratives.
But that’s not the world we live in.
Strip away all the spin and misdirection, and the truth is inescapable: Zuckerberg folded like a cheap suit in the face of direct threats from Trump and his lackeys, while barely batting an eye at some sternly worded emails from Biden officials.
Posts with LGBTQ+ hashtags including #lesbian, #bisexual, #gay, #trans, #queer, #nonbinary, #pansexual, #transwomen, #Tgirl, #Tboy, #Tgirlsarebeautiful, #bisexualpride, #lesbianpride, and dozens of others were hidden for any users who had their sensitive content filter turned on. Teenagers have the sensitive content filter turned on by default.
When teen users attempted to search LGBTQ terms they were shown a blank page and a prompt from Meta to review the platform’s “sensitive content” restrictions, which discuss why the app hides “sexually explicit” content.
This is notable because, despite the moral panic around “kids and social media,” even the most ardent critics usually (reluctantly) admit social media has been incredibly useful for LGBTQ youth seeking information and community, often benefiting their health and wellbeing.
I had started to write up this article about that, planning to focus on two points. First, contrary to the popular (but false) belief that content moderation targets traditionally “conservative” speech, it very often targets traditionally “progressive” speech. We see these stories all the time, but the MAGA world either doesn’t know or doesn’t care.
Second, this seemed like a pretty strong reminder of how LGBTQ content will be on the chopping block if KOSA becomes law. Indeed, the very existence of the “sensitive content” restrictions on Meta’s platforms (including Facebook, Instagram, and Threads) was actually the company trying to comply-in-advance with KOSA, forcing all teenagers to have the “sensitive content filter” on by default.
In other words, Meta effectively revealed that, yes, of course the easiest way to abide by KOSA’s restrictions will be to restrict access to any pro-LGBTQ content.
In response to Lorenz’s story, Meta said (as it always does when one of these kinds of stories pops up) that it was “a mistake” and promised to correct it. But, as Lorenz notes, the suppression happened for quite some time, and users who tried to raise the alarm found their own posts hidden.
Some LGBTQ teenagers and content creators attempted to sound the alarm about the issue, but their posts failed to get traction. For years, LGBTQ creators on Instagram have suffered shadow bans and had their content labeled as "non-recommendable." The restrictions on searches, however, are more recent, coming into effect in the past few months. Meta said it was investigating to find out when the error began.
“A responsible and inclusive company would not build an algorithm that classifies some LGBTQ hashtags as ‘sensitive content,’ hiding helpful and age-appropriate content from young people by default,” a spokesperson for GLAAD said. “Regardless of if this was an unintended error, Meta should… test significant product updates before launch.”
Of course, just as I was initially working on this post on Tuesday, Mark Zuckerberg dropped his whole “hey we’re kissing up to Trump by cutting back on how much we moderate” thing, which certainly changed the way I was looking at this particular story.
While I wrote more about that announcement yesterday, I didn’t cover the specific changes to the policies, as those weren’t made as clear in the initial announcement, which was more about the philosophy behind the policy changes. Kate Knibbs, at Wired, had the scoop on the specific changes within the policies, which makes it clear that Meta’s new view of “non-biased” moderation is basically “hateful people are now welcome.”
In a notable shift, the company now says it allows “allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality and common non-serious usage of words like ‘weird.’”
In other words, Meta now appears to permit users to accuse transgender or gay people of being mentally ill because of their gender expression and sexual orientation. The company did not respond to requests for clarification on the policy.
Again, Meta is absolutely free to do what it wants with its policies. That’s part of its own free speech rights. And, yesterday, I explained why some of the underlying reasons for the policy changes made sense, but here they’re not just saying “hey, we’re going to be less aggressive in pulling down content,” they’re explicitly signaling “hate has a home here!”
I mean, what the fuck is this?
We do allow allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality and common non-serious usage of words like “weird.”
That’s in a section saying users are not allowed to post about others’ “mental characteristics” including mental illness, but then they create that new exception to that policy.
If it wasn’t already clear that Meta’s new policies are deliberately bending over backwards to write in exceptions for MAGA culture war favorites, just take a look at the other changes Wired highlighted:
Removing language prohibiting content targeting people on the basis of their "protected characteristics," which include race, ethnicity, and gender identity, when they are combined with "claims that they have or spread the coronavirus." Without this provision, it may now be within bounds to accuse, for example, Chinese people of bearing responsibility for the Covid-19 pandemic.
A new addition appears to carve out room for people who want to post about how, for example, women shouldn’t be allowed to serve in the military or men shouldn’t be allowed to teach math because of their gender. Meta now permits content that argues for “gender-based limitations of military, law enforcement, and teaching jobs. We also allow the same content based on sexual orientation, when the content is based on religious beliefs.”
Another update elaborates on what Meta permits in conversations about social exclusion. It now states that “people sometimes use sex- or gender-exclusive language when discussing access to spaces often limited by sex or gender, such as access to bathrooms, specific schools, specific military, law enforcement, or teaching roles, and health or support groups.” Previously, this carve-out was only available for discussions about keeping health and support groups limited to one gender.
We noted yesterday that the larger change in direction was clearly political. The specifics here make that even clearer. As I noted, there are some legitimate rationales for cleaning up how Meta handles enforcement of its rules, as that has been a total mess. But none of these changes are about how it handles enforcement. They're literally all about creating exceptions to its (still in existence) hateful conduct policy to create space for the exact kinds of bigotry and hatred favored by MAGA provocateurs.
This is just confirming that Meta’s about-face is not actually about fixing a broken trust & safety enforcement program writ large, but to just rewrite the rules to allow for more cruelty and hatred towards marginalized groups disfavored by the MAGA world.
It seems like quite a choice. We’ve discussed at great length the whole “Nazi bar” concept, and this is very much a Nazi bar moment for Zuckerberg. This is not calling him a Nazi (as some will inevitably, misleadingly, whine). The whole point of the “Nazi bar” idea is that if the owner of a private space makes it clear that Nazis are welcome, then everyone else will come to realize that it’s a Nazi bar. It doesn’t matter whether or not the owners are Nazis themselves. All that matters is the public perception.
And these specific changes are very much Zuckerberg yelling “Nazis welcome!”
A couple of years ago, when Substack more or less made the same decision, my main complaint was that the company wanted to signal that it was the Nazi bar by dog whistling, without coming out and admitting it outright. It's your private property. You can run it as a Nazi bar if you want to. No one's stopping you from doing it.
But fucking own it.
Don’t give some bullshit line about “free speech” when it’s not true. Just own what you’re doing: “we’re making a space for bigots to feel comfortable, by changing our rules to expressly cater to them, while expressly harming the marginalized groups they hate.”
That would be the honest admission. But just like Substack, Meta won’t do this, because it’s run by cowards.
Indeed, the most incredible thing in all of this is that these changes show how successful the “working the refs” aspect of the MAGA movement has been over the last few years. It was always designed to get social media companies to create special rules for their own hot button topics, and now they’ve got them. They’re literally getting special treatment by having Meta write rules that say “your bigotry, and just your bigotry, is favored here” while at the very same time suppressing speech around LGBTQ or other progressive issues.
It’s not “freedom of speech” that Zuck is bringing here. It’s “we’re taking one side in the culture war.”
In altering their policies to appease extremists, Meta is directly endangering the well-being and safety of LGBTQ users on their platforms.
As mentioned, he’s free to do that, but no one should be under any illusion that it’s a move having to do with free speech. It’s a political move to say “Nazis welcome” at a moment when it looks like the rhetorical Nazis are about to return to power.
I had mentioned yesterday that this was Zuck trying to follow Musk’s path, which makes some amount of sense. Ever since Elon took over, it’s been pretty clear that Zuck was somewhat jealous of the way in which Musk basically told anyone who didn’t like how he was running ExTwitter to fuck off.
So, it makes sense in two dimensions: (1) trying to be more like Elon in not giving in to public pressure and (2) the spineless appeasement of the new political leaders.
But it doesn’t make much sense on the one other vector that kinda matters: business. Hell, Zuckerberg rushed out Threads as a competitor to ExTwitter because people at Meta recognized how Elon’s haphazard mess of moderation had driven not just users away, but advertisers too.
Zuck may be betting that, because a slim margin of voters put MAGA in charge, advertisers and users will fall in line. But I’m guessing it’s a bet that’s going to bust in a pretty embarrassing manner before too long.
When the NY Times declared in September that “Mark Zuckerberg is Done With Politics,” it was obvious this framing was utter nonsense. It was quite clear that Zuckerberg was in the process of sucking up to Republicans after Republican leaders spent the past decade using him as a punching bag on which they could blame all sorts of things (mostly unfairly).
Now, with Trump heading back to the White House and Republicans controlling Congress, Zuck’s desperate attempts to appease the GOP have reached new heights of absurdity. The threat from Trump that he wanted Zuckerberg to be jailed over a made-up myth that Zuckerberg helped get Biden elected only seemed to cement that the non-stop scapegoating of Zuck by the GOP had gotten to him.
Since the election, Zuckerberg has done everything he can possibly think of to kiss the Trump ring. He even flew all the way from his compound in Hawaii to have dinner at Mar-A-Lago with Trump, before turning around and flying right back to Hawaii. In the last few days, he also had GOP-whisperer Joel Kaplan replace Nick Clegg as the company’s head of global policy. On Monday it was announced that Zuckerberg had also appointed Dana White to Meta’s board. White is the CEO of UFC, but also (perhaps more importantly) a close friend of Trump’s.
Some of the negative reactions to Zuckerberg's announcement video are a bit crazy, as I doubt the changes are going to have that big of an impact. Some of them may even be sensible. But let's break them down into three categories: the good, the bad, and the stupid.
The Good
Zuckerberg is exactly right that Meta has been really bad at content moderation, despite having the largest content moderation team out there. In just the last few months, we’ve talked about multiple stories showcasing really, really terrible content moderation systems at work on various Meta properties. There was the story of Threads banning anyone who mentioned Hitler, even to criticize him. Or banning anyone for using the word “cracker” as a potential slur.
It was all a great demonstration for me of Masnick’s Impossibility Theorem of dealing with content moderation at scale, and how mistakes are inevitable. I know that people within Meta are aware of my impossibility theorem, and have talked about it a fair bit. So, some of this appears to be them recognizing that it’s a good time to recalibrate how they handle such things:
In recent years we’ve developed increasingly complex systems to manage content across our platforms, partly in response to societal and political pressure to moderate content. This approach has gone too far. As well-intentioned as many of these efforts have been, they have expanded over time to the point where we are making too many mistakes, frustrating our users and too often getting in the way of the free expression we set out to enable. Too much harmless content gets censored, too many people find themselves wrongly locked up in “Facebook jail,” and we are often too slow to respond when they do.
Leaving aside (for now) the use of the word “censored,” much of this isn’t wrong. For years it felt that Meta was easily pushed around on these issues and did a shit job of explaining why it did things, instead responding reactively to the controversy of the day.
And, in doing so, it’s no surprise that as the complexity of its setup got worse and worse, its systems kept banning people for very stupid reasons.
It actually is a good idea to try to fix that, and if part of the plan is to be more cautious in issuing bans, that seems somewhat reasonable. As Zuckerberg announced in the video:
We used to have filters that scanned for any policy violation. Now, we’re going to focus those filters on tackling illegal and high-severity violations, and for lower-severity violations, we’re going to rely on someone reporting an issue before we take action. The problem is that the filters make mistakes, and they take down a lot of content that they shouldn’t. So, by dialing them back, we’re going to dramatically reduce the amount of censorship on our platforms. We’re also going to tune our content filters to require much higher confidence before taking down content. The reality is that this is a trade-off. It means we’re going to catch less bad stuff, but we’ll also reduce the number of innocent people’s posts and accounts that we accidentally take down.
Zuckerberg’s announcement is a tacit admission that Meta’s much-hyped AI is simply not up to the task of nuanced content moderation at scale. But somehow that angle is getting lost amidst the political posturing.
Some of the other policy changes also don't seem all that bad. We've been mocking Meta for its "we're downplaying political content" stance from the last few years as being just inherently stupid, so it's nice in some ways to see them backing off of that (though we'll discuss the timing and framing of this decision in the latter sections of this post):
We’re continually testing how we deliver personalized experiences and have recently conducted testing around civic content. As a result, we’re going to start treating civic content from people and Pages you follow on Facebook more like any other content in your feed, and we will start ranking and showing you that content based on explicit signals (for example, liking a piece of content) and implicit signals (like viewing posts) that help us predict what’s meaningful to people. We are also going to recommend more political content based on these personalized signals and are expanding the options people have to control how much of this content they see.
Finally, most of the attention people have given to the announcement has focused on the plan to end the fact-checking program, with a lot of people freaking out about it. I even had someone tell me on Bluesky that Meta ending its fact-checking program was an “existential threat” to truth. And that’s nonsense. The reality is that fact-checking has always been a weak and ineffective band-aid to larger issues. We called this out in the wake of the 2016 election.
This isn’t to say that fact-checking is useless. It’s helpful in a limited set of circumstances, but too many people (often in the media) put way too much weight on it. Reality is often messy, and the very setup of “fact checking” seems to presume there are “yes/no” answers to questions that require a lot more nuance and detail. Just as an example of this, during the run-up to the election, multiple fact checkers dinged Democrats for calling Project 2025 “Trump’s plan”, because Trump denied it and said he had nothing to do with it.
But, of course, since the election, Trump has hired on a bunch of the Project 2025 team, and they seem poised to enact much of the plan. Many things are complex. Many misleading statements start with a grain of truth and then build a tower of bullshit around it. Reality is not about “this is true” or “this is false,” but about understanding the degrees to which “this is accurate, but doesn’t cover all of the issues” or deal with the overall reality.
So, Zuck’s plan to kill the fact-checking effort isn’t really all that bad. I think too many people were too focused on it in the first place, despite how little impact it seemed to actually have. The people who wanted to believe false things weren’t being convinced by a fact check (and, indeed, started to falsely claim that fact checkers themselves were “biased.”)
Indeed, I’ve heard from folks at Meta that Zuck has wanted to kill the fact-checking program for a while. This just seemed like the opportune time to rip off the band-aid such that it also gains a little political capital with the incoming GOP team.
On top of that, adding in a feature like Community Notes (née Birdwatch from Twitter) is also not a bad idea. It’s a useful feature for what it does, but it’s never meant to be (nor could it ever be) a full replacement for other kinds of trust & safety efforts.
The Bad
So, if a lot of the functional policy changes here are actually more reasonable, what’s so bad about this? Well, first off, the framing of it all. Zuckerberg is trying to get away with the Elon Musk playbook of pretending this is all about free speech. Contrary to Zuckerberg’s claims, Facebook has never really been about free speech, and nothing announced on Tuesday really does much towards aiding in free speech.
I guess some people forget this, but in the earlier days, Facebook was way more aggressive than sites like Twitter in terms of what it would not allow. It very famously had a no nudity policy, which created a huge protest when breastfeeding images were removed. The idea that Facebook was ever designed to be a “free speech” platform is nonsense.
Indeed, if anything, it’s an admission of Meta’s own self-censorship. After all, the entire fact-checking program was an expression of Meta’s own position on things. It was “more speech.” Literally all fact-checking is doing is adding context and additional information, not removing content. By no stretch of the imagination is fact-checking “censorship.”
Of course, bad faith actors, particularly on the right, have long tried to paint fact-checking as "censorship." But this talking point, which we've debunked before, is utter nonsense. Fact-checking is the epitome of "more speech" — exactly what the marketplace of ideas demands. By caving to those who want to silence fact-checkers, Meta is revealing how hollow its free speech rhetoric really is.
Also bad is Zuckerberg’s misleading use of the word “censorship” to describe content moderation policies. We’ve gone over this many, many times, but using censorship as a description for private property owners enforcing their own rules completely devalues the actual issue with censorship, in which it is the government suppressing speech. Every private property owner has rules for how you can and cannot interact in their space. We don’t call it “censorship” when you get tossed out of a bar for breaking their rules, nor should it be called censorship when a private company chooses to block or ban your content for violating its rules (even if you argue the rules are bad or were improperly enforced.)
The Stupid
The timing of all of this is obviously political. It is very clearly Zuckerberg caving to more threats from Republicans, something he’s been doing a lot of in the last few months, while insisting he was done caving to political pressure.
I mean, even Donald Trump is saying that Zuckerberg is doing this because of the threats that Trump and friends have leveled in his direction:
Q: Do you think Zuckerberg is responding to the threats you've made to him in the past?
TRUMP: Probably. Yeah. Probably.
I raise this mainly to point out the ongoing hypocrisy of all of this. For years we’ve been told that the Biden campaign (pre-inauguration in 2020 and 2021) engaged in unconstitutional coercion to force social media platforms to remove content. And here we have the exact same thing, except that it’s much more egregious and Trump is even taking credit for it… and you won’t hear a damn peep from anyone who has spent the last four years screaming about the “censorship industrial complex” pushing social media to make changes to moderation practices in their favor.
Turns out none of those people really meant it. I know, not a surprise to regular readers here, but it should be called out.
Also incredibly stupid is this, quoted straight from Zuck's Threads thread about all this:
Move our trust and safety and content moderation teams out of California, and our US content review to Texas. This will help remove the concern that biased employees are overly censoring content.
There’s a pretty big assumption in there which is both false and stupid: that people who live in California are inherently biased, while people who live in Texas are not. People who live in both places may, in fact, be biased, though often not in the ways people believe. As a few people have pointed out, more people in Texas voted for Kamala Harris (4.84 million) than did so in New York (4.62 million). Similarly, almost as many people voted for Donald Trump in California (6.08 million) as did so in Texas (6.39 million).
There are people with all different political views all over the country. The idea that everyone in one area believes one thing politically, or that you’ll get “less bias” in Texas than in California, is beyond stupid. All it really does is reinforce misguided stereotypes.
The whole statement is clearly for political show.
It also sucks for Meta employees who work in trust & safety, who want access to certain forms of healthcare or want net neutrality, or other policies that are super popular among voters across the political spectrum, but which Texas has decided are inherently not allowed.
Finally, there’s this stupid line in the announcement from Joel Kaplan:
We’re getting rid of a number of restrictions on topics like immigration, gender identity and gender that are the subject of frequent political discourse and debate. It’s not right that things can be said on TV or the floor of Congress, but not on our platforms.
I’m sure that sounded good to whoever wrote it, but it makes no sense at all. First off, thanks to the Speech and Debate Clause, literally anything is legal to say on the floor of Congress. It’s like the one spot in the world where there are no rules at all over what can be said. Why include that? Things could literally be said on the floor of Congress that would violate the law on Meta platforms.
Also, TV stations literally have restrictions known as “standards and practices” that are way, way, way more restrictive than any set of social media content moderation rules. Neither of these are relevant metrics to compare to social media. What jackass thought that using examples of (1) the least restricted place for speech and (2) a way more restrictive place for speech made this a reasonable argument to make here?
In the end, the reality here is that nothing announced this week will really change all that much for most users. Most users don’t run into content moderation all that often. Fact-checking happens but isn’t all that prominent. But all of this is a big signal that Zuckerberg, for all his talk of being “done with politics” and no longer giving in to political pressure on moderation, is very engaged in politics and a complete spineless pushover for modern Trumpist politicians.
The inevitable has happened and Elon has started banning and suppressing the speech of folks who were “on his team,” leading to many suddenly realizing that maybe he wasn’t such a free speech supporter after all.
And, of course, when it matters most for free speech, in pushing back against government attempts at suppression, Musk has shown that he's a pushover for authoritarian demands, so long as he is supportive of the government in question. While he has occasionally stood up to governments he ideologically disagrees with, those seem to be the exceptions that prove the rule.
Even Elon’s own ExTwitter transparency report admits that under his watch, account suspensions have tripled compared to what they were pre-Musk.
There is no measure under which you can say that Elon is a bigger supporter of free speech than the previous management of Twitter, except in the very, very narrow category of “allowing bigoted Elon Musk fans to be loudly disruptive on the platform.”
And now, even that is coming back to bite him a bit.
In the last week, a bunch of MAGA folks called out Elon for his support for H1B visas and other attempts to bring in high-skilled tech workers to the US. Given that many of the MAGA supporters have spent much of the last two years falsely claiming that Elon was “bringing free speech back,” it was almost amusing to watch them slowly realize that he’s willing to suspend them or to take away their premium features on the site when he gets angry with them.
The most prominent account was Laura Loomer, whose biggest claim to fame seems to be her ability to get banned from platforms.
Musk then used his favorite trick for justifying account suppression as somehow not being an attack on free speech: redefining spam to mean something… totally unrelated to spam.
Musk’s explanation raises more questions than it answers. This is Elon retconning a justification for the suppression of certain accounts. First, he claims that the algorithm is set to “maximize unregretted user-seconds,” a made-up, impossible-to-calculate stat that he’s talked about for a while now. He then claims that the way the algorithm does this is by rating certain accounts based on how frequently other paying accounts mute or block them. But then he adds a caveat: if he discovers a brigading campaign by accounts to mute/block other accounts in an attempt to suppress their reach, ExTwitter can magically parse out the real mutes/blocks from the fake brigaded ones, and declare some accounts to be “spam.”
This is all a lot of nonsense for Elon to be able to suppress any speech he wants and try to justify it as spam (just like he’s done in the past by redefining “doxxing.”) Of course, as with Elon’s ever-changing definition of doxxing to justify his own actions, I imagine that his legion of fans will continue to buy into his nonsense definition of spam.
Well, except for those MAGA faithful who are now furious that their faces are being eaten by the Leopards Eating Faces Party they supported.
In other words, Musk reserves the right to unilaterally decide which blocks and mutes are “legitimate” and which are not, based on criteria known only to him. This arbitrary and opaque process is a far cry from a principled commitment to free speech.
(Also, I won’t even get into how his tweet misunderstands the whole “live by the sword/die by the sword” line, but will leave that as an exercise for readers).
The end result of this, though, came down to Musk pleading with people to stop being such assholes on the site he took over specifically to unban people for being assholes.
I mean, it’s not like we didn’t warn Elon exactly how this would go. And, it’s not like we haven’t written about how content moderation teams aren’t about ideology. They just wish everyone would stop being jerks, which is the key to any site that allows user-generated content.
I know that I’m banging the drum over this over and over again, but it’s because there are still a ton of people insisting, falsely, that Elon Musk has some sort of principled take on free speech, when it’s been made clear over and over and over and over again that his take is based entirely on his own whims of what he wants, and not any actual understandable conception of free speech.
No matter how many times Musk is caught red-handed suppressing speech he doesn’t like, a vocal contingent will likely continue to buy into the myth of him as a “free speech absolutist.” But for anyone willing to look objectively at his actions rather than his words, the reality is undeniable. Elon Musk’s “free speech” posture is nothing more than a flimsy rhetorical cover for his own desire to control the discourse.
Yes, he has every right to do this on his own platform, but so too did the operators of Twitter before him. Musk may draw the lines of content moderation slightly differently than the previous team, but he certainly seems to draw them much more arbitrarily according to his personal whims.
Katie Couric recently claimed that repealing Section 230 would help combat online misinformation. The problem is, she couldn’t be more wrong. Worse, as a prominent voice, she’s contributing to the widespread misinformation around Section 230 herself.
A few years ago, for reasons that are unclear to me, Katie Couric chaired a weird Aspen Institute “Commission on Disinformation,” which produced a report on how to tackle disinformation. The report was, well, not good. It was written by people with no real experience tackling issues related to disinformation and it shows. As we noted at the time, it took a “split the baby” approach to trying to deal with disinformation. It described how there were no good answers, that doing anything might make the problem worse, and then still suggested that maybe repealing Section 230 for certain kinds of content (not clearly defined) might help.
The report’s recommendations were a mix of unworkable and nonsensical ideas, betraying the authors’ lack of true expertise on the complex issues and, more importantly, the tradeoffs around online disinformation.
Repealing Section 230 would not magically solve misinformation online. In fact, it would likely make the problem worse. Section 230 is what allows websites to moderate content and experiment with anti-misinformation measures, without fear of lawsuits. Removing that protection would incentivize sites to take a hands-off approach, or shut down user content entirely. The end result would be fewer places for online discourse, dominated by a few tech giants – hardly a recipe for truth.
Still, it appears that Couric is now presenting herself as an expert on disinformation. The NY Times Dealbook has a series of “influential people” supposedly “sharing their insights” on big topics of the day, and they asked Couric about disinformation. Her response was that she was upset Section 230 won’t be repealed.
What is the best tool a person has to combat misinformation today?
There are many remedies for combating misinformation, but sadly getting rid of Section 230 and requiring more transparency by technology companies may not happen.
But again, that only raises serious questions about how little she actually understands the role of Section 230 and how it functions. The idea that repealing Section 230 would be a remedy for combating misinformation is misinformation itself.
Remember, Section 230 is what frees companies to try to respond to and combat misinformation. There are many market forces that push companies to respond to misinformation: the loss of users, the loss of advertisers, the rise of competition. Indeed, we’re seeing all three of those occurring these days as ExTwitter and Facebook have decided to drop any pretense of trying to combat misinformation.
But then you need Section 230 to allow websites that actually are trying to combat misinformation to apply whatever policies they can come up with. It’s what allows them to experiment and to adjust in the face of ever sneakier and ever more malicious users trying to push misinformation.
Without Section 230, each decision and each policy could potentially lead to liability. This means that instead of having moderation teams focused on what will make for the best community overall, you have legal teams focused on what will reduce liability or threats of litigation.
The underlying damning fact here is that the vast majority of misinformation is very much protected speech. And it needs to be if you want to have free speech. Otherwise, you have people like incoming President Trump declaring any news that is critical of him "fake news" and taking legal action over it.
On top of that, the standard under the First Amendment is that if there is violative content hosted by an intermediary (such as a bookseller), there needs to be actual knowledge not just that the content exists, but that it somehow violates the law.
The end result then is that if you repeal Section 230, you don't end up with less misinformation. You almost certainly end up with way more, because websites are encouraged to avoid making moderation decisions entirely: every decision would need to be reviewed by an expensive legal team who will caution against most of them. It also creates incentives to avoid even reviewing content, out of fear that a court might deem any moderation effort to be "actual knowledge."
Thus, the websites that continue to host third-party user-generated content are likely to do significantly less trust & safety work, because the law is saying that if they continue to do that work, they may face greater legal threats for it. That won’t lead to less misinformation, it will lead to more.
The main thing that repealing Section 230 would do is probably lead to many fewer places willing to host third-party content at all, because of that kind of legal liability. Many online forums that want to support communities in a safe and thoughtful way will realize that the risk of liability is too great, and will exit the market (or never enter at all).
So the end result is that you have basically wiped the market of upstarts, smaller spaces, and competitors and left the market to Mark Zuckerberg and Elon Musk. I’m curious if Katie Couric thinks that’s a better world.
Indeed, the only spaces that will remain are those that take the path described above, of limiting their moderation decisions to the legally required level. Only a few sites will do this, and they will quickly become garbage sites that users and advertisers won’t be as interested in participating in.
So we have more power given to Zuck and Musk, fewer competitive spaces, and the remaining sites are incentivized to do less content moderation. Plenty of experts have explained this, including those listed as advisors to Couric’s commission.
I can guarantee that she (or the staffers who actually handled this issue) was told about this impact. But she seems to have internalized just the "repeal 230" part, which is just fundamentally backwards.
That said, I actually do think that the rest of her answer is a pretty good summary of what the real response needs to be: better education, better media literacy, and better teaching people how to fend for themselves against attempts to mislead and lie to them.
As a result, it’s mostly up to the individual to be vigilant about identifying misinformation and not sharing it. This will require intensive media literacy, which will help people understand the steps required to consider the source. That means investigating websites that may be disseminating inaccurate information and understanding their agendas, second-sourcing information, and if it’s an individual, learning more about that person’s background and expertise. Of course, this is all time-consuming and a lot to ask of consumers, but for now, I ascribe to the Sy Syms adage: “An educated consumer is our best customer.”
But, of course, the semi-ironic point in all of this is that having Section 230 around makes that more possible. Without Section 230, we have fewer useful resources to help teach media literacy. We have fewer ways of educating people on how to do things right.
For example, Wikipedia has made clear that it cannot exist without Section 230, and it has become a key tool in information literacy these days (which is ironic, given that in its early days it was widely accused of being a vector of misinformation).
Combating online misinformation is a complex challenge with no easy answers. But despite Couric’s claims, repealing Section 230 is the wrong solution. It would lead to less content moderation, more concentrated power in the hands of a few tech giants, and, ultimately, even more misinformation spreading unchecked online. Policymakers and thought leaders need to move beyond simplistic soundbites and engage with the real nuances of these issues.
Katie Couric is a big name with a big platform. Misinforming the public about these issues does a real disservice to the issue.
Now, maybe the NY Times can ask actual experts who understand the tradeoffs, rather than the famous talking head who doesn’t, next time they want to ask questions about complex and nuanced subjects? I mean, that would involve not spreading misinformation about Section 230, so probably not.