For years, many press outlets (and contrarian engagement pundits like Matt Stoller) tried to argue that the Trump GOP was now “serious about antitrust reform,” “reining in corporate power,” or “holding Big Tech accountable.” The argument was that because Trumpism claims to be “populist,” it could be convinced to implement serious anti-corporatist antitrust reform that would help the public.
Of course that’s a naïve, dangerous misread of how authoritarianism works; kleptocrats are only interested in leveraging government power against corporate power if it’s of specific benefit to them personally.
Case in point: Last October, Trump sued CBS claiming (falsely) that a 60 Minutes interview of Kamala Harris had been “deceitfully edited” to her benefit (they simply shortened some of her answers for brevity, as news outlets often do). As Mike explored, the lawsuit tramples the First Amendment and editorial discretion.
Trump’s pick for FCC boss Brendan Carr had already been threatening CBS with a blocked merger if it dared engage in the act of journalism, causing libertarian outlets like Reason — which, let’s be clear, usually adore Carr’s dismantling of consumer protection standards — to suddenly discover he’s no friend of free speech or logic.
The right-wing news ecosystem had been priming this particular pump since last fall, with outlets like the New York Post running articles like this one, claiming that Paramount and CBS’s merger with Skydance would be blocked because CBS simply has “too much liberal bias.”
The great joke here is that, as media critics like Parker Molloy have noted, CBS had already been responding to authoritarianism by shifting its editorial slant ever rightward for years (just like the LA Times, NPR, the Washington Post, and many other self-serving companies). Its reward for becoming more feckless? More harassment by authoritarians, which is usually how these things work.
That’s going to be the thrust of Trump “antitrust reform”: kiss the ring and you might get what you want. Challenge Trump and you can expect the authority of the state (or what’s left of it after Trump 2.0 gets done gutting all regulatory independence and firing government workers randomly) to be leveraged against you.
Anybody telling you that Trumpism values free speech or wants to rein in corporate power is confused, bullshitting you, or selling you dodgy supplements. It’s not populism, it’s pseudo-populism designed to convince rubes to root against their own best interests. It’s not “anti-corporatism” or “antitrust reform,” it’s the reckless, inconsistent weaponizing of government power to benefit kleptocrats personally.
For example, Trump and the GOP didn’t saber-rattle against “Big Tech” because they genuinely care about corporate power or protecting free speech; they did so to bully tech companies away from moderating race-baiting right-wing propaganda, a cornerstone of modern GOP party power (lying endlessly is necessary when your real-world policies, like broad tax cuts for rich brats or the dismantling of female reproductive rights, are broadly unpopular).
Yet somehow you’ve got “progressive” folks like Matt Stoller, and plenty of other people who should know better, constantly insisting that Trumpism is genuine populism that can be leveraged for the greater good.
It’s nonsense; authoritarians are relentlessly self-serving bullshit artists, collaboration with them is always a lose-lose scenario, and no matter how routinely companies obey in advance and fecklessly kiss the ring to gain daddy’s approval, it’s simply never going to be enough.
This post was written on Sunday. By the time you read it there may have been 12,492 further unconstitutional TikTok-related hijinks since then. But because this particular kind of constitutional violation might well rear its ugly head again, if not with respect to TikTok then with respect to something else, it’s still worth pointing out the problem, even if its application to TikTok may have been obviated by even stupider deviations from the Constitution in the meantime.
There was an argument left on the table in the TikTok briefs at the Supreme Court: the ban, among its many unconstitutional flaws, was also unconstitutional jawboning. And Supreme Court precedent from just last year explained why.
In NRA v. Vullo the Court made clear that the government can’t go after a speaker it doesn’t like by pressuring an intermediary the speaker needs to deal with as a way of sticking it to the speaker. And yet, with the TikTok ban, that’s exactly what Congress did: it imposed liability on the intermediary services TikTok needs in order to run, should they actually help TikTok run.
Just look at how the statute is written, and where the prohibition is. Right there, in its first main provision at Section 2(a) (and Section 1 is just the short title of the law), here’s what the law says:
It shall be unlawful for an entity to distribute, maintain, or update (or enable the distribution, maintenance, or updating of) a foreign adversary controlled application by carrying out, within the land or maritime borders of the United States, any of the following:
And then it describes what these other, non-TikTok third parties cannot do, namely host the app in their app stores:
(A) Providing services to distribute, maintain, or update such foreign adversary controlled application (including any source code of such application) by means of a marketplace (including an online mobile application store) through which users within the land or maritime borders of the United States may access, maintain, or update such application.
Or provide any sort of server support:
(B) Providing internet hosting services to enable the distribution, maintenance, or updating of such foreign adversary controlled application for users within the land or maritime borders of the United States.
It is this unconstitutional statutory construction that, ironically, is why Trump can’t easily fix this mess without making a bigger one. Even if he promises not to go after TikTok, he still hasn’t solved the problem, because the law’s teeth are biting not just TikTok but anyone helping the app work. And they are sharp teeth, threatening billions in penalties:
An entity that violates subsection (a) shall be subject to pay a civil penalty in an amount not to exceed the amount that results from multiplying $5,000 by the number of users within the land or maritime borders of the United States determined to have accessed, maintained, or updated a foreign adversary controlled application as a result of such violation.
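To get a sense of just how sharp, here’s a rough back-of-the-envelope sketch of that penalty formula. It is purely illustrative: the 170 million figure is the overall U.S. user count cited elsewhere in this litigation, not a count of users “determined to have accessed” the app under the statute:

```python
# Rough, illustrative sketch of the statute's penalty formula:
# $5,000 multiplied by the number of U.S. users determined to have
# accessed, maintained, or updated the app as a result of the violation.

penalty_per_user = 5_000       # dollars, from the quoted provision
us_users = 170_000_000         # overall U.S. user figure; an assumption here

max_exposure = penalty_per_user * us_users
print(f"${max_exposure:,}")    # $850,000,000,000
```

In other words, a single intermediary’s theoretical exposure isn’t just “billions”: it’s on the order of $850 billion.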
So in the cross-hairs of this law are Google and Apple, which host the app in their app stores,* but also anyone else who provides any sort of services: perhaps Amazon, if the app is using its cloud services; potentially CDNs that help handle the data load; and possibly services that help with transmission, like backbone providers and wireless telcos, if their services are used to connect end users to the app (even if this law omits them with its focus on “hosting,” and it’s not entirely clear that it does, the next law could easily catch them)… The degree of corrupt abdication of his obligation to enforce the law as Chief Executive of the United States needed to save TikTok is significantly greater than if he just needed to universally exempt TikTok from this law, because he’d have to exempt them all.
It does, of course, raise the question of why none of these affected entities sued to challenge the law themselves, because the law is about them. And this sort of impermissible jawboning is going to keep affecting them as intermediaries, again and again, until there is finally enough pushback to take this unconstitutional weapon out of the government’s regulatory quiver.
But that they even needed to is another reason why jawboning is bad. The government put these companies in a position they were not supposed to find themselves in, where they couldn’t freely exercise their own rights as service providers because the government didn’t like a user of their services. And to vindicate those rights they would have to bear the costs of litigation, as well as the risk of painting themselves as targets for a government that has shown itself to be vindictive toward technology platforms it doesn’t like. It was probably a lot more expedient just to refuse service to TikTok and somehow hope that the government does not start to pick off, one by one, everyone else they provide service to with other laws later…
Of course, given the other constitutional problems facially manifest in the TikTok ban, they may have thought it unnecessary, as surely TikTok’s challenge should have been enough. And while they probably should have shown up as amici to help, and in doing so pointed out this jawboning problem, the rushed briefing during the holidays may well have made such participation in the litigation, at least at the Supreme Court, functionally impossible.
Perhaps TikTok should have raised the jawboning issue itself – as it is, it doesn’t seem like NRA v. Vullo was even cited in its Supreme Court briefs – but it only had so many words it could include in those briefs and so much time to write them. And the arguments it did bring to bear should have been enough to prevail on their own.
But maybe it’s just as well: while it’s bad enough that the Court has backed off of supporting the First Amendment’s protections in all the ways it just did, it would be even worse if it had also backed off of its protective precedent in this context too.
* We also should be concerned about the cybersecurity risk that comes from pressuring app stores to disable distribution of app updates, leaving users to run only outdated software on their phones, but that’s a subject for another post…
As we wrote in our amicus brief (which it appears the justices did not read – guess they didn’t have time…), if the TikTok ban is blessed, it provides a roadmap for how to avoid the Constitution’s command to “make no law” abridging free expression. All the government needs to do is declare that whatever it is doing, it is doing for national security purposes, or perhaps to address some other similar exigency, and, to seal the deal, include such an accelerated enforcement timeline that it becomes impossible for the courts to appropriately review what the government is doing. (In fact, simply claiming a provocative reason, or rushing enforcement, might alone be enough to help the government get away with an unconstitutional attack on speech.)
Here, for instance, is how the Court waved off the question of whether the law’s justifications were content based:

We need not determine the proper standard for mixed-justification cases or decide whether the Government’s foreign adversary control justification is content neutral. Even assuming that rationale turns on content, petitioners’ argument fails under the counterfactual analysis they propose: The record before us adequately supports the conclusion that Congress would have passed the challenged provisions based on the data collection justification alone.
Finding that the law effectively banning TikTok is somehow constitutional is a bad decision with all sorts of bad consequences, not the least of which is that it tells the world that we’re not really all that serious about protecting speech when the chips are down, and so maybe other governments need not care about it so much either. The consequence this post focuses on, however, is the degree to which the First Amendment’s protection of speech has been undermined altogether here in America. In short: it’s been undermined, although possibly not as badly as it could have been.
But that there might be a glimmer of modest hope does not exonerate this otherwise inexcusable decision. This case should not have been hard: speech interests were affected by this law, whose terms failed to even serve the most reasonable justification underpinning it. (As TikTok pointed out, if data protection was the motivating concern, why were no other platforms targeted? Or even just other Chinese-owned platforms, like Temu?) Because speech interests were affected – those of the platform, as well as those of its users – strict scrutiny should have been applied to the law, at which point the Court should have seen that the lack of narrow tailoring (the law took out a whole platform!) put the law beyond anything that the Constitution would permit.
Yet the Supreme Court still somehow found otherwise.
The question now is whether the decision is indeed as narrow as the Court claims it is: something truly exceptional that leaves untouched other, stronger First Amendment precedent. And there do seem to be a few bright spots. For instance, the decision leaves intact a few important notions the Court appears to accept, namely that platforms do have First Amendment rights, and that algorithms implicate this protected editorial discretion. It is also good, perversely, that in finding that only intermediate scrutiny applied, the Court left untouched the stronger strict scrutiny standard. One concern with the DC Circuit’s decision was that if the TikTok law could survive strict scrutiny, then any unconstitutional action probably could, and we would no longer have any robustly meaningful test to protect us against incursions on speech rights, or any rights at all. So, at least, in the wake of this decision, strict scrutiny remains intact and useful.
On the other hand, what’s the point of strict scrutiny remaining a useful test if the Court can so easily find a basis not to use it? The fundamental problem with this decision is that it takes a law with huge impacts on speech interests and declares it to be a law that is not speech related. Technically the holding hinges on the law being “content neutral,” but the upshot is that the Court basically says, “La la la, we can’t hear you,” to any speech concerns raised by TikTok or its users.
The challenged provisions are facially content neutral. They impose TikTok-specific prohibitions due to a foreign adversary’s control over the platform and make divestiture a prerequisite for the platform’s continued operation in the United States. They do not target particular speech based upon its content, contrast, e.g., Carey v. Brown, 447 U. S. 455, 465 (1980) (statute prohibiting all residential picketing except “peaceful labor picketing”), or regulate speech based on its function or purpose, contrast, e.g., Holder v. Humanitarian Law Project, 561 U. S. 1, 7, 27 (2010) (law prohibiting providing material support to terrorists). Nor do they impose a “restriction, penalty, or burden” by reason of content on TikTok—a conclusion confirmed by the fact that petitioners “cannot avoid or mitigate” the effects of the Act by altering their speech. Turner I, 512 U. S., at 644. As to petitioners, the Act thus does not facially regulate “particular speech because of the topic discussed or the idea or message expressed.” Reed, 576 U. S., at 163. [From page 10]
Instead, by ignoring those speech interests, and the heightened scrutiny that should have applied as a result, the Court applied what was essentially little more than rational basis review, even though it called it intermediate scrutiny. In short, according to the Court, because the government had good reason to be concerned with how TikTok slurped up user data and shared it, the government was free to do whatever it wanted in response, no matter how unduly destructive to speech interests (and ineffective in support of its own intended ends) its actions were.
The problem here is that not only was this decision an avoidance of the normal constitutional rule that should have better protected the affected speech interests, but there’s little to keep this particular sort of cop-out limited to this particular case. It will be very easy for other government actions that impact speech to be forgiven in the future, just as this one was, because there’s nothing that actually limits this decision to this case. The same flimsy reasoning could easily be applied in another case, despite the Court’s insistence to the contrary. We’ve seen it happen before*, when the Court tries to take a baby step to walk back the First Amendment but ends up with a decision that gets stuck on the books as a giant leap backwards, leaving everyone much less protected than they were before.
* Holder v. Humanitarian Law Project, another case dealing with foreign pressure on First Amendment rights, comes to mind. There was language in that decision explaining how its reasoning curtailing those rights was allowable in that case’s context, and just that context. (“We conclude that the material-support statute is constitutional as applied to the particular activities plaintiffs have told us they wish to pursue. We do not, however, address the resolution of more difficult cases that may arise under the statute in the future.”) Yet that decision nevertheless reverberates in other contexts, including this case, as the Court rested part of its analysis regarding the TikTok ban on that earlier exception it had somehow found itself constitutionally able to make.
The TikTok decision is a bad decision, and its per curiam nature hints that even the Court knows it. It reads like a compromise – an attempt to sacrifice TikTok without sacrificing everything – in a situation where, on an extremely tight timeline, the Court needed at least five votes to do something, and there wasn’t enough agreement as to what that something should be. At oral argument, and again during the Free Speech Coalition v. Paxton argument earlier this week, it became clear that several justices were uncomfortable issuing a stay or an injunction to buy more time to adjudicate this case, and the important issues it implicated, more carefully. And it seems there weren’t five votes to say the law was unconstitutional – probably, as oral argument also revealed, because some justices were extremely spooked by the national security implications related to data collection practices.
So if TikTok was going to lose – and it would have effectively lost even if the Court did nothing, given that the deadline for divestment was rapidly approaching – the compromise may have been to try to make it lose in a way that undermined protective First Amendment precedent in the least damaging way. As it was, both Justices Gorsuch and Sotomayor could, correctly, see that the law implicated speech interests, and that recognition will be important in the future when we need the Court to see those interests again. But as their concurring opinions made clear, they still would have found the law constitutional, despite its utter lack of the narrow tailoring that strict scrutiny requires. They would have left us with a decision no better than the one the DC Circuit had issued, where strict scrutiny would become all but useless to protect speech interests.
Under the circumstances, then, this decision may have been the least damaging one the Court could come up with, at least in the available time. But the hope that it wasn’t damaging at all seems naïve. The best we can hope for is that this decision somehow turns out to be the government’s one free bite at the apple, because if it happens again, where the government adopts this roadmap to act unconstitutionally against speech interests, even this Court might start to notice the constitutional problem with such laws and finally decide to do something about them.
It seemed pretty obvious from the way the Supreme Court’s oral arguments went regarding the TikTok ban that this would be the outcome: a 9-0 per curiam decision saying “eh, it’s fine to ban TikTok.”
There is no doubt that, for more than 170 million Americans, TikTok offers a distinctive and expansive outlet for expression, means of engagement, and source of community. But Congress has determined that divestiture is necessary to address its well-supported national security concerns regarding TikTok’s data collection practices and relationship with a foreign adversary. For the foregoing reasons, we conclude that the challenged provisions do not violate petitioners’ First Amendment rights.
The ruling is fundamentally problematic on a number of different levels, but it’s the new reality. This decision sets a dangerous precedent that could enable further government overreach and censorship, under the guise of national security concerns. We’ll have another post exploring the amount of absolute censorial fuckery that this ruling will create, if not in practice, at least among the eager-to-censor political class who will view this as an instruction manual.
But the key thing to me is that (as was suggested earlier this week) the Biden admin responded to this ruling, on a law that Biden fought for, signed excitedly, and had his solicitor general strongly defend in front of the Supreme Court, by saying “eh, never mind.”
This whiplash-inducing reversal from the Biden administration, after championing the TikTok ban, underscores the arbitrary and politically motivated nature of this decision. It raises questions about whether there was ever a genuine national security justification, or if it was merely a convenient excuse for censorship.
To summarize: this was a grave national security threat because China could get access to all sorts of secret data (which they already have access to because we don’t have any comprehensive data privacy law) or maybe it was because they could manipulate the minds of children (which every other form of media also can legally do) or because “THIS IS DIFFERENT IT’S CHINA YOU DUM DUM” as people on social media keep trying to tell me. The lack of a clear, consistent justification for singling out TikTok, while other apps and platforms engage in similar data collection practices, reveals the arbitrary and capricious nature of this ban.
Indeed, it was such a grave threat that the Supreme Court felt they had to rush the briefing way out of line with normal briefing schedules, because it was just so so important to block this app that the kids like.
And then… when the Supreme Court blesses it, the Biden admin is just… not interested anymore.
President Joe Biden won’t enforce a ban on the social media app TikTok that is set to take effect a day before he leaves office on Monday, a U.S. official said Thursday, leaving its fate in the hands of President-elect Donald Trump.
Trump has also suggested he won’t enforce the ban because he wants to “negotiate” some sort of agreement to take credit for everything, even though he was the first to try to ban the app after getting angry that kids on the app made him look foolish. Trump’s desire to “negotiate” and take credit, rather than address any actual concerns, suggests (once again) that political grandstanding, not national security, is the true motivation.
Incredibly, TikTok’s CEO Shou Zi Chew (who is Singaporean, not Chinese) is expected to have “a prime seating location on the dais” at Trump’s inauguration on Monday, which seems like an odd thing if Congress, and now the Supreme Court, has made it clear that he’s the guy running a dastardly spying/manipulation app for our (apparently) biggest adversary.
This jarring juxtaposition—condemning TikTok as a national security threat one moment, then honoring its CEO the next—lays bare the incoherence and hypocrisy at the heart of the government’s stance.
All of this is just painfully stupid, which the kids on TikTok all seem to recognize with their satirical mocking of the ban by saying farewell to “my Chinese spy” and embracing the even more “connected-to-the-CCP” app RedNote.
As for the decision itself, it effectively ends what little moral high ground the US had left on internet openness and freedom. For the past two decades, across multiple administrations, the State Department had taken a fairly strong position that foreign countries banning apps (which they all claim they do for “national security purposes”) was a dangerous attack on internet openness and freedoms.
And now the US can no longer claim that with a straight face. This decision is a gift to authoritarian regimes around the world. It provides cover and legitimacy for censorship and digital protectionism, weakening America’s ability to advocate for internet freedom on the global stage.
I guarantee that Chinese officials will actually use this blundering mess against the US. They will claim that it is a vindication of the approach that they take with the Great Firewall of China, saying that they “protect national security through banning apps” and that the US has chosen to follow their lead in doing the same.
We’ve now said it’s okay to create a Great Firewall of America, further splintering the internet and effectively ending the global internet experience. There will be a price paid for that, though we’ll only learn more about it with time. This Balkanization of the internet into national silos is a tragic reversal of the promise of a borderless digital world that fosters free expression and connection.
As for the ruling itself, the fact that the entire process was rushed shows through very clearly. The reasoning is muddled, and big questions are punted. It basically says “well, if Congress strongly believes there’s a national security threat, then the First Amendment concerns probably aren’t that big a deal.” That seems pretty problematic, because Congress has a pretty long list of censorial ideas that they can pass with strong majorities.
The Court’s deference to Congress on matters of national security, at the expense of First Amendment scrutiny, is a troubling abdication of its constitutional role. It opens the door for the legislative branch to run roughshod over civil liberties, using national security as a convenient excuse.
The Supreme Court is supposed to protect against that kind of thing, but here it is suddenly willing to give Congress great deference.
To start, the House Report focuses overwhelmingly on the Government’s data collection concerns, noting the “breadth” of TikTok’s data collection, “the difficulty in assessing precisely which categories of data” the platform collects, the “tight interlinkages” between TikTok and the Chinese Government, and the Chinese Government’s ability to “coerc[e]” companies in China to “provid[e] data.” H. R. Rep., at 3; see id., at 5–12 (recounting a five-year record of Government actions raising and attempting to address those very concerns). Indeed, it does not appear that any legislator disputed the national security risks associated with TikTok’s data collection practices, and nothing in the legislative record suggests that data collection was anything but an overriding congressional concern. We are especially wary of parsing Congress’s motives on this record with regard to an Act passed with striking bipartisan support.
If data privacy is truly the concern, then a comprehensive data protection law, rather than a piecemeal ban on a single platform, would be a more effective and less constitutionally problematic solution. By focusing on TikTok alone, Congress and the Court have enabled arbitrary censorship rather than addressing the underlying issue.
Furthermore, not parsing Congress’s motives seems especially problematic, given that many members of Congress directly cited impermissible (under the First Amendment) reasons for why they wanted this ban. Mitt Romney, for example, directly said the ban was a good idea because kids on TikTok were too strongly pro-Palestine. The Court’s failure to grapple with the censorial motives animating the TikTok ban — as exemplified by Sen. Romney’s comments about suppressing pro-Palestinian views — is a dereliction of its duty to safeguard free expression against viewpoint discrimination.
It can’t just be that as long as Congress attaches a non-content censorship reason to a bill that many want for censorial purposes, it magically makes it okay. But that is what the Supreme Court is saying here.
About the only attempt by the Supreme Court to recognize the havoc they are wreaking is a weak “hey, we’re ruling narrowly, don’t read too much into this precedential ruling we are putting out”:
While we find that differential treatment was justified here, however, we emphasize the inherent narrowness of our holding. Data collection and analysis is a common practice in this digital age. But TikTok’s scale and susceptibility to foreign adversary control, together with the vast swaths of sensitive data the platform collects, justify differential treatment to address the Government’s national security concerns. A law targeting any other speaker would by necessity entail a distinct inquiry and separate considerations.
The Court’s attempt to cabin the reach of its decision is unconvincing. By opening the door to First Amendment exceptions based on “striking bipartisan support,” the Court has invited further challenges to free expression. Censorial politicians will surely seize upon this language to test the boundaries of what speech they can suppress in the name of national security.
This is a messy, rushed decision that the US is going to regret. Hell, Biden’s reaction to it suggests he already regrets it. But it’s going to live on and create future problems for a country that once at least tried to appear to hold the moral high ground on an open and free internet.
The TikTok ban, and the Court’s acquiescence to it, represent a low point for digital civil liberties in America. It’s a self-inflicted wound that will haunt us for years to come, as we grapple with the fallout of a fragmented internet and emboldened censors, both at home and abroad.
If you only remember two things about the government pressure campaign to influence Mark Zuckerberg’s content moderation decisions, make it these: Donald Trump directly threatened to throw Zuck in prison for the rest of his life, and just a couple months ago FCC Commissioner (soon to be FCC chair) Brendan Carr threatened Meta that if it kept on fact-checking stories in a way Carr didn’t like, he would try to remove Meta’s Section 230 protections in response.
Two months later — what do you know? — Zuckerberg ended all fact-checking on Meta. But when he went on Joe Rogan, rather than blaming those actual obvious threats, he instead blamed the Biden administration, because some admin officials sent angry emails… which Zuck repeatedly admits had zero impact on Meta’s actual policies.
Indeed, this very fact check may be a good example of what I talked about regarding Zuckerberg’s decision to end fact-checking: it’s not as straightforward as some people think. Layers of bullshit may be presented misleadingly around a kernel of truth, and peeling back those layers is important for understanding what actually happened.
Indeed, this is my second attempt at writing this article. I killed the first version soon after it hit 10,000 words and I realized no one was going to read all that. So this is a more simplified version of what happened, which can be summarized as: the actual threats came from the GOP, to which Zuckerberg quickly caved. The supposed threats from the Biden admin were overhyped, exaggerated, and misrepresented, and Zuck directly admits he was able to easily refuse those requests.
All the rest is noise.
I know that people who dislike Rogan dismiss him out of hand, but I actually think he’s often a good interviewer for certain kinds of conversations. He’s willing to speak to all sorts of people and even ask dumb questions, taking on the role of listeners/viewers. And that’s actually really useful (and enlightening) in certain circumstances.
Where it goes off the rails, such as here, is when (1) nuance and detail matter and (2) the person he is interviewing has an agenda to push, with a message he knows Rogan will eat up, and knows Rogan doesn’t understand enough to pick apart what really happened.
This is not the first time that Zuckerberg has gone on Rogan and launched a narrative by saying things that are technically true in a manner that is misleading, likely knowing that Rogan and his fans wouldn’t understand the nuances, and would run with a misleading story.
Two and a half years ago, he went on Joe Rogan and said that the FBI had warned the company about the potential for hack and leak efforts put forth by the Russians, which Rogan and a whole bunch of people, including the mainstream media, falsely interpreted as “the FBI told us to block the Hunter Biden laptop story.”
Except that’s not what he said. He was asked about the NY Post story (which Facebook never actually blocked; they only — briefly — blocked it from “trending”), and Zuckerberg very carefully worded his answer to say something that was already known, but which people not listening carefully might think revealed something new:
The background here is that the FBI came to us – some folks on our team – and was like ‘hey, just so you know, you should be on high alert. We thought there was a lot of Russian propaganda in the 2016 election, we have it on notice that basically there’s about to be some kind of dump that’s similar to that’.
But the fact that the FBI had sent out a general warning to all of social media to be on the lookout for disinfo campaigns like that was widely known and reported on way earlier. The FBI did not comment specifically on the Hunter Biden laptop story, nor did they tell Facebook (or anyone) to take anything down.
Still, that turned into a big thing, and a bunch of folks thought it was a big revelation. In part because when Zuck told that story to Rogan, Rogan acted like it was a big reveal, because Rogan doesn’t know the background or the details or the fact that this had been widely reported. He also doesn’t realize there’s a huge difference between a general “be on the lookout” warning and a “hey, take this down!” demand, with the former being standard and the latter being likely unconstitutional.
In other words, Zuck has a history of using Rogan’s platform to spread dubious narratives, knowing that Rogan lacks the background knowledge to push back in the moment.
After that happened, I was at least open to the idea that Zuck just spoke in generalities and didn’t realize how Rogan and his audience would take what he said and run with it, believing a very misleading story. But now that he’s done it again, it seems quite likely that this is deliberate. When Zuckerberg wants to get a misleading story out to a MAGA-friendly audience, he can reliably dupe Rogan’s listeners.
Indeed, this interview was, in many ways, similar to what happened two years ago. He was relating things that were already widely known in a misleading way, and Rogan was reacting like something big was being revealed. And then the media runs with it because they don’t know the details and nuances either.
This time, Zuckerberg talks about the supposed pressure from the Biden administration as a reason for his problematic announcement last week:
Rogan: What do you think started the pathway towards increasing censorship? Because clearly we were going in that direction for the last few years. It seemed like uh we really found out about it when Elon bought Twitter and we got the Twitter Files and when you came on here and when you were explaining the relationship with FBI where they were trying to get you to take down certain things that were true and real and certain things they tried to get you to limit the exposure to them. So it’s these kind of conversations. Like when did all that start?
So first off, note the framing of this question. It’s not accurate at all. Social media websites have always had content moderation/content policy efforts. Indeed, Facebook was historically way more aggressive than most. If you don’t, your platform fills up with spam, scams, abuse, and porn.
That’s just how it works. And, indeed, Facebook in the early days was aggressively paternalistic about what was — and what was not — allowed on its site. Remember its famously prudish “no nudity” policy? Hell, there was an entire Radiolab podcast about how difficult that was to implement in practice.
So, first, calling it “censorship” is misleading, because it’s just how you handle violations of your rules, which is why moderation is always a better term for it. Rogan has never invited me on his podcast. Is that censorship? Of course not. He has rules (and standards!) for who he platforms. So does Meta. Rejecting some speech is not “censorship”, it’s just enforcing your own rules on your own private property.
Second, Rogan himself is already misrepresenting what Zuckerberg told him two years ago about the FBI. Zuck did not say that the FBI was trying to get Facebook to “take down certain things that were true and real” and “limit the exposure to them.” They only said to be on the lookout for potential attempts by foreign governments to interfere with an election, leaving it up to the platforms to decide how to handle that.
On top of that, the idea that how content moderation works only became public with the Twitter Files is false. The Twitter Files revealed… a whole bunch of nothing interesting that idiots have misinterpreted badly. Indeed, we know this because (1) we paid attention, and (2) Elon’s own legal team admitted in court that what people were misleadingly claiming about the Twitter Files wasn’t what was actually said.
From there, Zuck starts his misleading but technically accurate-ish response:
Zuck: Yeah, well, look, I think going back to the beginning, or like I was saying, I think you start one of these if you care about giving people a voice, you know? I wasn’t too deep on our content policies for like the first 10 years of the company. It was just kind of well known across the company that, um, we were trying to give people the ability to share as much as possible.
And, issues would come up, practical issues, right? So if someone’s getting bullied, for example, we deal with that, right? We put in place systems to fight bullying, you know? If someone is saying hey um you know someone’s pirating copyrighted content on on the service, it’s like okay we’ll build controls to make it so we’ll find IP protected content.
But it was really in the last 10 years that people started pushing for like ideological-based censorship and I think it was two main events that really triggered this. In 2016 there was the election of President Trump, also coincided with basically Brexit in the EU and sort of the fragmentation of the EU. And then you know in 2020 there was COVID. And I think that those were basically these two events where for the first time we just faced this massive massive institutional pressure to basically start censoring content on ideological grounds….
So this part is fundamentally, sorta, kinda accurate, which sets up the kernel of truth around which much bullshit will be built. It’s true that Zuck didn’t pay much attention to content policies on the site early on, but it’s nonsense that it was about “giving people a voice.” That’s Zuck retconning the history of Facebook. Remember, they only added things like the Newsfeed (which was more about letting people talk) when Twitter came about and Zuck freaked out that Twitter would destroy Facebook.
Second, he then admits that the company has always moderated, though he’s wrong that it was so reactive. From quite early on (as mentioned above) the company had decently strict content policies regarding how the site was moderated. And, really, much of that was based around wanting to make sure that users had a good experience on the site. So yes, things like bullying were blocked.
But what counts as bullying is a very subjective thing, and so much of content moderation is just teams trying to tell you to stop being such a jackass.
It is true that there was pressure on Facebook to take moderation challenges more seriously starting in 2016, and (perhaps?!?) if he had actually spent more time understanding trust & safety at that time, he would have a better grasp of the issues. But he didn’t, which meant that he made a mess of things, and then tried to “fix it” with weird programs like the Oversight Board.
But it also meant that he’s never, ever been good at explaining the inherent tradeoffs in trust & safety, and how some people are always going to dislike the choices you make. A good leader of a social network understands and can explain those tradeoffs. But that’s not Zuck.
Also, and this is important, Zuckerberg’s claims about pressure to moderate on “ideological” grounds are incredibly misleading. Yes, I’m sure some people were putting pressure on him around that, but it was far from mainstream and easy to ignore. People were asking him to stop potentially dangerous misinformation that was causing harm. For example, the genocide in Myanmar. Or information around COVID that was potentially legitimately dangerous.
In other words, it was really (like so much of trust & safety) an extension of the “no bullying” rule. The same was true of protecting marginalized groups like LGBTQ+ users or on issues like Black Lives Matter. The demands from users (not the government in those cases) were about protecting more marginalized communities from harassment and bullying.
I’m going to jump ahead because Zuck and Rogan say a lot of stupid shit here, but this article will get too long if I go through all of it. So let’s jump forward a couple of minutes, to where Zuckerberg really flubs his First Amendment 101 in embarrassing ways while trying to describe how Meta chose to handle moderation of COVID misinformation.
Zuckerberg: Covid was the other big one. Where that was also very tricky because you know at the beginning it was, you know, it’s like a legitimate “public health crisis,” you know, in the beginning.
And it’s… even people who are like the most ardent First Amendment defenders… that the Supreme Court has this clear precedent, that’s like all right you can’t yell fire in a crowded theater. There are times when if there’s an emergency your ability to speak can temporarily be curtailed in order to get an emergency under control.
So I was sympathetic to that at the beginning of Covid, it seemed like, okay you have this virus, seems like it’s killing a lot of people. I don’t know like we didn’t know at the time how dangerous it was going to be. So, at the beginning, it kind of seemed like okay we should give a little bit of deference to the government and the health authorities on how we should play this.
But when it went from, you know, two weeks to flatten the curve to… in like in the beginning it was like okay there aren’t enough masks, masks aren’t that important to, then, it’s like oh no you have to wear a mask. And you know all the, like everything, was shifting around. It just became very difficult to kind of follow.
In trying to defend Meta’s approach to COVID misinformation, Zuck manages to mangle First Amendment law in a way that’s both legally inaccurate and irrelevant to the actual issues at play.
There’s so much to unpack here. First off, he totally should have someone explain the First Amendment to him. He not only got it wrong, he even got it wrong in a way that is different than how most people get it wrong. We’ve covered the whole “fire in a crowded theater” thing so many times here on Techdirt, so we’ll do the abbreviated version:
It’s not a “clear precedent.” It’s not a precedent at all. It was an offhand comment (in legal terms: dicta, so not precedential) in a case about jailing someone for handing out anti-war literature (something most people today would recognize as pretty clearly a First Amendment problem).
The Justice who said it, Oliver Wendell Holmes, appeared to regret it almost immediately, and in a similar case very shortly thereafter changed his tune and became a much more “ardent First Amendment defender.”
Most courts and lawyers (though there are a few holdouts) insist that whatever precedent there was in Schenck (which, again, did not include that line) was effectively overruled a half century later in Brandenburg v. Ohio, which rejected the test in Schenck and moved to the “incitement to imminent lawless action” test.
So, quoting “fire in a crowded theater” these days is generally used as a (very bad, misguided) defense of saying “well, there’s some speech that’s so bad it’s obviously unprotected,” but without being able to explain why this particular speech is unprotected.
But Zuck isn’t even using it in that way. He seems to have missed that the whole point of the Holmes dicta (again, not precedent) was to talk about falsely yelling fire. Zuck implies that the (not actual) test is “can we restrict speech if there’s an actual fire, an actual emergency.” And, that’s also wrong.
But, the wrongness goes one layer deeper as well, because the First Amendment only applies to restrictions the government can put on speakers, not what a private entity like Meta (or the Joe Rogan Experience) can do on their own private property.
And then, even once you get past that, Zuck isn’t wrong that there was a lot of confusion about COVID and health in the early days, including lots of false information that came under the imprimatur of “official” sources, but… dude, Meta deliberately made the decision to effectively let the CDC decide what was acceptable even after many people (us included!) pointed out how stupid it was for platforms to outsource their decisions on “COVID misinfo” to government agencies which almost certainly would get stuff wrong as the science was still unclear.
But it wasn’t the White House that pressured Zuck into following the CDC position. Meta (alone among the major tech platforms) publicly declared early in the pandemic (for what it’s worth, when Trump was still President) that its approach to handling COVID misinformation would be based on “guidance” from official authorities like the CDC and WHO. Many of us felt that this was actually Meta abdicating its role and giving way too much power to government entities in the midst of an unclear scientific environment.
But for him to now blame the Biden admin is just blatantly ahistorical.
And from there, it gets worse:
Zuckerberg: This really hit… the most extreme, I’d say, during it was during the Biden Administration, when they were trying to roll out um the vaccine program and… Now I’m generally, like, pretty pro rolling out vaccines. I think on balance the vaccines are more positive than negative.
But I think that while they’re trying to push that program, they also tried to censor anyone who was basically arguing against it. And they pushed us super hard to take down things that were honestly were true. Right, I mean they they basically pushed us and and said, you know, anything that says that vaccines might have side effects, you basically need to take down.
And I was just like, well we’re not going to do that. Like, we’re clearly not going to do that.
Rogan then jumps in here to ask “who is they” but this is where he’s showing his own ignorance. The key point is the last line. Zuckerberg says he told them “we’re not going to do that… we’re clearly not going to do that.”
That’s it. That’s the ballgame.
The case law on this issue is clear: the government is allowed to try to persuade companies to do something. That’s known as using the bully pulpit. What it cannot do is coerce a company into taking action on speech. And if Zuckerberg and Meta felt totally comfortable saying “we’re not going to do that, we’re clearly not going to do that,” then end of story. They didn’t feel coerced.
Indeed, this is partly what the Murthy case last year was about. And during oral arguments, Justices Kavanaugh and Kagan (both of whom had been lawyers in the White House in previous lives) completely laughed off the idea that White House officials couldn’t call up media entities and try to convince them to do stuff, even with mean language.
Here was Justice Kavanaugh:
JUSTICE KAVANAUGH: Do you think on the anger point, I guess I had assumed, thought, experienced government press people throughout the federal government who regularly call up the media and — and berate them. Is that — I mean, is that not —
MR. FLETCHER: I — I — I don’t want
JUSTICE KAVANAUGH: — your understanding? You said the anger here was unusual. I guess I wasn’t —
MR. FLETCHER: So that —
JUSTICE KAVANAUGH: — wasn’t entirely clear on that from my own experience.
Later on, he said more:
JUSTICE KAVANAUGH: You’re speaking on behalf of the United States. Again, my experience is the United States, in all its manifestations, has regular communications with the media to talk about things they don’t like or don’t want to see or are complaining about factual inaccuracies.
Justice Kagan felt similarly:
JUSTICE KAGAN: I mean, can I just understand because it seems like an extremely expansive argument, I must say, encouraging people basically to suppress their own speech. So, like Justice Kavanaugh, I’ve had some experience encouraging press to suppress their own speech.
You just wrote an editorial. Here are the five reasons you shouldn’t write another one. You just wrote a story that’s filled with factual errors. Here are the 10 reasons why you shouldn’t do that again.
I mean, this happens literally thousands of times a day in the federal government.
“Literally thousands of times a day in the federal government.” What happened was not even that interesting or unique. The only issue, and the only time it creates a potential First Amendment problem, is if there is coercion.
This is why the Supreme Court rejected the argument in the Murthy case that this kind of activity was coercive and violated the First Amendment. The opinion, written by Justice Amy Coney Barrett, makes it pretty clear that the White House didn’t even apply that much pressure on Facebook over COVID info beyond some public statements; instead, most of the communication was Facebook sending info to the government (both admin officials and the CDC) and asking for feedback.
The Supreme Court notes that Facebook changed its policies to restrict more COVID info before it had even spoken to people in the White House.
In fact, the platforms, acting independently, had strengthened their pre-existing content moderation policies before the Government defendants got involved. For instance, Facebook announced an expansion of its COVID–19 misinformation policies in early February 2021, before White House officials began communicating with the platform. And the platforms continued to exercise their independent judgment even after communications with the defendants began. For example, on several occasions, various platforms explained that White House officials had flagged content that did not violate company policy. Moreover, the platforms did not speak only with the defendants about content moderation; they also regularly consulted with outside experts.
All of this info is public. It was in the court case. It’s in the Supreme Court transcript of oral arguments. It’s in the ruling in the Supreme Court.
Yet Rogan acts like this is some giant bombshell story. And Zuckerberg just lets him run with it. And then, the media ran with it as well, even though it’s a total non-story. As Kagan said, attempts to persuade the media happen literally thousands of times a day.
It only violates the First Amendment if they move over into coercion, threatening retaliation for not listening. And the fact that Meta felt free to say no and didn’t change its policies makes it pretty clear this wasn’t coercion.
But, Zuckerberg now knows he’s got Rogan caught on his line and starts to play it up. Rogan first asks who was “telling you to take down things” and Zuckerberg then admits that he wasn’t actually involved in any of this:
Rogan: Who is they? Who’s telling you to take down things that talk about vaccine side effects?
Zuckerberg: It was people in the um in the Biden Administration I think it was um… you know I wasn’t involved in those conversations directly…
Ah, so you’re just relaying the information that was publicly available all along and which we already know about.
Rogan then does a pretty good job of basically explaining my Impossibility Theorem (he doesn’t call it that, of course), noting the sheer scale of Meta’s properties, how most people can’t even comprehend that scale, and that mistakes are obviously going to happen. Honestly, it’s one of the better “mainstream” explanations of the impossibility of content moderation at scale.
Rogan: You’re moderating at scale that’s beyond the imagination. The number of human beings you’re moderating is fucking insane. Like what is… what’s Facebook… what how many people use it on a daily basis? Forget about how many overall. Like how many people use it regularly?
Zuck: It’s 3.2 billion people use one of our services every day
Rogan: (rolls around) That’s…!
Zuck: Yeah, it’s, no, it’s wild
Rogan: That’s more than a third of the planet! That’s so crazy and it’s almost half of Earth!
Zuck: Well on a monthly basis it is probably.
Rogan: UGGH!
But just I want I want to say that though for there’s a lot of like hypercritical people that are conspiracy theorists and think that everybody is a part of some cabal to control them. I want you to understand that, whether it’s YouTube or all these and whatever place that you think is doing something that’s awful, it’s good that you speak because this is how things get changed and this is how people find out that people are upset about content moderation and and censorship.
But moderating at scale is insane. It’s insane. What we were talking the other day about the number of videos that go up every hour on YouTube and it’s banana. It’s bananas. That’s like to try to get a human being that is reasonable, logical and objective, that’s going to analyze every video? It’s virtually impossible. It’s not possible. So you got to use a bunch of tools. You got to get a bunch of things wrong.
And you have also people reporting things. And how how much is that going to affect things there. You could have mass reporting because you have bad actors. You have some corporation that decides we’re going to attack this video cuz it’s bad for us. Get it taken down.
There’s so much going on. I just want to put that in people’s heads before we go on. Like understand the kind of numbers that we’re talking about here.
Like… that’s a decent enough explanation of the impossibility of moderating content at scale. If Zuckerberg wanted to lean into that, and point out that this impossibility and the tradeoffs it creates makes all of this a subjective guessing game, where mistakes often get made and everyone has opinions, that would have been interesting.
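To put even rough numbers on that impossibility, here’s a toy sketch. Everything in it is an assumption for illustration except the 3.2 billion daily users figure Zuckerberg cites on the podcast; the per-user volume and the accuracy rate are invented:

```python
# Toy illustration of why moderation at scale guarantees mistakes.
# Only the daily-users figure comes from the interview; the rest is assumed.

daily_users = 3_200_000_000      # Zuckerberg's figure from the podcast
items_per_user_per_day = 1       # assumed, for illustration
accuracy = 0.999                 # assumed: correct calls 99.9% of the time

daily_items = daily_users * items_per_user_per_day
daily_mistakes = daily_items * (1 - accuracy)

print(f"{daily_items:,} items/day -> {daily_mistakes:,.0f} mistakes/day")
# 3,200,000,000 items/day -> 3,200,000 mistakes/day
```

Even at an accuracy rate no human or automated system actually hits, that’s millions of wrong calls every single day, and every one of them is a potential “censorship” anecdote for someone.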
But he’s tossed out the line where he wants to blame the Biden administration (even though the evidence on this has already been deemed unproblematic by the Supreme Court just months ago) and he’s going to feed Rogan some more chum to create a misleading picture:
Zuckerberg: So I mean like you’re saying I mean this is… it’s so complicated this system that I could spend every minute of all of my time doing this and not actually focused on building any of the things that we’re trying to do. AI glasses, like the future of social media, all that stuff.
So I get involved in this stuff, but in general we we have a policy team. There are people who I trust there. The people are kind of working on this on a day-to-day basis. And the interactions that um that I was just referring to, I mean a lot of this is documented… I mean because uh you know Jim Jordan and the the House had this whole investigation and committee into into the the kind of government censorship around stuff like this and we produced all these documents and it’s all in the public domain…
I mean basically these people from the Biden Administration would call up our team and like scream at them and curse. And it’s like these documents are… it’s all kind of out there!
Rogan: Gah! Did you record any of those phone calls? God!
Zuckerberg: I don’t no… I don’t think… I don’t think we… but but… I think… I want listen… I mean, there are emails. The emails are published. It’s all… it’s all kind of out there and um and they’re like… and basically it just got to this point where we were like, no we’re not going to. We’re not going to take down things that are true. That’s ridiculous…
Parsing what he’s saying here is important. Again, we already established above a few important facts that Rogan doesn’t understand, and either Zuck doesn’t understand or is deliberately being coy in his explanation: (1) government actors are constantly trying to persuade media companies regarding their editorial discretion and that’s not against the law in any way, unless it crosses the line into coercion, and Zuck is (once again) admitting there was no coercion and they had no problem saying no. (2) He’s basing this not on actual firsthand knowledge but on stuff that is “all kind of out there” because “the emails are published” and “it’s all in the public domain.”
Now, because I’m not that busy creating AI glasses (though I am perhaps working on the future of social media), I actually did pay pretty close attention to what happened with those published emails and the documents in the public domain, and Zuckerberg is misrepresenting things, either on purpose or because the false narrative filtered back to him.
The reason I followed it closely is because I was worried that the Biden administration might cross the First Amendment line. This is not a case of me being a fan of the Biden administration, whose tech policies I thought were pretty bad almost across the board. The public statements that the White House made, whether from then-press secretary Jen Psaki or Joe Biden himself, struck me as stupid things to say, but they did not appear to cross the First Amendment line, though they came uncomfortably close.
So I followed this case closely, in part, because if there was evidence that they crossed the line, I would be screaming from the Techdirt rooftops about it.
But, over and over again, it became clear that while they may have walked up to the line, they didn’t seem to cross it. That’s also what the Supreme Court found in the Murthy case.
So when Zuckerberg says that there are published emails, referencing the “screaming and cursing,” I know exactly what he’s talking about. Because it was a highlight of the district court ruling that claimed the White House had violated the First Amendment (which was later overturned by the Supreme Court).
Indeed, in my write-up of that District Court ruling, I even called out the “cursing” email as an example that struck me as one of the only things that might actually be a pretty clear violation of the First Amendment. Here’s what I wrote two years ago when that ruling came out:
Most of the worst emails seemed to come from one guy, Rob Flaherty, the former “Director of Digital Strategy,” who seemed to believe his job in the White House made it fine for him to be a total jackass to the companies, constantly berating them for moderation choices he disliked.
I mean, this is just totally inappropriate for a government official to say to a private company:
Things apparently became tense between the White House and Facebook after that, culminating in Flaherty’s July 15, 2021 email to Facebook, in which Flaherty stated: “Are you guys fucking serious? I want an answer on what happened here and I want it today.”
But then I dug deeper and saw the filing where that quote actually comes from, realizing that the judge in the district court was taking it totally out of context. The ruling made it sound like Flaherty’s cursing outburst was in response to Facebook/Zuck refusing to go along with a content moderation demand.
If that were actually the case, then that would absolutely violate the First Amendment. The problem is that it’s not what happened. It was still inappropriate in general, but not an unconstitutional attack on speech.
What had happened was that Instagram had a bug that prevented the Biden account from getting more followers, and the White House was annoyed by that. Someone from Meta responded to a query, saying basically “oops, it was a bug, our bad, but it’s fixed now” and that response was forwarded to Flaherty, who acted like a total power-mad jackass with the “Are you guys fucking serious? I want an answer on what happened here and I want it today” response.
So here’s the key thing: that heated exchange had absolutely nothing to do with pressuring Facebook on its content moderation policies. That “public domain” “cursing” email is entirely about a bug that prevented the Biden account from getting more followers, and Rob throwing a bit of a shit fit about it.
As Zuck says, this is all “out there” in “the public domain.” Notably, though, no one on the Rogan team actually looked it up, and it’s unclear if Zuckerberg himself ever did.
But I did.
We can still find that response wholly inappropriate and asshole-ish. But it’s not because Facebook refused to take down information on vaccine side effects, as is clearly implied (and how Rogan takes it).
Indeed, Zuckerberg (again!) points out that the company’s response to requests to remove anti-vax memes was to tell the White House no:
Zuck: They wanted us to take down this meme of Leonardo DiCaprio looking at a TV talking about how 10 years from now or something um you know you’re going to see an ad that says okay if you took a Covid vaccine you’re um eligible you you know like uh for for this kind of payment like this sort of like class action lawsuit type meme.
And they’re like, “No, you have to take that down.” We just said, “No, we’re not going to take down humor and satire. We’re not going to take down things that are true.”
He then does talk about the stupid Biden “they’re killing people” comment, but leaves out the fact that Biden walked that back days later, admitting “Facebook isn’t killing people” and instead blaming people on the platform spreading misinformation and saying “that’s what I meant.”
But it didn’t change the fact that Facebook refused to take action on those accounts.
So even after he’s said multiple times that Facebook’s response to whatever comments came in from the White House was to tell them “no,” which is exactly what the Supreme Court made clear showed there was no coercion, Rogan goes on a rant as if Zuckerberg had just told him that they did, in fact, suppress the content the White House requested (something Zuck directly denied to Rogan multiple times, even right before this rant):
Rogan: Wow. [sigh] Yeah, it’s just a massive overstepping. Also, you weren’t killing people. This is the thing about all of this. It’s like they suppressed so much information about things that people should be doing regardless of whether or not you believe in the vaccine, regardless… put that aside. Metabolic health is of the utmost importance in your everyday life whether there’s a pandemic or there’s not and there’s a lot of things that you can do that can help you recover from illness.
It prevents illnesses. It makes your body more robust and healthy. It strengthens your immune system. And they were suppressing all that information and that’s just crazy. You can’t say you’re one of the good guys if you’re suppressing information that would help people recover from all kinds of diseases. Not just Covid. The flu, common cold, all sorts of different things. High doses of Vitamin C, D3 with K2 and magnesium. They were suppressing this stuff because they didn’t want people to think that you could get away with not taking a vaccine.
Dude, Zuck literally told you over and over again that they said no to the White House and didn’t suppress that content.
But Zuck doesn’t step in to correct Rogan’s misrepresentations, because he’s not here for that. He’s here to get this narrative out, and Rogan is biting hard on it. Hilariously, Rogan then treats the thing Zuck just said didn’t happen (while chortling along as if it did) as proof of the evils of “distortion of facts” and… where the hell is my irony font?
Rogan: This is a crazy overstep, but scared the shit out of a lot of people… redpilled as it were. A lot of people, because they realized like, oh, 1984 is like an instruction manual…
Zuck: Yeah, yeah.
Rogan: It’s like this is it shows you how things can go that way with wrong speak and with bizarre distortion of facts.
I mean, you would know, wouldn’t you, Joe?
From there, they pivot to a different discussion, though again, it’s Zuckerberg feeding Rogan lines about how the US ought to “protect” the US tech industry from foreign governments, rather than trying to regulate them.
A bit later on, there actually is a good discussion about the kinds of errors that are made in content moderation and why. Rogan (after spending so much time whining about the evils of censorship) suddenly turns around and says that, well, of course, Facebook should be blocking “misinformation” and “outright lies” and “propaganda”:
Rogan: But you do have to be careful about misinformation! And you have to be careful about just outright lies and propaganda complaints, or propaganda campaigns rather. And how do you differentiate?
Dude, like that’s the whole point of the challenge here. You yourself talked about the billions of people and how mistakes are made because so much of this is automated. But then you were misleadingly claiming that this info was taken down over demands from the government (which Zuckerberg clearly denied multiple times), and for you to then wrap back around to “but you gotta take down misinformation and lies and propaganda campaigns” is one hell of a swing.
But, as I said, it does lead to Zuck explaining how confidence levels matter, and how where you set those levels will cover both how much “bad” content gets removed, but also how much is left up and how much innocent content gets accidentally caught:
Zuck: Okay, you have some classifier that’s it’s trying to find say like drug content, right? People decide okay, it’s like the opioid epidemic is a big deal, we need to do a better job of cracking down on drugs and drug sales. Right, I don’t I don’t want people dealing drugs on our networks.
So we build a bunch of systems that basically go out and try to automate finding people who are who are dealing with dealing drugs. And then you basically have this question, which is how precise do you want to set the classifier? So do you want to make it so that the system needs to be 99% sure that someone is dealing drugs before taking them down? Do you want to to be 90% confident? 80% confident?
And then those correspond to amounts of… I guess the the statistics term would be “recall.” What percent of the bad stuff are you finding? So if you require 99% confidence then maybe you only actually end up taking down 20% of the bad content. Whereas if you reduce it and you say, okay, we’re only going to require 90% confidence now maybe you can take down 60% of the bad content.
But let’s say you say, no we really need to find everyone who’s doing this bad thing… and it doesn’t need to be as as severe as as dealing drugs. It could just be um I mean it could be any any kind of content of uh any kind of category of harmful content. You start getting to some of these classifiers might have you know 80, 85% Precision in order to get 90% of the bad stuff down.
But the problem is if you’re at, you know, 90% precision that means one out of 10 things that the classifier takes down is not actually problematic. And if you filter… if you if you kind of multiply that across the billions of people who use our services every day that is millions and millions of posts that are basically being taken down that are innocent.
And upon review we’re going to look at and be like this is ridiculous that this thing got taken down. Which, I mean, I think you’ve had that experience and we’ve talked about this for for a bunch of stuff over time.
But it really just comes down to this question of where do you want to set the classifiers so one of the things that we’re going to do is basically set them to… require more confidence. Which is this trade-off.
It’s going to mean that we will maybe take down a smaller amount of the harmful content. But it will also mean that we’ll dramatically reduce the amount of people who whose accounts were taken off for a mistake, which is just a terrible experience.
And that’s all a good and fascinating fundamental explanation of why the Masnick Impossibility Theorem remains in effect. There are always going to be different kinds of false positives and false negatives, and that’s going to always happen because of how you set the confidence levels of the classifiers.
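Zuck’s framing maps cleanly onto a little arithmetic. Here’s a minimal Python sketch using his own illustrative precision/recall figures; the daily post volume, the share of “bad” content, and the exact pairings of confidence, recall, and precision are all assumptions for illustration, not actual Meta numbers:

```python
# A sketch of the tradeoff Zuckerberg describes, using his illustrative
# figures. Traffic volume and "bad content" rate are pure assumptions,
# not real Meta metrics.
DAILY_POSTS = 1_000_000_000   # assumed: posts reviewed per day
BAD_RATE = 0.01               # assumed: share of posts that are "bad"

# (required confidence, recall, precision): demand more confidence and
# you catch less of the bad content, but make fewer wrongful takedowns.
settings = [
    (0.99, 0.20, 0.99),   # ~20% of bad content caught, almost no mistakes
    (0.90, 0.60, 0.90),   # ~60% caught, 1 in 10 takedowns is innocent
    (0.85, 0.90, 0.85),   # ~90% caught, far more collateral damage
]

bad_posts = DAILY_POSTS * BAD_RATE
for confidence, recall, precision in settings:
    caught = bad_posts * recall       # bad posts actually removed
    removed = caught / precision      # total takedowns, mistakes included
    wrongful = removed - caught       # innocent posts wrongly removed
    print(f"confidence >= {confidence:.0%}: "
          f"{caught:,.0f} bad posts removed, "
          f"{wrongful:,.0f} innocent posts removed per day")
```

Even with these made-up inputs, the loosest setting wrongly removes over a million innocent posts a day, which is exactly the “millions and millions” of mistaken takedowns Zuckerberg describes, while the strictest setting leaves 80% of the bad content up. There is no threshold where both numbers look good.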
Zuck could have explained that many of the other things Rogan was whining about regarding the “suppression” of content around COVID (which, again, everyone but Rogan has admitted was based on Facebook’s own decision-making, not the US government) were quite often a similar sort of situation: the confidence levels on the classifiers may have caught information they shouldn’t have, but the company (at the time) felt those levels had to be set where they were to make sure enough of the “bad” content (which Rogan himself says they should take down) got caught.
But there is no recognition of how this part of the conversation impacts the earlier conversation at all.
There’s more in there, but this post is already insanely long, so I’ll close out with this: as mentioned in my opening, Donald Trump directly threatened to throw Zuck in prison for the rest of his life if Facebook didn’t moderate the way he wanted. And just a couple months ago, FCC Commissioner (soon to be FCC chair) Brendan Carr threatened Meta that if it kept on fact-checking stories in a way Carr didn’t like, he would try to remove Meta’s Section 230 protections in response.
None of that came up in this discussion. The only “government pressure” that Zuck talks about is from the Biden admin with “cursing,” which he readily admits they weren’t intimidated by.
So we have Biden officials who were, perhaps, mean, but not so threatening that Meta felt the need to bow down to them. And then we have Trump himself and leading members of his incoming administration who sent direct and obvious threats, which Zuck almost immediately caved to.
And yet Rogan (and much of the media covering this podcast) claims he “revealed” how the Biden admin violated the First Amendment. Hell, the NY Post even ran an editorial pretending that Zuck didn’t go far enough because he didn’t reveal all of this in time for the Murthy case. And that’s only because the author doesn’t realize he literally is talking about the documents in the Murthy case.
The real story here is that Zuckerberg caved to Trump’s threats while feeling perfectly free to push back on the Biden admin. Rogan at one point rants about how Trump will now protect Zuck because Trump “uniquely has felt the impact of not being able to have free speech.” Given that real story, the irony is rich.
Zuckerberg knew how this would play to Rogan and Rogan’s audience, and he got exactly what he needed out of it. As social media continues to grapple with content moderation challenges, it would be nice if leaders like Zuckerberg were actually transparent about the real pressures they face, rather than fueling misleading narratives.
But that’s not the world we live in.
Strip away all the spin and misdirection, and the truth is inescapable: Zuckerberg folded like a cheap suit in the face of direct threats from Trump and his lackeys, while barely batting an eye at some sternly worded emails from Biden officials.
As I noted when the Ninth Circuit Appeals Court handed down its original decision back in 2023, I didn’t care much for the plaintiff, but I did care quite a bit about the First Amendment. Less-than-ideal litigants make some pretty good caselaw, and that’s how it went here.
The plaintiff challenging Oregon’s surreptitious recording law was Project Veritas, a right-wing bunch of agitators that tends to rely on heavily edited recordings to prove whatever point it’s trying to make. But its tactics aren’t all that distinguishable from more credible forms of journalism. The same sort of thing is essential to whistleblowing. All that really separates Project Veritas from these other things is its general lack of ethics.
That being said, it raised a valid point in court. And, on appeal, the Ninth Circuit Appeals Court agreed with its allegations, finding that Oregon’s law against surreptitious recordings violated the Constitution.
Applying Animal Legal Def. Fund. v. Wasden, 878 F.3d 1184 (9th Cir. 2018), the panel held that section 165.540(1)(c) regulates protected speech (unannounced audiovisual recording) and is content based because it distinguishes between particular topics by restricting some subject matters (e.g., a state executive officer’s official activities) and not others (e.g., a police officer’s official activities). As a content-based restriction, the rule fails strict scrutiny review because the law is not narrowly tailored to achieving a compelling governmental interest in protecting conversational privacy with respect to each activity within the proscription’s scope, which necessarily includes its regulation of protected speech in places open to the public.
The dissent thought otherwise, claiming the law protected citizens’ right to “conversational privacy,” thus basically making Oregon a two-party consent state. More oddly, the dissent claimed the fact that surreptitious recordings could be shared more widely and quickly than when the law was first crafted was reason enough to ignore the obvious First Amendment implication of banning this form of subterfuge.
In other words, in Project Veritas’s view, having one’s oral communication secretly recorded imposes no greater burden on privacy than merely having the same comments heard—never mind that recorded comments can be forwarded to vast audiences, posted on the internet in perpetuity, selectively edited, presented devoid of context, or manipulated using modern technology.
But that’s always been the case with recordings. They can be manipulated, edited, and shared. That it happens more quickly now doesn’t really change anything.
Except that it apparently does. The Ninth Circuit reconvened for an en banc hearing and has flipped its own script. According to the court’s (extremely long) decision [PDF], the problem here isn’t any of these things necessarily. It’s the other thing: too much scrutiny.
That’s where the court decided it went wrong. It applied too high a level of scrutiny to something it has now chosen to portray as a modest, content-neutral regulation, even though it’s really a government incursion on free speech protections.
First, it says the law is content-neutral, in that it applies to the act of recording, rather than the contents of the recordings.
[The statute] does not “draw[ ] distinctions based on the message a speaker conveys,” and it was not adopted because of the government’s “disagreement with the [speaker’s] message.”
While that’s true, it kind of sidesteps the reality of imposing this limitation on people engaged in journalism or whistleblowing, who will once again find these actions illegal. So, it does affect the messages a speaker “conveys,” even if it doesn’t directly affect the person doing the literal speaking when the recording is being made.
Having decided that, the Appeals Court lowers the scrutiny bar and finds it’s much easier for the law to clear it.
To further its interest in preserving conversational privacy, Oregon adopted a relatively modest notice requirement. Absent an applicable exception, Project Veritas must inform participants in a conversation that they will be recorded before initiating a recording. Keeping the purpose of the statute in mind, section 165.540(1)(c) is exceptionally well tailored to protecting Oregonians’ private conversations. By requiring that participants in a conversation be informed before an audio recording begins, but not requiring that they consent to the recording, the statute minimizes the infringement upon Project Veritas’s journalistic efforts while still protecting the interviewees’ right to knowingly participate in Project Veritas’s speech—or not. Once a person is on notice that she will be recorded, she may choose to speak or remain silent. Either way, a noticed recording does not violate a privacy interest. Moreover, consistent with Oregon’s interest in conversational privacy, the statute does not sweep in photography or video recordings; it applies only to recordings of face-to-face oral communications.
Oregon’s statutory scheme is well tailored because it also accounts for some settings in which people cannot reasonably expect not to have their oral statements recorded…. These exceptions permit open recordings at public gatherings, including protests, and private meetings in which participants should reasonably expect that they will be recorded….
It also says journalists have engaged in journalism for years without secretly recording people. And since they’ve been able to publish exposés without this form of personal intrusion, apparently it shouldn’t be an option for anyone. If people choose not to speak after being informed they’re being recorded, no one’s rights are harmed. The court’s citations perhaps put it better (the claim that surreptitious recordings allow people to speak for others without their consent), but the end result is something that’s just going to lend itself to abuse by government officials, even if the law specifically contains some small exceptions excluding (some) public employees from its coverage.
The dissent sounds more like the original opinion delivered by this court.
Oregon’s law is grossly overbroad and not narrowly tailored to advance the state’s interest in conversational privacy (even assuming intermediate scrutiny applies). Oregon prevents citizens from recording even in public areas if they do not announce that they are audiotaping. Oregon thus tramples on people’s ability to record and report on a large swath of public and newsworthy events. And because the law bans the taping of conversations where there is no reasonable expectation of privacy, Oregon’s statute is not narrowly tailored to further the state’s interest in conversational privacy.
In any event, Oregon’s law should be subject to strict scrutiny, not intermediate scrutiny, because the statute is not content-neutral. The statute has a law-enforcement exception that allows citizens to legally record law enforcement officials—but no one else—without announcing that they are recording them. Oregon has essentially carved out only law enforcement matters from its ban on unannounced recording. Because this is a content-based restriction, strict scrutiny applies—and Oregon’s law must fall to the wayside…
Unfortunately, this time around it’s the dissent and so it ultimately has no effect on Oregon’s law. Project Veritas is now back where it started. If it wants to challenge this, it will have to ask the Supreme Court to take a look at it. Given that court’s lack of interest in fielding cases, much less engaging in robust defense of certain constitutional rights, this might be a lost cause even with a plaintiff more than half the current justices probably approve of. After all, striking this law down just means some of their conservative buddies might be “victimized” by surreptitious recordings in the future. And that’s probably not a risk they’re willing to take.
Great job, US government. You went so overboard with your “TikTok is an evil Chinese app” and deciding to ban it that you’re pushing kids to go even deeper into the Chinese app ecosystem.
The US government’s ham-handed attempt to ban TikTok on national security grounds is not only a troubling attack on free speech and the open internet; it’s already backfiring, sending users flocking to Chinese apps that pose even greater privacy concerns while young people mock the government’s misguided paternalism.
As I type this, the top two apps on the iPhone iOS app store are Rednote and Lemon8.
The first, Rednote, is a Chinese app that is more like Instagram/Pinterest, but which was not particularly popular in the US. At least until this week. Lemon8, meanwhile, is owned by TikTok parent ByteDance and has been around for a bit, though it has never been that popular. It was ByteDance’s attempt to create a Pinterest-like app.
Many people are pointing out that Xingin, the maker of Rednote, is even more closely tied to the Chinese government than TikTok ever was.
Meanwhile, the biggest (and perhaps final?) trend on TikTok is kids saying “goodbye to my Chinese spy” as they expect the TikTok app to potentially go dark this weekend. If you go on TikTok and look up the #chinesespy hashtag, there’s a never-ending stream of people effectively mocking the US government / mourning the potential loss of an app they like.
It’s basically all just people mocking the out-of-touch, censorial US government as it sets up its very own “Great Firewall.”
There was a time when the US looked on the Great Firewall of China as evidence of how closed off and censorial China was, as opposed to the US’s approach of openness and freedom. But with this move, the US has not only made a mockery of its own support of free speech and an open internet, but given a huge gift to China, by suggesting their approach of banning foreign apps to “protect its citizens” is the right approach.
The Supreme Court will weigh in some time in the next few days, and there’s a decent chance it will mock the First Amendment and freedom, and fall for the moral panic about China.
But the kids who are using TikTok are all pretty clearly aware of just how stupid this all is and are commenting on it the best way they can, spitefully mocking out-of-touch politicians, judges, and the media.
Still, the end result of this nonsense will be the end of an era of American belief in free speech and an open internet. In trying to “protect” Americans from China, our gripped-by-moral-panic political class has made us just like China. The government has decided that the only way to combat China’s techno-authoritarian censorship model is to emulate it.
This was inevitable, ever since Donald Trump and the MAGA world freaked out when social media’s attempts to fact-check the President were deemed “censorship.” The reaction was both swift and entirely predictable. After all, how dare anyone question Dear Leader’s proclamations, even if they are demonstrably false? It wasn’t long before we started to see opinion pieces from MAGA folks breathlessly declaring that “fact-checking private speech is outrageous.” There were even politicians proposing laws to ban fact-checking.
In their view, the best way to protect free speech is apparently (?!?) to outlaw speech you don’t like.
With last week’s announcement by Mark Zuckerberg that Meta was ending its fact-checking program, the anti-fact-checking rhetoric hasn’t slowed down one bit.
So let’s be clear here: fact-checking is speech. Fact-checking is not censorship. It is protected by the First Amendment. Indeed, in olden times, when free speech supporters would talk about the “marketplace of ideas” and how the “best response to bad speech is more speech,” they meant things like fact-checking. They meant that if someone were blathering on about utter nonsense, a regime that enabled more speech would let others come along and fact-check that nonsense.
There is no “censorship” involved in fact-checking. There is only a question of how others respond to the fact checks.
What the MAGA world is upset about is that, in some cases, private entities (who have every right to do this) would look at some fact checks and decide “maybe we shouldn’t promote utter fucking nonsense (or in some cases, potentially dangerous nonsense!) and spread it further”.
This is all still free speech. Some of it is speech about other speech and some of it is consequences from that speech.
But not one lick of it is “censorship.”
Yet this narrative has become so embedded in the MAGA world that the NY Post can write an entire article claiming that “fact-checking censors” exist without ever giving a single actual example of it happening.
There’s a really fun game that the Post Editorial Board is playing here, pretending that they’re just fine with fact-checking, unless it leads to “silencing.”
The real issue, that is, isn’t the checking, it’s the silencing.
But what “silencing” ever actually happened due to fact-checking? And when was it done by the government (which would be necessary for it to violate the First Amendment)? The answer to both: never.
The piece whines about a few NY Post articles that had limited reach on Facebook, but that’s Facebook’s own free speech as well, not censorship. Also, it’s not at all clear that any of those issues had anything to do with “fact checking,” rather than a determination that the Post may have violated Facebook’s rules.
It does cite the supposed “censorship” of Trump’s NIH nominee Jay Bhattacharya for the Great Barrington Declaration:
Most notably, Dr. Jay Bhattacharya of Stanford and his colleagues from Harvard and Oxford got silenced for recommending against mass lockdowns and instead for a focus on protecting only the elderly and other highly vulnerable populations.
Except, as we called out just recently, even Bhattacharya’s colleague who helped put together the Great Barrington Declaration (and who hosted the website) has said flat out that the reason the FB page was taken down had nothing to do with Facebook, but rather anti-vaxxers who brigaded the reporting system, claiming the Great Barrington Declaration was actually a pro-vaccination plot.
The Post goes on with this fun set of words:
Yes, the internet is packed with lies, misrepresentations and half-truths: So is all human conversation.
The only practical answer to false speech is and always has been true speech; it doesn’t stop the liars or protect all the suckers, but most people figure it out well enough.
Shutting down debate in the name of “countering disinformation” only serves the liars with power or prestige or at least the right connections.
First off, the standard saying is that the response to false speech should be “more speech,” not necessarily “true speech.” But more to the point, uh, how do you get that “true speech”? Isn’t it… fact checking? And if, as the NY Post suggests, the problem here is false speech in the fact checks, then shouldn’t the response be more speech in response, rather than silencing the fact checkers?
I mean, their own argument isn’t even internally consistent.
They’re literally saying that we need more “truthful speech” and less “silencing of speech” while cheering on the silencing of organizations who try to provide more truthful speech. It’s a blatant contradiction.
The piece concludes with this bit of nonsense:
PolitiFact and all the rest are welcome to keep going, as long as they’re just equal voices in the conversation; we certainly mean to go on calling out what we see as lies.
Check all the facts you want, as long as you don’t get to silence anyone else.
But… that’s always been the case. Fact checkers have never had the power to “silence anyone else.” They just did their fact checking, provided more speech, and let others decide how to deal with that speech. The Post’s argument is a strawman, railing against a problem that doesn’t actually exist.
In the end, the Post’s piece inadvertently makes the case for more fact-checking, not less. In a world awash with misinformation, we need credible voices providing additional context and correcting the record. That’s the very essence of the free marketplace of ideas.
The Post seems to want a “free marketplace of ideas” where only ideas they agree with are allowed to be expressed. That’s not how free speech works.
Trying to silence voices calling out misinformation in the name of free speech is the height of hypocrisy. The Post should take its own advice – if you disagree with a fact check, respond with more speech, not by celebrating the active silencing of fact checkers you disagree with.
The 53-page report, titled “Delusion of Collusion: How the House Republican Majority Abused Oversight Powers to Protect Elon Musk and Silence His Critics,” exhaustively documents how Jordan launched a sham investigation in what appears to be a clear attempt to intimidate advertisers and bully them into subsidizing Musk’s ExTwitter, while falsely claiming it was about fighting “collusion.”
Because the Democrats tend to be inept and incompetent in explaining reality to people, Rep. Jerry Nadler released the report on New Year’s Eve where it basically got zero attention. As far as I can tell, the only news report to cover it was a small legal antitrust trade publication. By the time the ball dropped in Times Square hours after the report had been released, it had effectively disappeared.
However, it deserves way more attention for all of the nonsense it puts into the public record, specifically focusing on Jordan and Musk’s effort to attack GARM, a small non-profit that just worked with advertisers and social media platforms to encourage the platforms to protect the brand safety of advertisers. As we’ve covered, that attack was successful. Even though his ExTwitter had put out a press release talking about how excited they were to “rejoin” GARM just weeks earlier, Musk went on to sue GARM, which was almost immediately shut down by the World Federation of Advertisers.
The report breaks down how this was a clear case of Jordan and Musk weaponizing the government to silence critical speech.
By March 2023, Twitter’s value had fallen from $44 billion to $20 billion. The reason for this decline in value is no mystery, given the facts outlined above. Nevertheless, the Majority launched an investigation into the advertisers which have declined to spend money on the platform, accusing them of “colluding” to hurt the company’s profits. Since then, the Majority has spent countless dollars and hours of staff time trying to figure out why advertisers might be hesitant to risk their brands’ reputations on a platform whose owner told them, in November 2023, to “Go fuck yourself.”
Chairman Jordan’s so-called investigation culminated in a July 2024 “interim report” which used cherry picked documents and misleading transcript excerpts to suggest that the committee had uncovered evidence of “collusion” when in fact the very opposite is true. In fact, the complete and contextualized documents and testimony show that the Global Alliance for Responsible Media and its member companies were engaged in a pro-competitive effort to address the substantial brand risk that harmful online content poses to advertisers and to consumers.
Chairman Jordan’s report had an audience of one: Elon Musk. In fact, the entire report seems like pretext for a lawsuit Musk filed against various advertising entities and ultimately to silence the advertisers who expressed concern about content on his platform. The resources of this Committee should not be directed to further pad a billionaire’s bottom line. In contrast, this minority report is intended for the American public, who are entitled to the truth about this investigation and about Chairman Jordan’s true aims and abuse of congressional oversight power.
It’s hard to imagine a more blatant example of a powerful government official abusing his authority to carry water for a political ally and major GOP donor. The fact that Jordan is doing this while sanctimoniously claiming to be fighting the “weaponization” of government is beyond parody.
As the report calls out:
For the past 20 months, the Chairman of the House Judiciary Committee has abused his oversight power and the rule of law to push an agenda that would pervert the free market and undermine individual companies’ independent decisions as to where to place advertisements online. The spread of illegal, harmful, abusive, and false and misleading content online results in actual harm, both online and offline. We are left to conclude that the Majority’s ultimate goal was not to conduct antitrust oversight as they claim, but rather to silence criticism of harmful online content and those who promote it, deter content moderation, and protect the ability to use mis- and disinformation campaigns to achieve political ends.
Ya think? This was obvious from the beginning, but almost entirely ignored by the credulous media that uncritically amplified Jordan’s false claims.
The report thoroughly debunks Jordan’s flimsy antitrust pretext and exposes his true aim: strong-arming companies into boosting Musk and his political allies.
It also calls out the irony of the committee that claims to be fighting weaponization, actually being the chief party weaponizing the government against speech:
The Majority is engaging in a transparently political effort to use the antitrust laws to benefit their allies by conferring upon them outcomes that they could not otherwise achieve in the marketplace. This is not just a misuse of the antitrust laws, but fundamentally subverts the goals of those laws. The irony could not be greater. While spending most of this Congress attacking the Biden administration’s so-called weaponization of government, the Majority here is trying to weaponize the antitrust laws under a highly dubious theory to override legitimate market outcomes.
It also calls out the MAGA trend of falsely claiming that content moderation or boycotts could possibly violate the First Amendment:
Finally, the Majority bandies about words like censorship, in a misguided effort to evoke the First Amendment. But as the Majority well knows, the First Amendment only applies to government action. And in this case, the only governmental burdening of speech is the Majority’s onslaught against GARM and its members. It is an effort to bully the advertisers into subsidizing firms whose content moderation policies put brands and businesses at risk. It is an attempt to hijack free speech, as well as antitrust, for political purposes.
If reality mattered, this report would be a bombshell. But, again, everyone seems to be living in a fog of nonsense, where anything the MAGA world says it’s doing, no matter how obviously false, is treated as genuine. And any time anyone calls out the lack of clothes on the emperor, it’s dismissed as sour grapes or “derangement syndrome.”
The report is thorough and detailed. It explains why companies might not want to advertise on ExTwitter for totally legitimate business reasons, calling out examples of big brands having their ads show up next to “pro-Nazi” content, and noting that consumers (the marketplace again!) will often punish companies whose advertisements support such hatred:
Now consider the category of misinformation that the Majority alleges GARM’s members misapply to the detriment of conservative-voiced content. The GARM framework defines misinformation as “the presence of verifiably false or willfully misleading content that is directly connected to user or societal harm.” Consumer surveys suggested that inappropriate content, including misinformation, negatively affects brand trust and purchase behavior. These results explained, in part, why advertisers are concerned about the nexus between brand safety and misinformation. Additional studies examined this nexus in more detail. A 2024 article in NATURE reported the results of an experiment which demonstrated that consumers are likely to reduce purchases from firms that advertise on websites that publish misinformation compared to firms that do not. Unlike the surveys which measured intention to change purchase behavior, subjects in this experiment made actual economic choices. Additional research on consumer reaction to misinformation was provided by the IPG Mediabrands and Zefr MAGNA Media Trials Study which found that “advertising next to misinformation led to wasted dollars for brands, eroded brand perception, and negatively impacted KPIs [key performance indicators].”
The challenges of directing ad placement to trustworthy sites and away from misinformation sites continues to loom large. The 2024 NATURE study found that of the 100 most active advertisers, an astounding 79.8 percent that used digital advertising platforms had advertisements placed in online misinformation outlets in a given week. The authors attributed the problem to the use of such platform systems that allocate advertising to such websites. Another study, by the Pew Research Center, suggested that “for every $2.16 in digital ad revenue sent to legitimate newspapers, U.S. advertisers are sending $1 to misinformation websites.”
In sum, online advertising is very important for advertisers and for the websites that provide and host content, many of whose business models depend on it. But harmful content is challenging the business models of advertisers, content providers, and platforms alike. Consumers associate the online content with the brands that advertise there. When a brand is advertised near harmful content, its value is undermined because most consumers believe that the brand knowingly chose that content and site for its advertising.
In other words, there are completely and totally understandable business reasons for advertisers to stop advertising on ExTwitter.
And all GARM was trying to do was help advertisers make sure that they didn’t risk angering customers by having ads appear next to highly controversial content. And they did so in a way that everyone involved knew was just creating more information and allowing advertisers (and social media platforms) to make their own final decisions:
GARM’s voluntary frameworks, which the biggest social media platforms helped develop, provide structures for analysis and created a common lexicon. Much like the terms of art in marketing or expressions in mathematics, a shared terminology facilitates communication that is foundational for constructive working relationships across organizations. Such terminology enhances transparency, making market transactions more efficient. The buyer better understands what sellers are offering in terms of brand safety and the seller better understands what buyers want. Both advertisers and platforms benefit from this common approach and independent decision making is improved.
Crucially, the frameworks do not dictate advertising outcomes. Applying those frameworks is an inherently subjective exercise that includes tailoring to the specific requirements of the brands and leads to outcomes that vary across GARM’s members. Juhl described how GroupM customizes its work in ad placement to reflect the specific needs of their advertiser clients:
GroupM works to place our clients’ ads on media pursuant to their goals, preferences, and target audiences, and we continually engage with our clients to understand their particular risk tolerance levels. These risk tolerances shift due to our clients’ own business conditions and how they view the current political and social environments. Clients shift priorities very quickly and it is our job to execute their strategy with speed and precision. We always follow our client brand’s ad placement wishes.
It is also important to recognize that the application of the GARM frameworks usually operates within a firm’s set of marketing policies and hence was only one consideration among many. These marketing policies vary by firm. Most were created before the GARM frameworks and continue to shape online advertising choices.
But Jim Jordan and Elon Musk bent over backwards to pretend that it was “illegal collusion” in violation of antitrust law. And this report shows how ridiculous that claim is to anyone who actually looks at the facts.
The Majority’s July 2024 Interim Report offers no direct evidence of an agreement among GARM and its members. Mere status as a member of GARM would not, without more, support a finding of a conspiracy. Consistent with the key Supreme Court precedents Matsushita Elec. Industrial Co. v. Zenith Radio Corp. and Monsanto Co. v. Spray-Rite Service Corp., a plaintiff would have to “present evidence tending to show that association members, in their individual capacities, consciously committed themselves to a common scheme designed to achieve an unlawful objective.” In contrast, GARM and its members are absolutely clear that their advertising decisions are made independently. As Unilever USA President Patel testified during the hearing,
I want to be very clear on one crucially-important fact. Unilever and Unilever alone controls our advertising spending. No platform has the right to our advertising dollars. As we look across the available advertising inventory, recognizing we do not have unlimited money to spend on advertising, we choose the channels, the platforms, and the outlets that give us the greatest commercial benefit for our advertising investments.
During questioning Patel further confirmed that, “A hundred percent, Unilever makes its own decisions,” and does not follow any outside group’s direction to avoid any outlet. This sentiment is echoed by GARM’s Rakowitz during his transcribed interview:
Q: But just to nail down that point, GARM doesn’t tell individual members—
A: Absolutely not.
Q: —what to do?
A: No, we do not.
Q: Or where to place ads?
A: No, we do not.
Q: Or where to avoid placing ads?
A: We do not.
These comments are consistent with the advertiser decision making process discussed in Part IIB.
As the report highlights, nothing about this represents a serious antitrust inquiry.
A serious antitrust inquiry would need to address the ease of reaching and sustaining an agreement. Two major obstacles—large numbers of participants and participants with diverse interests—have long been recognized by antitrust law as making collusive schemes less likely. In the GARM setting, overcoming these obstacles would loom large.
The real reason companies stopped advertising on ExTwitter is no grand conspiracy to suppress free speech. It was a simple business calculation. Advertising there is bad for business:
The Majority focused on alleged harm caused by the demonetization of its favored conservative-voices. They assert that this loss of revenue is caused by a large conspiracy involving GARM and its 100 plus members to suppress conservative-voiced online platforms and outlets by stopping advertising support. But the most compelling explanation for this revenue decline is apolitical. Advertisers want to attract and retain customers. When their advertising is placed next to harmful content the advertisement instead repels customers. Not surprisingly, advertisers gravitate to outlets that pose less risk to their brands. Again, this isn’t rocket science.
Instead, the much more obvious conclusion is the one that we’ve been shouting from the rooftops for the past few years: that it’s Jordan who is weaponizing the government to silence speech:
As with other of this Committee’s recent investigations, we are left to conclude that its ultimate goal was not to “conduct[] oversight of the adequacy and enforcement of U.S. antitrust laws” as they claim, but rather to silence criticism of harmful online content and those who promote it, deter content moderation, and protect the ability to use mis- and disinformation campaigns to achieve political ends. The Majority’s desperate ploy to launder their failed censorship arguments through an antitrust framing itself fails. The Majority’s actions have intimidated organizations who call attention to the prevalence of hate, disinformation, and other harmful or unlawful content online. Fostering a more transparent, accountable, and responsible digital environment is not only lawful, it is good for businesses, consumers and the general public. Chairman Jordan’s investigation and others like it will undermine this work and lead to the further deterioration of our information ecosystem and will threaten free speech.
Antitrust is not about choosing winners and losers. It is about ensuring a fair fight. In this instance we see that the Majority is willing to condemn any outcome that they do not like as being unfair and the outcome appears to involve both a category of supposed victims as well as a particular victim—X. In fact, this investigation originated after the Speaker of the House Kevin McCarthy, Chairman Jordan, and Elon Musk were talking and Musk said, “‘by the way, there’s this organization GARM, because GARM is harm.’ [sic] I [Jordan] never forgot that sentence.” No he did not. Jordan embarked on an investigation whose outcome was a foregone conclusion and for which the resulting report’s title [GARM’s Harm] was effectively supplied by Musk himself. Despite all of the investigation’s shortcomings, it excelled in one regard—providing taxpayer funded discovery for the richest man in the world and one of Trump’s biggest donors. A lawsuit launched by X just days after the Majority’s interim report was released began by touting that the conduct was “the subject of an active investigation” by the House Judiciary Committee before reproducing the fruits of the subcommittee’s fishing expedition in the form of a document demand. Perhaps this assault on legitimate business activity seems worth it to the Majority.
It’s pretty scathing as Congressional reports go.
In the end, this sordid saga illustrates the dangerous way that accusations of “censorship” and “collusion” are being cynically weaponized to bully companies into amplifying favored political content. Jordan and Musk’s campaign against GARM sets a troubling precedent.
By abusing the power of Congressional oversight to intimidate advertisers and platforms, they are effectively arguing that companies have an obligation to subsidize and support any speech, no matter how hateful or harmful, or else be accused of “censorship.” It’s an attempt to pervert the free market to serve their political agenda.
But as this report makes clear, advertisers’ decisions on where to place their ads are driven by legitimate business considerations about brand safety and consumer sentiment, not some nefarious plot to silence conservatives. The real threat to free speech is not content moderation or advertiser boycotts – it’s government officials like Jim Jordan trying to use their power to dictate what speech must be subsidized and supported.
Sadly, given the current media and political environment, it’s unlikely this report will get the attention it deserves. But for anyone who cares about the future of online speech, platform governance, and the abuse of government authority, it’s essential reading. It shines a harsh light on Jordan and Musk’s cynical, dishonest campaign and the damage it has done to free speech and the free market.
Senator Richard Blumenthal is at it again. The long-time Connecticut Senator, who never met an internet regulation he didn’t like, is eager to reintroduce his Kids Online Safety Act (KOSA) — a bill that would trample all over the First Amendment in a misguided attempt to “protect the children.”
As we’ve explained countless times, KOSA is a dangerous and unconstitutional bill that would force online platforms to censor a wide swath of speech. But Blumenthal doesn’t seem to care. He’s more interested in grabbing headlines than crafting thoughtful policy.
Indeed, he’s so eager to bring back KOSA he admits that the point of the bill is to suppress content he dislikes.
The 119th Congress is underway, and U.S. Sen. Richard Blumenthal said one of his priorities is passing legislation to protect kids on social media.
He said he plans to reintroduce his Kids Online Safety Act legislation this session.
[….]
Supporters of the bill, including Blumenthal, have denied that it threatens the First Amendment.
“The dangers of social media are no less now than they were in the last session, and we need to pass the Kids Online Safety Act to give parents tools and young people control so that addictive, destructive content on bullying, eating disorders, and self-harm can be stopped,” Blumenthal told reporters at an unrelated event on Thursday.
I mean, I guess it’s a choice for Blumenthal to first claim there are no First Amendment concerns and then straight up admit that he thinks KOSA can be used to “stop… destructive content.”
So he admits it’s a censorship bill.
We’ve spent years now explaining the problems with KOSA, including the fact that his co-author on the bill had admitted that she believes KOSA will be useful in silencing LGBTQ+ content that she believes is dangerous. And here, Blumenthal is admitting that, yes, of course the bill is designed to “stop” content that he finds “destructive” without realizing that what other people (including the bill’s co-author) find “destructive” is things like “trans people exist.”
Does Blumenthal not realize that censoring content in response to regulation is (1) a violation of the First Amendment he swore to uphold and protect, and (2) doesn’t stop the actual harms he’s complaining about?
The bill is inherently problematic. As Senator Rand Paul pointed out, censoring the internet doesn’t protect kids. Indeed, it doesn’t help prepare them for the modern world at all.
At best, the bill will simply lead companies to block all kinds of valuable speech to avoid having to fight about it in court. It would inevitably lead to overly cautious censorship in an attempt to avoid liability, doing real harm to free speech (including important speech around LGBTQ issues, health issues, and more).
Blumenthal, of course, doesn’t care. He did the same thing with FOSTA, and despite overwhelming evidence (as many of us warned!) that bill has resulted in real human suffering and made law enforcement’s job harder, Blumenthal still shamelessly insists that bill was a success.
Because that’s Blumenthal’s default posture. But the truth is clear. And it goes against Blumenthal.
Blumenthal cares not for good policy. He cares only about policy that makes him look good in the headlines. These are often not the same thing.
We’ll see what the bill says when it eventually gets reintroduced, but it is noteworthy that House Republicans were concerned enough about how it could be used for censorship that they refused to move it in the last Congress.