If there was anyone with any spine, honesty, or morality in the Trump administration, these astounding gaffes would have been headed off. But there’s no one left with any of these traits in the White House, so we get the sort of thing we’re now seeing with increasing frequency: Trump (deliberately or not) forgetting who was sitting in the Oval Office in 2020.
You know the old cartoon representation of the conscience — the devil on one shoulder and the angel on the other? When I imagine Trump occasionally having a second thought before speaking/posting, all I see are a bunch of little Trumps on both shoulders waving “GO DONNY!” flags.
President Donald Trump blamed the Capitol riot on former President Joe Biden, claiming that “THE BIDEN FBI” had placed agents in the crowd that had assembled in Washington, D.C. on Jan. 6, 2021.
Trump’s post came in the wee hours of Sunday, at 12:38 a.m.
“THE BIDEN FBI PLACED 274 AGENTS INTO THE CROWD ON JANUARY 6,” he wrote on Truth Social. “If this is so, which it is, a lot of very good people will be owed big apologies. What a SCAM – DO SOMETHING!!! President DJT”
This is obviously not so, which anyone should immediately know: it was still Trump’s FBI on Jan. 6, and remained so until he left office (unwillingly, and one insurrection attempt later) two weeks later. Had he just said “the FBI” instead of “the Biden FBI,” he might have been able to make a point, however implausible that point might be.
Instead, he went the other way and blamed the guy who didn’t even have an FBI to call his own while Trump supporters raided the Capitol building in hopes of overturning an election.
But arguments that these statements are deliberate and evidence of 4-D chess fall apart the more often Trump does the same thing. Instead of looking like a mismanaged disinformation campaign, it just looks demented. And I don’t mean colloquially. I mean in the literal, medical sense of the word.
Just in: Documents show conclusively that Christopher Wray, Deranged Jack Smith, Merrick Garland, Lisa Monaco, and other crooked lowlifes from the failed Biden Administration, signed off on Operation Arctic Frost. They spied on Senators and Congressmen/women, and even taped their calls. They cheated and rigged the 2020 Presidential Election. These Radical Left Lunatics should be prosecuted for their illegal and highly unethical behavior!
At best, Trump has a legitimate complaint against Christopher Wray (who was heading the FBI in 2020). And by “legitimate,” I only mean he was actually employed by the FBI at the point in time referenced by Donald Trump.
But everyone else was appointed by Joe Biden after he took over as president in 2021. To claim they somehow “rigged” an election is literally insane. Merrick Garland was a federal judge while Trump was in office. Jack Smith was still at The Hague. Lisa Monaco was a Biden advisor during his presidential campaign. The only person who could have conceivably been part of an inside job was someone who was still working for Trump at the time: Christopher Wray.
Trump has already published an enemies list to the White House website. Stuff like this appears to be the venting of his private list of people he doesn’t like. He recognizes some names and gets angry, never bothering to consider relevant details like, say, who was actually in charge of the place when he was apparently getting screwed over by the democratic process.
Back in his first term, there were still enough people around him to help curb these impulses a bit. I mean, no one could really control him when he went off-script but he didn’t spend nearly as much time just blasting disjointed social media buckshot into the void. The people that surround him now perform only two tasks in response to things like these: (1) engaging in massive amounts of spin or (2) just pretending it didn’t happen.
This isn’t healthy. The GOP is almost entirely composed of people pretending to be mad about “woke” stuff while ensuring the garden hose aimed at the authoritarian kudzu never gets shut off. They’re doomed to repeat the past because they learned all the wrong lessons from it. Sooner or later, every authoritarian regime begins eating its own. At some point, they’ll be up against the wall, having sold their souls for the privilege of being executed by their compatriots. And the longer they pretend no one needs to tell Donald Trump “no,” the more inevitable this endpoint becomes.
The US attended the UN Cybercrime Treaty signing ceremony in Hanoi this weekend, where 72 countries signed on to a Russia-backed framework for global surveillance cooperation. Whether the US actually puts pen to paper (and all the reporting on this is kind of cagey) is almost beside the point—by showing up and legitimizing the proceeding, what the Biden administration last year blessed with “we’ll fix it from within” is now in the hands of the Trump administration. You know, the one that views criticism as crime and governmental power as something to maximize, not constrain. What could go wrong?
The Russian-initiated treaty, which we’ve warned about multiple times over the past few years, creates a framework for cross-border law enforcement “cooperation” on “cybercrime” that’s defined so broadly it could cover basically any activity a government doesn’t like that happens to involve a computer.
As The Record notes, despite widespread concerns raised by plenty of people, the Biden administration felt it made more sense to sign onto this agreement in order to be able to fix it from within later:
The White House later told reporters they felt they had to back the treaty now in order to make changes to it later and shape how it was implemented globally. They also said it would likely expand the number of countries that will respond to warrants issued by the U.S. related to cybercrime.
That’s… optimistic. And also completely backwards.
That reasoning might work for a trade agreement where you’re negotiating tariff rates or dispute resolution procedures. But when the treaty is about creating a legal framework that explicitly empowers any government to demand data on anyone accused of a “serious crime” under their domestic law, you can’t shape your way out of that. The mechanism itself is the problem.
You don’t fix a fundamentally broken treaty by signing it and hoping to shape implementation later. That’s not how it works. Once you’ve given your stamp of approval to a Russia-backed surveillance framework, you’ve already lost the game. The “we’ll fix it later” approach might have made some sense when there was at least a theoretical chance the US would push back against abuse. But now we’re handing these powers to an administration that has made clear it views any criticism as an attack, any dissent as a threat, and any check on its power as illegitimate.
We, the undersigned organizations, remain deeply concerned that the UN Convention Against Cybercrime (UNCC) will facilitate human rights abuses across borders. As some states head to Hanoi for the UNCC signing ceremony from October 25-26, we urge them to refrain from signing and ratifying the treaty and to use the occasion to highlight the importance of safeguarding human rights when implementing this Convention.
The problems with this treaty are extensive and we’ve covered them before, but they’re worth repeating given that they’re now officially going into effect. As Human Rights Watch details:
The Convention will obligate governments to collect electronic evidence and share it with foreign authorities for any “serious crime,” defined as an offense punishable by at least four years of imprisonment under domestic law. Many governments criminalize activities protected by international human rights law and impose sentences that would make them “serious offenses” under this framework, such as criticism of the government, peaceful protest, same-sex relationships, investigative journalism, and whistleblowing.
Think about that for a second (or more, because the more you think about it, the worse it seems). In Thailand, criticizing the king can get you fifteen years. In Russia, criticizing the invasion of Ukraine is considered “violence against police” or “incitement of hatred.” Under this treaty, those become legitimate grounds for demanding data from US companies about US users. And the safeguards against this? Basically non-existent.
The treaty does include some language about respecting human rights in Article 6, but as we’ve noted before, it’s incredibly weak. It essentially says states should implement the treaty “consistent with their obligations under international human rights law” and that nothing in the treaty should be interpreted as “permitting suppression of human rights.”
Great. Except every authoritarian government on earth claims their repressive laws are consistent with human rights. They just have a different interpretation of what those rights mean and when they can be restricted. Thailand says its lèse-majesté law protects “public security.” Russia says its laws against criticizing the military are necessary to protect national defense. Saudi Arabia’s laws against “defaming” the government are framed as protecting social stability.
The treaty’s language does nothing to stop any of this. It’s a permission slip, not a constraint.
And the Biden administration knew this last year. As The Record notes, the current State Department said it is “still reviewing the treaty” when asked whether the US will be among the first to ratify it. Which raises the obvious question: if you’re still reviewing whether it’s safe to ratify, why the hell did you join the signing ceremony?
Now add a treaty that allows foreign governments to request surveillance data on anyone accused of a “serious crime” under their domestic law. And remember: the US appears to be supporting this framework. We’re telling the world that this is legitimate international cooperation.
So when Viktor Orban’s Hungary decides that criticizing the government is a serious crime worthy of five years imprisonment, or when India decides that reporting on government corruption qualifies as sedition, or when any of dozens of increasingly authoritarian governments decide they want to track down dissidents who happen to use American tech platforms—they now have a treaty-backed framework to request that data.
And what’s the Trump administration going to do? Push back on behalf of journalists and activists? Defend the principle that criticism of government shouldn’t be treated as a crime? Please.
The Human Rights Watch statement is particularly pointed about the location of the signing ceremony:
The signing ceremony in Hanoi is taking place against the backdrop of an intensified crackdown by the Vietnamese government on dissent to punish people simply for raising concerns or complaints about government policies or local officials, including online.
So countries are literally signing a surveillance treaty in a country that’s actively cracking down on online dissent.
The organizations opposing the treaty also make clear what should have been obvious from the start: you can’t fix this by signing it and hoping for the best. They urge states that are considering signing to withhold support unless they can guarantee meaningful safeguards will prevent abuse in practice.
States should refuse to sign or ratify the Convention. States that have already committed to signing should adopt concrete human rights safeguards and demonstrate how these enable them to implement the Convention’s terms in a manner that fully respects human rights.
Rights-respecting states that are considering signing the UNCC despite the significant threat it poses to human rights should withhold their support unless and until they can guarantee that certain conditions are in place, namely that they and other signatories will implement the treaty with meaningful safeguards and other legal protections that will prevent human rights abuses in practice.
And they lay out what those safeguards would need to include: extensive stakeholder consultation, national frameworks that meet international human rights standards, formal reservations to ensure dual criminality requirements, transparency into implementation, and making human rights compliance a prerequisite for any funding or capacity building.
None of that is in place. None of it is even being seriously discussed. Having the Biden administration bless this approach last year, and the Trump admin now show up to the signing ceremony, basically takes away any and all leverage.
There is no “later” for shaping implementation when you’ve already legitimized the framework. And there’s definitely no hope of the Trump administration using this treaty responsibly when they’ve made clear they view governmental power as something to be maximized, not constrained.
The treaty required only 40 ratifications to enter into force, and 72 countries have now signed it; once 40 of those signatories ratify, a 90-day clock starts until the treaty is official. In a better world, the US would make clear it will not be among those to sign or ratify the treaty. In theory, it still could. As various reports note, many of the big American tech companies also opposed this treaty.
Now would be a good time for the likes of Mark Zuckerberg, Elon Musk, or Marc Andreessen to use some of their connections with Trump to suggest he go in another direction. But that seems unlikely.
The Biden administration fucked this up big time last year by coming out in support of this treaty with a “we’ll help fix it from within later” approach. That we’re now having to hope someone convinces Trump to push back on it is less than ideal.
This treaty is damaging and dangerous. The Biden administration gave it its initial blessing, and now the Trump administration is poised to help the world’s worst authoritarians abuse it.
I have a simple question for Senator Ted Cruz: Who was president in 2018? How about 2020?
I ask because Cruz just released a “bombshell” report claiming that the Biden administration “converted” CISA into “the Thought Police.” There’s just one tiny problem with this narrative: Cruz’s own report shows that everything he’s mad about started under Donald Trump, under whose leadership CISA was created. And also that Cruz’s researchers think responding to false information is censorship. Also, studying disinformation is, somehow, censorship.
But, most importantly, apparently Ted Cruz doesn’t seem to know how time works.
Look, we’ve been through this dance before. The Supreme Court, in a decision written by Justice Amy Coney Barrett, already examined these exact claims about government “censorship” and found them to be bullshit. Barrett’s decision mentions “no evidence” at least five times and includes a devastating footnote explaining how the “evidence” was “clearly erroneous.”
The Fifth Circuit relied on the District Court’s factual findings, many of which unfortunately appear to be clearly erroneous. The District Court found that the defendants and the platforms had an “efficient report-and-censor relationship.” Missouri v. Biden, 680 F. Supp. 3d 630, 715 (WD La. 2023). But much of its evidence is inapposite. For instance, the court says that Twitter set up a “streamlined process for censorship requests” after the White House “bombarded” it with such requests. Ibid., n. 662 (internal quotation marks omitted). The record it cites says nothing about “censorship requests.” See App. 639–642. Rather, in response to a White House official asking Twitter to remove an impersonation account of President Biden’s granddaughter, Twitter told the official about a portal that he could use to flag similar issues. Ibid. This has nothing to do with COVID–19 misinformation. The court also found that “[a] drastic increase in censorship . . . directly coincided with Defendants’ public calls for censorship and private demands for censorship.” 680 F. Supp. 3d, at 715. As to the “calls for censorship,” the court’s proof included statements from Members of Congress, who are not parties to this suit. Ibid., and n. 658. Some of the evidence of the “increase in censorship” reveals that Facebook worked with the CDC to update its list of removable false claims, but these examples do not suggest that the agency “demand[ed]” that it do so. Ibid. Finally, the court, echoing the plaintiffs’ proposed statement of facts, erroneously stated that Facebook agreed to censor content that did not violate its policies. Id., at 714, n. 655. Instead, on several occasions, Facebook explained that certain content did not qualify for removal under its policies but did qualify for other forms of moderation.
Cruz and his team apparently missed all that. Or they know about it and have decided to misrepresent it. I’m not sure which is worse.
The centerpiece of Cruz’s report is CISA—the Cybersecurity & Infrastructure Security Agency. According to Cruz, this agency was created with pure intentions under Trump but was then “converted” by Biden into a censorship machine.
Cruz’s report repeatedly undermines its own thesis. He writes:
“Beginning in 2018, CISA organized and attended regular meetings with industry and government officials to push its censorship agenda”
2018, Ted. Who was president then, Ted? Do you know? I’ll give you a hint: it wasn’t Joe Biden.
Oh, and remember how Cruz claimed that dealing with misinformation wasn’t part of CISA’s original plan? Well, CISA was created on November 16, 2018. So if they started these “regular meetings” in 2018, that means… they started immediately. Under Trump. As part of the original plan.
Also, Cruz keeps calling this a “censorship agenda,” but his own report shows that CISA’s role was coordination and information sharing. You know, the thing they were explicitly created to do.
The supposed smoking gun in Cruz’s report is something called “switchboarding”—where CISA would pass along reports from state election officials to social media companies. Cruz presents this as evidence of censorship.
But here’s what actually happened: Election officials would flag potential election misinformation to CISA, specifically where such misinformation might undermine the integrity of the election system (i.e., telling people to vote in the wrong place or on the wrong day). CISA would forward it to platforms with a clear disclaimer that this was not a demand. Platforms would then review the content against their own policies.
Every single message CISA sent included this disclaimer:
The U.S. Department of Homeland Security (DHS) Cybersecurity and Infrastructure Security Agency (CISA) is not the originator of this information. CISA is forwarding this information, unedited, from its originating source; this information has not been originated or generated by CISA. This information may also be shared with law enforcement or intelligence agencies.
In the event that CISA follows up to request further information, such a request is not a requirement or demand. Responding to this request is voluntary and CISA will not take any action, favorable or unfavorable, based on decisions about whether or not to respond to this follow-up request for information.
Throughout the report, Cruz makes baseless claims that his own sources immediately contradict. He says CISA “directly instructed social media companies to moderate specific content.” Then, in the very same paragraph, admits that what actually happened was CISA forwarding content with disclaimers, and platforms reviewing it “based on their policies.”
Take the following for example:
During the 2020 election, CISA directed state and local election officials to report supposed election-related MDM to CISA. CISA would then review the reports and forward them to social media companies so they could remove the content. This process is referred to as “switchboarding.” As Mr. Scully, who led the CISA team performing this work, explained, switchboarding “was essentially an [election] official…identify[ing] something on social media they deemed to be disinformation aimed at their jurisdiction. They could forward that to CISA, and CISA would share that with the appropriate social media companies.”
The emails below between Scully, the Maryland State Board of Elections, and Twitter illustrate how the switchboarding process worked. Step One: the Maryland official emailed Scully a few tweets regarding mail-in ballots. Step Two: Scully forwarded that email to Twitter. Step Three: Twitter immediately responded that it would “escalate” the tweets and later confirmed that the “[t]weets have been actioned for violations of our policies.”
We’ve covered this before, but let’s cover it again. Anyone could (and still can!) flag any content on Twitter, saying that it violates their policies. It is true that most sites also did set up separate portals to handle such flags from government actors, in part because those might require extra legal scrutiny.
And, note what happened: Twitter reviewed the content to see if they violated its policies. They did not take them down because the government requested it, but because they violated policies.
And we know, for a fact, that Twitter (and other social media sites) actually rejected the vast majority of such flags. Hell, in Cruz’s own report he includes a quote from a CISA employee, Brian Scully, noting that they knew the companies would review it against their own policies:
According to Scully, CISA knew social media companies would apply their content moderation policies to “disinformation” if CISA alerted them to it. “The idea was,” he explained, that social media companies “would make [a] decision on the content that [CISA] forward[ed] to them based on their policies.” He acknowledged that if the content had not been brought to social media companies’ attention, the platforms would not have otherwise moderated it.
Also, Scully couldn’t possibly know that last line is true. Because he has no idea what sort of monitoring the companies would do otherwise or who else might flag the same information to them. And, just the fact that Cruz’s report doesn’t quote Scully and only summarizes that he “acknowledged” such a claim is suspect.
So, I did what Cruz didn’t do for the readers of the report and looked up Scully’s full deposition to see how that conversation actually went down. And… Cruz is totally misrepresenting what was said. Scully notes that they only shared information for the companies to decide what to do with it.
Q. Switchboard work, what does that mean?
A. It was essentially an audit official to identify something on social media they deemed to be disinformation aimed at their jurisdiction. They could forward that to CISA and CISA would share that with the appropriate social media companies.
Q. And what was the purpose of sharing it with social media companies?
A. Mostly for informational awareness purposes, just to make sure that the social media companies were aware of potential disinformation.
Q. Was there an understanding that if the social media platforms were aware of disinformation that they might apply their content moderation policies to it?
A. Yes. So the idea was that they would make decision on the content that was forwarded to them based on their policies.
Q. Whereas, if it hadn’t been brought to their attention then they obviously wouldn’t have moderated it as content; correct?
A. Yeah, I suppose that’s true, as far as I’m aware of it.
Note the full consistency all along here. At no point was the idea here about censorship. It was always flagging content for the platforms to decide what to do with it (and later reports showed they took no action on over 60% of the URLs reported).
There’s also a subtle, but very important, nuance in that final question. The question was not about whether or not the moderation would or would not have happened if CISA called it out. It just says “if it hadn’t been brought to their attention.”
Cruz’s team pretends Scully was asked only about whether action would have occurred if CISA had flagged the content. The report then claims Scully “acknowledged” that the takedown would only have happened because of CISA, which is not what he actually said.
It’s finally halfway through the report that Cruz tries to tie all this to the Biden administration, by suggesting that there was a change under Biden. But, as per usual, he is taking things out of context and presenting them in the most misleading light possible.
Cruz claims that CISA ramped up its “speech policing” efforts under Biden, while his own report shows they actually stopped switchboarding in 2022:
CISA told the Committee that it stopped switchboarding in 2022. Brian Scully testified that former CISA Director Jen Easterly apparently made the decision to forgo this work… as Scully explained, switchboarding “was not a role [CISA] necessarily wanted to play” any longer “because it is very resource intensive.”
So let me get this straight, Ted: Biden supposedly ramped up the censorship operation… by shutting it down? That’s some 4D chess there.
The most absurd part of Cruz’s report comes when he tries to explain how CISA supposedly “groomed” private organizations to continue the work after they stopped switchboarding. His smoking gun? CISA introduced two organizations doing similar work so they wouldn’t duplicate efforts.
That’s it. That’s the conspiracy.
“There was a point where one of the platforms was concerned about too much kind of duplicate reporting coming in, and so we did have some conversations with EIP and CIS on how to kind of better manage that activity to make sure we weren’t overwhelming the platforms.” Scully further testified that CISA “facilitated some meetings between Stanford folks, the Center for Internet Security, and election officials, where they had discussions about how they would work together.”
That doesn’t sound like “grooming” agencies for censorship. It sounds like CISA seeing that multiple private groups were duplicating efforts and living up to its coordination and information sharing mandate by… connecting them, so they could coordinate and share information.
Beyond not understanding linear time, it’s not clear if Ted Cruz understands what words mean.
Throughout this trainwreck of a report, Cruz consistently conflates “monitoring and responding to misinformation” with “censorship.” He quotes the Election Integrity Partnership saying that no government agency has the explicit mandate “to monitor and correct” election misinformation, then claims this proves they didn’t have authority “to censor.”
But “monitor and correct” is not “censor.” Correcting misinformation means responding to it with accurate information—you know, counter-speech. The thing the First Amendment actually protects.
The entire report boils down to this: Ted Cruz thinks that studying misinformation, sharing information about it, and responding to it with factual corrections constitutes “censorship.” By that logic, every fact-checker, every news organization, and every person who’s ever said “actually, that’s not true” is engaged in censorship.
Like here, Ted, I’m correcting your bullshit. Is that censorship?
Again, I have to remind you, because it’s important, anyone can flag any content for any social media website, and that website will review it against its policies, and if they find it violates those policies, they will take action.
And yet, Ted Cruz pretends that’s censorship:
The Committee found evidence indicating that CISA directly instructed social media companies to moderate specific content. For instance, in one document the Committee reviewed, a lawyer hired by Twitter reviewed Twitter’s communications with government entities and summarized the instances in which CISA had either raised its “direct concerns” with Twitter or forwarded an email from an election official about “inaccurate” information on the platform, and Twitter “took action.” Documents like these reinforced the Committee’s suspicion that CISA was hiding the true extent of its relationship with social media companies and its content moderation pressure campaign.
The first sentence claims that CISA “directly instructed social media companies to moderate specific content.” So you would think there would be evidence of that. Instead, what the rest of the paragraph shows is that, as described above (and as has been publicly reported over the past five years), CISA would pass along content—with a clear statement that it wasn’t from CISA and wasn’t a demand—and platforms would independently review it to see if it violated their policies. And if it did violate the policies, they would take action.
Okay, but what about CISA’s work on “mis- and disinformation” through its “MDM subcommittee”? Again, it’s not clear Ted Cruz understands English, because the report notes that this was a key recommendation of that group:
“[R]apidly respond—through transparency and communication—to emergent informational threats to critical infrastructure. . . . These response efforts can be actor-agnostic, but special attention should be paid to countering foreign threats.”
Yes. Rapidly responding, through transparency and communication.
Does that sound like “censorship” to anyone other than Ted Cruz?
Up is down, black is white, day is night. Ted Cruz is either a mendacious liar. Or an idiot.
Let’s be clear about what actually happened here. CISA was created by Donald Trump in November 2018. According to Cruz’s own timeline, it immediately began the work he’s now calling “censorship” or “speech policing,” though anyone looking at the details would realize it was no such thing. That work continued through 2020—still under Trump. Biden took office, and then CISA scaled back these activities in 2022.
So Cruz is literally blaming Biden for censorship that didn’t happen, over activities that started under Trump and were scaled back under Biden.
This isn’t just wrong—it’s historically illiterate and spectacularly, embarrassingly wrong. And it’s part of a broader pattern of MAGA lies about government “censorship” that the Supreme Court has already debunked.
The goal here isn’t accuracy. It’s creating a false narrative to justify actual retaliation against platforms that don’t toe the line. Cruz knows that most people won’t read the actual documents or check his timeline. They’ll just see “Biden censorship” in the headlines and accept it as fact.
But the documents don’t lie, even when Ted Cruz does. His own report proves that everything he’s (misleadingly) mad about started under Trump, operated under the legal authorities Trump granted, were not actually about speech policing, and were scaled back under Biden.
Ted Cruz either doesn’t know who was president when, or he’s counting on you not knowing. He also either doesn’t know what actual censorship is, or he’s… counting on you not knowing. Either way, it’s a pretty damning indictment of a sitting U.S. Senator.
The entire 22-page report boils down to this: “How dare the agency Donald Trump created to coordinate information sharing… coordinate information sharing.”
And sadly, because Cruz put “censorship” and “Biden” in the same sentence, many people will now treat this nonsense as gospel truth.
Look, if you want to cut to the chase: the lawyers working for Google and Meta know that the MAGA world is very, very stupid and very, very gullible, and it’s very, very easy to tell them something that they know will be interpreted as a “victory” while actually signaling something very, very different. You could just reread my analysis of Meta and Mark Zuckerberg’s silly misleading caving to Rep. Jim Jordan last year, because this is more of the same.
This time it’s Google doing the caving, in a manner it knows full well doesn’t actually admit to the things that Jordan and the MAGAverse will insist it admits. If anything, it’s admitting the reverse. Specifically, it sent a letter replying to some Jim Jordan subpoenas, which Jordan is claiming as a victory for free speech because Google said things he can misrepresent as such.
Lots of very silly people (including Jordan) have been running around all week falsely claiming that Google has “admitted” that the Biden administration illegally censored people, and in response, they’re now reinstating accounts of people who were “unfairly censored.”
To be fair, this is what Google wants Jim Jordan and MAGA people to believe because it feeds into their pathetic victim narrative.
But it’s not what Google actually said for people who can read (and comprehend basic English). I won’t go through the entire letter, but let’s cover the supposed admission of censorship from the Biden admin:
Senior Biden Administration officials, including White House officials, conducted repeated and sustained outreach to Alphabet and pressed the Company regarding certain user-generated content related to the COVID-19 pandemic that did not violate its policies. While the Company continued to develop and enforce its policies independently, Biden Administration officials continued to press the Company to remove non-violative user-generated content.
It is not new, nor is it all that controversial, that the Biden administration did some outreach regarding COVID-19 content. But note what Google says here: “the Company continued to develop and enforce its policies independently.” In other words, Biden folks reached out, Google said “thanks, but that doesn’t violate our policies, so we’re not doing anything about it.”
Now, we can say that the government shouldn’t be in the business of telling private companies anything at all, but that’s a bit rich coming from the MAGA world that spent the last week focused on getting Disney to “moderate” Jimmy Kimmel out of a fucking job with actual threats of punishment if they failed to do so.
And that, once again, is the key issue: as the Supreme Court has long held, government officials are allowed to use “the bully pulpit” to seek to persuade companies as long as there is no implicit or explicit threat. Some will argue that the message here must have come with an implicit threat, and that’s an area where people can debate and differ, though the fact that Google flat out admits it basically told the Biden admin “no” seems to undermine the claim that any threat was involved.
As online platforms, including Alphabet, grappled with these decisions, the Administration’s officials, including President Biden, created a political atmosphere that sought to influence the actions of platforms based on their concerns regarding misinformation.
Again, this is not new. The Biden admin did this publicly and many of us called them out for it. The question is whether or not they reached the level of coercion.
Meanwhile, this is either accidental irony, or Google’s lawyers know that Jim Jordan would totally miss the sarcasm included in this next bit:
It is unacceptable and wrong when any government, including the Biden Administration, attempts to dictate how the Company moderates content, and the Company has consistently fought against those efforts on First Amendment grounds.
Why do I say it’s ironic? Because Jim Jordan’s subpoenas and demands to Google are very much a government official attempting to dictate how Google moderates content (in that he wants them to not moderate content he favors).
Indeed, right after this, Google starts groveling about how it’s so, so sorry that YouTube took moderation actions on conspiracy theory and nonsense peddler accounts that Jordan likes and thus will begin to reinstate them.
Yes, in the very letter where Google tells Jim Jordan “it’s wrong for the government to tell us how to moderate,” it also says “thank you for telling us how to moderate, we are following your demands.” Absolutely incredible.
Perhaps even more incredible is the discussion of fact checking. The company mentions that it doesn’t employ third-party fact checkers for YouTube to review content for moderation purposes:
In contrast to other large platforms, YouTube has not operated a fact-checking program that identifies and compensates fact-checking partners to produce content to support moderation. YouTube has not and will not empower fact-checkers to take action on or label content across the Company’s services.
Which in turn led Jordan to crow about how this was a huge success:
If you can’t read that, it’s Jordan saying:
But that’s not all. YouTube is making changes to its platform to prevent future censorship. YouTube is committing to the American people that it will NEVER use outside so-called “fact-checkers” to censor speech. No more telling Americans what to believe and not believe.
But fact checking is not “censorship.” It’s literally “more speech.” It’s not telling anyone what to believe or what not to believe. It’s providing additional information. You know, that whole “marketplace of ideas” that they keep telling us is so important.
Then, Jordan crowed directly about how his own efforts caused YouTube to reinstate people. In other words, in the same letter that he insists supports him, and which says it is “unacceptable and wrong” for government officials “to dictate how the Company moderates content,” he excitedly claims credit for dictating how YouTube should moderate content:
“Because of our work.” So you are flat out admitting that you have told Google how to moderate, and it is complying by reinstating accounts that you wanted them to reinstate.
That certainly would raise questions about unconstitutional jawboning if we didn’t live in a world in which it has been decided “it’s okay when Republicans do it” but not okay when Democrats do something much less direct or egregious.
It’s almost like there’s a double standard, and it’s very much like Google is willing to suck up to MAGA folks to take advantage of that double standard… just as Mark Zuckerberg did.
Trump supporters cycled through increasingly desperate explanations for why the Jimmy Kimmel situation was totally legitimate. First came the absurd “low ratings” defense—because sure, networks routinely cancel shows minutes before taping due to sudden ratings revelations and just hours after the chair of the FCC threatens them with “we can do this the easy way or the hard way.” And also, if it was low ratings, how do you explain why they brought the show back after less than a week? When that collapsed under basic scrutiny, they pivoted to something even more dishonest: claiming Brendan Carr’s explicit threats to Disney are somehow identical to what the Biden administration did, and falsely claiming this makes hypocrites of those who agreed with the ruling in Murthy v. Missouri.
This false equivalency isn’t just wrong—it’s embarrassingly so. But since MAGA supporters are now running with it (and some mainstream outlets are credulously repeating it), it’s worth demolishing the argument piece by piece. Of course, the people pushing this narrative won’t bother with the details and will immediately skip to the comments to shout “you lie!” without addressing the actual points raised here as to why they’re wrong, but for everyone else, let’s dig in.
You can see some of this nonsense in a NY Times article over the weekend by Peter Baker, in which a White House spokesperson claimed (falsely) that (1) Trump supported free speech, and (2) Biden censored social media:
Asked about the disparate justifications offered by Mr. Trump and administration officials, Abigail Jackson, a White House spokeswoman, said, “President Trump is a strong supporter of free speech, and he is right — F.C.C. licensed stations have long been required to follow basic standards.” She added that “the Biden administration actually attacked free speech by demanding social media companies take Americans’ posts down.”
Vice President JD Vance likewise pointed to allegations of censorship lodged against President Joseph R. Biden Jr. to defend the Trump administration’s actions. “The bellyaching from the left over ‘free speech’ after the Biden years fools precisely no one,” he wrote on social media on Friday.
That NY Times article was even worse originally, as there was a quote from so-called “presidential historian” Craig Shirley claiming (falsely) that “President Biden” forced social media companies to deplatform Donald Trump in 2021:
It says something about Trump’s all-out war on free speech that the New York Times couldn’t find a more credible person than “presidential historian” Craig Shirley to defend it. www.nytimes.com/2025/09/21/u…
Craig Shirley, a presidential historian and biographer of President Ronald Reagan, said Mr. Trump’s experience was so searing that he did not believe the president would improperly restrain others’ free speech, whatever his public exhortations.
“We all especially know Biden used government to censor Trump, kicking him off many media platforms, a clear violation of the law,” Mr. Shirley said. “As his own First Amendment rights were abridged, my guess is he’s especially sensitive to anyone else seeing their First Amendment rights taken away.”
Except that’s just factually wrong, as even a basic understanding of linear time (let alone a simple fact check) would have determined. Donald Trump was banned from most platforms on January 7th and 8th of 2021, when DONALD TRUMP WAS PRESIDENT, not Joe Biden. It was literally impossible for Biden to “censor Trump” at the time. Indeed, when it happened we wrote an article about why this clearly was not censorship, but a difficult choice private companies had to make about encouraging safety. You know, like how the MAGA crowd is now demanding that platforms silence anyone who speaks ill of Charlie Kirk.
The Times later quietly removed the first half of Shirley’s quote without noting the correction—a telling admission that even they recognized how factually bankrupt it was. Beyond the basic chronological impossibility, the entire premise is absurd: Trump was deplatformed by private companies exercising their own editorial judgment in the days after he had actively encouraged the storming of the US Capitol in an effort to prevent the peaceful handover of power… not government coercion.
That said, this idea that Biden “censored” people on social media keeps making the rounds. In particular, some have been arguing that the Supreme Court said this was okay in Murthy v. Missouri, and are then claiming that people who supported the administration in that case have nothing to complain about. Here are a few examples:
All three of those tweets are just factually incorrect in embarrassing ways. Many people in the replies have (correctly) pointed out that the Supreme Court’s ruling in Murthy was about standing, not the merits, but that’s not even what’s so egregious here.
The more important thing is the reason why the Murthy ruling was about standing, which was that the Supreme Court correctly found that none of the plaintiffs in the original case presented enough evidence to suggest they had standing to challenge the administration’s actions. Five times in the ruling, Justice Amy Coney Barrett mentions “no evidence.”
The clear implication, which all these people pointing to Murthy are missing, is that if they had actual evidence of coercion by government officials, then they would have had standing. Nothing (literally nothing) in the Murthy case “blesses” or “supports” the idea that it’s okay for government officials to coerce intermediaries into silencing speech. It just says you can’t claim that happened without any evidence to back it up.
At no point did the ruling condone government pressure on intermediaries to silence speech. Quite the contrary. The holding in Murthy (also confirmed a few weeks earlier in the Vullo ruling, which was argued the same day as Murthy, so both issues were clearly on the Justices’ minds) was:
No, the government cannot coerce intermediaries to suppress speech that is protected by the First Amendment
But if an intermediary suppresses your speech as a private entity, then to have standing you need to show that the suppression was actually in response to government pressure; you can’t just handwave that away.
To understand this, it really helps to read Vullo and Murthy together (again, remembering that the two cases were effectively heard together). We quoted from Vullo a lot in our first post, but as a refresher, from the opinion:
A government official can share her views freely and criticize particular beliefs, and she can do so forcefully in the hopes of persuading others to follow her lead. In doing so, she can rely on the merits and force of her ideas, the strength of her convictions, and her ability to inspire others. What she cannot do, however, is use the power of the State to punish or suppress disfavored expression….
This is the core distinction that bad faith readers of what happened keep ignoring. There is a fundamental difference between using the bully pulpit to persuade and using the power of the government with threats to punish in a manner that is coercive.
The Supreme Court in Vullo and Murthy made it clear that government coercion is not allowed. The people claiming Murthy said otherwise either didn’t read or understand Murthy, or they’re bad faith liars.
While the Murthy ruling rejected the plaintiffs’ claims, at no point did it say that coercive threats from government actors are okay. It said the opposite. Indeed, contrary to the various tweets saying Murthy blessed what Carr was doing, it says that if you can show actual coercion from a specific government actor, then you have standing to make a case. From the majority decision:
But we must confirm that each Government defendant continues to engage in the challenged conduct, which is “coercion” and “significant encouragement,” not mere “communication.”
Carr’s actions provide a textbook example of the coercion that Murthy and Vullo prohibit. He went on a podcast, explicitly threatened a media company with regulatory retaliation (“we can do this the easy way or the hard way”), and hours later that company folded. The “traceability” that the Murthy court said was missing from the Biden administration’s communications? Here it’s a straight line drawn in neon by Carr in public with him yelling to the cameras “I AM ENGAGING IN COERCIVE ACTIVITY.”
This failure to establish traceability for past harms—which can serve as evidence of expected future harm—“substantially undermines [the plaintiffs’] standing theory.”
But here there’s very clear “traceability.” Carr went on a MAGA influencer’s podcast in the morning, said “we can do this the easy way or the hard way,” and specifically said that the FCC would investigate both Disney and affiliates if they didn’t take action over Kimmel’s First Amendment protected speech. Under Murthy that very much violates the First Amendment, not the other way around.
And this is only reinforced by the ruling in Vullo, which was more explicit:
The Court explained that the First Amendment prohibits government officials from relying on the “threat of invoking legal sanctions and other means of coercion . . . to achieve the suppression” of disfavored speech.
So people trying to argue that Murthy made this okay, or even that people who supported Murthy are now regretting it, are simply ignorant or lying. Neither is a good look for professional commentators.
Murthy (and Vullo) supported the long-held understanding that, under the First Amendment, government actors cannot threaten intermediaries in a coercive manner to get them to suppress or punish protected speech. Carr did threaten intermediaries to punish such speech, and thus it is entirely consistent with the ruling in Murthy that he violated the First Amendment.
The DEA may not be an early adopter of forward-looking policies, but it certainly leads the pack when it comes to shedding accountability like a teen ditching an ill-fitting sports coat the instant a family portrait session has wrapped up.
Federal law enforcement agencies definitely trailed the trends when it came to body cam use by officers. For years, the DOJ forbade local cops from using their body cameras during joint task force operations involving federal officers. It wasn’t until November 2020 that it agreed local officers could use their cameras in joint operations, but only if they agreed to play by the DOJ’s extremely stringent rules.
It took nearly another year before the DOJ agreed to start outfitting its own agencies with body cameras — something undoubtedly provoked by several months of intense civil unrest following the murder of unarmed Black man George Floyd by Minneapolis (MN) police officer Derek Chauvin.
Now that Trump has undone anything with Biden’s name on it, the DEA has informed its officers that body cams are no longer part of the federal drug enforcement process, as Mario Ariza reports for ProPublica:
The Drug Enforcement Administration has quietly ended its body camera program barely four years after it began, according to an internal email obtained by ProPublica.
On April 2, DEA headquarters emailed employees announcing that the program had been terminated effective the day before. The DEA has not publicly announced the policy change, but by early April, links to pages about body camera policies on the DEA’s website were broken.
The email said the agency made the change to be “consistent” with a Trump executive order rescinding the 2022 requirement that all federal law enforcement agents use body cameras.
The DEA told its employees this vanishing was required to be “consistent” with Trump’s repeal of a Biden police accountability executive order. But ProPublica reports at least two other federal law enforcement agencies are still requiring officers to wear body cameras. One would assume those agencies will eventually follow the DEA’s lead and ditch their cameras, even though there’s nothing in Trump’s rollback of the Biden order, or even the president’s more recent “GO POLICE STATE!” executive order, that forbids the use of body cameras by federal officers.
While the DEA is taking the lead on the domestic-facing side when it comes to ditching the BWC-based pretense of accountability, it’s following the trail set by one of the most-reviled federal agencies in the nation:
In early February, U.S. Immigration and Customs Enforcement, which is part of the Department of Homeland Security, was one of the first agencies to get rid of its body cameras. Subsequent videos show plainclothes immigration agents making arrests with no visible body cameras.
Of course it was. ICE doesn’t just make policies vanish. It makes human beings disappear. The last thing DHS and ICE need is a bunch of unblinking eyes creating a permanent record of extrajudicial arrests and renditionings.
Federal law enforcement is going dark again, returning to its normal state of nigh-impenetrable opacity. Trump and his team have reset the clock, rolling back the most minimal of gains in law enforcement accountability just because he and his administration love government thuggery more than they love this country or the millions of regular people they’re supposed to be serving.
It took years for the federal government to engage in an extremely timid rollout of tech that regular cops had been using for most of the past decade. It took only a few weeks to undo three years of progress. And when agencies were given the option to rid themselves of devices officers often consider to be impositions, they acted immediately, completely disregarding even the DOJ’s own assertions about the positive aspects of body-worn cameras. It’s 2025, but the DOJ has been given permission to pretend it’s 2015 all over again.
Now it turns out that Zuckerberg not only made his big set of moderation changes to please Trump, but did so only after he was told by the incoming administration to act. Even worse, he reportedly made sure to share his plans with top Trump aides to get their approval first.
That’s a key takeaway from a new New York Times piece that is ostensibly a profile of the relentlessly awful Stephen Miller. However, it also has a few revealing details about the whole Zuckerberg saga buried within. First, Miller reportedly demanded that Zuckerberg make changes at Facebook “on Trump’s terms.”
Mr. Miller told Mr. Zuckerberg that he had an opportunity to help reform America, but it would be on President-elect Donald J. Trump’s terms. He made clear that Mr. Trump would crack down on immigration and go to war against the diversity, equity and inclusion, or D.E.I., culture that had been embraced by Meta and much of corporate America in recent years.
Mr. Zuckerberg was amenable. He signaled to Mr. Miller and his colleagues, including other senior Trump advisers, that he would do nothing to obstruct the Trump agenda, according to three people with knowledge of the meeting, who asked for anonymity to discuss a private conversation. Mr. Zuckerberg said he would instead focus solely on building tech products.
Even if you argue that this was more about DEI programs at Meta rather than about content moderation, it’s still the incoming administration reportedly making actual demands of Zuckerberg, and Zuckerberg not just saying “fine” but actually previewing the details to Miller to make sure they got Trump’s blessing.
Earlier this month, Mr. Zuckerberg’s political lieutenants previewed the changes to Mr. Miller in a private briefing. And on Jan. 10, Mr. Zuckerberg made them official….
This is especially galling given that it was just days ago that Zuckerberg was whining about how unfair it was that Biden officials were demanding stuff from him (even though he had no trouble saying no to them), and it was big news! The headlines made a huge deal of how unfair Biden was to Zuckerberg. Here’s just a sampling.
Also conveniently omitted was the fact that the Supreme Court found no evidence of the Biden administration going over the line in its conversations with Meta. Indeed, a Supreme Court Justice noted that conversations like those that the Biden admin had with Meta happened “thousands of times a day,” and weren’t problematic because there was no inherent threat or direct coordination.
Yet, here, we have reports of both threats and now evidence of direct coordination, including Zuckerberg asking for and getting direct approval from a top Trump official before rolling out the policy.
And where is this bombshell revelation? It’s buried in a random profile piece puffing up Stephen Miller.
It’s almost as if everyone now takes it for granted that any made-up story about Biden will be treated as fact, and everyone just takes it as expected when Trump actually does the thing that Biden gets falsely accused of.
With this new story, don’t hold your breath waiting for the same outlets to give this anywhere near the same level of coverage and outrage they directed at the Biden administration.
It’s almost as if there’s a massive double standard here: everything is okay if Trump does it, but we can blame the Biden admin for things we only pretend they did.
I’m used to hypocrisy in the political world, but this is beyond ridiculous. It’s now being made clear that the Trump admin is actually doing the exact thing that people were (falsely, misleadingly) blaming Biden for.
And it’s just a random aside in a story, and no one seems to be calling it out. Other than us here at Techdirt.
If you only remember two things about the government pressure campaign to influence Mark Zuckerberg’s content moderation decisions, make them these: Donald Trump directly threatened to throw Zuck in prison for the rest of his life, and just a couple months ago FCC Commissioner (soon to be FCC chair) Brendan Carr threatened Meta that if it kept fact-checking stories in a way Carr didn’t like, he would try to remove Meta’s Section 230 protections in response.
Two months later — what do you know? — Zuckerberg ended all fact-checking on Meta. But when he went on Joe Rogan, rather than blaming those actual obvious threats, he instead blamed the Biden administration, because some admin officials sent angry emails… which Zuck repeatedly admits had zero impact on Meta’s actual policies.
Indeed, this very fact check may be a good example of what I talked about regarding Zuckerberg’s decision to end fact-checking: it’s not as straightforward as some people think, as layers of bullshit may be presented misleadingly around a kernel of truth, and peeling back those layers is important for understanding what actually happened.
Indeed, this is my second attempt at writing this article. I killed the first version soon after it hit 10,000 words and I realized no one was going to read all that. So this is a more simplified version of what happened, which can be summarized as: the actual threats came from the GOP, to which Zuckerberg quickly caved. The supposed threats from the Biden admin were overhyped, exaggerated, and misrepresented, and Zuck directly admits he was able to easily refuse those requests.
All the rest is noise.
I know that people who dislike Rogan dismiss him out of hand, but I actually think he’s often a good interviewer for certain kinds of conversations. He’s willing to speak to all sorts of people and even ask dumb questions, taking on the role of listeners/viewers. And that’s actually really useful (and enlightening) in certain circumstances.
Where it goes off the rails, such as here, is where (1) nuance and detail matter, and (2) the person he is interviewing has an agenda to push with a message he knows Rogan will eat up, and knows Rogan does not understand enough to pick apart what really happened.
This is not the first time that Zuckerberg has gone on Rogan and launched a narrative by saying things that are technically true in a manner that is misleading, likely knowing that Rogan and his fans wouldn’t understand the nuances, and would run with a misleading story.
Two and a half years ago, he went on Joe Rogan and said that the FBI had warned the company about the potential for hack and leak efforts put forth by the Russians, which Rogan and a whole bunch of people, including the mainstream media, falsely interpreted as “the FBI told us to block the Hunter Biden laptop story.”
Except that’s not what he said. He was asked about the NY Post story (which Facebook never actually blocked, they only — briefly — blocked it from “trending”), and Zuckerberg very carefully worded his answer to say something that was already known, but which people not listening carefully might think revealed something new:
The background here is that the FBI came to us – some folks on our team – and was like ‘hey, just so you know, you should be on high alert. We thought there was a lot of Russian propaganda in the 2016 election, we have it on notice that basically there’s about to be some kind of dump that’s similar to that’.
But the fact that the FBI had sent out a general warning to all of social media to be on the lookout for disinfo campaigns like that was widely known and reported on way earlier. The FBI did not comment specifically on the Hunter Biden laptop story, nor did they tell Facebook (or anyone) to take anything down.
Still, that turned into a big thing, and a bunch of folks thought it was a big revelation. In part, that’s because when Zuck told that story to Rogan, Rogan acted like it was a big reveal, because Rogan doesn’t know the background or the details or the fact that this had been widely reported. He also doesn’t realize there’s a huge difference between a general “be on the lookout” warning and a “hey, take this down!” demand, with the former being standard and the latter being likely unconstitutional.
In other words, Zuck has a history of using Rogan’s platform to spread dubious narratives, knowing that Rogan lacks the background knowledge to push back in the moment.
After that happened, I was at least open to the idea that Zuck just spoke in generalities and didn’t realize how Rogan and his audience would take what he said and run with it, believing a very misleading story. But now that he’s done it again, it seems quite likely that this is deliberate. When Zuckerberg wants to get a misleading story out to a MAGA-friendly audience, he can reliably dupe Rogan’s listeners.
Indeed, this interview was, in many ways, similar to what happened two years ago. He was relating things that were already widely known in a misleading way, and Rogan was reacting like something big was being revealed. And then the media runs with it because they don’t know the details and nuances either.
This time, Zuckerberg talks about the supposed pressure from the Biden administration as a reason for his problematic announcement last week:
Rogan:What do you think started the pathway towards increasing censorship? Because clearly we were going in that direction for the last few years. It seemed like uh we really found out about it when Elon bought Twitter and we got the Twitter Files and when you came on here and when you were explaining the relationship with FBI where they were trying to get you to take down certain things that were true and real and certain things they tried to get you to limit the exposure to them. So it’s these kind of conversations. Like when did all that start?
So first off, note the framing of this question. It’s not accurate at all. Social media websites have always had content moderation/content policy efforts. Indeed, Facebook was historically way more aggressive than most. If you don’t moderate, your platform fills up with spam, scams, abuse, and porn.
That’s just how it works. And, indeed, Facebook in the early days was aggressively paternalistic about what was — and what was not — allowed on its site. Remember its famously prudish “no nudity” policy? Hell, there was an entire Radiolab podcast about how difficult that was to implement in practice.
So, first, calling it “censorship” is misleading, because it’s just how you handle violations of your rules, which is why moderation is always a better term for it. Rogan has never invited me on his podcast. Is that censorship? Of course not. He has rules (and standards!) for who he platforms. So does Meta. Rejecting some speech is not “censorship”; it’s just enforcing your own rules on your own private property.
Second, Rogan himself is already misrepresenting what Zuckerberg told him two years ago about the FBI. Zuck did not say that the FBI was trying to get Facebook to “take down certain things that were true and real” and “limit the exposure to them.” They only said to be on the lookout for potential attempts by foreign governments to interfere with an election, leaving it up to the platforms to decide how to handle that.
On top of that, the idea that the simple fact of how content moderation works only became public with the Twitter Files is false. The Twitter Files revealed… a whole bunch of nothing interesting that idiots have misinterpreted badly. Indeed we know this because (1) we paid attention, and (2) Elon’s own legal team admitted in court that what people were misleadingly claiming about the Twitter Files wasn’t what was actually said.
From there, Zuck starts his misleading but technically accurate-ish response:
Zuck: Yeah, well, look, I think going back to the beginning, or like I was saying, I think you start one of these if you care about giving people a voice, you know? I wasn’t too deep on our content policies for like the first 10 years of the company. It was just kind of well known across the company that, um, we were trying to give people the ability to share as much as possible.
And, issues would come up, practical issues, right? So if someone’s getting bullied, for example, we deal with that, right? We put in place systems to fight bullying, you know? If someone is saying hey um you know someone’s pirating copyrighted content on on the service, it’s like okay we’ll build controls to make it so we’ll find IP protected content.
But it was really in the last 10 years that people started pushing for like ideological-based censorship and I think it was two main events that really triggered this. In 2016 there was the election of President Trump, also coincided with basically Brexit in the EU and sort of the fragmentation of the EU. And then you know in 2020 there was COVID. And I think that those were basically these two events where for the first time we just faced this massive massive institutional pressure to basically start censoring content on ideological grounds….
So this part is fundamentally, sorta, kinda accurate, which sets up the kernel of truth around which much bullshit will be built. It’s true that Zuck didn’t pay much attention to content policies on the site early on, but it’s nonsense that it was about “giving people a voice.” That’s Zuck retconning the history of Facebook. Remember, they only added things like the Newsfeed (which was more about letting people talk) when Twitter came about and Zuck freaked out that Twitter would destroy Facebook.
Second, he then admits that the company has always moderated, though he’s wrong that it was so reactive. From quite early on (as mentioned above) the company had decently strict content policies regarding how the site was moderated. And, really, much of that was based around wanting to make sure that users had a good experience on the site. So yes, things like bullying were blocked.
But what counts as bullying is a very subjective thing, and so much of content moderation is just teams trying to tell you to stop being such a jackass.
It is true that there was pressure on Facebook to take moderation challenges more seriously starting in 2016, and (perhaps?!?) if he had actually spent more time understanding trust & safety at that time, he would have a better understanding of the issues. But he didn’t, which meant that he made a mess of things, and then tried to “fix it” with weird programs like the Oversight Board.
But it also meant that he’s never, ever been good at explaining the inherent tradeoffs in trust & safety, and how some people are always going to dislike the choices you make. A good leader of a social network understands and can explain those tradeoffs. But that’s not Zuck.
Also, and this is important, Zuckerberg’s claims about pressure to moderate on “ideological” grounds are incredibly misleading. Yes, I’m sure some people were putting pressure on him around that, but it was far from mainstream and easy to ignore. People were asking him to stop potentially dangerous misinformation that was causing harm. For example, the genocide in Myanmar. Or information around COVID that was potentially legitimately dangerous.
In other words, it was really (like so much of trust & safety) an extension of the “no bullying” rule. The same was true of protecting marginalized groups like LGBTQ+ users or on issues like Black Lives Matter. The demands from users (not the government in those cases) were about protecting more marginalized communities from harassment and bullying.
I’m going to jump ahead because Zuck and Rogan say a lot of stupid shit here, but this article will get too long if I go through all of it. So let’s jump forward a couple of minutes, to where Zuckerberg really flubs his First Amendment 101 in embarrassing ways while trying to describe how Meta chose to handle moderation of COVID misinformation.
Zuckerberg: Covid was the other big one. Where that was also very tricky because you know at the beginning it was, you know, it’s like a legitimate “public health crisis,” you know, in the beginning.
And it’s… even people who are like the most ardent First Amendment defenders… that the Supreme Court has this clear precedent, that’s like all right, you can’t yell fire in a crowded theater. There are times when if there’s an emergency your ability to speak can temporarily be curtailed in order to get an emergency under control.
So I was sympathetic to that at the beginning of Covid, it seemed like, okay you have this virus, seems like it’s killing a lot of people. I don’t know like we didn’t know at the time how dangerous it was going to be. So, at the beginning, it kind of seemed like okay we should give a little bit of deference to the government and the health authorities on how we should play this.
But when it went from, you know, two weeks to flatten the curve to… in like in the beginning it was like okay there aren’t enough masks, masks aren’t that important to, then, it’s like oh no you have to wear a mask. And you know all the, like everything, was shifting around. It just became very difficult to kind of follow.
In trying to defend Meta’s approach to COVID misinformation, Zuck manages to mangle First Amendment law in a way that’s both legally inaccurate and irrelevant to the actual issues at play.
There’s so much to unpack here. First off, he totally should have someone explain the First Amendment to him. He not only got it wrong, he even got it wrong in a way that is different than how most people get it wrong. We’ve covered the whole “fire in a crowded theater” thing so many times here on Techdirt, so we’ll do the abbreviated version:
It’s not a “clear precedent.” It’s not a precedent at all. It was an offhand comment (in legal terms: dicta, so not precedential) in a case about jailing someone for handing out anti-war literature (something most people today would recognize as pretty clearly a First Amendment problem).
The Justice who said it, Oliver Wendell Holmes, appeared to regret it almost immediately, and in a similar case very shortly thereafter changed his tune and became a much more “ardent First Amendment defender.”
Most courts and lawyers (though there are a few holdouts) insist that whatever precedent there was in Schenck (which again, did not include that line) was effectively overruled a half century later in a different case that rejected the test in Schenck and moved to the “incitement to imminent lawless action” test.
So, quoting “fire in a crowded theater” these days is generally used as a (very bad, misguided) defense of saying “well, there’s some speech that’s so bad it’s obviously unprotected,” but without being able to explain why this particular speech is unprotected.
But Zuck isn’t even using it in that way. He seems to have missed that the whole point of the Holmes dicta (again, not precedent) was to talk about falsely yelling fire. Zuck implies that the (not actual) test is “can we restrict speech if there’s an actual fire, an actual emergency.” And, that’s also wrong.
But, the wrongness goes one layer deeper as well, because the First Amendment only applies to restrictions the government can put on speakers, not what a private entity like Meta (or the Joe Rogan Experience) can do on their own private property.
And then, even once you get past that, Zuck isn’t wrong that there was a lot of confusion about COVID and health in the early days, including lots of false information that came under the imprimatur of “official” sources, but… dude, Meta deliberately made the decision to effectively let the CDC decide what was acceptable even after many people (us included!) pointed out how stupid it was for platforms to outsource their decisions on “COVID misinfo” to government agencies which almost certainly would get stuff wrong as the science was still unclear.
But it wasn’t the White House that pressured Zuck into following the CDC position. Meta (alone among the major tech platforms) publicly declared early in the pandemic (for what it’s worth, when Trump was still President) that its approach to handling COVID misinformation would be based on “guidance” from official authorities like the CDC and WHO. Many of us felt that this was actually Meta abdicating its role and giving way too much power to government entities in the midst of an unclear scientific environment.
But for him to now blame the Biden admin is just blatantly ahistorical.
And from there, it gets worse:
Zuckerberg: This really hit… the most extreme, I’d say, during it was during the Biden Administration, when they were trying to roll out um the vaccine program and… Now I’m generally, like, pretty pro rolling out vaccines. I think on balance the vaccines are more positive than negative.
But I think that while they’re trying to push that program, they also tried to censor anyone who was basically arguing against it. And they pushed us super hard to take down things that were honestly were true. Right, I mean they they basically pushed us and and said, you know, anything that says that vaccines might have side effects, you basically need to take down.
And I was just like, well we’re not going to do that. Like, we’re clearly not going to do that.
Rogan then jumps in here to ask “who is they” but this is where he’s showing his own ignorance. The key point is the last line. Zuckerberg says he told them “we’re not going to do that… we’re clearly not going to do that.”
That’s it. That’s the ballgame.
The case law on this issue is clear: the government is allowed to try to persuade companies to do something. That’s known as using the bully pulpit. What it cannot do is coerce a company into taking action on speech. And if Zuckerberg and Meta felt totally comfortable saying “we’re not going to do that, we’re clearly not going to do that,” then end of story. They didn’t feel coerced.
Indeed, this is partly what the Murthy case last year was about. And during oral arguments, Justices Kavanaugh and Kagan (both of whom had been lawyers in the White House in previous lives) completely laughed off the idea that White House officials couldn’t call up media entities and try to convince them to do stuff, even with mean language.
Here was Justice Kavanaugh:
JUSTICE KAVANAUGH: Do you think on the anger point, I guess I had assumed, thought, experienced government press people throughout the federal government who regularly call up the media and — and berate them. Is that — I mean, is that not —
MR. FLETCHER: I — I — I don’t want
JUSTICE KAVANAUGH: — your understanding? You said the anger here was unusual. I guess I wasn’t —
MR. FLETCHER: So that —
JUSTICE KAVANAUGH: — wasn’t entirely clear on that from my own experience.
Later on, he said more:
JUSTICE KAVANAUGH: You’re speaking on behalf of the United States. Again, my experience is the United States, in all its manifestations, has regular communications with the media to talk about things they don’t like or don’t want to see or are complaining about factual inaccuracies.
Justice Kagan felt similarly:
JUSTICE KAGAN: I mean, can I just understand because it seems like an extremely expansive argument, I must say, encouraging people basically to suppress their own speech. So, like Justice Kavanaugh, I’ve had some experience encouraging press to suppress their own speech.
You just wrote about editorial. Here are the five reasons you shouldn’t write another one. You just wrote a story that’s filled with factual errors. Here are the 10 reasons why you shouldn’t do that again.
I mean, this happens literally thousands of times a day in the federal government.
“Literally thousands of times a day in the federal government.” What happened was not even that interesting or unique. The only issue, and the only time it creates a potential First Amendment problem, is if there is coercion.
This is why the Supreme Court rejected the argument in the Murthy case that this kind of activity was coercive and violated the First Amendment. The opinion, written by Justice Amy Coney Barrett, makes it pretty clear that the White House didn’t even apply that much pressure towards Facebook on COVID info beyond some public statements, and instead most of the communication was Facebook sending info to the government (both admin officials and the CDC) and asking for feedback.
The Supreme Court notes that Facebook changed its policies to restrict more COVID info before it had even spoken to people in the White House.
In fact, the platforms, acting independently, had strengthened their pre-existing content moderation policies before the Government defendants got involved. For instance, Facebook announced an expansion of its COVID–19 misinformation policies in early February 2021, before White House officials began communicating with the platform. And the platforms continued to exercise their independent judgment even after communications with the defendants began. For example, on several occasions, various platforms explained that White House officials had flagged content that did not violate company policy. Moreover, the platforms did not speak only with the defendants about content moderation; they also regularly consulted with outside experts.
All of this info is public. It was in the court case. It’s in the Supreme Court transcript of oral arguments. It’s in the ruling in the Supreme Court.
Yet Rogan acts like this is some giant bombshell story. And Zuckerberg just lets him run with it. And then, the media ran with it as well, even though it’s a total non-story. As Kagan said, attempts to persuade the media happen literally thousands of times a day.
It only violates the First Amendment if they move over into coercion, threatening retaliation for not listening. And the fact that Meta felt free to say no and didn’t change its policies makes it pretty clear this wasn’t coercion.
But, Zuckerberg now knows he’s got Rogan caught on his line and starts to play it up. Rogan first asks who was “telling you to take down things” and Zuckerberg then admits that he wasn’t actually involved in any of this:
Rogan: Who is they? Who’s telling you to take down things that talk about vaccine side effects?
Zuckerberg: It was people in the um in the Biden Administration I think it was um… you know I wasn’t involved in those conversations directly…
Ah, so you’re just relaying the information that was publicly available all along and which we already know about.
Rogan then does a pretty good job of basically explaining my Impossibility Theorem (he doesn’t call it that, of course), noting the sheer scale of Meta properties, and how most people can’t even comprehend the scale, and that mistakes are obviously going to happen. Honestly, it’s one of the better “mainstream” explanations of the impossibility of content moderation at scale.
Rogan: You’re moderating at scale that’s beyond the imagination. The number of human beings you’re moderating is fucking insane. Like what is… what’s Facebook… what how many people use it on a daily basis? Forget about how many overall. Like how many people use it regularly?
Zuck: It’s 3.2 billion people use one of our services every day
Rogan: (rolls around) That’s…!
Zuck: Yeah, it’s, no, it’s wild
Rogan: That’s more than a third of the planet! That’s so crazy and it’s almost half of Earth!
Zuck: Well on a monthly basis it is probably.
Rogan: UGGH!
But just I want I want to say that though for there’s a lot of like hypercritical people that are conspiracy theorists and think that everybody is a part of some cabal to control them. I want you to understand that, whether it’s YouTube or all these and whatever place that you think is doing something that’s awful, it’s good that you speak because this is how things get changed and this is how people find out that people are upset about content moderation and and censorship.
But moderating at scale is insane. It’s insane. What we were talking the other day about the number of videos that go up every hour on YouTube and it’s banana. It’s bananas. That’s like to try to get a human being that is reasonable, logical and objective, that’s going to analyze every video? It’s virtually impossible. It’s not possible. So you got to use a bunch of tools. You got to get a bunch of things wrong.
And you have also people reporting things. And how how much is that going to affect things there. You could have mass reporting because you have bad actors. You have some corporation that decides we’re going to attack this video cuz it’s bad for us. Get it taken down.
There’s so much going on. I just want to put that in people’s heads before we go on. Like understand the kind of numbers that we’re talking about here.
Like… that’s a decent enough explanation of the impossibility of moderating content at scale. If Zuckerberg wanted to lean into that, and point out that this impossibility and the tradeoffs it creates makes all of this a subjective guessing game, where mistakes often get made and everyone has opinions, that would have been interesting.
But he’s tossed out the line where he wants to blame the Biden administration (even though the evidence on this has already been deemed unproblematic by the Supreme Court just months ago) and he’s going to feed Rogan some more chum to create a misleading picture:
Zuckerberg: So I mean like you’re saying I mean this is… it’s so complicated this system that I could spend every minute of all of my time doing this and not actually focused on building any of the things that we’re trying to do. AI glasses, like the future of social media, all that stuff.
So I get involved in this stuff, but in general we we have a policy team. There are people who I trust there. The people are kind of working on this on a day-to-day basis. And the interactions that um that I was just referring to, I mean a lot of this is documented… I mean because uh you know Jim Jordan and the the House had this whole investigation and committee into into the the kind of government censorship around stuff like this and we produced all these documents and it’s all in the public domain…
I mean basically these people from the Biden Administration would call up our team and like scream at them and curse. And it’s like these documents are… it’s all kind of out there!
Rogan: Gah! Did you record any of those phone calls? God!
Zuckerberg: I don’t no… I don’t think… I don’t think we… but but… I think… I want listen… I mean, there are emails. The emails are published. It’s all… it’s all kind of out there and um and they’re like… and basically it just got to this point where we were like, no we’re not going to. We’re not going to take down things that are true. That’s ridiculous…
Parsing what he’s saying here is important. Again, we already established above a few important facts that Rogan doesn’t understand, and either Zuck doesn’t understand or is deliberately being coy in his explanation: (1) government actors are constantly trying to persuade media companies regarding their editorial discretion and that’s not against the law in any way, unless it crosses the line into coercion, and Zuck is (once again) admitting there was no coercion and they had no problem saying no. (2) He’s basing this not on actual firsthand knowledge but on stuff that is “all kind of out there” because “the emails are published” and “it’s all in the public domain.”
Now, because I’m not that busy creating AI glasses (though I am perhaps working on the future of social media), I actually did pay pretty close attention to what happened with those published emails and the documents in the public domain, and Zuckerberg is misrepresenting things, either on purpose or because the false narrative filtered back to him.
The reason I followed it closely is because I was worried that the Biden administration might cross the First Amendment line. This is not the case of me being a fan of the Biden administration, whose tech policies I thought were pretty bad almost across the board. The public statements that the White House made, whether from then press secretary Jen Psaki or Joe Biden himself, struck me as stupid things to say, but they did not appear to cross the First Amendment line, though they came uncomfortably close.
So I followed this case closely, in part, because if there was evidence that they crossed the line, I would be screaming from the Techdirt rooftops about it.
But, over and over again, it became clear that while they may have walked up to the line, they didn’t seem to cross it. That’s also what the Supreme Court found in the Murthy case.
So when Zuckerberg says that there are published emails, referencing the “screaming and cursing,” I know exactly what he’s talking about. Because it was a highlight of the district court ruling that claimed the White House had violated the First Amendment (which was later overturned by the Supreme Court).
Indeed, in my write-up of that District Court ruling, I even called out the “cursing” email as an example that struck me as one of the only things that might actually be a pretty clear violation of the First Amendment. Here’s what I wrote two years ago when that ruling came out:
Most of the worst emails seemed to come from one guy, Rob Flaherty, the former “Director of Digital Strategy,” who seemed to believe his job in the White House made it fine for him to be a total jackass to the companies, constantly berating them for moderation choices he disliked.
I mean, this is just totally inappropriate for a government official to say to a private company:
Things apparently became tense between the White House and Facebook after that, culminating in Flaherty’s July 15, 2021 email to Facebook, in which Flaherty stated: “Are you guys fucking serious? I want an answer on what happened here and I want it today.”
But then I dug deeper and saw the filing where that quote actually comes from, realizing that the judge in the district court was taking it totally out of context. The ruling made it sound like Flaherty’s cursing outburst was in response to Facebook/Zuck refusing to go along with a content moderation demand.
If that were actually the case, then that would absolutely violate the First Amendment. The problem is that it’s not what happened. It was still inappropriate in general, but not an unconstitutional attack on speech.
What had happened was that Instagram had a bug that prevented the Biden account from getting more followers, and the White House was annoyed by that. Someone from Meta responded to a query, saying basically “oops, it was a bug, our bad, but it’s fixed now” and that response was forwarded to Flaherty, who acted like a total power-mad jackass with the “Are you guys fucking serious? I want an answer on what happened here and I want it today” response.
So here’s the key thing: that heated exchange had absolutely nothing to do with pressuring Facebook on its content moderation policies. That “public domain” “cursing” email is entirely about a bug that prevented the Biden account from getting more followers, and Rob throwing a bit of a shit fit about it.
As Zuck says (but notably no one on the Rogan team actually looks up), this is all “out there” in “the public domain.” Rogan didn’t look it up. It’s unclear if Zuckerberg looked it up.
But I did.
We can still find that response wholly inappropriate and asshole-ish. But it’s not because Facebook refused to take down information on vaccine side effects, as is clearly implied (and how Rogan takes it).
Indeed, Zuckerberg (again!) points out that the company’s response to requests to remove anti-vax memes was to tell the White House no:
Zuck: They wanted us to take down this meme of Leonardo DiCaprio looking at a TV talking about how 10 years from now or something um you know you’re going to see an ad that says okay if you took a Covid vaccine you’re um eligible you you know like uh for for this kind of payment like this sort of like class action lawsuit type meme.
And they’re like, “No, you have to take that down.” We just said, “No, we’re not going to take down humor and satire. We’re not going to take down things that are true.”
He then does talk about the stupid Biden “they’re killing people” comment, but leaves out the fact that Biden walked that back days later, admitting “Facebook isn’t killing people” and instead blaming people on the platform spreading misinformation and saying “that’s what I meant.”
But it didn’t change the fact that Facebook refused to take action on those accounts.
So even after he’s said multiple times that Facebook’s response to whatever comments came in from the White House was to tell them “no,” which is exactly what the Supreme Court made clear showed there was no coercion, Rogan goes on a rant as if Zuckerberg had just told him that they did, in fact, suppress the content the White House requested (something Zuck directly denied to Rogan multiple times, even right before this rant):
Rogan: Wow. [sigh] Yeah, it’s just a massive overstepping. Also, you weren’t killing people. This is the thing about all of this. It’s like they suppressed so much information about things that people should be doing regardless of whether or not you believe in the vaccine, regardless… put that aside. Metabolic health is of the utmost importance in your everyday life whether there’s a pandemic or there’s not and there’s a lot of things that you can do that can help you recover from illness.
It prevents illnesses. It makes your body more robust and healthy. It strengthens your immune system. And they were suppressing all that information and that’s just crazy. You can’t say you’re one of the good guys if you’re suppressing information that would help people recover from all kinds of diseases. Not just Covid. The flu, common cold, all sorts of different things. High doses of Vitamin C, D3 with K2 and magnesium. They were suppressing this stuff because they didn’t want people to think that you could get away with not taking a vaccine.
Dude, Zuck literally told you over and over again that they said no to the White House and didn’t suppress that content.
But Zuck doesn’t step in to correct Rogan’s misrepresentations, because he’s not here for that. He’s here to get this narrative out, and Rogan is biting hard on the narrative. Hilariously, he then follows it up by claiming that the thing Zuck just said didn’t happen (but which Rogan chortles along about as if it did) proves the evils of “distortion of facts” and… where the hell is my irony font?
Rogan: This is a crazy overstep, but scared the shit out of a lot of people… redpilled as it were. A lot of people, because they realized like, oh, 1984 is like an instruction manual…
Zuck: Yeah, yeah.
Rogan: It’s like this is it shows you how things can go that way with wrong speak and with bizarre distortion of facts.
I mean, you would know, wouldn’t you, Joe?
From there, they pivot to a different discussion, though again, it’s Zuckerberg feeding Rogan lines about how the US ought to “protect” the US tech industry from foreign governments, rather than trying to regulate them.
A bit later on, there actually is a good discussion about the kinds of errors that are made in content moderation and why. Rogan (after spending so much time whining about the evils of censorship) suddenly turns around and says that, well, of course, Facebook should be blocking “misinformation” and “outright lies” and “propaganda”:
Rogan: But you do have to be careful about misinformation! And you have to be careful about just outright lies and propaganda complaints, or propaganda campaigns rather. And how do you differentiate?
Dude, like that’s the whole point of the challenge here. You yourself talked about the billions of people and how mistakes are made because so much of this is automated. But then you were misleadingly claiming that this info was taken down over demands from the government (which Zuckerberg clearly denied multiple times), and for you to then wrap back around to “but you gotta take down misinformation and lies and propaganda campaigns” is one hell of a swing.
But, as I said, it does lead to Zuck explaining how confidence levels matter, and how where you set those levels will cover both how much “bad” content gets removed, but also how much is left up and how much innocent content gets accidentally caught:
Zuck: Okay, you have some classifier that’s it’s trying to find say like drug content, right? People decide okay, it’s like the opioid epidemic is a big deal, we need to do a better job of cracking down on drugs and drug sales. Right, I don’t I don’t want people dealing drugs on our networks.
So we build a bunch of systems that basically go out and try to automate finding people who are who are dealing with dealing drugs. And then you basically have this question, which is how precise do you want to set the classifier? So do you want to make it so that the system needs to be 99% sure that someone is dealing drugs before taking them down? Do you want to to be 90% confident? 80% confident?
And then those correspond to amounts of… I guess the the statistics term would be “recall.” What percent of the bad stuff are you finding? So if you require 99% confidence then maybe you only actually end up taking down 20% of the bad content. Whereas if you reduce it and you say, okay, we’re only going to require 90% confidence now maybe you can take down 60% of the bad content.
But let’s say you say, no we really need to find everyone who’s doing this bad thing… and it doesn’t need to be as as severe as as dealing drugs. It could just be um I mean it could be any any kind of content of uh any kind of category of harmful content. You start getting to some of these classifiers might have you know 80, 85% Precision in order to get 90% of the bad stuff down.
But the problem is if you’re at, you know, 90% precision that means one out of 10 things that the classifier takes down is not actually problematic. And if you filter… if you if you kind of multiply that across the billions of people who use our services every day that is millions and millions of posts that are basically being taken down that are innocent.
And upon review we’re going to look at and be like this is ridiculous that this thing got taken down. Which, I mean, I think you’ve had that experience and we’ve talked about this for for a bunch of stuff over time.
But it really just comes down to this question of where do you want to set the classifiers so one of the things that we’re going to do is basically set them to… require more confidence. Which is this trade-off.
It’s going to mean that we will maybe take down a smaller amount of the harmful content. But it will also mean that we’ll dramatically reduce the amount of people who whose accounts were taken off for a mistake, which is just a terrible experience.
And that’s all a good and fascinating fundamental explanation of why the Masnick Impossibility Theorem remains in effect. There are always going to be different kinds of false positives and false negatives, and that’s going to always happen because of how you set the confidence levels of the classifiers.
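The threshold tradeoff Zuck describes can be made concrete with a toy example. The numbers below are entirely made up for illustration (nothing here reflects Meta's actual classifiers or data); it just shows how raising the required confidence improves precision (fewer innocent posts removed) while lowering recall (more bad content left up):

```python
def precision_recall(scores_labels, threshold):
    """Treat every item scoring >= threshold as 'take down', then
    report precision (what fraction of removals were actually bad)
    and recall (what fraction of all bad items got removed)."""
    taken_down = [(s, bad) for s, bad in scores_labels if s >= threshold]
    if not taken_down:
        return 1.0, 0.0  # nothing removed: vacuously precise, zero recall
    true_pos = sum(1 for _, bad in taken_down if bad)
    total_bad = sum(1 for _, bad in scores_labels if bad)
    return true_pos / len(taken_down), true_pos / total_bad

# Hypothetical (score, is_actually_bad) pairs from an imaginary classifier
items = [
    (0.99, True), (0.97, True), (0.95, False), (0.92, True),
    (0.90, False), (0.85, True), (0.80, False), (0.75, True),
    (0.60, False), (0.40, True),
]

for t in (0.99, 0.90, 0.80):
    p, r = precision_recall(items, t)
    print(f"threshold {t:.2f}: precision {p:.0%}, recall {r:.0%}")
```

Lowering the threshold catches more of the bad stuff but sweeps up more innocent posts with it; at the scale of billions of daily posts, even a small precision drop translates into millions of wrongful takedowns, which is exactly the tradeoff Zuck is gesturing at.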
Zuck could have explained that many of the other things Rogan was whining about regarding the “suppression” of COVID content (which, again, everyone but Rogan has admitted was based on Facebook’s own decision-making, not the US government’s) were quite often a similar sort of situation: the confidence levels on the classifiers may have caught information they shouldn’t have, but the company (at the time) felt they had to be set at that level to make sure enough of the “bad” content (which Rogan himself says they should take down) got caught.
But there is no recognition of how this part of the conversation impacts the earlier conversation at all.
There’s more in there, but this post is already insanely long, so I’ll close out with this: as mentioned in my opening, Donald Trump directly threatened to throw Zuck in prison for the rest of his life if Facebook didn’t moderate the way he wanted. And just a couple months ago, FCC Commissioner (soon to be FCC chair) Brendan Carr threatened Meta that if it kept on fact-checking stories in a way Carr didn’t like, he would try to remove Meta’s Section 230 protections in response.
None of that came up in this discussion. The only “government pressure” that Zuck talks about is from the Biden admin with “cursing,” which he readily admits they weren’t intimidated by.
So we have Biden officials who were, perhaps, mean, but not so threatening that Meta felt the need to bow down to them. And then we have Trump himself and leading members of his incoming administration who sent direct and obvious threats, which Zuck almost immediately bowed down to and caved.
And yet Rogan (and much of the media covering this podcast) claims he “revealed” how the Biden admin violated the First Amendment. Hell, the NY Post even ran an editorial pretending that Zuck didn’t go far enough because he didn’t reveal all of this in time for the Murthy case. And that’s only because the author doesn’t realize he literally is talking about the documents in the Murthy case.
The real story here is that Zuckerberg caved to Trump’s threats while feeling perfectly fine pushing back on the Biden admin. Rogan at one point rants about how Trump will now protect Zuck because Trump “uniquely has felt the impact of not being able to have free speech.” Given what actually happened, that’s particularly ironic.
Zuckerberg knew how this would play to Rogan and Rogan’s audience, and he got exactly what he needed out of it. But the reality is that all of this is Zuck caving to threats from Trump and Trump officials, while feeling no coercion from the Biden admin. It would be nice if leaders like Zuckerberg were actually transparent about the real pressures they face, rather than fueling misleading narratives.
But that’s not the world we live in.
Strip away all the spin and misdirection, and the truth is inescapable: Zuckerberg folded like a cheap suit in the face of direct threats from Trump and his lackeys, while barely batting an eye at some sternly worded emails from Biden officials.
We’re still waiting for the Supreme Court to rule on the TikTok ban case, which is expected to come today or tomorrow. But things are getting increasingly silly. The Biden administration, which actively pushed for the ban and eagerly signed it into law, is now making a last-ditch effort to… keep the app operating, even as the Supreme Court may side with [checks notes] the administration’s own Solicitor General who last week told the court that the law needed to go into effect.
This is according to NBC reporters who have the scoop:
President Joe Biden’s administration is considering ways to keep TikTok available in the United States if a ban that’s scheduled to go into effect Sunday proceeds, according to three people familiar with the discussions.
“Americans shouldn’t expect to see TikTok suddenly banned on Sunday,” an administration official said, adding that officials are “exploring options” for how to implement the law so TikTok does not go dark Sunday.
That story comes out a few hours after a similar report detailing how President-elect Donald Trump’s team has their own plans to “save TikTok.”
President-elect Donald Trump is considering an executive order once in office that would suspend enforcement of the TikTok ban-or-sale law for 60 to 90 days, buying the administration time to negotiate a sale or alternative solution — a legally questionable effort to win a brief reprieve for the Chinese-owned app now scheduled to be banned on Sunday nationwide.
Trump has been mulling ways to save the day for the wildly popular video app, talking through unconventional dealmaking and legal maneuvers such as an executive order that would unravel the law passed by Congress last year with bipartisan support, according to two people familiar with the deliberations, who spoke on the condition of anonymity to discuss private talks.
Trump has expressed a keen interest in being seen as rescuing a platform on which he’s been told he’s widely admired…
To recap this utterly stupid situation, let’s review the key facts here. Donald Trump was the first President who tried to ban TikTok, during his first administration, only to have that attempt rejected by the courts as unconstitutional.
Subsequently, last year, the Biden administration joined forces with a large bipartisan majority to try a “more legal” way of banning the app, and they all celebrated when they bundled the TikTok ban with funding for Ukraine, Israel, and Taiwan.
Trump flipped his position after getting a big donation from a billionaire friend who happens to own a huge chunk of TikTok’s parent company, ByteDance. And, as noted, Trump filed an amicus brief with the Supreme Court, essentially asking the court to delay enforcement so that he, in his view, the super genius social media deal maker (insert sarcasm font), could swoop in and make a deal to save the app.
So now both of these Presidents, each of whom tried to ban TikTok, are suddenly claiming they want to save it just as the Supreme Court seems poised to allow the ban to go through… and meanwhile the kids on TikTok are embracing even crazier apps from China.
All day yesterday, you could hear various politicians in DC seemingly freak out as they watched kids eagerly embrace other Chinese apps while mocking the TikTok ban. And now it appears that the two Presidents, both of whom insisted on banning the app, are (way too late) realizing just how disconnected from kids this makes them look.
All of this continues to make the political class look like a bunch of absolute dipshits who have no clue what they’re doing.
As we head into another Presidential election, one thing has remained consistent with the last two: the tech policies of both major parties are terrible.
The Donald Trump Republican platform for 2024 is beyond crazy, full of all sorts of nonsense. The “tech” part of it is barely worth a mention, but just the fact that they see things like age verification laws as a first step to banning pornography should give you a sense of how batshit crazy (and hostile to fundamental rights) it is.
That said, the Democratic platform is not great either. It’s not batshit crazy, like the GOP plan, but it’s still generally bad. It’s the kind of thing that is going to lead to a lot of wasted time and effort as moral panic know-nothing “we must do something” types push out bad idea after bad idea, while those of us who actually understand how this stuff works have to do our best to educate against the nonsense.
Much of the tech policy part of the document appears to have been written for Biden on the assumption he was going to be the candidate, so there’s always a chance that Harris will somehow change it later on. But, on most tech policy issues, she’s been in line with Biden. In particular, both of them have hated on Section 230 for ages. Biden has insisted it should be repealed and has stumped for KOSA, despite the obvious harm it will do to kids (especially LGBTQ+ kids).
Harris hasn’t been great on these issues either. When she was California’s Attorney General, she filed a highly questionable case against Backpage that was thrown out on Section 230 grounds. She then sued Backpage execs directly in another terrible case, accusing them of “digital pimping.” In both cases, she was going after a platform or its executives for actions of the users of those platforms. As we’ve seen in the years since Backpage was shut down by the federal government, it has only served to put more women at risk.
So, none of this is that surprising from either Biden or Harris, but, still… it’s not great to see in their official platform:
We must also fundamentally reform Section 230 of the Communications Decency Act, which shields tech platforms from liability even when they host or disseminate violent or illegal content, to ensure that platforms take responsibility for the content they share.
The issue, which has been explained to administration officials (and Congress) over and over again, is that platforms do take responsibility for the content they share, otherwise users and advertisers (especially) head for the exits.
President Biden believes that all companies, including technology companies, should be held accountable for the harm they cause. The president has raised the alarm that social media and other platforms have allowed abusive and even criminal conduct like cyberstalking, child sexual exploitation, and non-consensual intimate images to proliferate on their sites – and called on Democrats and Republicans to unite on legislation to address these issues. The Surgeon General issued an advisory warning about the impacts of social media on youth mental health, noting that he cannot conclude social media is safe for children and adolescents. Democrats will pass bipartisan legislation to protect kids’ privacy and to stop Big Tech from collecting personal data on kids and teenagers online, ban targeted advertising to children, and put stricter limits on the personal data these companies collect on all of us.
I mean, yes, the Surgeon General concluded that he could not conclude that it was safe for kids… but he also said that it was helpful for many kids and similarly “could not conclude” that it was inherently harmful either.
But the way the Democratic platform presents it is much scarier and more misleading.
Anyway, it’s a small part of a much larger platform, and tech issues aren’t major issues this year. One also hopes that, if elected, there will be other more pressing things on the congressional agenda rather than fucking up the internet based on pseudoscience and a misunderstanding of the First Amendment.