Soon after Elon took over Twitter and brought with him a sink-shaped wrecking ball, we wrote a story wondering if there was anyone left at the company who remembered that it had a consent decree with the FTC requiring it to take certain steps to make sure private info was not revealed to people who shouldn't see it. Turns out there were a few people! Though they were from the old Twitter team and it's not clear if any of them are left.
As you may know, the FTC has been trying to investigate whether or not Elon has complied with the terms of the consent decree, though Elon has been fighting it every step of the way. Part of his "fighting" it is to have lapdog Jim Jordan accuse the FTC of politically motivated "weaponization" in its investigation of Musk. Jordan sent a letter to Lina Khan requesting details of the FTC's investigation, and in a response sent last week, Khan actually reveals that Twitter did not violate the consent decree when it gave some gullible wannabe journalists access to internal emails in what became known as "the Twitter Files."
The key issue: did Twitter give these outsiders unrestricted access to comb through Twitter tools and systems? It turns out it did not, but only due to the quick thinking of Twitter's longstanding security team, and no thanks to Elon himself, who seemed happy to give full access to any outsider who fluffed his ego enough:
Through the company’s responses and depositions of former Twitter employees, FTC staff learned that the access provided to the third-party individuals turned out to be more limited than the individuals’ tweets and other public reporting had indicated. The deposition testimony revealed that in early December 2022, Elon Musk had reportedly directed staff to grant an outside third-party individual “full access to everything at Twitter. . . . No limits at all.”7 Consistent with Musk’s direction, the individual was initially assigned a company laptop and internal account, with the intent that the third-party individual be given “elevated privileges” beyond what an average company employee might have.
However, based on a concern that such an arrangement would risk exposing nonpublic user information in potential violation of the FTC’s Order, longtime information security employees at Twitter intervened and implemented safeguards to mitigate the risks. Ultimately the third-party individuals did not receive direct access to Twitter’s systems, but instead worked with other company employees who accessed the systems on the individuals’ behalf.
There's more in the letter suggesting that the underlying investigation into further potential violations of the consent decree continues, but on the narrow question of whether the company violated the consent decree in giving access to the Twitter Files gang, the answer appears to be no (but also that Elon would have done so if existing employees who understood the decree hadn't stepped in to save the company and Elon's ass in this particular case).
A few people have been asking me about last week’s release of something called the “Westminster Declaration,” which is a high and mighty sounding “declaration” about freedom of speech, signed by a bunch of journalists, academics, advocates and more. It reminded me a lot of the infamous “Harper’s Letter” from a few years ago that generated a bunch of controversy for similar reasons.
In short, both documents take a few very real concerns about attacks on free expression, but commingle them with a bunch of total bullshit huckster concerns that only pretend to be about free speech, in order to legitimize the latter with the former. This form of argument is becoming all too common these days: you nod along with the obvious stuff, but any reasonable mind should stop and wonder why the nonsense was included as well.
It’s like saying war is bad, and we should all seek peace, and my neighbor Ned is an asshole who makes me want to be violent, so since we all agree that war is bad, we should banish Ned.
The Westminster Declaration is sort of like that, but the parts about war are about legitimate attacks on free speech around the globe (nearly all of which we cover here), and the parts about my neighbor Ned are… the bogus Twitter Files.
The Daily Beast asked me to write up something about it all, so I wrote a fairly long analysis of just how silly the Westminster Declaration turns out to be when you break down the details. Here’s a snippet:
I think there is much in the Westminster Declaration that is worth supporting. We’re seeing laws pushed, worldwide, that seek to silence voices on the internet. Global attacks on privacy and speech-enhancing encryption technologies are a legitimate concern.
But the Declaration—apparently authored by Michael Shellenberger and Matt Taibbi, along with Andrew Lowenthal, according to their announcement of the document—seeks to take those legitimate concerns and wrap them tightly around a fantasy concoction. It's a moral panic of their own creation, arguing that separate from the legitimate concern of censorial laws being passed in numerous countries, there is something more nefarious—what they have hilariously dubbed "the censorship-industrial complex."
To be clear, this is something that does not actually exist. It’s a fever dream from people who are upset that they, or their friends, violated the rules of social media platforms and faced the consequences.
But, unable to admit that private entities determining their own rules is an act of free expression itself (the right not to associate with speech is just as important as the right to speak), the crux of the Westminster Declaration is an attempt to commingle legitimate concerns about government censorship with grievances about private companies’ moderation decisions.
It is amazing the degree to which some people will engage in confirmation bias and believe absolute nonsense, even as the facts show the opposite is true. Over the past few months, we’ve gone through the various “Twitter Files” releases, and pointed out over and over again how the explanations people gave for them simply don’t match up with the underlying documents.
To date, not a single document revealed has shown what people now falsely believe: that the US government and Twitter were working together to “censor” people based on their political viewpoints. Literally none of that has been shown at all. Instead, what’s been shown is that Twitter had a competent trust & safety team that debated tough questions around how to apply policies for users on their platform and did not seem at all politically motivated in their decisions. Furthermore, while various government entities sometimes did communicate with the company, there’s little evidence of any attempt by government officials to compel Twitter to moderate in any particular way, and Twitter staff regularly and repeatedly rebuffed any attempt by government officials to go after certain users or content.
Now, as you may recall, two years ago, a few months after Donald Trump was banned from Twitter, Facebook, and YouTube, he sued the companies, claiming that the banning violated the 1st Amendment. This was hilariously stupid for many reasons, not the least of which is that at the time of the banning Donald Trump was the President of the United States, and these companies were very much private entities. The 1st Amendment restricts the government, not private entities, and it absolutely does not restrict private companies from banning the President of the United States should the President violate a site's rules.
As expected, the case went poorly for Trump, leading to it being dismissed. It is currently on appeal. However, in early May, Trump’s lawyers filed a motion to effectively try to reopen the case at the district court, arguing that the Twitter Files changed everything, and that now there was proof that Trump’s 1st Amendment rights were violated.
In October of 2022, after the entry of this Court’s Judgment, Twitter was acquired by Elon Musk. Shortly thereafter, Mr. Musk invited several journalists to review Twitter’s internal records. Allowing these journalists to search for evidence that Twitter censored content that was otherwise compliant with Twitter’s “TOS”, the journalists disclosed their findings in a series of posts on Twitter collectively known as the Twitter Files. As set out in the attached Rule 60 motion, the Twitter Files confirm Plaintiffs’ allegations that Twitter engaged in a widespread censorship campaign that not only violated the TOS but, as much of the censorship was the result of unlawful government influence, violated the First Amendment.
I had been thinking about writing this up as a story, but things got busy, and last week Twitter (which, again, is now owned by Elon Musk, who has repeatedly made ridiculously misleading statements about what the Twitter Files showed) filed its response, where it says (with the risk of sanctions on the line) that this is all bullshit and nothing in the Twitter Files says what Trump (and Elon, and a bunch of his fans) claim it says. This is pretty fucking damning to anyone who believed the nonsense Twitter Files narrative.
The new materials do not plausibly suggest that Twitter suspended any of Plaintiffs’ accounts pursuant to any state-created right or rule of conduct. As this Court held, Lugar’s first prong requires a “clear,” government-imposed rule. Dkt. 165 at 6. But, as with Plaintiffs’ Amended Complaint, the new materials contain only a “grab-bag” of communications about varied topics, none establishing a state-imposed rule responsible for Plaintiffs’ challenged content-moderation decisions. The new materials cover topics ranging, for example, from Hunter Biden’s laptop, Pls.’ Exs. A.14 & A.27-A.28, to foreign interference in the 2020 election, Pls.’ Exs. A.13 at, e.g., 35:15-41:4, A.22, A.37, A.38, to techniques used in malware and ransomware attacks, Pls.’ Ex. A.38. As with the allegations in the Amended Complaint, “[i]t is … not plausible to conclude that Twitter or any other listener could discern a clear state rule” from such varied communications. Dkt. 165 at 6. The new materials would not change this Court’s dismissal of Plaintiffs’ First Amendment claims for this reason alone.
Moreover, a rule of conduct is imposed by the state only if backed by the force of law, as with a statute or regulation. See Sutton v. Providence St. Joseph Med. Ctr., 192 F.3d 826, 835 (9th Cir. 1999) (regulatory requirements can satisfy Lugar’s first prong). Here, nothing in the new materials suggests any statute or regulation dictating or authorizing Twitter’s content-moderation decisions with respect to Plaintiffs’ accounts. To the contrary, the new materials show that Twitter takes content-moderation actions pursuant to its own rules and policies. As attested to by FBI Agent Elvis Chan, when the FBI reported content to social media companies, they would “alert the social media companies to see if [the content] violated their terms of service,” and the social media companies would then “follow their own policies” regarding what actions to take, if any. Pls.’ Ex. A.13 at 165:9-22 (emphases added); accord id. at 267:19-23, 295:24-296:4. And general calls from the Biden administration for Twitter and other social media companies to “do more” to address alleged misinformation, see Pls.’ Ex. A.47, fail to suggest a state-imposed rule of conduct for the same reasons this Court already held the Amended Complaint’s allegations insufficient: “[T]he comments of a handful of elected officials are a far cry from a ‘rule of decision for which the State is responsible’” and do not impose any “clear rule,” let alone one with the force of law. Dkt. 165 at 6. The new materials thus would not change this Court’s determination that Plaintiffs have not alleged any deprivation caused by a rule of conduct imposed by the State.
Later on it goes further:
Plaintiffs appear to contend (Pls.’ Ex. 1 at 16-17) that the new materials support an inference of state action in Twitter’s suspension of Trump’s account because they show that certain Twitter employees initially determined that Trump’s January 2021 Tweets (for which his account was ultimately suspended) did not violate Twitter’s policy against inciting violence. But these materials regarding Twitter’s internal deliberations and disagreements show no governmental participation with respect to Plaintiffs’ accounts. See Pls.’ Exs. A.5.5, A-49-53.5
Plaintiffs are also wrong (Ex. 1 at 15-16) that general calls from the Biden administration to address alleged COVID-19 misinformation support a plausible inference of state action in Twitter’s suspensions of Cuadros’s and Root’s accounts simply because they “had their Twitter accounts suspended or revoked due to Covid-19 content.” For one thing, most of the relevant communications date from Spring 2021 or later, after Cuadros and Roots’ suspensions in 2020 and early 2021, respectively, see Pls.’ Ex. A.46-A.47; Am. Compl. ¶¶124, 150. Such communications that “post-date the relevant conduct that allegedly injured Plaintiffs … do not establish [state] action.” Federal Agency of News LLC v. Facebook, Inc., 432 F. Supp. 3d 1107, 1125-26 (N.D. Cal. 2020). Additionally, the new materials contain only general calls on Twitter to “do more” to address COVID-19 misinformation and questions regarding why Twitter had not taken action against certain other accounts (not Plaintiffs’). Pls.’ Exs. A.43-A.48. Such requests to “do more to stop the spread of false or misleading COVID-19 information,” untethered to any specific threat or requirement to take any specific action against Plaintiffs, is “permissible persuasion” and not state action. Kennedy v. Warren, 66 F.4th 1199, 1205, 1207-12 (9th Cir. 2023). As this Court previously held, government actors are free to “urg[e]” private parties to take certain actions or “criticize” others without giving rise to state action. Dkt. 165 at 12-13. Because that is the most that the new materials suggest with respect to Cuadros and Root, the new materials would not change this Court’s dismissal of their claims.
Twitter’s filing is like a beat-by-beat debunking of the conspiracy theories pushed by the dude who owns Twitter. It’s really quite incredible.
First, the simple act of receiving information from the government, or of deciding to act upon that information, does not transform a private actor into a state actor. See O’Handley, 62 F.4th at 1160 (reports from government actors “flagg[ing] for Twitter’s review posts that potentially violated the company’s content-moderation policy” were not state action). While Plaintiffs have attempted to distinguish O’Handley on the basis of the repeated communications reflected in the new materials, (Ex. 1 at 13), O’Handley held that such “flag[s]” do not suggest state action even where done “on a repeated basis” through a dedicated, “priority” portal. Id. The very documents on which Plaintiffs rely establish that when governmental actors reported to social media companies content that potentially violated their terms of service, the companies, including Twitter, would “see if [the content] violated their terms of service,” and, “[i]f [it] did, they would follow their own policies” regarding what content-moderation action was appropriate. Pls.’ Ex. A.13 at 165:3-17; accord id. at 296:1-4 (“[W]e [the FBI] would send information about malign foreign influence to specific companies as we became aware of it, and then they would review it and determine if they needed to take action.”). In other words, Twitter made an independent assessment and acted accordingly.
Moreover, the “frequen[t] [] meetings” on which Plaintiffs rely heavily in attempting to show joint action fall even farther short of what was alleged in O’Handley because, as discussed supra at 7, they were wholly unrelated to the kinds of content-moderation decisions at issue here.
Second, contrary to Plaintiffs’ contention (Ex. 1 at 11-12), the fact that the government gave certain Twitter employees security clearance does not transform information sharing into state action. The necessity for security clearance reflects only the sensitive nature of the information being shared— i.e., efforts by “[f]oreign adversaries” to “undermine the legitimacy of the [2020] election,” Pls.’ Ex. A.22. It says nothing about whether Twitter would work hand-in-hand with the federal government. Again, when the FBI shared sensitive information regarding possible election interference, Twitter determined whether and how to respond. Pls.’ Ex. A.13 at 165:3-17, 296:1-4.
Third, Plaintiffs are also wrong (Ex. 1 at 12-13) that Twitter became a state actor because the FBI “pay[ed] Twitter millions of dollars for the staff [t]ime Twitter expended in handling the government’s censorship requests.” For one thing, the communication on which Plaintiffs rely in fact explains that Twitter was reimbursed $3 million pursuant to a “statutory right of reimbursement for time spent processing” “legal process” requests. Pls.’ Ex. A.34 (emphasis added). The “statutory right” at issue is that created under the Stored Communications Act for costs “incurred in searching for, assembling, reproducing, or otherwise providing” electronic communications requested by the government pursuant to a warrant. 18 U.S.C. § 2706(a), see also id. § 2703(a). The reimbursements were not for responding to requests to remove any accounts or content and thus are wholly irrelevant to Plaintiffs’ joint-action theory
And, in any event, a financial relationship supports joint action only where there is complete “financial integration” and “indispensability.” Vincent v. Trend W. Tech. Corp., 828 F.2d 563, 569 (9th Cir. 1987) (quotation marks omitted). During the period in which Twitter recovered $3 million (late 2019 through early 2021), the company was valued at approximately $30 billion. Even Plaintiffs do not argue that a $3 million payment would be indispensable to Twitter.
I mean, if you read Techdirt, you already knew about all this, because we debunked the nonsense "government paid Twitter to censor" story months ago, even as Elon Musk was falsely tweeting exactly that. And now, Elon's own lawyers are admitting that the company's owner is completely full of shit or too stupid to actually read any of the details in the Twitter Files. It's incredible.
It goes on. Remember how Elon keeps insisting that the government coerced Twitter to make content moderation decisions? Well, Twitter's own lawyers say that's absolute horseshit. I mean, much of the following is basically what my Techdirt posts have explained:
The new materials do not evince coercion because they contain no threat of government sanction premised on Twitter’s failure to suspend Plaintiffs’ accounts. As this Court already held, coercion requires “a concrete and specific government action, or threatened action” for failure to comply with a governmental dictate. Dkt. 165 at 11. Even calls from legislators to “do something” about Plaintiffs’ Tweets (specifically, Mr. Trump’s) do not suggest coercion absent “any threatening remark directed to Twitter.” Id. at 7. The Ninth Circuit has since affirmed the same basic conclusion, holding in O’Handley that “government officials do not violate the First Amendment when they request that a private intermediary not carry a third party’s speech so long as the officials do not threaten adverse consequences if the intermediary refuses to comply.” 62 F.4th at 1158. Like the Amended Complaint, the new materials show, at most, attempts by the government to persuade and not any threat of punitive action, and thus would not alter the Court’s dismissal of Plaintiffs’ First Amendment claims.
FBI Officials. None of the FBI’s communications with Twitter cited by Plaintiffs evince coercion because they do not contain a specific government demand to remove content—let alone one backed by the threat of government sanction. Instead, the new materials show that the agency issued general updates about their efforts to combat foreign interference in the 2020 election. For example, one FBI email notified Twitter that the agency issued a “joint advisory” on recent ransomware tactics, and another explained that the Treasury department seized domains used by foreign actors to orchestrate a “disinformation campaign.” Pls.’ Ex. A.38. These informational updates cannot be coercive because they merely convey information; there is no specific government demand to do anything—let alone one backed by government sanction.
So too with respect to the cited FBI emails flagging specific Tweets. The emails were phrased in advisory terms, flagging accounts they believed may violate Twitter’s policies—and Twitter employees received them as such, independently reviewing the flagged Tweets. See, e.g., Pls.’ Exs. A.30 (“The FBI San Francisco Emergency Operations Center sent us the attached report of 207 Tweets they believe may be in violation of our policies.”), A.31, A.40. None even requested—let alone commanded—Twitter to take down any content. And none threatened retaliatory action if Twitter did not remove the flagged Tweets. As in O’Handley, therefore, the FBI’s “flags” cannot amount to coercion because there was “no intimation that Twitter would suffer adverse consequences if it refused.” 62 F.4th at 1158. What is more, unlike O’Handley, not one of the cited communications contains a request to take any action whatsoever with respect to any of Plaintiffs’ accounts.6
Plaintiffs’ claim (Ex. 1 at 14) that the FBI’s “compensation of Twitter for responding to its requests” had coercive force is meritless. As a threshold matter, as discussed supra at 10, the new materials demonstrate only that Twitter exercised its statutory right—provided to all private actors—to seek reimbursement for time it spent processing a government official’s legal requests for information under the Stored Communications Act, 18 U.S.C. § 2706; see also id. § 2703. The payments therefore do not concern content moderation at all—let alone specific requests to take down content. And in any event, the Ninth Circuit has made clear that, under a coercion theory, “receipt of government funds is insufficient to convert a private [actor] into a state actor, even where virtually all of the [the party’s] income [i]s derived from government funding.” Heineke, 965 F.3d at 1013 (quotation marks omitted) (third alteration in original). Therefore, Plaintiffs’ reliance on those payments does not evince coercion.
What about the pressure from Congress? That too is garbage, admits Twitter:
Congress. The new materials do not contain any actionable threat by Congress tied to Twitter’s suspension of Plaintiffs’ accounts. First, Plaintiffs place much stock (Ex. 1 at 14-15) in a single FBI agent’s opinion that Twitter employees may have felt “pressure” by Members of Congress to adopt a more proactive approach to content moderation, Pls.’ Ex. A13 at 117:15-118:6. But a third-party’s opinion as to what Twitter’s employees might have felt is hardly dispositive. And in any event, “[g]enerating public pressure to motivate others to change their behavior is a core part of public discourse,” and is not coercion absent a specific threatened sanction for failure to comply….
White House Officials. The new materials do not evince any actionable threat by White House officials either. Plaintiffs rely (Ex. 1 at 16) on a single statement by a Twitter employee that “[t]he Biden team was not satisfied with Twitter’s enforcement approach as they wanted Twitter to do more and to deplatform several accounts,” Pls.’ Ex. A.47. But those exchanges took place in December 2022, id.— well after Plaintiffs’ suspensions, and so could not have compelled Twitter to suspend their accounts. Furthermore, the new materials fail to identify any threat of government sanction arising from the officials’ “dissatisfaction”; indeed, Twitter was only asked to join “other calls” to continue the dialogue
Basically, Twitter’s own lawyers are admitting in a court filing that the guy who owns their company is spewing utter nonsense about what the Twitter Files revealed. I don’t think I’ve ever seen anything quite like this.
Guy takes over company because he’s positive that there are awful things happening behind the scenes. Gives “full access” to a bunch of very ignorant journalists who are confused about what they find. Guy who now owns the company falsely insists that they proved what he believed all along, leading to the revival of a preternaturally stupid lawsuit… only to have the company’s lawyers basically tell the judge “ignore our stupid fucking owner, he can’t read or understand any of this.”
The refrain to remember with Twitter under Elon Musk: it can always get dumber.
Quick(ish) recap:
On Thursday, Musk's original hand-picked Twitter Files scribe, Matt Taibbi, went on Mehdi Hasan's show (an appearance Taibbi explicitly demanded, after Hasan asked about Taibbi's opinion of Musk blocking accounts for Modi in India). The interview did not go well for Taibbi in the same manner that finding an iceberg did not go well for the Titanic.
One segment of the absolutely brutal interview involves Hasan asking Taibbi the very question that Taibbi had said he wanted to come on the show to answer: what was his opinion of Musk blocking Twitter accounts in India, including those of journalists and activists, that were critical of the Modi government? Hasan notes that Taibbi has talked up how he believes Musk is supporting free speech, and asked Taibbi if he’d like to criticize the blocking of journalists.
Taibbi refused to do so, and claimed he doesn't really know about the story, even though it was the very story that Hasan initially tweeted about that resulted in Taibbi saying he'd tell Hasan his opinion on the story if he was invited on the show. It was, well, embarrassing to watch Taibbi squirm as he knew he couldn't say anything critical about Musk. He already saw how the second Twitter Files scribe, Bari Weiss, was excommunicated from the Church of Musk for criticizing Musk's banning of journalists.
The conversation was embarrassing in real time:
Hasan: What's interesting about Elon Musk is that, we've checked, you've tweeted over thirty times about Musk since he announced he was going to buy Twitter last April, and not a word of criticism about him in any of those thirty plus tweets. Musk is a billionaire who's been found to have violated labor laws multiple times, including in the past few days. He's attacked labor unions, reportedly fired employees on a whim, slammed the idea of a wealth tax. Told his millions of followers to vote Republican last year, and in response to a right-wing coup against Bolivian leftist President Evo Morales tweeted "we'll coup whoever we want."
And yet, you’ve been silent on all that.
How did you go, Matt, from being the scourge of Wall St. The man who called Goldman Sachs the Vampire Squid, to be unwilling to say anything critical at all about this right wing reactionary anti-union billionaire.
Taibbi: Look….[long pause… then a sigh]. So… so… I like Elon Musk. I met him. This is part of the calculation when you do one of these stories. Are they going to give you information that’s gonna make you look stupid. Do you think their motives are sincere about doing x or y…. I did. I thought his motives were sincere about the Twitter Files. And I admired them. I thought he did a tremendous public service in opening the files up. But that doesn’t mean I have to agree with him about everything.
Hasan: I agree with you. But you never disagree with him. You’ve gone silent. Some would say that’s access journalism.
Taibbi: No! No. I haven’t done… I haven’t reported anything that limits my ability to talk about Elon Musk…
Hasan: So will you criticize him today? For banning journalists, for working with Modi government to shut down speech, for being anti-union. You can go for it. I’ll give you as much time as you’d like. Would you like to criticize Musk now?
Taibbi: No, I don’t particularly want to… uh… look, I didn’t criticize him really before… uh… and… I think that what the Twitter Files are is a step in the right direction…
Hasan: But it’s the same Twitter he’s running right now…
Taibbi: I don’t have to disagree with him… if you wanna ask… a question in bad faith…
[crosstalk]
Hasan: It’s not in bad faith, Matt!
Taibbi: It absolutely is!
Hasan: Hold on, hold on, let me finish my question. You saying that he’s good for Twitter and good for speech. I’m saying that he’s using Twitter to help one of the most rightwing governments in the world censor speech. I will criticize that. Will you?
Taibbi: I have to look at the story first. I’m not looking at it now!
By Friday, that exchange became even more embarrassing. Because, due to a separate dispute that Elon was having with Substack (more on that in a bit), he decided to arbitrarily bar anyone from retweeting, replying, or even liking any tweet that had a Substack link in it. But Taibbi’s vast income stems from having one of the largest paying Substack subscriber bases. So, in rapid succession he announced that he was leaving Twitter, and would rely on Substack, and that this would likely limit his ability to continue working on the Twitter Files. Minutes later, Elon Musk unfollowed Taibbi on Twitter.
Quite a shift in the Musk/Taibbi relationship in 24 hours.
Then came Saturday. First Musk made up some complete bullshit about both Substack and Taibbi, claiming that Taibbi was an employee of Substack, and also that Substack was violating Twitter's API rules (which keep rapidly changing to retcon whatever petty, angry outburst Musk has had).
Somewhat hilariously, the Community Notes feature — which old Twitter had created, though once Musk changed its name from “Birdwatch” to “Community Notes,” he acted as if it was his greatest invention — is correcting Musk:
That's because, either late Friday or early Saturday, Musk had added substack.com to Twitter's list of "unsafe" URLs, suggesting that it might contain malicious links that could steal information. Of course, the only malicious one here was Musk.
Also correcting Musk? Substack founder Chris Best:
Then, a little later on Saturday, people realized that searching for Matt Taibbi’s account… turned up nothing. Taibbi wrote on Substack that he believed all his Twitter Files had been “removed” as first pointed out by William LeGate:
But, if you dug into Taibbi's Twitter account, you could still find them. Mashable's Matt Binder solved the mystery and revealed, somewhat hilariously, that Taibbi's account appears to have been "max deboosted" or, in Twitter's terms, had the highest level of visibility filters applied, meaning you can't find Taibbi in search. Or, in the parlance of today, Musk shadowbanned Matt Taibbi.
Again, this shouldn't be a surprise, even though the irony is super thick. Early Twitter Files revealed that Twitter had long used visibility filtering to limit the spread of certain accounts. Musk screamed about how this was horrible shadowbanning… but then proceeded to use those tools to suppress the speech of people he disliked. And now he's using the tool, at max power, to hide Taibbi and the very files that we were (falsely) told "exposed" how old Twitter shadowbanned people.
This is way more ironic than the Alanis song.
So, yes, we went from Taibbi praising Elon Musk for supporting free speech and supposedly helping to expose the evil shadowbanning of the old regime, and refusing to criticize Musk on anything, to Taibbi leaving Twitter, and Musk not just unfollowing him but shadowbanning him and all his Twitter Files.
Not much happened on Sunday, though Twitter first added a redirect on any searches for "substack" to "newsletters" (what?) and then quietly stopped throttling links to Substack, with no explanation given. And as far as I can tell, Taibbi's account is still "max deboosted."
Anyway, again, to be clear: Elon Musk is perfectly within his rights to be as arbitrary and capricious as he wants to be with his own site. But can people please stop pretending his actions have literally anything to do with “free speech”?
So here’s the deal. If you think the Twitter Files are still something legit or telling or powerful, watch this 30 minute interview that Mehdi Hasan did with Matt Taibbi (at Taibbi’s own demand):
Hasan came prepared with facts. Lots of them. Many of which debunked the core foundation on which Taibbi and his many fans have built the narrative regarding the Twitter Files.
We’ve debunked many of Matt’s errors over the past few months, and a few of the errors we’ve called out (though not nearly all, as there are so, so many) show up in Hasan’s interview, while Taibbi shrugs, sighs, and makes it clear he’s totally out of his depth when confronted with facts.
Since the interview, Taibbi has been scrambling to claim that the errors Hasan called out are small side issues, but they're not. They're literally the core pieces on which he's built the nonsense framing that Stanford, the University of Washington, some non-profits, the government, and social media have formed a "censorship-industrial complex" to stifle the speech of Americans.
The errors that Hasan highlights matter a lot. A key one is Taibbi’s claim that the Election Integrity Partnership flagged 22 million tweets for Twitter to take down in partnership with the government. This is flat out wrong. The EIP, which was focused on studying election interference, flagged less than 3,000 tweets for Twitter to review (2,890 to be exact).
And they were quite clear in their report on how all this worked. EIP was an academic project to track election interference information and how it flowed across social media. The 22 million figure shows up in the report, but it was just a count of how many tweets they tracked in trying to follow how this information spread, not tweets anyone was seeking to remove. And the vast majority of those tweets weren't even related to the ones they did explicitly create tickets on.
In total, our incident-related tweet data included 5,888,771 tweets and retweets from ticket status IDs directly, 1,094,115 tweets and retweets collected first from ticket URLs, and 14,914,478 from keyword searches, for a total of 21,897,364 tweets.
Tracking how information spreads is… um… not a problem now is it? Is Taibbi really claiming that academics shouldn’t track the flow of information?
Either way, Taibbi overstated the number of tweets that EIP reported by 21,894,474 tweets. In percentage terms, the actual number of reported tweets was 0.013% of the number Taibbi claimed.
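If you want to check that arithmetic yourself, here's a minimal sketch in Python using only the figures from the EIP report excerpt quoted above (the variable names are mine, purely for illustration):

```python
# Figures taken from the EIP report excerpt quoted above.
tracked_total = 5_888_771 + 1_094_115 + 14_914_478  # tweets EIP *tracked* for its research
flagged_to_twitter = 2_890                           # tweets EIP actually flagged to Twitter for review

print(tracked_total)                                 # 21,897,364 -- the "22 million" figure
print(tracked_total - flagged_to_twitter)            # 21,894,474 -- the size of the overstatement
print(f"{flagged_to_twitter / tracked_total:.3%}")   # ~0.013% of the number Taibbi claimed
```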
Okay, you say, but STILL, if the government is flagging even 2,890 tweets, that’s still a problem! And it would be if it was the government flagging those tweets. But it’s not. As the report details, basically all of the tickets in the system were created by non-government entities, mainly from the EIP members themselves (Stanford, University of Washington, Graphika, and Digital Forensics Lab).
This is where Taibbi's second big error knocks down another key pillar of his argument. Hasan notes that Taibbi falsely turned the non-profit Center for Internet Security (CIS) into the government agency the Cybersecurity and Infrastructure Security Agency (CISA). Taibbi did this by assuming that when someone at Twitter noted information came from CIS, they must have meant CISA, and therefore he appended the A in brackets as if he was correcting a typo:
Taibbi admits that this was a mistake and has now tweeted a correction (though this point was identified weeks ago, and he claims he only just learned about it). I’ve seen Taibbi and his defenders claim that this is no big deal, that he just “messed up an acronym.” But, uh, no. Having CISA report tweets to Twitter was a key linchpin in the argument that the government was sending tweets for Twitter to remove. But it wasn’t the government, it was an independent non-profit.
The thing is, this mistake also suggests that Taibbi never even bothered to read the EIP report on all of this, which lays out extremely clearly where the flagged tweets came from, noting that CIS (which was not an actual part of the EIP) sent in 16% of the total flagged tweets. It even pretty clearly describes what those tweets were:
Compared to the dataset as a whole, the CIS tickets were (1) more likely to raise reports about fake official election accounts (CIS raised half of the tickets on this topic), (2) more likely to create tickets about Washington, Connecticut, and Ohio, and (3) more likely to raise reports that were about how to vote and the ballot counting process—CIS raised 42% of the tickets that claimed there were issues about ballots being rejected. CIS also raised four of our nine tickets about phishing. The attacks CIS reported used a combination of mass texts, emails, and spoofed websites to try to obtain personal information about voters, including addresses and Social Security numbers. Three of the four impersonated election official accounts, including one fake Kentucky election website that promoted a narrative that votes had been lost by asking voters to share personal information and anecdotes about why their vote was not counted. Another ticket CIS reported included a phishing email impersonating the Election Assistance Commission (EAC) that was sent to Arizona voters with a link to a spoofed Arizona voting website. There, it asked voters for personal information including their name, birthdate, address, Social Security number, and driver’s license number.
In other words, CIS was raising pretty legitimate issues: people impersonating election officials, and phishing pages. This wasn’t about “misinformation.” These were seriously problematic tweets.
There is one part that perhaps deserves some more scrutiny regarding government organizations, as the report does say that a tiny percentage of reports came from the GEC, which is a part of the State Department, but the report suggests that this was probably less than 1% of the flags. 79% of the flags came from the four organizations in the partnership (not government). Another 16% came from CIS (contrary to Taibbi’s original claim, not government). That leaves 5%, which came from six different organizations, mostly non-profits. Though it does list the GEC as one of the six organizations. But the GEC is literally focused entirely on countering (not deleting) foreign state propaganda aimed at destabilizing the US. So, it’s not surprising that they might call out a few tweets to the EIP researchers.
Okay, okay, you say, but even so this is still problematic. It was still, as a Taibbi retweet suggests, these organizations, which are somehow close to the government, trying to silence narratives. And, again, that would be bad if true. But that's not what the information actually shows. First off, we already discussed how some of what they targeted was just out-and-out fraud.
But, more importantly, regarding the small number of tweets that EIP did report to Twitter… it never suggested what Twitter should do about them, and Twitter left the vast majority of them up. The entire purpose of the EIP program, as the EIP team made clear before, during, and after the election, was just to be another set of eyes looking out for emerging trends and documenting how information flows. In the rare cases (again, less than 1%) where things looked especially problematic (phishing attempts, impersonation) they might alert the company, but made no effort to tell Twitter how to handle them. And, as the report itself makes clear, Twitter left up the vast majority of them:
We find, overall, that platforms took action on 35% of URLs that we reported to them. 21% of URLs were labeled, 13% were removed, and 1% were soft blocked. No action was taken on 65%. TikTok had the highest action rate: actioning (in their case, their only action was removing) 64% of URLs that the EIP reported to their team.
They don’t break it out by platform, but across all platforms no action was taken on 65% of the reported content. And considering that TikTok seemed quite aggressive in removing 64% of flagged content, that means that all of the other platforms, including Twitter, took action on way less than 35% of the flagged content. And then, even within the “took action” category, the main action taken was labeling.
In other words, the top two main results of EIP flagging this content were:
Nothing
Adding more speech
The report also notes that the category of content that was most likely to get removed was the out and out fraud stuff: “phishing content and fake official accounts.” And given that TikTok appears to have accounted for a huge percentage of the “removals” this means that Twitter removed significantly less than 13% of the tweets that EIP flagged for them. So not only is it not 22 million tweets, it’s that EIP flagged less than 3,000 tweets, and Twitter ignored most of them and removed probably less than 10% of them.
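To get a rough feel for that inference, here's a back-of-the-envelope sketch in Python. The EIP report only gives aggregate percentages, not per-platform counts, so the TikTok share below is a hypothetical placeholder I've made up purely to illustrate the logic, not a figure from the report:

```python
# Aggregate figures from the EIP report quoted above.
overall_removed_rate = 0.13    # 13% of all reported URLs were removed, across every platform
tiktok_removed_rate = 0.64     # TikTok removed 64% of the URLs reported to it
flagged_to_twitter = 2_890     # tweets the EIP flagged to Twitter

# Hypothetical assumption: suppose 10% of reported URLs went to TikTok.
assumed_tiktok_share = 0.10

# If TikTok's 64% removal rate pulls the overall average up to 13%, the
# remaining platforms (Twitter included) must sit below that average.
other_platforms_rate = (overall_removed_rate - tiktok_removed_rate * assumed_tiktok_share) / (1 - assumed_tiktok_share)

print(f"Implied removal rate at non-TikTok platforms: {other_platforms_rate:.1%}")
print(f"Applied to Twitter's flags, that's roughly {other_platforms_rate * flagged_to_twitter:.0f} tweets")
# -> about 7.3%, or on the order of ~200 of the ~2,890 flagged tweets
```

The exact numbers shift with whatever TikTok share you assume, but under any plausible split the implied removal rate at Twitter lands well below the 13% aggregate, which is the point.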
When looked at in this context, basically the entire narrative that Taibbi is pushing melts away.
The EIP is not part of the "industrial censorship complex." It's a mostly academic group that was tracking how information flows across social media, which is a legitimate area of study. During the election they did exactly that. In the tiny percentage of cases where they saw stuff they thought was pretty worrisome, they'd simply alert the platforms with no push for the platforms to take any action, and (indeed) in most cases the platforms took no action whatsoever. In a few cases, the platforms added more speech.
In a tiny, tiny percentage of the already tiny percentage, when the situation was most extreme (phishing, fake official accounts) then the platforms (entirely on their own) decided to pull down that content. For good reason.
That’s not “censorship.” There’s no “complex.” Taibbi’s entire narrative turns to dust.
There’s a lot more that Taibbi gets wrong in all of this, but the points that Hasan got him to admit he was wrong about are literally core pieces in the underlying foundation of his entire argument.
At one point in the interview, Hasan also does a nice job pointing out that the posts that the Biden campaign (note: not the government) flagged to Twitter were of Hunter Biden’s dick pics, not anything political (we’ve discussed this point before) and Taibbi stammers some more and claims that “the ordinary person can’t just call up Twitter and have something taken off Twitter. If you put something nasty about me on Twitter, I can’t just call up Twitter…”
Except… that’s wrong. In multiple ways. First off, it’s not just “something nasty.” It’s literally non-consensual nude photos. Second, actually, given Taibbi’s close relationship with Twitter these days, uh, yeah, he almost certainly could just call them up. But, most importantly, the claim about “the ordinary” person not being able to have non-consensual nude images taken off the site? That’s wrong.
You can. There’s a form for it right here. And I’ll admit that I’m not sure how well staffed Twitter’s trust & safety team is to handle those reports today, but it definitely used to have a team of people who would review those reports and take down non-consensual nude photos, just as they did with the Hunter Biden images.
As Hasan notes, Taibbi left out this crucial context to make his claims seem way more damning than they were. Taibbi’s response is… bizarre. Hasan asks him if he knew that the URLs were nudes of Hunter Biden and Taibbi admits that “of course” he did, but when Hasan asks him why he didn’t tell people that, Taibbi says “because I didn’t need to!”
Except, yeah, you kinda do. It’s vital context. Without it, the original Twitter Files thread implied that the Biden campaign (again, not the government) was trying to suppress political content or embarrassing content that would harm the campaign. The context that it’s Hunter’s dick pics is totally relevant and essential to understanding the story.
And this is exactly what the rest of Hasan’s interview (and what I’ve described above) lays out in great detail: Taibbi isn’t just sloppy with facts, which is problematic enough. He leaves out the very important context that highlights how the big conspiracy he’s reporting is… not big, not a conspiracy, and not even remotely problematic.
He presents it as a massive censorship operation, targeting 22 million tweets, with takedown demands from government players, seeking to silence the American public. When you look through the details, correcting Taibbi's many errors, and putting it in context, you see that it was an academic operation to study information flows, which sent the more blatant issues it came across to Twitter with no suggestion that the company do anything about them, and the vast majority of which Twitter ignored. In some minority of cases, Twitter applied its own speech to add more context to some of the tweets, and in a very small number of cases, where it found phishing attempts or people impersonating election officials (clear terms of service violations, and potentially actual crimes), it removed them.
There remains no there there. It’s less than a Potemkin village. There isn’t even a façade. This is the Emperor’s New Clothes for a modern era. Taibbi is pointing to a naked emperor and insisting that he’s clothed in all sorts of royal finery, whereas anyone who actually looks at the emperor sees he’s naked.
As soon as it was announced, we warned that the new "Select Subcommittee on the Weaponization of the Federal Government" (which Kevin McCarthy agreed to support to convince some Republicans to support his speakership bid) was going to be not just a clown show, but one that would, itself, be weaponized to suppress speech (the very thing it claimed it would be "investigating").
Anyway, it's now gone up a notch beyond just performative beclowning to active maliciousness.
This week, Jordan sent information requests to Stanford University, the University of Washington, Clemson University and the German Marshall Fund, demanding they reveal a bunch of internal information that serves no purpose other than to intimidate and suppress speech. You know, the very thing that Jim Jordan pretends his committee is "investigating."
House Republicans have sent letters to at least three universities and a think tank requesting a broad range of documents related to what it says are the institutions’ contributions to the Biden administration’s “censorship regime.”
As we were just discussing, the subcommittee seems taken in by Matt Taibbi's analysis of what he's seen in the Twitter Files, despite nearly every one of his "reports" on them containing glaring, ridiculous factual errors that a high school newspaper reporter would likely catch. I mean, here he claims that the "Disinformation Governance Board" (an operation we mocked for the abject failure of the administration in how it rolled out an idea it never adequately explained) was somehow "replaced" by Stanford University's Election Integrity Partnership.
Except the Disinformation Governance Board was announced, and then disbanded, in April and May of 2022. The Election Integrity Partnership was very, very publicly announced in July of 2020. Now, I might not be as decorated a journalist as Matt Taibbi, but I can count on my fingers to realize that 2022 comes after 2020.
Look, I know that time has no meaning since the pandemic began. And that journalists sometimes make mistakes (we all do!), but time is, you know, not that complicated. Unless you're so bought into the story you want to tell that you misunderstand basically every last detail.
The problem, though, goes beyond just getting simple facts wrong (and the list of simple facts that Taibbi gets wrong is incredibly long). It’s that he gets the less simple, more nuanced facts, even more wrong. Taibbi still can’t seem to wrap his head around the idea that this is how free speech and the marketplace of ideas actually works. Private companies get to decide the rules for how anyone gets to use their platform. Other people get to express their opinions on how those rules are written and enforced.
As we keep noting, the big revelation so far (if you read the actual documents in the Twitter Files, and not Taibbi's bizarrely disconnected-from-what-he's-commenting-on commentary) is that Twitter's Trust and Safety team was… surprisingly (almost boringly) competent. I expected way more awful things to come out in the Twitter Files. I expected dirt. Awful dirt. Embarrassing dirt. Because every company of any significant size has that. They do stupid things for stupid fucking reasons, and bend over backwards to please certain constituents.
But… outside of a few tiny dumb decisions, Twitter’s team has seemed… remarkably competent. They put in place rules. If people bent the rules, they debated how to handle it. They sometimes made mistakes, but seemed to have careful, logical debates over how to handle those things. They did hear from outside parties, including academic researchers, NGOs, and government folks, but they seemed quite likely to mock/ignore those who were full of shit (in a manner that pretty much any internal group would do). It’s shockingly normal.
I've spent years talking to insiders working on trust and safety teams at big, medium, and small companies. And nothing that's come out is even remotely surprising, except maybe how utterly non-controversial Twitter's handling of these things was. There's literally less to comment on than I expected. Nearly every other company would have a lot more dirt.
Still, Jordan and friends seem driven by the same motivation as Taibbi, and they’re willing to do exactly the things that they claim they’re trying to stop: using the power of the government to send threatening intimidation letters that are clearly designed to chill academic inquiry into the flow of information across the internet.
By demanding that these academic institutions turn over all sorts of documents and private communications, Jordan must know that he's effectively chilling the speech of not just them, but of any academic institution or civil society organization that wants to study how false information (sometimes deliberately pushed by political allies of Jim Jordan) flows across the internet.
It’s almost (almost!) as if Jordan wants to use the power of his position as the head of this subcommittee… to create a stifling, speech-suppressing, chilling effect on academic researchers engaged in a well-established field of study.
Can’t wait to read Matt Taibbi’s report on this sort of chilling abuse by the federal government. It’ll be a real banger, I’m sure. I just hope he uses some of the new Substack revenue he’s made from an increase in subscribers to hire a fact checker who knows how linear time works.
Over the last few months, Elon Musk’s handpicked journalists have continued revealing less and less with each new edition of the “Twitter Files,” to the point that even those of us who write about this area have mostly been skimming each new release, confirming that yet again these reporters have no idea what they’re talking about, are cherry picking misleading examples, and then misrepresenting basically everything.
It's difficult to decide if it's even worth lending these releases any credibility by going through the actual work of debunking them, but sometimes a few out-of-context snippets from the Twitter Files, mostly from Matt Taibbi, get picked up by others, and it becomes necessary to dive back into the muck to clean up the mess that Matt has made yet again.
Unfortunately, this seems like one of those times.
Over the last few "Twitter Files" releases, Taibbi has been pushing hard on the false claim that, okay, maybe he can't find any actual evidence that the government tried to force Twitter to remove content, but he can find… information about how certain university programs and non-governmental organizations received government grants… and they set up "censorship programs."
It’s “censorship by proxy!” Or so the claim goes.
Except, it's not even remotely accurate. The issue, again, goes back to some pretty fundamental concepts that seem to escape Taibbi entirely. Let's go through them.
Point number one: Studying misinformation and disinformation is a worthwhile field of study. That’s not saying that we should silence such things, or that we need an “arbiter of truth.” But the simple fact remains that some have sought to use misinformation and disinformation to try to influence people, and studying and understanding how and why that happens is valuable.
Indeed, I personally tend to lean towards the view that most discussions regarding mis- and disinformation are overly exaggerated moral panics. I think the terms are overused, and often misused (frequently just to attack factual news that people dislike). But, in part, that’s why it’s important to study this stuff. And part of studying it is to actually understand how such information is spread, which includes across social media.
Point number two: It's not just an academic field of interest. For fairly obvious reasons, companies that are used to spread such information have a vested interest in understanding this stuff as well, though to date, it's mostly been the social media companies that have shown the most interest in understanding these things, rather than, say, cable news, even as some of the evidence suggests cable news is a bigger vector for spreading such things than social media.
Still, the companies have an interest in understanding this stuff, and sometimes that includes these organizations flagging content they find and sharing it with the companies for the sole purpose of letting those companies evaluate whether the content violates existing policies. And, once again, the companies regularly did nothing after noting that the flagged accounts didn't violate any policies.
Point number three: governments also have an interest in understanding how such information flows, in part to help combat foreign influence campaigns designed to cause strife and even violence.
Note what none of these three points are saying: that censorship is necessary or even desired. But it's not surprising that the US government has funded some programs to better understand these things, and that those programs bring in a variety of experts from academia, civil society, and NGOs. It's also no surprise that some of the social media companies are interested in what these research efforts find, because it might be useful.
And, really, that's basically everything that Taibbi has found in his research. There are academic centers and NGOs that have received some grants from various government agencies to study mis- and disinformation flows. Also, that sometimes Twitter communicated with those organizations. Notably, many of his findings actually show that Twitter employees absolutely disagreed with the conclusions of those research efforts. Indeed, some of the revealed emails show Twitter employees somewhat dismissive of the quality of the research.
What none of this shows is a grand censorship operation.
However, that’s what Taibbi and various gullible culture warriors in Congress are arguing, because why not?
So, some of the organizations in question have decided they finally need to do some debunking of their own. I especially appreciate the University of Washington (UW), which did a step-by-step debunker that, in any reasonable world, would completely embarrass Matt Taibbi for the very obvious fundamental mistakes he made:
False impression: The EIP orchestrated a massive “censorship” effort. In a recent tweet thread, Matt Taibbi, one of the authors of the “Twitter Files” claimed: “According to the EIP’s own data, it succeeded in getting nearly 22 million tweets labeled in the runup to the 2020 vote.” That’s a lot of labeled tweets! It’s also not even remotely true. Taibbi seems to be conflating our team’s post-hoc research mapping tweets to misleading claims about election processes and procedures with the EIP’s real-time efforts to alert platforms to misleading posts that violated their policies. The EIP’s research team consisted mainly of non-expert students conducting manual work without the assistance of advanced AI technology. The actual scale of the EIP’s real-time efforts to alert platforms was about 0.01% of the alleged size.
Now, that’s embarrassing.
There’s a lot more that Taibbi misunderstands as well. For example, the freak-out over CISA:
False impression: The EIP operated as a government cut-out, funneling censorship requests from federal agencies to platforms. This impression is built around falsely framing the following facts: the founders of the EIP consulted with the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) office prior to our launch, CISA was a “partner” of the EIP, and the EIP alerted social media platforms to content EIP researchers analyzed and found to be in violation of the platforms’ stated policies. These are all true claims — and in fact, we reported them ourselves in the EIP’s March 2021 final report. But the false impression relies on the omission of other key facts. CISA did not found, fund, or otherwise control the EIP. CISA did not send content to the EIP to analyze, and the EIP did not flag content to social media platforms on behalf of CISA.
There are multiple other false claims that UW debunks as well, including that it was a partisan effort, that it happened in secret, or that it did anything related to content moderation. None of those are true.
The Stanford Internet Observatory (SIO), which works with UW on some of these programs, ended up putting out a similar debunker statement as well. For whatever reason, the SIO seems to play a central role in Taibbi’s fever dream of “government-driven censorship.” He focuses on projects like the Election Integrity Partnership or the Virality Project, both of which were focused on looking at the flows of viral misinformation.
In Taibbi’s world, these were really government censorship programs. Except, as SIO points out, they weren’t funded by the government:
Does the SIO or EIP receive funding from the federal government?
As part of Stanford University, the SIO receives gift and grant funding to support its work. In 2021, the SIO received a five-year grant from the National Science Foundation, an independent government agency, awarding a total of $748,437 over a five-year period to support research into the spread of misinformation on the internet during real-time events. SIO applied for and received the grant after the 2020 election. None of the NSF funds, or any other government funding, was used to study the 2020 election or to support the Virality Project. The NSF is the SIO’s sole source of government funding.
They also highlight how the Virality Project’s work on vaccine disinformation was never about “censorship.”
Did the SIO’s Virality Project censor social media content regarding coronavirus vaccine side-effects?
No. The VP did not censor or ask social media platforms to remove any social media content regarding coronavirus vaccine side effects. Theories stating otherwise are inaccurate and based on distortions of email exchanges in the Twitter Files. The Project’s engagement with government agencies at the local, state, or federal level consisted of factual briefings about commentary about the vaccine circulating on social media.
The VP’s work centered on identification and analysis of social media commentary relating to the COVID-19 vaccine, including emerging rumors about the vaccine where the truth of the issue discussed could not yet be determined. The VP provided public information about observed social media trends that could be used by social media platforms and public health communicators to inform their responses and further public dialogue. Rather than attempting to censor speech, the VP’s goal was to share its analysis of social media trends so that social media platforms and public health officials were prepared to respond to widely shared narratives. In its work, the Project identified several categories of allegations on Twitter relating to coronavirus vaccines, and asked platforms, including Twitter, which categories were of interest to them. Decisions to remove or flag tweets were made by Twitter.
In other words, as was obvious to anyone who actually had followed any of this while these projects were up and running, these are not examples of “censorship” regimes. Nor are they efforts to silence anyone. They’re research programs on information flows. That’s also clear if you don’t read Taibbi’s bizarrely disjointed commentary and just look at the actual things he presents.
In a normal world, the level of just outright nonsense and mistakes in Taibbi’s work would render his credibility completely shot going forward. Instead, he’s become a hero to a certain brand of clueless troll. It’s the kind of transformation that would be interesting to study and understand, but I assume Taibbi would just build a grand conspiracy theory about how doing that was just an attempt by the illuminati to silence him.
I know, I know, there is no room for facts in the modern GOP, just feelings. But, still, it’s kind of remarkable just how committed they seem to the bit that Twitter was actively trying to suppress Republicans to help Joe Biden. There remains zero proof of this. Zero. Over the course of the various “Twitter Files,” all we’ve seen is Twitter literally pushing back on anything that suggests political bias, and instead trying to review things based on whether or not they legitimately broke the rules.
But, still, Republicans are insisting that Twitter unfairly benefited Democrats, and they already held a ridiculous hearing on it (with more on the way!) that highlighted (repeatedly) that Twitter did not, in fact, try to help Democrats, but rather that it bent over backwards to give Republicans extra chances after they broke the rules, even when the Trump White House demanded Twitter block his critics.
During that hearing, Rep. Jamie Raskin highlighted something I’ve been saying for a while: that if Democrats had held the same kind of hearing regarding Fox News and its editorial choices, many people (and not just Republicans) would rightly be up in arms about the 1st Amendment implications of demanding a media company explain its editorial choices.
Separately (and this will become important in a moment), in 2021, the Federal Election Commission conducted an investigation into whether Twitter’s handling of the Hunter Biden laptop story represented an illegal “in-kind contribution” to the Biden campaign. The FEC concluded that there was no evidence of a violation, and specifically no evidence of Twitter working with the Biden campaign:
As discussed below, Twitter has credibly explained that it acted with a commercial motivation in response to the New York Post articles rather than with an electoral purpose. With respect to its actions concerning Trump’s tweets, there is no evidence that Twitter coordinated its actions with the Biden Committee, and as such, the actions did not constitute contributions. Finally, the remaining allegations that Twitter limited the visibility of Republican users, suppressed distribution of an interview, and limited coverage of election lawsuits are vague, speculative, and unsupported by the available information. Therefore, the Commission finds no reason to believe that Twitter violated 52 U.S.C. § 30118(a) and 11 C.F.R. § 114.2(b) by making prohibited in-kind corporate contributions; finds no reason to believe that Jack Dorsey, Twitter’s CEO, and Brandon Borrman, Twitter’s Vice President, Global Communications, violated 52 U.S.C. § 30118(a) and 11 C.F.R. § 114.2(e) by consenting to prohibited corporate contributions; and finds no reason to believe that the Biden Committee knowingly accepted or received and failed to report such contributions in violation of 52 U.S.C. §§ 30104(b)(3)(A), 30118(a) and 11 C.F.R. §§ 104.3(a), 114.2(d).
I bet you can guess where this is going, right?
Last week, in the ongoing lawsuit from Dominion Voting Systems against Fox News, Dominion’s latest filing included, among many other things, this fascinating tidbit.
During Trump’s campaign, Rupert provided Trump’s son-in-law and senior advisor, Jared Kushner, with Fox confidential information about Biden’s ads, along with debate strategy. Ex.600, R.Murdoch 210:6-9; 213:17-20; Ex.603 (providing Kushner a preview of Biden’s ads before they were public)
In other words, for all the talk of Twitter supposedly helping the Biden campaign, Fox News, via the chairman of its parent company, Rupert Murdoch, was literally taking proprietary information regarding the Biden campaign, which it only obtained because of its position as a news channel on which the campaign was advertising, and feeding it directly to Trump’s campaign via one of Trump’s most trusted advisors.
It sure looks like Fox actually was potentially engaged in providing an illegal in-kind contribution to the Trump campaign. I’m assuming, though, that the House Judiciary Committee won’t be hosting a long series of day-long hearings about this?
This pattern is getting frustrating. Each and every time we see Republicans making nonsense, unsubstantiated claims about what companies are doing, it turns out it’s because that’s exactly what the GOP itself is doing. Each accusation is more of a confession, both about what levels they’ll stoop to and about their inability to comprehend that the other side isn’t so lacking in ethics and wouldn’t stoop to the same level.
We’ve got a double-header of cross-post episodes for you this week! Recently, Mike joined two different podcasts to discuss Congress’s response to the Twitter Files and the dumpster fire of a hearing held by the House Oversight Committee: The New Abnormal podcast from the Daily Beast, and The Sunday Show podcast from Tech Policy Press. You can listen to both conversations back-to-back right here in today’s extra-long episode.
So, we already noted that Wednesday’s House Oversight Committee’s grandstanding hearing about Twitter revealed how Trump’s White House asked Twitter to remove a tweet from Chrissy Teigen that mocked the then-president by calling him a “pussy ass bitch.” Apparently Trump’s fragile ego couldn’t handle that level of insult, and so he had to ask for it to be taken down.
Soon after that came out, Rolling Stone released a report, quoting a variety of former Twitter employees and Trump officials, noting that this was a regular thing from Republican officials in both Congress and the White House.
But former Trump administration officials and Twitter employees tell Rolling Stone that the White House’s Teigen tweet demand was hardly an isolated incident: The Trump administration and its allied Republicans in Congress routinely asked Twitter to take down posts they objected to — the exact behavior that they’re claiming makes President Biden, the Democrats, and Twitter complicit in an anti-free speech conspiracy to muzzle conservatives online.
“It was strange to me when all of these investigations were announced because it was all about the exact same stuff that we had done [when Donald Trump was in office],” one former top aide to a senior Trump administration official tells Rolling Stone. “It was normal.”
The article does note that some powerful Democrats made similar requests, and that Twitter set up a sort of “database” of requests from thin-skinned politicians. The report also notes that these very same Republicans, who are running around insisting that the FBI flagging some tweets for Twitter to review under its own policies amounts to government censorship… were doing the exact same thing:
But during both the Trump and Biden presidencies, these types of moderation requests or demands were routinely sent to Twitter by the staff of influential GOP lawmakers — ones with names like Kevin McCarthy and Elise Stefanik.
Oftentimes, requests would demand Twitter stop “shadowbanning” certain conservative accounts, or that the company reinstate banned or suspended right-wing personas. Other times, offices of senior Trump administration officials would send emails seeking to remove tweets that they believed to be “hate speech” or death threats aimed at their principals. And over the years, the knowledgeable sources say, staffers for Republican officials would regularly flag to Twitter content that they believed violated the app’s terms of service or other policies, including on spreading “misinformation” or “disinformation.”
Now, as we’ve explained over and over again, simply flagging content as potentially violating the rules is not a problem, so long as there is no coercion or threats of coercion associated with it. Where one draws the line on that is at least somewhat complicated, but the simple fact is that the very same GOP that is whining about this… was apparently as active, if not more so, in doing the same damn things.
Most people would call that hypocritical.
The obvious irony here is, the sources note, that Republican leaders and elected officials have long been committing precisely the kind of “government interference” that they are now investigating, fundraising off of, and accusing Democrats and the so-called anti-Trump “Deep State” of perpetrating. Some of the loudest conservative and MAGA voices on Capitol Hill — who’ve been endlessly demanding taxpayer-funded, high-profile investigations into Big Tech “bias” and “collusion” — were themselves engaged in the behavior they now claim is colluding.
And, again, all of this raises the question: why are none of these requests from Republicans showing up in any of Elon Musk’s Twitter Files?