from the that-article-is-bullshit dept
Do not believe everything you read. Even if it comes from more “respectable” publications. The Intercept had a big story this week that is making the rounds, suggesting that “leaked” documents prove the DHS has been coordinating with tech companies to suppress information. The story has been immediately picked up by the usual suspects, claiming it reveals the “smoking gun” of how the Biden administration was abusing government power to censor them on social media.
The only problem? It shows nothing of the sort.
The article is garbage. It not only misreads things, it is confused about what the documents the reporters obtained actually say, and it presents widely available, widely known information as if it were secret and hidden when it was not.
The entire article is a complete nothingburger, and is fueling a new round of lies and nonsense from people who find it useful to misrepresent reality. If the Intercept had any credibility at all it would retract the article and examine whatever process failures led to it being published.
Let’s dig in. Back in 2018, then President Donald Trump signed the Cybersecurity and Infrastructure Security Agency Act into law, creating the Cybersecurity and Infrastructure Security Agency as a separate agency in the Department of Homeland Security. While there are always reasons to be concerned about government interference in various aspects of life, CISA was pretty uncontroversial (perhaps with the exception of when Trump freaked out and fired the first CISA director, Chris Krebs, for pointing out that the election was safe and there was no evidence of manipulation or foul play).
While CISA has a variety of things under its purview, one thing that it is focused on is general information sharing between the government and private entities. This has actually been really useful for everyone, even though the tech companies have been (quite reasonably!) cautious about how closely they’ll work with the government (because they’ve been burned before). Indeed, as you may recall, one of the big revelations from the Snowden documents was about the PRISM program, which turned out to be oversold by the media reporting on it, but was still problematic in many ways. Since then, the tech companies have been even more careful about working with government, knowing that too much government involvement will eventually come out and get everyone burned.
With that in mind, CISA’s role has been pretty widely respected by almost everyone I’ve spoken to, both in government and at various companies. It provides information regarding actual threats, which has been useful to companies, and they seem to appreciate it. Given their historical distrust of government intrusion and their understanding of the limits of government authority here, the companies have been pretty attuned to any attempt at coercion, and I’ve heard no such complaints regarding CISA at all.
That’s why the story seemed like such a big deal when I first read the headline and some of the summaries. But then I read the article… and the supporting documents… and there’s no there there. There’s nothing. There’s… the information sharing that everyone already knew was happening and that has been widely discussed in the past.
Let’s go through the supposed “bombshells”:
Behind closed doors, and through pressure on private platforms, the U.S. government has used its power to try to shape online discourse. According to meeting minutes and other records appended to a lawsuit filed by Missouri Attorney General Eric Schmitt, a Republican who is also running for Senate, discussions have ranged from the scale and scope of government intervention in online discourse to the mechanics of streamlining takedown requests for false or intentionally misleading information.
This sounds all scary and stuff, but most of those “meeting minutes” are from the already very, very public Misinformation & Disinformation Subcommittee that was part of an effort to counter foreign influence campaigns. As is clear on their website, their focus is very much on information sharing, with an eye towards protecting privacy and civil liberties, not suppressing speech.
The MDM team’s guiding principle is the protection of privacy, free speech, and civil liberties. To that end, the MDM team closely consults with the DHS Privacy Office and DHS Office for Civil Rights and Civil Liberties on all activities.
The MDM team is also committed to collaboration with partners and stakeholders. In addition to civil society groups, researchers, and state and local government officials, the MDM team works in close collaboration with the FBI’s Foreign Influence Task Force, the U.S. Department of State, the U.S. Department of Defense, and other agencies across the federal government. Federal Agencies respective roles in recognizing, understanding, and helping manage the threat and dangers of MDM and foreign influence on the American people are mutually supportive, and it is essential that we remain coordinated and cohesive when we engage stakeholders.
As professor Kate Starbird notes, the Intercept article makes out like this was some nefarious secret meeting when it was actually a publicly announced meeting with public minutes, and part of the discussion was even on where the guardrails should be for the government so that it doesn’t go too far. Indeed, even though the public output of this meeting is available directly on the CISA website for anyone to download, The Intercept published a blurry draft version, making it seem more secret and nefarious. (Updated: to note that not all of the meeting minutes published by The Intercept were public: they include a couple of extra subcommittee minutes that are not on the CISA website, but which have nothing particularly of substance, and certainly nothing that supports the claims in the article. And all of the claims here stand: the committee is public, their meeting minutes are public, including summaries of the subcommittee efforts, even if not all the full subcommittee meeting minutes are public).
And if you read the actual document it’s… all kinda reasonable? It does talk about responding to misinformation and disinformation threats, mainly around elections — not by suppressing speech, but by sharing information to help local election officials respond to it and provide correct information. From the actual, non-scary, very public report:
Currently, many election officials across the country are struggling to conduct their critical work of administering our elections while responding to an overwhelming amount of inquiries, including false and misleading allegations. Some elections officials are even experiencing physical threats. Based on briefings to this subcommittee by an election official, CISA should be providing support — through education, collaboration, and funding — for election officials to pre-empt and respond to MDM
It includes four specific recommendations for how to deal with mis- and disinformation and none of them involve suppressing it. They all seem to be about responding to and countering such information by things like “broad public awareness campaigns,” “enhancing information literacy,” “providing informational resources,” “providing education frameworks,” “boosting authoritative sources,” and “rapid communication.” See a pattern? All of this is about providing information, which makes sense. Nothing about suppressing. The report even notes that there are conflicting studies on the usefulness of “prebunking/debunking” misinformation, and suggests that CISA pay attention to where that research goes before going too hard on any program.
There’s literally nothing nefarious at all.
The next paragraph in the Intercept piece then provides an email that kinda debunks the entire framing of the article:
“Platforms have got to get comfortable with gov’t. It’s really interesting how hesitant they remain,” Microsoft executive Matt Masterson, a former DHS official, texted Jen Easterly, a DHS director, in February.
Masterson had worked in DHS on these kinds of programs and then moved over to Microsoft. But here he’s literally pointing out that the companies remain hesitant to work too closely with government, which is exactly what we’ve been saying all along, and completely undermines the narrative people have taken out of this article that it proves that the government was too chummy with the companies.
(Also updating to note that the original Intercept story falsely claimed that Masterson was working for DHS at the time of the text, which makes it sound more nefarious. They later quietly changed it, and only added a correction days later when people called them out on it).
Also, this text message is completely out of context, but hold on for that, because it comes up again later in the article.
Next up, the article takes a single quote out of context from an FBI official.
In a March meeting, Laura Dehmlow, an FBI official, warned that the threat of subversive information on social media could undermine support for the U.S. government. Dehmlow, according to notes of the discussion attended by senior executives from Twitter and JPMorgan Chase, stressed that “we need a media infrastructure that is held accountable.”
First off, this is generally no different than the nonsense the FBI says publicly, and there’s nothing in the linked document that suggests the companies were in agreement that anyone should be “held accountable.” But even if we look at what Dehmlow actually said, in context, while she did talk about accountability, she mostly focused on education.
Ms. Dehmlow was asked to provide her thoughts or to define a goal for approaching MDM and she mentioned “resiliency”. She stated we need a media infrastructure that is held accountable; we need to early educate the populace; and that today, critical thinking seems a problem currently, [REDACTED] Senior Advisor for Homeland Security and Director of Defending Democratic Institutions Center for Strategic and International Studies (CSIS) stated that civics education should be provided at all ages.
Read in context, it sure looks like Dehmlow’s use of the phrase that media should be “held accountable” means accountable to an educated public. I mean, there’s some notable irony in all of this, where Dehmlow is talking about better educating people on critical thinking, and that’s been turned into pure nonsense and misinformation.
From there, the misleading article jumps randomly to Meta’s interface for the government to submit reports, again implying that this is somehow connected to everything above (it’s not, it’s something totally different):
There is also a formalized process for government officials to directly flag content on Facebook or Instagram and request that it be throttled or suppressed through a special Facebook portal that requires a government or law enforcement email to use. At the time of writing, the “content request system” at facebook.com/xtakedowns/login is still live. DHS and Meta, the parent company of Facebook, did not respond to a request for comment. The FBI declined to comment.
Again, this is wholly unrelated to the paragraphs above it. The article is just randomly trying to tie this to it. Every company has systems for anyone to report information for the companies to review. But the big companies, for fairly obvious and sensible reasons, also set up specialized versions of that reporting system for government officials so that reports don’t get lost in the flow. Nothing in that system is about demanding or suppressing information, and it’s basically misinformation for the Intercept to imply otherwise. It’s just the standard reporting tool. The presentation that the Intercept links to is just about how government officials can log into the system because it has multiple layers of security to make sure that you’re actually a government official.
It remains difficult to see (1) how this is connected to the CISA discussion, and (2) how this is even remotely new, interesting or relevant. Indeed, you can find out more about this system on Facebook’s “information for law enforcement authorities” page, and the nefarious-sounding “Content Request System (CRS)” highlighted in the document the Intercept shows appears to just be the system for law enforcement agents to request information regarding an investigation. That is, a system for submitting a subpoena, court order, search warrant, or national security letter.
Update: Now there is also a part of the system that enables governments to report potential misinformation and disinformation, though again that appears to be the same kind of reporting that anyone can do, because such information breaks Facebook’s rules. The actual document this comes from again does not seem nefarious at all. It literally is just saying the government can alert Facebook to content that violates its existing rules.
So, it allows law enforcement to report the content, and shows them the relevant rules alongside. This is the same kind of reporting that any regular user can do; it’s just that law enforcement is viewed as a “trusted” flagger, so their flags get more attention. It does not mean that the government is censoring content, and Facebook’s ongoing transparency reports show that they often reject these requests.
After tossing in that misleading and unrelated point, the article takes another big shift, jumping to a separate DHS “Homeland Security Review” in which DHS warns about the problem of “inaccurate information” which, you know, is a legitimate thing for DHS to be concerned about, because it can impact security. It’s certainly quite reasonable to be worried about DHS overreach. We’ve screamed about DHS overreach for years.
But I keep reading through the article and the documents, and there’s nothing here.
The report notes that there’s a lot of misinformation, and there is, including on the withdrawal of US troops from Afghanistan. That’s true, and it seems like a reasonable concern for DHS… but the Intercept then throws in a random quote about how Republicans (who have been one source of misinformation about the withdrawal) are planning to investigate if they retake the House.
The inclusion of the 2021 U.S. withdrawal from Afghanistan is particularly noteworthy, given that House Republicans, should they take the majority in the midterms, have vowed to investigate. “This makes Benghazi look like a much smaller issue,” said Rep. Mike Johnson, R-La., a member of the Armed Services Committee, adding that finding answers “will be a top priority.”
But how is that relevant to the rest of the article and what does it have to do with the government supposedly suppressing information or working with the companies? The answer is absolutely nothing at all, but I guess it’s the sort of bullshit you throw in to make things sound scary when your “secret” (not actually secret) documents don’t actually reveal anything.
There’s also a random non sequitur about DHS in 2004 ramping up the national threat level for terrorism. What’s that got to do with anything? ¯\_(ツ)_/¯
The article keeps pinballing around to random anecdotes like that, which are totally disconnected and have nothing to do with one another. For example:
That track record has not prevented the U.S. government from seeking to become arbiters of what constitutes false or dangerous information on inherently political topics. Earlier this year, Republican Gov. Ron DeSantis signed a law known by supporters as the “Stop WOKE Act,” which bans private employers from workplace trainings asserting an individual’s moral character is privileged or oppressed based on his or her race, color, sex, or national origin. The law, critics charged, amounted to a broad suppression of speech deemed offensive. The Foundation for Individual Rights and Expression, or FIRE, has since filed a lawsuit against DeSantis, alleging “unconstitutional censorship.” A federal judge temporarily blocked parts of the Stop WOKE Act, ruling that the law had violated workers’ First Amendment rights.
I keep rereading that, and the paragraph before and after it, trying to figure out if they were working on a different article and accidentally slipped it into this one. It has nothing whatsoever to do with the rest of the article. And Ron DeSantis is not in “the U.S. government.” While he may want to be president, right now he’s governor of Florida, which is a state, not the federal government. It’s just… weird?
Then, finally, after these random tangents, with zero effort to thread them into any kind of coherent narrative, the article veers back to DHS and social media by saying it’s not actually clear if DHS is doing anything.
The extent to which the DHS initiatives affect Americans’ daily social feeds is unclear. During the 2020 election, the government flagged numerous posts as suspicious, many of which were then taken down, documents cited in the Missouri attorney general’s lawsuit disclosed. And a 2021 report by the Election Integrity Partnership at Stanford University found that of nearly 4,800 flagged items, technology platforms took action on 35 percent — either removing, labeling, or soft-blocking speech, meaning the users were only able to view content after bypassing a warning screen. The research was done “in consultation with CISA,” the Cybersecurity and Infrastructure Security Agency.
Again, this is extremely weak sauce. People “report” content that violates social media platform rules all the time. You and I can do it. The very fact that the article admits the companies only “took action” on 35% of reports (and again, only a subset of that was removing) shows that this is not about the government demanding action and the companies complying.
In fact, if you actually read the Stanford report (which it’s unclear if these reporters did), the flagged items they’re talking about are ones that the Election Integrity Project flagged, not the government. And, even then, the 35% number is incredibly misleading. Here’s the paragraph from the report:
We find, overall, that platforms took action on 35% of URLs that we reported to them. 21% of URLs were labeled, 13% were removed, and 1% were soft blocked. No action was taken on 65%. TikTok had the highest action rate: actioning (in their case, their only action was removing) 64% of URLs that the EIP reported to their team.
So the most active in removals was TikTok, which people already think is problematic, but the big American companies were even less involved. Second, only 13% of the reports resulted in removing the content, and the EIP report actually breaks down what kinds of content were removed vs. labeled, and it’s a bit eye opening (and again destroys the Intercept’s narrative):
If you look, the only cases where the majority of content reported was removed rather than just “labeled” (i.e., providing more information) were phishing attempts and fake official accounts. Those seem like the sorts of things where it makes sense for the platforms to take down that content, and I’m curious if the reporters at the Intercept think we’d be better off if the platforms ignored phishing attempts.
The article then pinballs back to talking about DHS and CISA, how it was set up, and concerns about elections. Again, none of that is weird or secret or problematic. Finally, it gets to another bit that, when read in the article, sounds questionable and certainly concerning:
Emails between DHS officials, Twitter, and the Center for Internet Security outline the process for such takedown requests during the period leading up to November 2020. Meeting notes show that the tech platforms would be called upon to “process reports and provide timely responses, to include the removal of reported misinformation from the platform where possible.”
Except if you look at the actual documents, again, they’re taking things incredibly out of context and turning nothing into something that sounds scary. The first link — supposedly the one that “outlines the process for such takedown requests” — does no such thing. It’s literally CISA passing information on to Twitter from the Colorado government, highlighting accounts that they were worried were impersonating Colorado state official Twitter accounts.
The email flat out says that CISA “is not the originator of this information. CISA is forwarding this information, unedited, from its originating source.” And the “information” is literally accounts that Colorado officials are worried are pretending to be Colorado state official government accounts.
Now, it does look like at least some of those accounts may be parody accounts (at least one claims to be in its bio). But there’s no evidence that Twitter actually took them down. And nowhere in that document is there an outline of a process for a takedown.
The second document also does not seem to show what the Intercept claims. It shows some emails, where CISA was trying to set up a reporting portal to make all of this easier (state officials seeing something questionable and passing it on to the companies via CISA). What the email actually shows is that whoever is responding to CISA from Twitter has a whole bunch of questions about the portal before they’re willing to sign on to it. And those concerns include things like “how long will reported information be retained?” and “what is the criteria used to determine who has access to the portal?”
These are the questions you ask when you are making sure that this kind of thing is not government coercion, but is a limited purpose tool for a specific situation. The response from a CISA official does say that their hope is the social media companies will (as the Intercept notes) “process reports and provide timely responses, to include the removal of reported misinformation from the platform where possible.” But in context, again, that makes sense. This portal is for election officials to report problematic accounts, and part of the point of the portal is that if the platforms agree that the content or accounts break their rules they will report back to the election officials.
And, again, this is not all that different from how things work for everyday users. If I report a spam account on Twitter, later on Twitter sends me back a notification on the resolution for what I reported. This sounds like the same thing, but perhaps a slightly more rapid response so that election officials know what’s happening.
Again, I’m having difficulty finding anything nefarious here at all, and certainly no evidence of coercion or the companies agreeing to every government request. In fact, it’s quite the opposite.
Then the article pinballs again, back around to the (again, very public) MDM team. And, again, it tries to spin what is clearly reasonable information sharing into something more nefarious:
CISA has defended its burgeoning social media monitoring authorities, stating that “once CISA notified a social media platform of disinformation, the social media platform could independently decide whether to remove or modify the post.”
And, again, as the documents (but not the article!) demonstrate, the companies are often resistant to these government requests.
Then suddenly we come back around to the Easterly / Masterson text messages. The texts are informal, which is not a surprise. They work in similar circles, and both have been at CISA (though not at the same time). The Intercept presents this text exchange in a nefarious manner, even as Masterson is making it clear that the companies are resistant. But the Intercept reporters leave out exactly what Masterson is saying they’re resistant to. Here’s what the Intercept says:
In late February, Easterly texted with Matthew Masterson, a representative at Microsoft who formerly worked at CISA, that she is “trying to get us in a place where Fed can work with platforms to better understand mis/dis trends so relevant agencies can try to prebunk/debunk as useful.”
Here’s the full exchange:
If you can’t read that, Easterly texts:
Thx so much! Really appreciate it. And sorry I didn’t ring last week… think you were on the call this week? Just trying to get us in a place where Fed can work with platforms to better understand the mis/dis trends so relevant agencies can try to prebunk/debunk as useful…
Not our mission but was looking to play a coord role so not every D/A is independently reaching out to platforms which could cause a lot of chaos.
And Masterson replies:
Was on the call. The coordination is greatly appreciated. Was disappointed that platforms including us didn’t offer more (we’ll get there) and sector leadership had 0 questions.
We’ll get there and that kind leadership really helps. Platforms have got to get more comfortable with gov’t. It’s really interesting how hesitant they remain.
Again Microsoft included.
This shows that the platforms are treading very carefully in working with government, even around this request which seems pretty innocuous. CISA is trying to help coordinate so that when local officials have issues they have a path to reach out to the platforms, rather than just reaching out willy-nilly.
We’re now deep, deep in this article, and despite all these hints of nefariousness, and people insisting that it shows how the government is collaborating with social media, all the underlying documents suggest the exact opposite.
Then the article pinballs back to the MDM meeting (whose recommendations are and have been publicly available on the CISA website), and notes that Twitter’s former head of legal, Vijaya Gadde, took part in one of the meetings. And, um, yeah? Again, the entire point of the MDM board is to figure out how to understand the information ecosystem and, as we noted up top, to do what they can to provide additional information, education and context.
There is literally nothing about suppression.
But the Intercept, apparently desperate to put in some shred that suggests this proves the government is looking to suppress information, slips in this paragraph:
The report called on the agency to closely monitor “social media platforms of all sizes, mainstream media, cable news, hyper partisan media, talk radio and other online resources.” They argued that the agency needed to take steps to halt the “spread of false and misleading information,” with a focus on information that undermines “key democratic institutions, such as the courts, or by other sectors such as the financial system, or public health measures.”
Note the careful use of quotes. All of the problematic words and phrases like “closely monitor” and “take steps to halt” are not in the report at all. You can go read the damn thing. It does not say that it should “closely monitor” social media platforms of all sizes. It says that the misinformation/disinformation problem involves the “entire information ecosystem.” It’s saying that to understand the flow of this, you have to recognize that it flows all over the place. And that’s accurate. It says nothing about monitoring it, closely or otherwise.
As for “taking steps to halt the spread” it also does not even remotely say that. If you look for the word “spread” it appears in the report seven times. Not once does it discuss anything about trying to halt the spread. It talks about teaching people how not to accidentally spread misinformation, about how the spread of misinformation can create a risk to critical functions like public health and financial services, how foreign adversaries abuse it, and how election officials lack the tools to identify it.
Honestly, the only point where “spread” appears in a proactive sense is where it says that they should measure “the spread” of CISA’s own information and messages.
The Intercept article is journalistic malpractice.
It then pinballs yet again, jumping to the whole DHS Disinformation Governance Board, which we criticized, mainly because of the near total lack of clarity around its rollout, and how the naming of it (idiotic) and the secrecy seemed primed to fuel conspiracy theories, as it did. But that’s unrelated to the CISA stuff. The conspiracy theories around the DGB (which was announced and disbanded within weeks) only help to fuel more nonsense in this article.
The article continues to pinball around, basically pulling random examples of questionable government behavior, but never tying it to anything related to the actual subject. I mean, yes, the FBI does bad stuff in spying on people. We know that. But that’s got fuck all to do with CISA, and yet the article spends paragraphs on it.
And then, I can’t even believe we need to go here, but it brings up the whole stupid nonsense about Twitter and the Hunter Biden laptop story. As we’ve explained at great length, Twitter blocked links to one article (not others) by the NY Post because it feared the article included documents that violated its hacked materials policy, a policy that had been in place since 2019 and had been used before (equally questionably, but it gets no attention) on things like leaked documents of police chatter. We had called out that policy at the time, noting how it could potentially limit reporting, and right after the outcry about the NY Post story, Twitter changed the policy.
Yet this story remains the bogeyman for nonsense grifters who claim it’s proof that Twitter acted to swing the election. Leaving aside that (1) there’s nothing in that article that would swing the election, since Hunter Biden wasn’t running for president, and (2) the story got a ton of coverage elsewhere, and Twitter’s dumb policy enforcement actually ended up giving it more attention, this story is one about the trickiness in crafting reasonable trust & safety policies, not of any sort of nefariousness.
Yet the Intercept takes up the false narrative and somehow makes it even dumber:
In retrospect, the New York Post reporting on the contents of Hunter Biden’s laptop ahead of the 2020 election provides an elucidating case study of how this works in an increasingly partisan environment.
Much of the public ignored the reporting or assumed it was false, as over 50 former intelligence officials charged that the laptop story was a creation of a “Russian disinformation” campaign. The mainstream media was primed by allegations of election interference in 2016 — and, to be sure, Trump did attempt to use the laptop to disrupt the Biden campaign. Twitter ended up banning links to the New York Post’s report on the contents of the laptop during the crucial weeks leading up to the election. Facebook also throttled users’ ability to view the story.
In recent months, a clearer picture of the government’s influence has emerged.
In an appearance on Joe Rogan’s podcast in August, Meta CEO Mark Zuckerberg revealed that Facebook had limited sharing of the New York Post’s reporting after a conversation with the FBI. “The background here is that the FBI came to us — some folks on our team — and was like, ‘Hey, just so you know, you should be on high alert that there was a lot of Russian propaganda in the 2016 election,’” Zuckerberg told Rogan. The FBI told them, Zuckerberg said, that “‘We have it on notice that basically there’s about to be some kind of dump.’” When the Post’s story came out in October 2020, Facebook thought it “fit that pattern” the FBI had told them to look out for.
Zuckerberg said he regretted the decision, as did Jack Dorsey, the CEO of Twitter at the time. Despite claims that the laptop’s contents were forged, the Washington Post confirmed that at least some of the emails on the laptop were authentic. The New York Times authenticated emails from the laptop — many of which were cited in the original New York Post reporting from October 2020 — that prosecutors have examined as part of the Justice Department’s probe into whether the president’s son violated the law on a range of issues, including money laundering, tax-related offenses, and foreign lobbying registration.
The Zuckerberg/Rogan podcast interview has also been taken out of context by the same people. As Zuckerberg notes, the FBI gave a general warning to be on the lookout for false material, which was a perfectly reasonable thing for it to do. And, in response, Facebook did not actually block links to the article. It just limited how widely the algorithm would spread the story until the article had gone through a fact-check process. This is a reasonable way to handle information when there are questions about its authenticity.
But neither Twitter nor Facebook suggests that the government told them to suppress the story, because it didn't. It told them generally to be on the lookout, and both companies did what they normally do when faced with similar information.
From there, the Intercept turns to a frivolous, nonsense lawsuit filed by Missouri's Attorney General and takes a laughable claim at face value:
Documents filed in federal court as part of a lawsuit by the attorneys general of Missouri and Louisiana add a layer of new detail to Zuckerberg’s anecdote, revealing that officials leading the push to expand the government’s reach into disinformation also played a quiet role in shaping the decisions of social media giants around the New York Post story.
According to records filed in federal court, two previously unnamed FBI agents — Elvis Chan, an FBI special agent in the San Francisco field office, and Dehmlow, the section chief of the FBI’s Foreign Influence Task Force — were involved in high-level communications that allegedly “led to Facebook’s suppression” of the Post’s reporting.
Now, you can note here that Dehmlow is the person mentioned way above who talked about platforms and responsibility, but, as we noted, in context she was talking about better education of the public. The section quoted in Missouri's litigation is laughable. It's a narrative told as fan service for Trumpist voters. We already know that the FBI told Facebook to be on the lookout for fake information. The legal complaint simply makes up the idea that Dehmlow told the companies what to censor. That's bullshit without evidence, and there's nothing to back it up beyond a highly fanciful and politicized narrative.
But from there, the Intercept says this:
The Hunter Biden laptop story was only the most high-profile example of law enforcement agencies pressuring technology firms.
Except… it wasn’t. Literally nothing anywhere in this story shows law enforcement “pressuring technology firms” about the Hunter Biden laptop story.
The article then goes on at length about the silly politicized lawsuit, quoting two highly partisan commentators with axes to grind, before quoting former ACLU president Nadine Strossen claiming:
“If a foreign authoritarian government sent these messages,” noted Nadine Strossen, the former president of the American Civil Liberties Union, “there is no doubt we would call it censorship.”
Because of the horrible way the article is written, it’s not even clear which “messages” she’s talking about, but I’ve gone through every underlying document in the entire article and none of them involve anything remotely close to censorship. Given the selective quoting and misrepresentation in the rest of the article, it makes me wonder what was actually shown to Strossen.
As far as I can tell, the emails they're discussing (again, this is not at all clear from the article) are the ones discussed earlier, in which Colorado officials (not DHS) were concerned that some new accounts were attempting to impersonate Colorado officials. They sent a note to CISA, which auto-forwarded it to the companies. Yes, some of the accounts may have been parodies, but there's no evidence that Twitter actually took action on them, and the accounts did make at least some effort to appear to be official Colorado state accounts. All the government officials did was flag it.
I think Strossen is a great defender of free speech, but I honestly can’t see how anyone thinks that’s “censorship.”
Anyway, that’s where the article ends. There’s no smoking gun. There’s nothing. There are a lot of random disconnected anecdotes, misreading and misrepresenting documents, and taking publicly available documents and pretending they’re secret.
If you look at the actual details, it shows... some fairly basic and innocuous information sharing, with nothing even remotely resembling pressure on the companies to take down information. We also see pushback from the companies, which are being extremely careful not to get too close to the government and to keep it at arm's length.
But, of course, a bunch of nonsense peddlers are turning the story into a big deal. And other media outlets are picking up on it and spinning it into further nonsense.
None of those headlines are accurate if you actually look at the details. But all are getting tremendous play all over the place.
And, of course, the reporters on the story rushed to appear on Tucker Carlson:
Except that’s not at all what the “docs show.” At no point do they talk about “monitoring disinformation.” And there is nothing about them “working together” on this beyond basic information sharing.
In fact, just after this story came out, ProPublica released a much more interesting (and better reported) article that basically talks about how the Biden administration gave up on fighting disinformation because Republicans completely weaponized it by misrepresenting perfectly reasonable activity as nefarious.
Instead, a ProPublica review found, the Biden administration has backed away from a comprehensive effort to address disinformation after accusations from Republicans and right-wing influencers that the administration was trying to stifle dissent.
Incredibly, that ProPublica piece quotes Colorado officials (you know, like the ones who emailed CISA their concern about fake accounts, which got forwarded to Twitter) noting how they really could use some help from the government and aren't getting it:
“States need more support. It is clear that threats to election officials and workers are not dissipating and may only escalate around the 2022 and 2024 elections,” Colorado Secretary of State Jena Griswold, a Democrat, said in an email to ProPublica. “Election offices need immediate meaningful support from federal partners.”
I have had tremendous respect for The Intercept, which has done some great work in the past, but this article is so bad, so misleading, and so full of shit that it should be retracted. A credible news organization would not put out this kind of pure bullshit.
Filed Under: cisa, coordination, dhs, disinformation, fbi, hunter biden laptop, misinformation
Companies: meta, the intercept, twitter