Bullshit Reporting: The Intercept’s Story About Government Policing Disinfo Is Absolute Garbage

from the that-article-is-bullshit dept

Do not believe everything you read. Even if it comes from more “respectable” publications. The Intercept had a big story this week that is making the rounds, suggesting that “leaked” documents prove the DHS has been coordinating with tech companies to suppress information. The story was immediately picked up by the usual suspects, who claim it reveals the “smoking gun” of how the Biden administration was abusing government power to censor them on social media.

The only problem? It shows nothing of the sort.

The article is garbage. It not only misreads things, it is confused about what the documents the reporters obtained actually say, and it presents widely available, widely known material as if it were secret and hidden when it was not.

The entire article is a complete nothingburger, and it is fueling a new round of lies and nonsense from people who find it useful to misrepresent reality. If the Intercept had any credibility at all, it would retract the article and examine whatever processes failed and allowed it to be published.

Let’s dig in. Back in 2018, then President Donald Trump signed the Cybersecurity and Infrastructure Security Agency Act into law, creating the Cybersecurity and Infrastructure Security Agency as a separate agency in the Department of Homeland Security. While there are always reasons to be concerned about government interference in various aspects of life, CISA was pretty uncontroversial (perhaps with the exception of when Trump freaked out and fired the first CISA director, Chris Krebs, for pointing out that the election was safe and there was no evidence of manipulation or foul play).

While CISA has a variety of things under its purview, one thing that it is focused on is general information sharing between the government and private entities. This has actually been really useful for everyone, even though the tech companies have been (quite reasonably!) cautious about how closely they’ll work with the government (because they’ve been burned before). Indeed, as you may recall, one of the big revelations from the Snowden documents was about the PRISM program, which turned out to be oversold by the media reporting on it, but was still problematic in many ways. Since then, the tech companies have been even more careful about working with government, knowing that too much government involvement will eventually come out and get everyone burned.

With that in mind, CISA’s role has been pretty widely respected by almost everyone I’ve spoken to, both in government and at various companies. It provides information regarding actual threats, which has been useful to companies, and they seem to appreciate it. Given their historical distrust of government intrusion and their understanding of the limits of government authority here, the companies have been pretty attuned to any attempt at coercion, and I’ve heard of no such thing regarding CISA at all.

That’s why the story seemed like such a big deal when I first read the headline and some of the summaries. But then I read the article… and the supporting documents… and there’s no there there. There’s nothing. There’s… the information sharing that everyone already knew was happening and that has been widely discussed in the past.

Let’s go through the supposed “bombshells”:

Behind closed doors, and through pressure on private platforms, the U.S. government has used its power to try to shape online discourse. According to meeting minutes and other records appended to a lawsuit filed by Missouri Attorney General Eric Schmitt, a Republican who is also running for Senate, discussions have ranged from the scale and scope of government intervention in online discourse to the mechanics of streamlining takedown requests for false or intentionally misleading information.

This sounds all scary and stuff, but most of those “meeting minutes” are from the already very, very public Misinformation & Disinformation Subcommittee that was part of an effort to counter foreign influence campaigns. As is clear on their website, their focus is very much on information sharing, with an eye towards protecting privacy and civil liberties, not suppressing speech.

The MDM team’s guiding principle is the protection of privacy, free speech, and civil liberties. To that end, the MDM team closely consults with the DHS Privacy Office and DHS Office for Civil Rights and Civil Liberties on all activities.

The MDM team is also committed to collaboration with partners and stakeholders. In addition to civil society groups, researchers, and state and local government officials, the MDM team works in close collaboration with the FBI’s Foreign Influence Task Force, the U.S. Department of State, the U.S. Department of Defense, and other agencies across the federal government. Federal Agencies respective roles in recognizing, understanding, and helping manage the threat and dangers of MDM and foreign influence on the American people are mutually supportive, and it is essential that we remain coordinated and cohesive when we engage stakeholders.

As professor Kate Starbird notes, the Intercept article makes out like this was some nefarious secret meeting when it was actually a publicly announced meeting with public minutes, and part of the discussion was even on where the guardrails should be for the government so that it doesn’t go too far. Indeed, even though the public output of this meeting is available directly on the CISA website for anyone to download, The Intercept published a blurry draft version, making it seem more secret and nefarious. (Updated: to note that not all of the meeting minutes published by The Intercept were public: they include a couple of extra subcommittee minutes that are not on the CISA website, but which have nothing particularly of substance, and certainly nothing that supports the claims in the article. And all of the claims here stand: the committee is public, their meeting minutes are public, including summaries of the subcommittee efforts, even if not all the full subcommittee meeting minutes are public).

And if you read the actual document it’s… all kinda reasonable? It does talk about responding to misinformation and disinformation threats, mainly around elections — not by suppressing speech, but by sharing information to help local election officials respond to it and provide correct information. From the actual, non-scary, very public report:

Currently, many election officials across the country are struggling to conduct their critical work of administering our elections while responding to an overwhelming amount of inquiries, including false and misleading allegations. Some elections officials are even experiencing physical threats. Based on briefings to this subcommittee by an election official, CISA should be providing support — through education, collaboration, and funding — for election officials to pre-empt and respond to MD

It includes four specific recommendations for how to deal with mis- and disinformation and none of them involve suppressing it. They all seem to be about responding to and countering such information by things like “broad public awareness campaigns,” “enhancing information literacy,” “providing informational resources,” “providing education frameworks,” “boosting authoritative sources,” and “rapid communication.” See a pattern? All of this is about providing information, which makes sense. Nothing about suppressing. The report even notes that there are conflicting studies on the usefulness of “prebunking/debunking” misinformation, and suggests that CISA pay attention to where that research goes before going too hard on any program.

There’s literally nothing nefarious at all.

The next paragraph in the Intercept piece then provides an email that kinda debunks the entire framing of the article:

“Platforms have got to get comfortable with gov’t. It’s really interesting how hesitant they remain,” Microsoft executive Matt Masterson, a former DHS official, texted Jen Easterly, a DHS director, in February.

Masterson had worked in DHS on these kinds of programs and then moved over to Microsoft. But here he’s literally pointing out that the companies remain hesitant to work too closely with government, which is exactly what we’ve been saying all along, and completely undermines the narrative people have taken out of this article that it proves that the government was too chummy with the companies.

(Also updating to note that the original Intercept story falsely claimed that Masterson was working for DHS at the time of the text, which makes it sound more nefarious. They later quietly changed it, and only added a correction days later when people called them out on it).

Also, this text message is completely out of context, but hold on for that, because it comes up again later in the article.

Next up, the article takes a single quote out of context from an FBI official.

In a March meeting, Laura Dehmlow, an FBI official, warned that the threat of subversive information on social media could undermine support for the U.S. government. Dehmlow, according to notes of the discussion attended by senior executives from Twitter and JPMorgan Chase, stressed that “we need a media infrastructure that is held accountable.”

First off, this is generally no different than the nonsense the FBI says publicly, and there’s nothing in the linked document that suggests the companies were in agreement that anyone should be “held accountable.” But even if we look at what Dehmlow actually said, in context, while she did talk about accountability, she mostly focused on education.

Ms. Dehmlow was asked to provide her thoughts or to define a goal for approaching MDM and she mentioned “resiliency”. She stated we need a media infrastructure that is held accountable; we need to early educate the populace; and that today, critical thinking seems a problem currently, [REDACTED] Senior Advisor for Homeland Security and Director of Defending Democratic Institutions Center for Strategic and International Studies (CSIS) stated that civics education should be provided at all ages.

Read in context, it sure looks like Dehmlow’s use of the phrase that media should be “held accountable” means accountable to an educated public. I mean, there’s some notable irony in all of this, where Dehmlow is talking about better educating people on critical thinking, and that’s been turned into pure nonsense and misinformation.

From there, the misleading article jumps randomly to Meta’s interface for the government to submit reports, again implying that this is somehow connected to everything above (it’s not, it’s something totally different):

There is also a formalized process for government officials to directly flag content on Facebook or Instagram and request that it be throttled or suppressed through a special Facebook portal that requires a government or law enforcement email to use. At the time of writing, the “content request system” at facebook.com/xtakedowns/login is still live. DHS and Meta, the parent company of Facebook, did not respond to a request for comment. The FBI declined to comment.

Again, this is wholly unrelated to the paragraphs above it. The article is just randomly trying to tie this to it. Every company has systems for anyone to report information for the companies to review. But the big companies, for fairly obvious and sensible reasons, also set up specialized versions of that reporting system for government officials so that reports don’t get lost in the flow. Nothing in that system is about demanding or suppressing information, and it’s basically misinformation for the Intercept to imply otherwise. It’s just the standard reporting tool. The presentation that the Intercept links to is just about how government officials can log into the system because it has multiple layers of security to make sure that you’re actually a government official.

It remains difficult to see (1) how this is connected to the CISA discussion, and (2) how this is even remotely new, interesting or relevant. Indeed, you can find out more about this system on Facebook’s “information for law enforcement authorities” page, and the nefarious sounding “Content Request System (CRS)” highlighted in the document the Intercept shows appears to just be the system for law enforcement agents to request information regarding an investigation. That is, a system for submitting a subpoena, court order, search warrant, or national security letter.

Update: Now there is also a part of the system that enables governments to report potential misinformation and disinformation, though again that appears to be the same kind of reporting that anyone can do, because such information breaks Facebook’s rules. The actual document this comes from again does not seem nefarious at all. It literally is just saying the government can alert Facebook to content that violates its existing rules.

So, it allows law enforcement to report the content, but it shows the relevant rules alongside the report. This is the same kind of reporting that any regular user can do; it’s just that law enforcement is viewed as a “trusted” flagger, so their flags get more attention. It does not mean that the government is censoring content, and Facebook’s ongoing transparency reports show that they often reject these requests.

After tossing in that misleading and unrelated point, the article takes another big shift, jumping to a separate DHS “Homeland Security Review” in which DHS warns about the problem of “inaccurate information” which, you know, is a legitimate thing for DHS to be concerned about, because it can impact security. It’s certainly quite reasonable to be worried about DHS overreach. We’ve screamed about DHS overreach for years.

But I keep reading through the article and the documents, and there’s nothing here.

The report notes that there’s a lot of misinformation, and there is, including on the withdrawal of US troops from Afghanistan. That’s true, and it seems like a reasonable concern for DHS… but the Intercept then throws in a random quote about how Republicans (who have been one source of misinformation about the withdrawal) are planning to investigate if they retake the House.

The inclusion of the 2021 U.S. withdrawal from Afghanistan is particularly noteworthy, given that House Republicans, should they take the majority in the midterms, have vowed to investigate. “This makes Benghazi look like a much smaller issue,” said Rep. Mike Johnson, R-La., a member of the Armed Services Committee, adding that finding answers “will be a top priority.”

But how is that relevant to the rest of the article and what does it have to do with the government supposedly suppressing information or working with the companies? The answer is absolutely nothing at all, but I guess it’s the sort of bullshit you throw in to make things sound scary when your “secret” (not actually secret) documents don’t actually reveal anything.

There’s also a random non sequitur about DHS in 2004 ramping up the national threat level for terrorism. What’s that got to do with anything? ¯\_(ツ)_/¯

The article keeps pinballing around to random anecdotes like that, which are totally disconnected and have nothing to do with one another. For example:

That track record has not prevented the U.S. government from seeking to become arbiters of what constitutes false or dangerous information on inherently political topics. Earlier this year, Republican Gov. Ron DeSantis signed a law known by supporters as the “Stop WOKE Act,” which bans private employers from workplace trainings asserting an individual’s moral character is privileged or oppressed based on his or her race, color, sex, or national origin. The law, critics charged, amounted to a broad suppression of speech deemed offensive. The Foundation for Individual Rights and Expression, or FIRE, has since filed a lawsuit against DeSantis, alleging “unconstitutional censorship.” A federal judge temporarily blocked parts of the Stop WOKE Act, ruling that the law had violated workers’ First Amendment rights.

I keep rereading that, and the paragraph before and after it, trying to figure out if they were working on a different article and accidentally slipped it into this one. It has nothing whatsoever to do with the rest of the article. And Ron DeSantis is not in “the U.S. government.” While he may want to be president, right now he’s governor of Florida, which is a state, not the federal government. It’s just… weird?

Then, finally, after these random tangents, with zero effort to thread them into any kind of coherent narrative, the article veers back to DHS and social media by saying it’s not actually clear if DHS is doing anything.

The extent to which the DHS initiatives affect Americans’ daily social feeds is unclear. During the 2020 election, the government flagged numerous posts as suspicious, many of which were then taken down, documents cited in the Missouri attorney general’s lawsuit disclosed. And a 2021 report by the Election Integrity Partnership at Stanford University found that of nearly 4,800 flagged items, technology platforms took action on 35 percent — either removing, labeling, or soft-blocking speech, meaning the users were only able to view content after bypassing a warning screen. The research was done “in consultation with CISA,” the Cybersecurity and Infrastructure Security Agency.

Again, this is extremely weak sauce. People “report” content that violates social media platform rules all the time. You and I can do it. The very fact that the article admits the companies only “took action” on 35% of reports (and, again, only a subset of that involved removal) shows that this is not about the government demanding action and the companies complying.

In fact, if you actually read the Stanford report (which it’s unclear if these reporters did), the flagged items they’re talking about are ones that the Election Integrity Project flagged, not the government. And, even then, the 35% number is incredibly misleading. Here’s the paragraph from the report:

We find, overall, that platforms took action on 35% of URLs that we reported to them. 21% of URLs were labeled, 13% were removed, and 1% were soft blocked. No action was taken on 65%. TikTok had the highest action rate: actioning (in their case, their only action was removing) 64% of URLs that the EIP reported to their team.

So the most active in removals was TikTok, which people already think is problematic, while the big American companies were even less involved. Second, only 13% of the reports resulted in removing the content, and the EIP report actually breaks down what kinds of content were removed vs. labeled, and it’s a bit eye-opening (and again destroys the Intercept’s narrative):

If you look, the only cases where the majority of content reported was removed rather than just “labeled” (i.e., providing more information) were phishing attempts and fake official accounts. Those seem like the sorts of things where it makes sense for the platforms to take down that content, and I’m curious if the reporters at the Intercept think we’d be better off if the platforms ignored phishing attempts.

The article then pinballs back to talking about DHS and CISA, how it was set up, and concerns about elections. Again, none of that is weird or secret or problematic. Finally, it gets to another bit that, when read in the article, sounds questionable and certainly concerning:

Emails between DHS officials, Twitter, and the Center for Internet Security outline the process for such takedown requests during the period leading up to November 2020. Meeting notes show that the tech platforms would be called upon to “process reports and provide timely responses, to include the removal of reported misinformation from the platform where possible.”

Except if you look at the actual documents, again, they’re taking things incredibly out of context and turning nothing into something that sounds scary. The first link — supposedly the one that “outlines the process for such takedown requests” — does no such thing. It’s literally CISA passing information on to Twitter from the Colorado government, highlighting accounts that they were worried were impersonating Colorado state official Twitter accounts.

The email flat out says that CISA “is not the originator of this information. CISA is forwarding this information, unedited, from its originating source.” And the “information” is literally accounts that Colorado officials are worried are pretending to be Colorado state official government accounts.

Now, it does look like at least some of those accounts may be parody accounts (at least one claims to be in its bio). But there’s no evidence that Twitter actually took them down. And nowhere in that document is there an outline of a process for a takedown.

The second document also does not seem to show what the Intercept claims. It shows some emails, where CISA was trying to set up a reporting portal to make all of this easier (state officials seeing something questionable and passing it on to the companies via CISA). What the email actually shows is that whoever is responding to CISA from Twitter has a whole bunch of questions about the portal before they’re willing to sign on to it. And those concerns include things like “how long will reported information be retained?” and “what is the criteria used to determine who has access to the portal?”

These are the questions you ask when you are making sure that this kind of thing is not government coercion, but is a limited purpose tool for a specific situation. The response from a CISA official does say that their hope is the social media companies will (as the Intercept notes) “process reports and provide timely responses, to include the removal of reported misinformation from the platform where possible.” But in context, again, that makes sense. This portal is for election officials to report problematic accounts, and part of the point of the portal is that if the platforms agree that the content or accounts break their rules they will report back to the election officials.

And, again, this is not all that different from how things work for everyday users. If I report a spam account on Twitter, Twitter later sends me back a notification on the resolution of what I reported. This sounds like the same thing, but perhaps with a slightly more rapid response so that election officials know what’s happening.

Again, I’m having difficulty finding anything nefarious here at all, and certainly no evidence of coercion or the companies agreeing to every government request. In fact, it’s quite the opposite.

Then the article pinballs again, back around to the (again, very public) MDM team. And, again, it tries to spin what is clearly reasonable information sharing into something more nefarious:

CISA has defended its burgeoning social media monitoring authorities, stating that “once CISA notified a social media platform of disinformation, the social media platform could independently decide whether to remove or modify the post.”

And, again, as the documents (but not the article!) demonstrate, the companies are often resistant to these government requests.

Then suddenly we come back around to the Easterly / Masterson text messages. The texts are informal, which is not a surprise. They work in similar circles, and both have been at CISA (though not at the same time). The Intercept presents this text exchange in a nefarious manner, even as Masterson is making it clear that the companies are resistant. But the Intercept reporters leave out exactly what Masterson is saying they’re resistant to. Here’s what the Intercept says:

In late February, Easterly texted with Matthew Masterson, a representative at Microsoft who formerly worked at CISA, that she is “trying to get us in a place where Fed can work with platforms to better understand mis/dis trends so relevant agencies can try to prebunk/debunk as useful.”

Here’s the full exchange:

If you can’t read that, Easterly texts:

Thx so much! Really appreciate it. And sorry I didn’t ring last week… think you were on the call this week? Just trying to get us in a place where Fed can work with platforms to better understand the mis/dis trends so relevant agencies can try to prebunk/debunk as useful…

Not our mission but was looking to play a coord role so not every D/A is independently reaching out to platforms which could cause a lot of chaos.

And Masterson replies:

Was on the call. The coordination is greatly appreciated. Was disappointed that platforms including us didn’t offer more (we’ll get there) and sector leadership had 0 questions.

We’ll get there and that kind leadership really helps. Platforms have got to get more comfortable with gov’t. It’s really interesting how hesitant they remain.

Again Microsoft included.

This shows that the platforms are treading very carefully in working with government, even around this request which seems pretty innocuous. CISA is trying to help coordinate so that when local officials have issues they have a path to reach out to the platforms, rather than just reaching out willy-nilly.

We’re now deep, deep in this article, and despite all these hints of nefariousness, and people insisting that it shows how the government is collaborating with social media, all the underlying documents suggest the exact opposite.

Then the article pinballs back to the MDM meeting (whose recommendations are and have been publicly available on the CISA website), and notes that Twitter’s former head of legal, Vijaya Gadde, took part in one of the meetings. And, um, yeah? Again, the entire point of the MDM board is to figure out how to understand the information ecosystem and, as we noted up top, to do what they can to provide additional information, education and context.

There is literally nothing about suppression.

But the Intercept, apparently desperate to put in some shred that suggests this proves the government is looking to suppress information, slips in this paragraph:

The report called on the agency to closely monitor “social media platforms of all sizes, mainstream media, cable news, hyper partisan media, talk radio and other online resources.” They argued that the agency needed to take steps to halt the “spread of false and misleading information,” with a focus on information that undermines “key democratic institutions, such as the courts, or by other sectors such as the financial system, or public health measures.”

Note the careful use of quotes. All of the problematic words and phrases like “closely monitor” and “take steps to halt” are not in the report at all. You can go read the damn thing. It does not say that it should “closely monitor” social media platforms of all sizes. It says that the misinformation/disinformation problem involves the “entire information ecosystem.” It’s saying that to understand the flow of this, you have to recognize that it flows all over the place. And that’s accurate. It says nothing about monitoring it, closely or otherwise.

As for “taking steps to halt the spread” it also does not even remotely say that. If you look for the word “spread” it appears in the report seven times. Not once does it discuss anything about trying to halt the spread. It talks about teaching people how not to accidentally spread misinformation, about how the spread of misinformation can create a risk to critical functions like public health and financial services, how foreign adversaries abuse it, and how election officials lack the tools to identify it.

Honestly, the only point where “spread” appears in a proactive sense is where it says that they should measure “the spread” of CISA’s own information and messages.

The Intercept article is journalistic malpractice.

It then pinballs yet again, jumping to the whole DHS Disinformation Governance Board, which we criticized, mainly because of the near total lack of clarity around its rollout, and how the naming of it (idiotic) and the secrecy seemed primed to fuel conspiracy theories, as it did. But that’s unrelated to the CISA stuff. The conspiracy theories around the DGB (which was announced and disbanded within weeks) only help to fuel more nonsense in this article.

The article continues to pinball around, basically pulling random examples of questionable government behavior, but never tying it to anything related to the actual subject. I mean, yes, the FBI does bad stuff in spying on people. We know that. But that’s got fuck all to do with CISA, and yet the article spends paragraphs on it.

And then, I can’t even believe we need to go here, but it brings up the whole stupid nonsense about Twitter and the Hunter Biden laptop story. As we’ve explained at great length, Twitter blocked links to one article (not others) by the NY Post because they feared that the article included documents that violated its hacked materials policy, a policy that had been in place since 2019 and had been used before (equally questionably, but it gets no attention) on things like leaked documents of police chatter. We had called out that policy at the time, noting how it could potentially limit reporting, and right after there was the outcry about the NY Post story, Twitter changed the policy.

Yet this story remains the bogeyman for nonsense grifters who claim it’s proof that Twitter acted to swing the election. Leaving aside that (1) there’s nothing in that article that would swing the election, since Hunter Biden wasn’t running for president, and (2) the story got a ton of coverage elsewhere, and Twitter’s dumb policy enforcement actually ended up giving it more attention, this story is one about the trickiness in crafting reasonable trust & safety policies, not of any sort of nefariousness.

Yet the Intercept takes up the false narrative and somehow makes it even dumber:

In retrospect, the New York Post reporting on the contents of Hunter Biden’s laptop ahead of the 2020 election provides an elucidating case study of how this works in an increasingly partisan environment.

Much of the public ignored the reporting or assumed it was false, as over 50 former intelligence officials charged that the laptop story was a creation of a “Russian disinformation” campaign. The mainstream media was primed by allegations of election interference in 2016 — and, to be sure, Trump did attempt to use the laptop to disrupt the Biden campaign. Twitter ended up banning links to the New York Post’s report on the contents of the laptop during the crucial weeks leading up to the election. Facebook also throttled users’ ability to view the story.

In recent months, a clearer picture of the government’s influence has emerged.

In an appearance on Joe Rogan’s podcast in August, Meta CEO Mark Zuckerberg revealed that Facebook had limited sharing of the New York Post’s reporting after a conversation with the FBI. “The background here is that the FBI came to us — some folks on our team — and was like, ‘Hey, just so you know, you should be on high alert that there was a lot of Russian propaganda in the 2016 election,’” Zuckerberg told Rogan. The FBI told them, Zuckerberg said, that “‘We have it on notice that basically there’s about to be some kind of dump.’” When the Post’s story came out in October 2020, Facebook thought it “fit that pattern” the FBI had told them to look out for.

Zuckerberg said he regretted the decision, as did Jack Dorsey, the CEO of Twitter at the time. Despite claims that the laptop’s contents were forged, the Washington Post confirmed that at least some of the emails on the laptop were authentic. The New York Times authenticated emails from the laptop — many of which were cited in the original New York Post reporting from October 2020 — that prosecutors have examined as part of the Justice Department’s probe into whether the president’s son violated the law on a range of issues, including money laundering, tax-related offenses, and foreign lobbying registration.

The Zuckerberg/Rogan podcast thing has also been taken out of context by the same people. As he notes, the FBI gave a general warning to be on the lookout for false material, which was a perfectly reasonable thing for them to do. And, in response, Facebook did not actually block links to the article. It just limited how widely the algorithm would share it until the article had gone through a fact check process. This is a reasonable way to handle information when there are questions about its authenticity.

But neither Twitter nor Facebook suggests that the government told them to suppress the story, because it didn’t. It told them generally to be on the lookout, and both companies did what they do when faced with similar info.

From there, the Intercept turns to a nonsense frivolous lawsuit filed by Missouri’s Attorney General and takes a laughable claim at face value:

Documents filed in federal court as part of a lawsuit by the attorneys general of Missouri and Louisiana add a layer of new detail to Zuckerberg’s anecdote, revealing that officials leading the push to expand the government’s reach into disinformation also played a quiet role in shaping the decisions of social media giants around the New York Post story.

According to records filed in federal court, two previously unnamed FBI agents — Elvis Chan, an FBI special agent in the San Francisco field office, and Dehmlow, the section chief of the FBI’s Foreign Influence Task Force — were involved in high-level communications that allegedly “led to Facebook’s suppression” of the Post’s reporting.

Now here, you can note that Dehmlow was the person mentioned way above who talked about platforms and responsibility, but as we noted, in context, she was talking about better education of the public. The section quoted in Missouri’s litigation is laughable. It’s telling a narrative for fan service to Trumpist voters. We already know that the FBI told Facebook to be on the lookout for fake information. The legal complaint just makes up the idea that Dehmlow told them what to censor. That’s bullshit without evidence, and there’s nothing to back it up beyond a highly fanciful and politicized narrative.

But from there, the Intercept says this:

The Hunter Biden laptop story was only the most high-profile example of law enforcement agencies pressuring technology firms.

Except… it wasn’t. Literally nothing anywhere in this story shows law enforcement “pressuring technology firms” about the Hunter Biden laptop story.

The article then goes on at length about the silly politicized lawsuit, quoting two highly partisan commentators with axes to grind, before quoting former ACLU president Nadine Strossen claiming:

“If a foreign authoritarian government sent these messages,” noted Nadine Strossen, the former president of the American Civil Liberties Union, “there is no doubt we would call it censorship.”

Because of the horrible way the article is written, it’s not even clear which “messages” she’s talking about, but I’ve gone through every underlying document in the entire article and none of them involve anything remotely close to censorship. Given the selective quoting and misrepresentation in the rest of the article, it makes me wonder what was actually shown to Strossen.

As far as I can tell, the emails they’re discussing (again, this is not at all clear from the article) are the ones discussed earlier in which Colorado officials (not DHS) were concerned that some new accounts were attempting to impersonate Colorado officials. They sent a note to CISA, which auto-forwarded it to the companies. Yes, some of the accounts may have been parodies, but there’s no evidence that Twitter actually took action on the accounts, and the fact is that the accounts did make some effort to at least partially appear as Colorado official state accounts. All the government officials did was flag it.

I think Strossen is a great defender of free speech, but I honestly can’t see how anyone thinks that’s “censorship.”

Anyway, that’s where the article ends. There’s no smoking gun. There’s nothing. There are a lot of random disconnected anecdotes, misreading and misrepresenting documents, and taking publicly available documents and pretending they’re secret.

If you look at the actual details it shows… some fairly basic and innocuous information sharing, with nothing even remotely looking like pressure on the companies to take down information. We also see pushback from the companies, which are being extremely careful not to get too close to the government and to keep it at arm’s length.

But, of course, a bunch of nonsense peddlers are turning the story into a big deal. And other media is picking up on it and turning it into nonsense.

None of those headlines are accurate if you actually look at the details. But all are getting tremendous play all over the place.

And, of course, the reporters on the story rushed to appear on Tucker Carlson:

Except that’s not at all what the “docs show.” At no point do they talk about “monitoring disinformation.” And there is nothing about them “working together” on this beyond basic information sharing.

In fact, just after this story came out, ProPublica released a much more interesting (and better reported) article that basically talks about how the Biden administration gave up on fighting disinformation because Republicans completely weaponized it by misrepresenting perfectly reasonable activity as nefarious.

Instead, a ProPublica review found, the Biden administration has backed away from a comprehensive effort to address disinformation after accusations from Republicans and right-wing influencers that the administration was trying to stifle dissent.

Incredibly, that ProPublica piece quotes Colorado officials (you know, like the ones who emailed CISA their concern, which got forwarded to Twitter, about fake accounts) noting how they really could use some help from the government and they’re not getting it:

“States need more support. It is clear that threats to election officials and workers are not dissipating and may only escalate around the 2022 and 2024 elections,” Colorado Secretary of State Jena Griswold, a Democrat, said in an email to ProPublica. “Election offices need immediate meaningful support from federal partners.”

I had tremendous respect for The Intercept, which I think has done some great work in the past, but this article is so bad, so misleading, and just so full of shit that it should be retracted. A credible news organization would not put out this kind of pure bullshit.

Companies: meta, the intercept, twitter


Comments on “Bullshit Reporting: The Intercept’s Story About Government Policing Disinfo Is Absolute Garbage”



Chozen (profile) says:

“Let’s dig in. Back in 2018, then President Donald Trump signed the Cybersecurity and Infrastructure Security Agency Act into law, creating the Cybersecurity and Infrastructure Security Agency as a separate agency in the Department of Homeland Security. ”

Show me anything in CISA 2018 that authorizes the agency to have anything to do with "misinformation and disinformation." The agency was formed in response to the SolarWinds hacking, not "misinformation and disinformation." Not one word in the act gives DHS the authority to do what it has done.

It’s been a hop, skip, and a jump by bureaucrats claiming people are infrastructure.

“One could argue we’re in the business of critical infrastructure, and the most critical infrastructure is our cognitive infrastructure, so building that resilience to misinformation and disinformation, I think, is incredibly important,” said Easterly, speaking at a conference in November 2021.

That is fucking Orwellian. A critical infrastructure cyber security act drafted in response to a hacking is perverted to policing speech because ‘the people’s minds are critical infrastructure’

You should be ashamed that you have been a part of this Mike.

I didn’t know you were this evil.


This comment has been deemed insightful by the community.
James Burkhardt (profile) says:

Re: Re: Re:3

Yes, yes. 20 years after DHS was created to share information on threats to US security, and 4 years after an amendment created a distinct agency to handle the responsibilities relating to digital and infrastructure threats, SCOTUS has ruled delegation is now illegal and has left all of us unsure how specific Congress will need to be. But what you haven’t done is assert that anything specific isn’t authorized by Congress.

From the text of H.R. 3359 of the 115th Congress, the responsibilities of the director include:

“(10) carry out cybersecurity, infrastructure security, and emergency communications stakeholder outreach and engagement and coordinate that outreach and engagement with critical infrastructure Sector-Specific Agencies, as appropriate;
(formatting adjusted, emphasis added)

Emergency Communications stakeholder outreach. Communications stakeholder(s), like social media. Outreach, as in contacting them.

Absent you highlighting specific things you claim aren’t covered, SCOTUS has given little guidance about what is considered ‘explicit authorization’. This makes debating your position hard, because I have to assume which parts of the actions here ‘aren’t explicitly authorized’. Hence my answer: I’ve found one thing here that appears to be explicitly authorized by Congress. So there you go. That is where CISA 2018 authorizes ‘any of this’. Care to try again with something more specific?


Chozen (profile) says:

Re: Re:

First off

” Back in 2018, then President Donald Trump signed the Cybersecurity and Infrastructure Security Agency Act into law, creating the Cybersecurity and Infrastructure Security Agency as a separate agency in the Department of Homeland Security. While there are always reasons to be concerned about government interference in various aspects of life, CISA was pretty uncontroversial (perhaps with the exception of when Trump freaked out and fired the first CISA director, Chris Krebs, for pointing out that the election was safe and there was no evidence of manipulation or foul play).”

There is absolutely nothing in the CISA 2018 that authorizes any of this. The DHS has whimsically defined people as infrastructure to pervert a cyber security act into a thought police act.

Stephen T. Stone (profile) says:

Re: Re: Re:

Back in 2018, then President Donald Trump signed the Cybersecurity and Infrastructure Security Agency Act into law, creating the Cybersecurity and Infrastructure Security Agency as a separate agency in the Department of Homeland Security. While there are always reasons to be concerned about government interference in various aspects of life, CISA was pretty uncontroversial (perhaps with the exception of when Trump freaked out and fired the first CISA director, Chris Krebs, for pointing out that the election was safe and there was no evidence of manipulation or foul play).

How is any of that inaccurate or misleading in and of itself?

There is absolutely nothing in the CISA 2018 that authorizes any of this.

So what?


This comment has been deemed insightful by the community.
nasch (profile) says:

Re: Re: Re:5

How about:

To integrate relevant information, analysis, and vulnerability assessments, regardless of whether the information, analysis, or assessments are provided or produced by the Department, in order to make recommendations, including prioritization, for protective and support measures by the Department, other Federal Government agencies, State, local, tribal, and territorial government agencies and authorities, the private sector, and other entities regarding terrorist and other threats to homeland security.

James Burkhardt (profile) says:

Re: Re: Re:3

That’s a lie. Since I’d already cited that link, you might have thought one of us would read it.

(a) Redesignation.--(1) In general.--The National Protection and Programs Directorate of the Department shall, on and after the date of the enactment of this subtitle, be known as the “Cybersecurity and Infrastructure Security Agency” (in this subtitle referred to as the “Agency”).

This is the text of the law taking a directorate of the DHS and creating a specific agency called the “Cybersecurity and Infrastructure Security Agency”. It is the law authorizing DHS to create this agency. WTF are you smoking?

Chozen (profile) says:

Re: Re: Re:4

“To develop, in coordination with the Sector-Specific Agencies with available expertise, a comprehensive national plan for securing the key resources and critical infrastructure of the United States, including power production, generation, and distribution systems, information technology and telecommunications systems (including satellites), electronic financial and property record storage and transmission systems, emergency communications systems, and the physical and technological assets that support those systems.”

Don’t act like you didn’t see this definition of what “critical infrastructure” encompasses. This entire DHS bullshit is based on a redefinition of “critical infrastructure” and “cyber security” beyond what the act limits those terms to.


Anonymous Coward says:

Re: Re:

Giving government agents priority in the reporting system is inherently inviting the government to assist in regulating online speech. I was under the impression Mike was against that.

This is the same kind of reporting that any regular user can do, it’s just that law enforcement is viewed as a “trusted” flagger, so their flags get more attention.

Hope you don’t offend any cops with your posts. I guess we just have to trust the cops when they report Tim Cushing’s posts for promoting misinformation and even violence against police.

Stephen T. Stone (profile) says:

Re: Re: Re:

Giving government agents priority in the reporting system is inherently inviting the government to assist in regulating online speech.

No, it isn’t. Twitter, Facebook, and the like can give priority to reports from the government without actually taking action on those reports. As the Techdirt article points out:

As far as I can tell, the emails they’re discussing (again, this is not at all clear from the article) are the ones discussed earlier in which Colorado officials (not DHS) were concerned that some new accounts were attempting to impersonate Colorado officials. They sent a note to CISA, which auto-forwarded it to the companies. Yes, some of the accounts may have been parodies, but there’s no evidence that Twitter actually took action on the accounts, and the fact is that the accounts did make some effort to at least partially appear as Colorado official state accounts. All the government officials did was flag it.

To break a tired-ass cliché out of the rest home: You can lead a horse to water, but you can’t make it drink.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Yeah, when I read the article at The Intercept, it put me in mind of one of those cats that suddenly starts randomly tearing around the house at 3 AM for no discernible reason. It was disappointing of the publication, and of the authors. Some of the covid stuff there also pushes the boundaries of credibility, but that article just flushed it straight down the toilet.

The only thing I found remotely interesting was the MS/DHS guy’s quote. There are always people like that around.

So thanks for practically fisking that whole debacle. It points out even more problems than I had originally caught. Good thing I wasn’t reading it for work, as it was so stupid it made me tired.


This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re:

The Intercept article must be directly over the target to get a three-times-longer-than-normal screed trying to discredit it.

Adding needed context to a critique of an article will often produce an article larger than the one it’s critiquing. Being unable to understand that is your problem.


Chozen (profile) says:

Re:

It’s right there at the bottom of this page:

“Copia Institute”

Take that in context with this from the "confidential" document linked in the Intercept:

“Geoff Hale, the director of the Election Security Initiative at CISA, recommended the use of third-party information-sharing nonprofits as a “clearing house for information to avoid the appearance of government propaganda.””

Hmmm? Use third-party information-sharing nonprofits to hide the appearance of government propaganda.

What is the Copia Institute again?

Anonymous Coward says:

Re: Re: Re:4

NeoNazis would throw actual mental health professionals under the bus and bludgeon them to death with the DSM-V if they don’t bow down to whitey and Trump.

I know that autism is an actual diagnosis and a spectrum under the DSM-V. It sadly does not stop people from misusing the word. Plus, we know how little Chozen, Hyman and his ilk care about those with poor mental health. (That is, two shots to the back of their heads.)

They won’t shirk their ideology ever.

bhull242 (profile) says:

Re:

The Intercept article must be directly over the target to get a three-times-longer-than-normal screed trying to discredit it.

First, if something’s over the target, that means it missed. Perhaps you meant “on target”?

Second, a good refutation to a claim is often much longer than the claim itself. It generally takes much less time and space to make a ridiculous claim than it takes to refute it, and making multiple ridiculous claims doesn’t take much more effort than making just one but often does require exponentially more effort to refute. Ever heard of a Gish Gallop?

This comment has been deemed insightful by the community.
Anonymous Hero says:

The Intercept article must be directly over the target to get a three-times-longer-than-normal screed trying to discredit it.

You’re apparently unfamiliar with Brandolini’s Law: “The amount of energy needed to refute bullshit is an order of magnitude bigger than the amount required to produce it.”


Anonymous Coward says:

Re: Re: Re:

How come your worldview can accommodate millions of people doing foreign countries’ work for them, but not the same number of bots?
I feel like the default reply used to be more “you’re a bot!”, but its overuse led to the still-unfalsifiable Manchurian candidate accusation.

“You may not know, but mysterious foreigner infowarfare is working through you as we speak“

Anonymous Coward says:

Re: Re: Re:2

You really should see some of the botspam here.

Most of them are short and post a link to a service or something we’re not interested in. Usually with a short message. Sometimes it’s a longer passage, but it’s usually the same faux-cheery nonsense meant to promote something.

Automating the sort of insane anti-US screeds and FUD is going to take quite a lot of work, training the AI and understanding machine learning, and frankly, lying about giving parole is far, far cheaper and more effective than actually making an AI propaganda spewer.

That Anonymous Coward (profile) says:

“the New York Post reporting on the contents of Hunter Biden’s laptop ahead of the 2020 election provides an elucidating case study of how this works in an increasingly partisan environment”

IIRC isn’t this the same article that the ‘paper’ (using the term loosely) retracted because it didn’t even come close to their reporting requirements and lacked the name of an author?

Jay says:

You say there's no evidence, but can this evidence even be obtained?

Several times in your essay you say there is no evidence of something, but given how non-transparent these companies are, can the evidence even be obtained?

How does one know that any records from companies that are voluntarily, even eagerly, turned over to some committee or researcher are representative of reality?

But at times I’ve been told that requiring audits or allowing some sort of accredited 3rd party researcher to examine the actual databases would be unconscionable and may even violate the 1st Amendment by compelling speech.

I dislike arguments that say we need some kind of evidence when they come from folks who should know that evidence is impossible to obtain.

Which doesn’t mean your conclusion is wrong, but it’s not as strong as you seem to think it is.


Antoine Wilson says:

I read more than half of your rebuttal and that is the weakest response to an article. Half the time you are conceding and normalizing more than actually refuting, and you are also strawmanning the whole thing. There is no claim that there is proof the government coerced social media, but rather that the government wanted social media to be more responsive to its requests, for example by asking more of them.

You are also misrepresenting the material more than the article itself does. You say, about the government portal, that the government “hopes” that if the social media companies access the portal they will do what they are told, when in reality they say that if you have access to our portal you will do X. You are also saying that they will just remove accounts, when they are requested to “remove misinformation” in the email.

Anonymous Coward says:

It is absolutely fair to challenge established powers. Whether the power is handed over to the deranged hands of Trump or to the incumbents, they have to set a standard of checks and balances to make sure this doesn’t get out of hand. Why? Have you seen the Afghanistan Papers, and how three presidents from either side continued to lie to the American people while spending billions of our tax money to ‘bring freedom,’ basically taking tax dollars where they say Missouri’s water infrastructure is too expensive to fix but then giving a blank check to arms and weapons manufacturers? All those billions and promises, and we are back to square one with an extremist takeover of Afghanistan just now. And that is an undeveloped nation. Ukraine, with all the injustice being done to them, isn’t a question of whether you support their fight. It is a question of how we can realistically help, of what checks and balances ensure we do it fairly and in a way that it gets to the ground, because that area is the center of the illegal arms trade, and also of where the inspector is for the money going to arms contractors. It all sounds rosy until we realize the same people who will be heading DHS talk of democracy but then support Saudi Arabia in their human rights violations in Yemen.

Time and time again they have sadly been caught misleading the public. Does it mean it’s a big shadow government conspiracy? No. It means we need to continue to apply our checks and balances to anyone who wields public power and responsibilities, be it a cop, politician, judge, city commissioner, and so on and so forth. Set that standard so we can apply it, be it to Hunter Biden or Jared Kushner, rather than have them divide and conquer and point fingers. They are the same people who said Hunter Biden’s laptop was misinformation and didn’t recant; even public radio sadly called it a non-story, then posted about Britain’s long-living cat at the same time. Unfortunately there is an economic thumb on the scale in our system and people just want to find the worst enemy. They don’t realize they’re almost all doing the same thing that benefits themselves (https://time.com/6218708/congress-stock-trading-ban-bill/)

https://www.axios.com/2021/07/14/bidens-trump-ethics-antagonist-walter-shaub

Stop making this tribal and set a standard toward a better, more responsive democracy across the board.

Stephen Harrison (user link) says:

Mentioned in Slate

A quick comment that I mentioned Masnick’s reporting here in my latest article for Slate: https://slate.com/technology/2022/11/dhsleaks-wikipedia-no-collusion.html

“But is there any substance to the claim that the feds have been deciding what information should be published on Wikipedia and other sites? There is not. As Techdirt’s Mike Masnick rightfully argued, the Intercept’s story about the U.S. government arbitrating disinformation on tech platforms like Wikipedia is ‘absolute garbage’ and ‘bullshit reporting.'”

Douglas Lain says:

Nothingburger?

The confidential meeting minutes do mention monitoring for disinformation: "In response to the recommendation to run rumor control, [redacted] alerted subcommittee members to the 10 ongoing projects on monitoring and aggregating MDM threats around elections, run by the National Science Foundation. She suggested that CISA coordinate and amplify resources to individual locations to connect state and local elections officials with this research capacity. [Redacted] affirmed that CISA should direct media, government, and other organizations to trusted resources."
The quote about monitoring social media platforms of all sizes actually reads:
"CISA should approach the MD [disinformation] problem with the entire information ecosystem in view. This includes social media platforms of all sizes, mainstream media, cable news, hyper partisan media, talk radio, and other online resources."
The report quoted mentions training media and other civic institutions to recognize disinformation narratives as defined by CISA, so these media companies can avoid or suppress spreading MD (disinformation).
