For decades, academics have been trying to warn anybody who’d listen that the death of your local newspaper and the steady consolidation of local TV broadcasters have created either “news deserts” or local news reporting that’s mostly just low-calorie puffery and simulacrum. Despite claims that the “internet would fix this,” fixing local news just wasn’t profitable enough, so the internet… didn’t.
Those same academics will then tell you that the end result is an American populace that’s decidedly less informed and more divided, something that not only has a measurable impact on electoral outcomes, but paves the way for more state and local corruption (since fewer journalists are reporting on stuff like local city council meetings or local political decisions).
But that’s just the start of the problem. Every six months or so, a news report will emerge showing how all manner of political propagandists and bullshit artists have rushed to fill the vacuum created by longstanding policy failures and our refusal to competently fund local journalism at scale.
These reports have repeatedly noted that, increasingly, what uninformed Americans think is local news is actually just political or corporate propaganda. It’s something the original Deadspin highlighted in that popular Sinclair Broadcasting video a few years back:
More recently, outlets like the Washington Post and NPR have documented how political operatives are increasingly creating free, fake “pink slime” local newspapers that look like the kind of newspapers and local news websites locals are used to, but are just propaganda designed to mislead and misinform, usually to the benefit of a local politician or company.
While some Democratic politicians have embraced the tactic, researchers say the overwhelming majority of the efforts are the product of Republican operatives who’ve increasingly embraced conspiracy theories and propaganda to try and counter unfavorably shifting electoral demographics:
[Pri Bengani, a senior researcher at the Tow Center for Digital Journalism at Columbia University] notes the difference in scale. She counts 64 such pro-Democratic newspapers and news sites. That’s equal to about 5% of the right-wing publications she has been monitoring.
Last week the Washington Post profiled how top Republican political campaigns in Illinois used a private online portal last year to directly shape coverage and request favorable stories and op-eds via a large network of “media outlets” that present themselves as local newspapers, but, well, aren’t:
Screenshots show that the password-protected portal, called Lumen, allowed users to pitch stories; provide interview subjects as well as questions; place announcements and submit op-eds to be “published verbatim” in any of about 30 sites that form part of the Illinois-focused media network, called Local Government Information Services.
The portal was created by a man the Post says pretends to be helping to fix local news, but, well, isn’t:
The network is run by Brian Timpone, a businessman and former television broadcaster who told federal regulators in 2016 that his publishing company was filling the void left by the decline of community news, “delivering hundreds and sometimes thousands of local news stories each week.” He did not respond to requests for comment.
While the portal was widely used by Republicans in the state to influence voters, the Post says that Democratic politicians weren’t invited and didn’t even know of the portal’s existence. The end result, again, is a flood of websites (and sometimes actual, physical papers passed around for free) designed to look like local news, despite being anything but:
The typical homepage of a Local Government Information Services website looks like an ordinary local publication. Headlines about college Republicans appear alongside notices of spring wine walks. The sites have titles like Prairie State Wire, Peoria Standard and West Cook News.
The Post notes that since its founding in 2016, Local Government Information Services has more than doubled its total number of sites, and has been in recent conversations with the Trump campaign. How effective these fake paper efforts are may not be measurable, but in conjunction with existing propaganda wings of the GOP (AM radio, Fox, OANN, Newsmax, Daily Caller, popular far right influencers), it seems naïve to think the impact on voting, polling, and opinion isn’t meaningful.
Given the increasingly radical and unpopular nature of modern GOP policies (see: the erosion of child labor protections, the assault on abortion, the steady assault on environmental protections, the slow but steady dismantling of effectively all competent corporate oversight, tax cuts for billionaires), the party has increasingly focused its attention on gerrymandering, eroding voting rights, and propaganda.
AI tech like ChatGPT will, of course, likely make this kind of propaganda easier and cheaper to produce at scale, whether we’re talking about creating fake news or flooding regulators with fake public support for unpopular policies. And of course there’s nothing really stopping Democrats from ramping up their own propaganda, encouraging a disinformation arms race.
If you recall, the Trump FCC under Ajit Pai spent several years stripping away popular media consolidation limits established over decades with bipartisan approval. The push, ironically, directly aided Sinclair Broadcasting’s steady consolidation of local broadcast news, which resulted in a homogenized soup of well-funded propaganda and the erosion of real, local reporting.
Insatiable, the big four broadcasters have been lobbying the FCC as part of the agency’s belated 2022 Quadrennial Regulatory Review of the Commission’s Broadcast Ownership Rules. Fox, Viacom/CBS, Comcast/NBC, and ABC/Disney are pushing the agency to eliminate restrictions prohibiting the nation’s biggest four companies from merging:
The Dual Network Rule effectively prohibits a merger between any of the four broadcast networks specifically named in the rule: ABC, CBS, Fox, and NBC. According to the FCC, it is needed to “foster competition in the provision of primetime entertainment programming and the sale of national advertising time.” However, dramatic changes in the market for entertainment programming and national advertising in recent years have upended the status quo…
Ironically, the same week they issued this filing, two of these companies, News Corporation and Comcast, were busy successfully derailing the FCC nomination of popular media reformer Gigi Sohn using a homophobic smear campaign. The goal: to keep the agency gridlocked in perpetuity, preventing it from reversing any of the unpopular policies implemented during the Trump administration.
While it’s true that the big four major broadcasters see significantly more competition courtesy of the streaming evolution, that doesn’t mean that letting these media giants consolidate further won’t be harmful. Aside from the usual massive layoffs (which merging parties will pretend won’t happen… until they do), there’s zero real indication such consolidation benefits the public interest.
At the same time they’re arguing for further consolidation among the big four broadcasters, the National Association of Broadcasters is also calling on the FCC to further erode consolidation restrictions on radio, arguing, again, that increased competition from streaming means that consolidation restrictions are no longer necessary.
In his own public filing, Christopher Terry, Assistant Professor of Media Law at the University of Minnesota, notes that the FCC’s policy approach to media consolidation has been a hot mess for the better part of several decades, consistently resulting in the opposite of the FCC’s stated objectives:
We ask a simple question that we hope the agency will consider in its assessment, “How will more consolidation benefit the public interest?” If the FCC’s local radio ownership limits are to be raised, additional ownership consolidation at the local market level is almost certain to follow. We are skeptical that additional consolidation, a policy likely to result in fewer competitors will result in better competition, to say nothing of the effects that further blind reliance on the benefits of economy of scale by agency will have on localism and diversity.
There’s really no shortage of evidence that mindless consolidation in both broadcast media and telecom has resulted in numerous, well-documented harms, especially in local media markets and particularly among marginalized communities. Similarly, there’s no evidence supporting industry claims that gutting media ownership limits results in widespread competition and innovation.
Occasionally the FCC does the right thing, such as its recent decision to send Standard General’s acquisition of Tegna to an administrative law judge out of concern that the local broadcast TV merger would result in layoffs and even lower quality local news (contrary to industry claims, online competitors don’t inherently rush to fill local “news deserts” as media consolidates).
But generally the outlook on this subject doesn’t look great. The FCC spent four years under Trump as effectively a rubber stamp to industry. Now it’s been effectively gridlocked indefinitely by the successful attacks on Sohn, meaning it can’t vote to block mergers outright. The agency specifically built to regulate telecom and media… can’t actually do its job, quite by design.
That likely means further consolidation, less quality local reporting (at a time when we’re drowning in authoritarian propaganda), and more convoluted FCC proposals that, more often than not, don’t actually fix the actual problem. Meanwhile the myopic fixation exclusively on “Big Tech” policy means this stuff routinely flies under the radar, something media and telecom giants surely appreciate.
Over the last few months, Elon Musk’s handpicked journalists have revealed less and less with each new edition of the “Twitter Files,” to the point that even those of us who write about this area have mostly been skimming each new release. Each one confirms, yet again, that these reporters have no idea what they’re talking about, cherry pick misleading examples, and then misrepresent basically everything.
It’s difficult to decide whether these releases even deserve the credibility of a proper debunking, but sometimes a few out-of-context snippets from the Twitter Files, mostly from Matt Taibbi, get picked up by others, and it becomes necessary to dive back into the muck to clean up the mess that Matt has made yet again.
Unfortunately, this seems like one of those times.
Over the last few “Twitter Files” releases, Taibbi has been pushing hard on the false claim that, okay, maybe he can’t find any actual evidence that the government tried to force Twitter to remove content, but he can find… information about how certain university programs and non-governmental organizations received government grants… and then set up “censorship programs.”
It’s “censorship by proxy!” Or so the claim goes.
Except it’s not even remotely accurate. The issue, again, comes down to some pretty fundamental concepts that Taibbi seems unable to grasp. Let’s go through them.
Point number one: Studying misinformation and disinformation is a worthwhile field of study. That’s not saying that we should silence such things, or that we need an “arbiter of truth.” But the simple fact remains that some have sought to use misinformation and disinformation to try to influence people, and studying and understanding how and why that happens is valuable.
Indeed, I personally tend to lean towards the view that most discussions regarding mis- and disinformation are overly exaggerated moral panics. I think the terms are overused, and often misused (frequently just to attack factual news that people dislike). But, in part, that’s why it’s important to study this stuff. And part of studying it is to actually understand how such information is spread, which includes across social media.
Point number two: It’s not just an academic field of interest. For fairly obvious reasons, companies that are used to spread such information have a vested interest in understanding this stuff as well, though to date, it’s mostly been the social media companies that have shown the most interest in understanding these things, rather than say, cable news, even as some of the evidence suggests cable news is a bigger vector for spreading such things than social media.
Still, the companies have an interest in understanding this stuff, and sometimes that includes these organizations flagging content they find and sharing it with the companies for the sole purpose of letting those companies evaluate whether the content violates existing policies. And, once again, the companies regularly did nothing after noting that the flagged accounts didn’t violate any policies.
Point number three: governments also have an interest in understanding how such information flows, in part to help combat foreign influence campaigns designed to cause strife and even violence.
Note what none of these three points are saying: that censorship is necessary or even desired. But it’s not surprising that the US government has funded some programs to better understand these things, and that includes bringing in a variety of experts from academia and civil society and NGOs to better understand these things. It’s also no surprise that some of the social media companies are interested in what these research efforts find because it might be useful.
And, really, that’s basically everything that Taibbi has found in his research. There are academic centers and NGOs that have received some grants from various government agencies to study mis- and disinformation flows. Also, sometimes Twitter communicated with those organizations. Notably, many of his findings actually show that Twitter employees flatly disagreed with the conclusions of those research efforts. Indeed, some of the revealed emails show Twitter employees being somewhat dismissive of the quality of the research.
What none of this shows is a grand censorship operation.
However, that’s what Taibbi and various gullible culture warriors in Congress are arguing, because why not?
So, some of the organizations in question have decided they finally need to do some debunking of their own. I especially appreciate the University of Washington (UW), which published a step-by-step debunker that, in any reasonable world, would completely embarrass Matt Taibbi for the very obvious fundamental mistakes he made:
False impression: The EIP orchestrated a massive “censorship” effort. In a recent tweet thread, Matt Taibbi, one of the authors of the “Twitter Files” claimed: “According to the EIP’s own data, it succeeded in getting nearly 22 million tweets labeled in the runup to the 2020 vote.” That’s a lot of labeled tweets! It’s also not even remotely true. Taibbi seems to be conflating our team’s post-hoc research mapping tweets to misleading claims about election processes and procedures with the EIP’s real-time efforts to alert platforms to misleading posts that violated their policies. The EIP’s research team consisted mainly of non-expert students conducting manual work without the assistance of advanced AI technology. The actual scale of the EIP’s real-time efforts to alert platforms was about 0.01% of the alleged size.
Now, that’s embarrassing.
There’s a lot more that Taibbi misunderstands as well. For example, the freak-out over CISA:
False impression: The EIP operated as a government cut-out, funneling censorship requests from federal agencies to platforms. This impression is built around falsely framing the following facts: the founders of the EIP consulted with the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) office prior to our launch, CISA was a “partner” of the EIP, and the EIP alerted social media platforms to content EIP researchers analyzed and found to be in violation of the platforms’ stated policies. These are all true claims — and in fact, we reported them ourselves in the EIP’s March 2021 final report. But the false impression relies on the omission of other key facts. CISA did not found, fund, or otherwise control the EIP. CISA did not send content to the EIP to analyze, and the EIP did not flag content to social media platforms on behalf of CISA.
There are multiple other false claims that UW debunks as well, including that it was a partisan effort, that it happened in secret, or that it did anything related to content moderation. None of those are true.
The Stanford Internet Observatory (SIO), which works with UW on some of these programs, ended up putting out a similar debunker statement as well. For whatever reason, the SIO seems to play a central role in Taibbi’s fever dream of “government-driven censorship.” He focuses on projects like the Election Integrity Partnership and the Virality Project, both of which were focused on looking at the flows of viral misinformation.
In Taibbi’s world, these were really government censorship programs. Except, as SIO points out, they weren’t funded by the government:
Does the SIO or EIP receive funding from the federal government?
As part of Stanford University, the SIO receives gift and grant funding to support its work. In 2021, the SIO received a five-year grant from the National Science Foundation, an independent government agency, awarding a total of $748,437 over a five-year period to support research into the spread of misinformation on the internet during real-time events. SIO applied for and received the grant after the 2020 election. None of the NSF funds, or any other government funding, was used to study the 2020 election or to support the Virality Project. The NSF is the SIO’s sole source of government funding.
They also highlight how the Virality Project’s work on vaccine disinformation was never about “censorship.”
Did the SIO’s Virality Project censor social media content regarding coronavirus vaccine side-effects?
No. The VP did not censor or ask social media platforms to remove any social media content regarding coronavirus vaccine side effects. Theories stating otherwise are inaccurate and based on distortions of email exchanges in the Twitter Files. The Project’s engagement with government agencies at the local, state, or federal level consisted of factual briefings about commentary about the vaccine circulating on social media.
The VP’s work centered on identification and analysis of social media commentary relating to the COVID-19 vaccine, including emerging rumors about the vaccine where the truth of the issue discussed could not yet be determined. The VP provided public information about observed social media trends that could be used by social media platforms and public health communicators to inform their responses and further public dialogue. Rather than attempting to censor speech, the VP’s goal was to share its analysis of social media trends so that social media platforms and public health officials were prepared to respond to widely shared narratives. In its work, the Project identified several categories of allegations on Twitter relating to coronavirus vaccines, and asked platforms, including Twitter, which categories were of interest to them. Decisions to remove or flag tweets were made by Twitter.
In other words, as was obvious to anyone who actually had followed any of this while these projects were up and running, these are not examples of “censorship” regimes. Nor are they efforts to silence anyone. They’re research programs on information flows. That’s also clear if you don’t read Taibbi’s bizarrely disjointed commentary and just look at the actual things he presents.
In a normal world, the level of just outright nonsense and mistakes in Taibbi’s work would render his credibility completely shot going forward. Instead, he’s become a hero to a certain brand of clueless troll. It’s the kind of transformation that would be interesting to study and understand, but I assume Taibbi would just build a grand conspiracy theory about how doing that was just an attempt by the illuminati to silence him.
I wrote last week about the bizarrely bad House Oversight hearing that was supposed to expose how Twitter, the deep state, and the, um, “Biden Crime Family” conspired to suppress the NY Post’s story about Hunter Biden’s laptop. Of course, wishful thinking does not make facts, and we already know that story is totally false. The hearing not only reconfirmed that the GOP’s fantasy scenario never happened, it also revealed that the Trump White House actually demanded that tweets insulting the President be taken down, and that Twitter bent over backwards to give Trump more leeway, even after he broke clear rules. It was something of a disaster hearing for the GOP.
But, one of the craziest bits of the hearing came from new Congressional Rep. Anna Paulina Luna, who worked for Turning Point USA and PragerU before being elected. Her five minutes has garnered some extra attention for being even crazier than either Reps. Lauren Boebert or Marjorie Taylor Greene, both of whom had pretty crazy rants.
In particular, Rep. Luna (who has been facing some interesting news reporting of late) made some claims about there being a conspiracy between Twitter and the government to communicate via “the private cloud server”… Jira.
Of course, as anyone with even the slightest bit of understanding about, well, anything, could tell you, Jira is issue and project tracking software, normally used for things like bug tracking. Luna claimed this was a violation of the 1st Amendment, because she apparently hasn’t the slightest clue how the 1st Amendment actually works.
From the transcript (helpfully provided by Tech Policy Press, though we’ve corrected it based on the video), you can see former Twitter exec Yoel Roth’s confusion over all this. Anyone who understands this stuff can recognize why: he realizes she’s completely misconstruing what Jira is and what it does. But Rep. Luna seems to think she’s caught Roth in a giant conspiracy.
Rep. Anna Luna (R-FL):
Mr. Roth. Mr. Roth, have you communicated with government officials ever on a platform called Jira? Yes or no? Real quick answer, we’re on the clock, yes or no?
Yoel Roth:
Not to the best of my recollection.
Rep. Anna Luna (R-FL):
Not to your recollection. Great. Have, if you did in the event, communicate who would’ve had access to this platform.
Yoel Roth:
That’s the nature of my confusion. JIRA’s…
Rep. Anna Luna (R-FL):
Okay. Did you ever speak to government officials on Jira regarding taking down social media posts?
Yoel Roth:
Again, not to the best of my recollection.
Rep. Anna Luna (R-FL):
Can you explain to me why the federal government would ever have interest in communicating through Jira? Mind you, a private cloud server with social media companies without oversight to censor American voices? I wanna let you know that this is a violation of the First Amendment and the federal government is colluding with social media companies to censor Americans. Mr. Chairman, I ask for unanimous consent to submit these graphics into record. And Mr. Roth, I’m gonna refresh your memory for you this flow chart.
Rep. James Comer (R-KY):
Without objection so ordered.
Rep. Anna Luna (R-FL):
Thank you chair. This flow chart shows the following Federal agency’s social media companies, Twitter, leftist, nonprofits, and organizations communicating regarding their version of misinformation using Jira, a private cloud server. On this chart, I wanna annotate that the Department of Homeland Security, which has a following branches, cybersecurity and infrastructure security agency, also known as CISA Countering Foreign Intelligence Task Force, now known as the Misinfo, Disinfo and Mal-information, MDM, this was again, used against the American people. The Election Partnership Institute or Election Integrity Partnership, EIP, which includes the following, Stanford Internet Observatory, University of Washington Center for Informed Public, Graphika and Atlantic Council’s Digital Forensic Research Lab. And potentially according to what we found on the final report by EIP, the DNC, the Center for Internet Security, CIS- a nonprofit funded by DHS, the National Association of Secretaries of State, also known as NASS and the National Association of State Election Directors, NASED.
And in this case, because there are other social media companies involved, Twitter, what do all of these groups though, have in common? And I’m going to refresh your memory. They were all communicating on a private cloud server known as Jira. Now, the screenshot behind me, which is an example of one of thousands shows on November 3rd, 2020, that you, Mr. Roth, a Twitter employee, were exchanging communications on Jira, a private cloud server with CISA, NASS, NASED, and Alex Stamos, who now works at Stanford and is a former security of security officer at Facebook to remove a posting. Do you now remember communicating on a private cloud server to remove a posting? Yes or no?
Yoel Roth:
I wouldn’t agree with the characteristics.
Rep. Anna Luna (R-FL):
I don’t care if you agree. Do you, this is, this is your stuff, yes or no? Did you communicate with a private entity, the government agency on a private cloud server? Yes or no?
Yoel Roth:
The question was, if I…
Rep. Anna Luna (R-FL):
Yes or no? Yeah, I’m on time. Yes or no?
Yoel Roth:
Ma’am, I don’t believe I can give you a yes or no.
Rep. Anna Luna (R-FL):
Well, I’m gonna tell you right now that you did and we have proof of it. This ladies and gentlemen, is joint action between the federal government and a private company to censor and violate the First Amendment. This is also known, and I’m so glad that there’s many attorneys on this panel, joint state actors, it’s highly illegal. You are all engaged in this action, and I want you to know that you will be all held accountable. Ms. Gadde, are you still on CISA’s Cybersecurity Advisory Council? Yes or no?
Vijaya Gadde:
Yes, I am.
Rep. Anna Luna (R-FL):
Okay. For those who have said that this is a pointless hearing, and I just wanna let you guys all know, we found that Twitter was indeed communicating with the federal government to censor Americans. I’d like to remind you that this was all in place before January 6th. So, to say that these mechanisms weren’t in place, and to make it about January 6th, I wanna let you know that you guys were actually in control of all of the content and clearly have proof of that. Now, if you don’t think that this is important to your constituents and the American people from those saying that this was a pointless hearing, I suggest you find other jobs. Chairman, I yield my time.
If you actually want to watch all this play out, it’s at 5 hours and 31 minutes in this video (the link should take you to that point). You can see how proud Luna is of herself as she thinks she’s proven “joint state action” and found the secret “Jira private cloud server” where social media and government actors colluded to censor people.
The problem, of course, is that none of this is even remotely true. Whether Luna knows it’s not true, has very stupid staffers who told her something false, or if they just don’t care because it sounds good… I don’t know. I do know that Luna has continued to take a victory lap on this nonsense, including claiming on Steve Bannon’s podcast that she caught Roth “lying” under oath to a member of Congress, and she insisted that the panelist’s stunned faces were not because they were realizing just how confused Luna was about all this, but (she said) because they all wanted to immediately text their lawyers about how in trouble they were.
So, let’s debunk all of this nonsense. And I won’t even bother digging into the fact that at the time of this supposed smoking gun, Trump was in office, and his handpicked director ran CISA. There’s so much other dumb stuff that I don’t have time to spend any more on that point.
Now, once again, Jira is a ticketing system, and a widely used one. It is not a “private cloud server” for “communicating.”
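To make concrete what “communicating on Jira” actually amounts to, here’s a minimal sketch of the kind of request body Jira’s standard REST “create issue” endpoint (`POST /rest/api/2/issue`) accepts. The project key, summary text, and issue type below are entirely hypothetical, for illustration only; the point is that a Jira interaction is just a tracked ticket, not a secret communications channel:

```python
# Illustrative only: Jira is Atlassian's issue tracker. "Communicating" on it
# means filing or commenting on tickets, typically via its REST API.
# The project key and field values here are hypothetical examples.

def build_issue_payload(project_key: str, summary: str, description: str) -> dict:
    """Build the JSON body Jira's create-issue endpoint expects."""
    return {
        "fields": {
            "project": {"key": project_key},   # which tracked project the ticket lives in
            "summary": summary,                # one-line ticket title
            "description": description,        # free-text detail for analysts
            "issuetype": {"name": "Task"},     # standard Jira issue type
        }
    }

payload = build_issue_payload(
    "EIP",  # hypothetical project key
    "Possible election misinformation in viral Facebook post",
    "Screenshot attached; routed for analyst review.",
)
print(payload["fields"]["issuetype"]["name"])
```

In other words, the “private cloud server” is a bug tracker: every action leaves a structured, auditable ticket, which is roughly the opposite of a covert back channel.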
All of the details of what’s going on here were totally public already. The Election Integrity Partnership, a private project run by the Stanford Internet Observatory, the UW Center for an Informed Public, Graphika, and the Digital Forensic Research Lab, has been quite open and public about what it did to try to track and monitor election mis- and disinformation.
They released a big report in 2021, called The Long Fuse, that details how they used Jira to track possible election disinfo vectors. They used it internally, but they were also able to “tag” in different organizations if they thought it was necessary. This is described pretty clearly and publicly in the report on pages 18 and 19:
To illustrate the scope of collaboration types discussed above, the following case study documents the value derived from the multistakeholder model that the EIP facilitated. On October 13, 2020, a civil society partner submitted a tip via their submission portal about well-intentioned but misleading information in a Facebook post. The post contained a screenshot (See Figure 1.4).

In their comments, the partner stated, “In some states, a mark is intended to denote a follow-up: this advice does not apply to every locality, and may confuse people. A local board of elections has responded, but the meme is being copy/pasted all over Facebook from various sources.” A Tier 1 analyst investigated the report, answering a set of standardized research questions, archiving the content, and appending their findings to the ticket. The analyst identified that the text content of the message had been copied and pasted verbatim by other users and on other platforms. The Tier 1 analyst routed the ticket to Tier 2, where the advanced analyst tagged the platform partners Facebook and Twitter, so that these teams were aware of the content and could independently evaluate the post against their policies. Recognizing the potential for this narrative to spread to multiple jurisdictions, the manager added in the CIS partner as well to provide visibility on this growing narrative and share the information on spread with their election official partners. The manager then routed the ticket to ongoing monitoring. A Tier 1 analyst tracked the ticket until all platform partners had responded, and then closed the ticket as resolved.
According to two different people I spoke to at the EIP, this Tier 2 setup, where companies got tagged in, happened rarely. Instead, these tickets were mostly just used internally for EIP’s own research efforts. But, either way, note the issue. This is not government employees telling social media to take down posts. This is the EIP, basically a bunch of disinformation researchers, conducting research and escalating issues to companies to be “independently evaluated against their policies.”
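The ticket lifecycle the report describes can be sketched as a simple state sequence. To be clear, the state names and transition order below are my own paraphrase of the report’s case study, not the EIP’s actual software:

```python
# A minimal sketch of the EIP ticket-routing flow described in The Long Fuse.
# State names paraphrase the report's case study; nothing here is EIP code.
from enum import Enum, auto

class TicketState(Enum):
    SUBMITTED = auto()      # tip arrives via the submission portal
    TIER1_REVIEW = auto()   # Tier 1 analyst investigates and archives content
    TIER2_REVIEW = auto()   # advanced analyst may tag in platform partners
    MONITORING = auto()     # ticket tracked until partners respond
    RESOLVED = auto()       # closed once all partners have responded

# The "happy path" from the report's case study, in order:
FLOW = [
    TicketState.SUBMITTED,
    TicketState.TIER1_REVIEW,
    TicketState.TIER2_REVIEW,
    TicketState.MONITORING,
    TicketState.RESOLVED,
]

def next_state(state: TicketState) -> TicketState:
    """Advance a ticket one step along the documented path."""
    return FLOW[FLOW.index(state) + 1]

print(next_state(TicketState.TIER2_REVIEW).name)
```

Note where the platforms appear in this flow: they get tagged in at Tier 2 purely for visibility, and the decision about whether a post violates policy stays entirely with them.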
Now, as for the “smoking gun” which Luna showed where she claimed she’s proven “state action,” it’s very blurry and impossible to see in the C-SPAN video, and she didn’t tweet it either. Perhaps because it kinda debunks her entire argument.
The screenshot also isn’t anything secret. It was part of EIP’s own presentation explaining how the EIP worked! In this 12-minute video, Stanford’s Alex Stamos explains the whole process, and at 4 minutes and 14 seconds he shows a specific example, which appears to be the blurry one Luna claimed was her smoking gun. Except when you look at it, you see it’s actually an item that (1) the EIP (not government officials) found and highlighted as actual election disinfo (someone claiming to be a poll worker burning ballots for anyone who voted for Trump); (2) they tagged in Yoel Roth from Twitter, who, rather than just taking it down, actually pushed back, asking “Is there any evidence establishing that this was a hoax”; (3) the EIP then reached out to the relevant election board to see if they had any proof that it was a hoax; and (4) they got back a press release from the Election Board saying it was a hoax.
That is… not the government colluding to censor Americans. Nor is it Yoel Roth communicating with government officials. It’s EIP (not a gov’t org) raising a potential issue that clearly violates Twitter’s policies, but rather than immediately taking it down, Roth wants actual evidence. That then causes EIP to reach out to other orgs who can speak to the government officials and find out if there’s any further evidence.
In other words, nothing shown in the screenshot is Yoel communicating with government officials (only with EIP). Nothing shown is government officials demanding Twitter censor anyone. Instead, it shows private actors flagging some potentially consequential election disinfo. Finally, nothing in it shows that Twitter is quick to censor content based on these requests, rather it shows Yoel’s sole communication in the chain pushing back on what seems to be pretty clear disinfo, but demanding actual evidence that it’s false before he is willing to take action. Also, none of it was secret! EIP literally posted it themselves to brag about how their system worked to share useful information about election disinfo.
Once again, America, I beg you: elect better people.
Back in 2016, you might recall how the Russian government was caught hacking into the DNC. It wasn’t particularly subtle; a Russian intelligence officer pretending to be a Romanian hacker made the dumb mistake of forgetting to turn on his VPN, revealing his Russian intelligence agency IP address to the world. The data he obtained concerning ongoing squabbling within the DNC was later leaked to the press to influence the 2016 election, and the rest is well-documented history.
The Nation then published a 2017 article by Patrick Lawrence claiming the DNC hack was actually an inside job: that the files had been copied locally by a DNC insider rather than stolen by Russian hackers. The problem: it was all absolute, unrefined, 100% bullshit.
What actually happened? Seven years after the fact, journalist Duncan Campbell has finally published a story examining The Nation’s odd editorial history of stifling internal criticism of Russia. It also rips apart The Nation’s article, written by Patrick Lawrence, who, Campbell claims, repeated baseless claims made by pro-Trump trolls and hackers pretending to be intelligence analysts:
Lawrence invented situations and people, got facts wrong, and made far-reaching claims without substantiation. Information that Lawrence described as “hard evidence” had, in reality, been manufactured by members of a Trump-supporting website, Disobedient Media, founded in 2017 by William Craddick, a former law student who claimed to have started the “Pizzagate” conspiracy theory. The primary source in Lawrence’s story, cited eighteen times, was an anonymous figure, a supposed forensic expert known as “Forensicator.” That name was created by Disobedient Media in consultation with Tim Leonard, a British hacker, as an identity through which to present the “Forensicator report,” the document purporting to substantiate the “inside job” theory.
At the time, we pointed out how one of the key claims, that the files had been transferred too quickly to have been copied remotely over broadband, was absolute bullshit that any actual intelligence expert or fact checker would have noticed. That resulted in an anti-Techdirt temper tantrum by the fake news troll in question over at his since-dismantled website.
Another cornerstone of The Nation’s story, the claim that a group of intelligence professionals including William Binney (dubbed the “VIPS”) had reviewed “Forensicator’s” evidence and corroborated its claims, also proved to be bullshit. Binney would later admit to Campbell that the entire thing was a “fabrication”:
When I met with Binney the next month, however, he told me that, when the Lawrence piece was published, the VIPS had not actually checked the evidence or reasoning in the Forensicator report. When Binney eventually looked into one of its key claims—that the stolen data could be proven to have been copied directly at a computer on the east coast—he changed his mind. There was “no evidence to prove where the copy was done”, he told me. The data “Forensicator” had given to VIPS had been “manipulated”, Binney said, and was “a fabrication”.
At that point, the pile-on was afoot, and numerous outlets cited security experts who also noted that The Nation story was bullshit. Campbell states that, rather than pulling the story, The Nation co-owner and former editor Katrina vanden Heuvel finally, after significant pressure, affixed a “we were just asking questions” preamble to the head of the piece, which was only quietly pulled offline last year (copy here).
Both vanden Heuvel and new Nation editorial boss D.D. Guttenplan downplay the monumental fuck-up in conversations with Campbell, at one point urging him to “get a life”:
When I ask Guttenplan about the controversy surrounding the Lawrence piece, he replies, “Water has gone under the bridge. I am comfortable.” He adds, “The Nation is a beacon for progressive ideas, democratic politics, women’s rights, racial and economic justice, and open debate between liberals and radicals.” Any damage done to the reputation of the magazine is minor, he argues, compared to all of the good it has done. What about the objections of his staff? “I don’t see the point of obsessing about it,” Guttenplan concludes. “Get a life!”
In 2018, a DOJ indictment against nearly a dozen Russian hackers would lay out in detail how Russian intelligence compromised the DNC, stole data, then carefully leaked that data to outlets like The Intercept to divide Democrats and improve Trump’s chance of winning the 2016 election (the author of said piece has since enjoyed a lucrative career spreading authoritarian apologia).
Campbell notes that Lawrence was allowed to write fifteen more features for The Nation in the year after the story was published, and there’s been no shortage of similar stories at the outlet written since by other authors with a tendency to downplay Russian authoritarianism. Some even referencing the “DNC hacked itself” theory as established fact.
Campbell claims that his story, originally slated to appear in 2018, was killed by Columbia Journalism Review (CJR) and its then-new editor Kyle Pope. Pope this week denied the story was killed, claiming it wasn’t run because it was late. Campbell has since written a second story outlining his experiences with CJR, claiming the outlet had previously undisclosed business relationships with The Nation it didn’t want to jeopardize, and that his story was “slow-walked to dismissal” after a year-long editing process.
Ultimately the whole dumb thing remains a cautionary tale of propaganda’s effective reach and U.S. journalism’s ongoing failure to counter or even recognize it, whether it’s coming from the U.S. or Russian government or a basement-dwelling troll half a world away. To this day, the lie that the DNC hacked itself remains a stone-cold fact in the brains of many right wingers and conspiracy theorists, and The Nation still, the better part of a decade later, hasn’t meaningfully owned the “error” heard ’round the world.
This is some bad-looking precedent. Everyone is right to be concerned about election disinformation, especially disinformation intended to keep certain people from voting. But historically, it has been public officials facing criminal charges for voter suppression, not toxic Twitter trolls.
And Douglas Mackey, known as “Ricky Vaughn” on Twitter, is definitely toxic. He and his followers created social media campaigns during the 2016 election that attempted to dupe people (Hillary Clinton voters, specifically) into casting their votes via text message or social media posts, hoping to steer them away from venues where votes could actually be cast.
For that, Mackey was arrested and charged by the DOJ. Even the DOJ admitted this prosecution was novel: the first time a person had been criminally charged with election interference for trolling people on social media. According to the DOJ, Mackey’s efforts resulted in “4,900 unique phone numbers” attempting to vote by phone.
That’s pretty disturbing, if true. But is it actually a criminal act? Misleading people during election season is the national pastime, one often enjoyed by political candidates. The federal court handling this case says that something often considered to be nothing more than noxious speech — something often successfully countered with more speech — is actually a criminal act. (h/t Paul Seamus Ryan)
The decision [PDF] goes through a lot of legal paperwork before arriving at this conclusion, starting out with the question of venue. The court says that because tweets can be received nearly anywhere, the venue is proper, even though Mackey resides in the Southern District of New York, rather than in the Eastern District, where the prosecution is being brought.
Defendant Mackey argues in his reply brief that because the Government has not presented past cases where criminal venue was established by Tweets, communications using Twitter cannot properly support a finding of venue. (Reply at 2.) So narrow a reading of the relevant case law would ignore the interpretative dynamism necessitated by the rapid technological change of our era. As more and more Americans choose to communicate via Twitter and other messaging platforms rather than by phone or email, the judiciary’s understanding of how continuing crimes can be committed through electronic communications must keep pace and evolve. Although the cases discussed above did not deal directly with communications via Twitter, the Second Circuit’s cases on phone calls, emails, text messages, faxes, chat room messages, and wire transfers as overt acts illustrate that the government can establish venue where such electronic communications were sent to or received by individuals in the venue district. Tweets are themselves electronic communications, so the Government may establish venue based on where Tweets are foreseeably received.
The court then handles Mackey’s argument that he wasn’t “fairly warned” that attempting to deter voting by deception was a criminal act, something that violates his due process rights. It’s an important question to raise, since it deals with criminal intent — something that’s essential to criminal conspiracy charges. Here’s where things start looking pretty dicey. The court cites plenty of precedent, but none appears to be on point. Almost all of it deals with politicians, election officials, and others directly involved in tallying votes engaging in criminal acts of voter suppression. There are also several cases where voters engaged in voter fraud by stuffing ballot boxes, forging ballots, and “incorrectly filling out ballots on behalf of illiterate voters.” Almost every case deals with direct interaction with the ballot system, rather than someone just telling voters something that wasn’t true.
This is all fine, says the court. The law can be read to cover Mackey’s acts, and that’s how it’s going to be read by this court.
Defendant Mackey is correct that many–but not all–of the cases above pertain to physical acts such as stuffing a ballot box or counting fraudulent votes. These cases did not, however, rely on the physicality of the acts to reach their holdings. Indeed, many of those cases raised a similar question to the one before the court: whether the statute was “sufficiently broad in its scope to include the offense” charged. Foss v. United States., 266 F. 881, 882 (9th Cir. 1920). Not once has a federal court’s response to that question been defined by the offense’s corporeal tangibility. See e.g., Saylor, 322 U.S. at 388 (deciding that the statute included the charged offense based solely because there was a conspiracy “directed at the personal right of the elector to cast his own vote and to have it honestly counted”). Nor does the statute or the case law offer any reason why a court would rely on that fact.
Maybe the court feels this way, but it’s unclear whether Mackey truly thought he was engaging in a criminal act. Perhaps he might not have engaged in this expansive trolling effort if he thought it was actually a crime, rather than just a supremely shitty thing to do. Plenty of voter-related trolling occurred during the run-up to the election, with social media users deliberately misinforming others about voting dates, the legitimacy of absentee ballots, locations of ballot drop-off points, etc. But it appears Mackey (and some co-conspirators) are the only ones to be criminally charged for engaging in this heinous form of speech.
Mackey’s First Amendment challenge to the application of the law in this way is also dismissed by the federal court. The court says that the First Amendment does protect political speech, but this speech wasn’t political. It was deception intended to deter certain people from casting their votes.
The instant application of Section 241 does not attempt to regulate speech about the substance of what is on the ballot. Instead, it attempts to protect access to the ballot.
While it is possible that regulation of election misinformation or disinformation could, under other circumstances, be unconstitutional as impermissible proscriptions of political speech, this prosecution targets “speech that harms the election process,” rather than speech about a candidate or a candidate’s views.[…]If Defendant Mackey had tweeted false statements about Hillary Clinton’s policy positions, for instance, a different analysis would be necessary. But the issue at bar is whether Tweets telling one candidate’s supporters that they can vote by text or Tweet, therefore making “false statements about election procedures, such as the day the election will be held, the proper place to cast one’s vote, or voting requirements” are proscribable utterances.
The court sums things up by saying it’s a good law (even though it’s never been used this way before) and that it’s fine the government is using it this way, even though it had other ways of countering Mackey’s deceptive speech.
This compelling interest undoubtedly includes making sure voters have accurate information about how, when, and where to vote. Prosecutions such as the one before this court are one of the few tools at the Government’s disposal for doing so. Counter speech, a typical mode of countering false speech, is unlikely to be of much use in the context of tweets spread across the far reaches of the internet in the days and hours immediately preceding an election.
Yes, it’s true that counter speech during the “days and hours immediately preceding an election” would be of limited utility. But the standard isn’t what works best for the government. An arrest that took place more than four years after the alleged crime was committed isn’t exactly a timely response either. And it’s unlikely to have much of an effect on election disinformation unless the government is willing to treat everyone who engages in this form of speech the same way. Selective prosecution isn’t an effective deterrent. It tends to make people more skeptical of the government and less likely to believe these criminal charges aren’t politically motivated.
A jury may find the government’s acts and this apparent incursion into protected speech too problematic to deliver a guilty verdict. But until it’s in the jury’s hands, certain election disinformation — if disseminated by certain people — is apparently a criminal offense. When something is this vague and selective, it’s not a deterrent. It’s a chilling effect, which is suppression of free speech. And this court, unfortunately, seems fine with that.
Frustrated by factual reality, science, and an independent press, the GOP and its wealthy backers have spent the better part of forty years building an alternative reality propaganda machine across AM radio, local broadcasting (with the help of Sinclair Broadcasting), fake “pink slime” local newspapers, cable news (OANN, Newsmax, Fox), and now the Internet.
While this propaganda machine has adequately insulated the modern Trump and DeSantis GOP from the pesky menace of factual reality, there have been some downsides. The GOP’s belief that it no longer has to participate in public debates, for example, has resulted in a crop of insular, unpopular, and strange candidates who lack broader appeal, because they’re not participating in factual reality.
Amusingly, at least some Republican advisors appear to have realized this, and are urging the party to spend more time participating in real debates hosted by actual journalists. Or, at least, having debates where actual journalists are in attendance for some window dressing:
A Republican familiar with the conversations said the RNC is considering pairing mainstream outlets with conservative outlets as co-moderators, a regular feature of 2016 debates as well, to address member concerns about bias. The RNC’s proposal request includes a section for networks to fill out that dives into whether they’d be open to partnerships.
But part of the goal, the person said, would be to ensure candidates don’t get “softball questions that aren’t of substance” and that they are forced to “talk about policy and give answers.” The RNC meeting notably comes after a midterms in which a number of candidates popular in conservative media circles struggled to connect with independent voters in the general election.
Semafor, like most mainstream U.S. political outlets, can’t candidly acknowledge that Republicans built a hugely influential and successful propaganda machine, lest it upset sources, advertisers, or event sponsors. So their story kind of amusingly tap dances around the fact that a lot of the party’s problems in the midterms stemmed from out of touch delusion built on the back of a massively successful party propaganda machine.
It’s not clear that the party of Trump and DeSantis, whose entire political careers involve agitating and dividing Americans using a rotating platter of unhinged conspiracy, bigotry, and outrage over everything from more energy-efficient game consoles to inclusive candy branding, will ever actually listen to the handful of advisors warning about the impact of this isolation. In part because outrage and division are genuinely the only semi-meaningful policies they have.
GOP propaganda exploits a parade of U.S. policy failures across media (consolidation, death of local news), education (poor to no media savviness training), journalism (failure to develop independent funding models for an independent press), and the Internet (centralized social media platforms susceptible to the whims of unhinged billionaires).
But at some point, you’d imagine that the discourse and culture will develop policy fixes for some of these issues, and an immune response to candidates whose entire platform relies on unhinged conspiracies, bottomless outrage over minutiae, and vicious bigotry.
The GOP is hopeful that gerrymandering and propaganda will shield it from both factual and electoral reality for decades to come. And so far, that’s proven to be a solid bet, keeping a party with few substantive policies neck and neck in major races. The problem, again, is that candidates with heads full of pudding, hatred, and conspiracy theories aren’t going to appeal to the public; especially younger Americans who increasingly realize the modern Trump GOP is routinely and violently full of shit.
You might recall how struggling satellite TV network DirecTV recently kicked right-wing propaganda channel OANN off of its cable lineup because it simply wasn’t profitable. That prompted weeks of performative hysteria by the GOP about how they were being “unfairly censored,” even prompting the involvement of numerous Republican AGs who apparently had nothing better to do. Now DirecTV has done the same thing with Newsmax, and for the same reason: money.
To be clear, cable companies will air pretty much any monumental pile of garbage if it makes them money, so the idea that this is anything other than a boring business decision is idiotic. That didn’t stop the House GOP, which immediately proceeded to concoct an elaborate fiction in a letter to DirecTV about how the decision was part of some nefarious left-wing censorship cabal:
“It has recently been revealed that Congressional Democrats and the White House coordinated closely with private companies to de-platform, de-monetize, or otherwise limit the reach of viewpoints they oppose and classify them as ‘misinformation,’” the House Republicans wrote. “As members of the House Republican Conference, we are deeply concerned about this un-democratic assault on free speech.”
The GOP is desperate to protect a propaganda apparatus successfully built over 45 years across AM radio, television, and the Internet. It’s what all the phony support for Big Tech “antitrust reform” is about. It’s what the whining about TikTok is partially about. With an unfavorably saggy demographic shift among young Americans and extremely unpopular policies, propaganda is all the GOP actually has.
DirecTV was quick to issue a statement making it clear it would have kept Newsmax on the lineup, but the channel’s cost just wasn’t worth extending its contract, and that if right-wingers want to consume the channel’s propaganda they can still happily do so via the Internet:
“On multiple occasions, we made it clear to Newsmax that we wanted to continue to offer the network, but ultimately Newsmax’s demands for rate increases would have led to significantly higher costs that we would have to pass on to our broad customer base,” a DirecTV spokesperson told The Daily Beast shortly after midnight on Wednesday.
“Anyone, including our customers, can watch the network for free via NewsmaxTV.com, YouTube.com and on multiple streaming platforms like Amazon Fire TV, Roku and Google Play. We continually evaluate the most relevant programming to provide our customers and expect to fill this available channel with new content.”
Of course, back when the GOP had something vaguely resembling a consistent ideology, meddling with the business decisions of major companies would have been frowned upon. Now that the party has devolved into authoritarian gibberish and endless victimization porn to distract and agitate the base, it’s just dumb, performative bullshit, all the way down to the bone marrow.
Right after the 2016 election that saw Donald Trump elected President, there was this collective wail among many who were unable to comprehend how this could have happened, searching for someone to blame. Two targets quickly emerged: social media and Russia. Often the two were combined into “Russian trolls on social media.” As we’ve noted, those Russian trolls certainly existed, and certainly were trying to influence the election, but it seemed dubious to us that they had any real effect. As we noted the day after the election, it was silly to claim that social media magically made people vote for Trump.
In the time since then, we’ve seen more and more evidence showing that the impact of social media was really not at all what many people seem to believe. We’ve talked about the studies that have, repeatedly, shown that cable news had way more of an impact than anything that came out of social media, not just for the election, but also for COVID disinfo.
Now there’s a very interesting new study, published in Nature Communications by a long list of researchers (Gregory Eady, Tom Paskhalis, Jan Zilinsky, Richard Bonneau, Jonathan Nagler, and Joshua Tucker), looking at whether or not Russian trolls on social media had any real impact on the 2016 election. The summary: no, they did not.
There is widespread concern that foreign actors are using social media to interfere in elections worldwide. Yet data have been unavailable to investigate links between exposure to foreign influence campaigns and political behavior. Using longitudinal survey data from US respondents linked to their Twitter feeds, we quantify the relationship between exposure to the Russian foreign influence campaign and attitudes and voting behavior in the 2016 US election. We demonstrate, first, that exposure to Russian disinformation accounts was heavily concentrated: only 1% of users accounted for 70% of exposures. Second, exposure was concentrated among users who strongly identified as Republicans. Third, exposure to the Russian influence campaign was eclipsed by content from domestic news media and politicians. Finally, we find no evidence of a meaningful relationship between exposure to the Russian foreign influence campaign and changes in attitudes, polarization, or voting behavior. The results have implications for understanding the limits of election interference campaigns on social media.
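The study’s headline concentration figure (“only 1% of users accounted for 70% of exposures”) is just a simple top-share statistic. A minimal sketch of how such a share is computed, using made-up exposure counts rather than the study’s actual data:

```python
# Sketch of the concentration statistic behind claims like "1% of users
# accounted for 70% of exposures." The exposure counts below are synthetic,
# not the study's data.

def top_share(exposures, top_fraction=0.01):
    """Fraction of total exposures attributable to the most-exposed
    `top_fraction` of users."""
    counts = sorted(exposures, reverse=True)     # heaviest consumers first
    k = max(1, int(len(counts) * top_fraction))  # size of the top group
    return sum(counts[:k]) / sum(counts)

# 1,000 hypothetical users: ten heavy consumers, the rest barely exposed.
synthetic = [700] * 10 + [3] * 990
share = top_share(synthetic, top_fraction=0.01)
print(f"Top 1% of users saw {share:.0%} of all exposures")  # → about 70%
```

A distribution this skewed is why average exposure numbers can badly mislead: most users in such a sample saw almost nothing.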
Basically, yes, the trolls showed up and tried to sow discontent. But the people who interacted with that content were always going to vote for Trump anyway, and again, existing media was way, way, way more influential than the Russian trolls on social media.
The full report is all sorts of fascinating, and again shows how little impact the Russian trolls actually had. Especially compared to existing news media and US politicians.
The research does show that those who identified as “strongly Republican” were far more likely to encounter and interact with Russian propaganda, but that’s little surprise, since they were a key (but not the only) target of the campaign. But, again, those individuals were never going to vote for Hillary Clinton in the first place. The study used various models to determine the impact on voting and found it basically negligible.
As estimates in the first panel indicate, the relationship between the number of posts from Russian foreign influence accounts that users are exposed to and voting for Donald Trump is near zero (and not statistically significant). This is the case whether the outcome is measured as vote choice in the election itself; the ranking of Clinton and Trump on equivalent survey questions across survey waves; and with the broader measure capturing whether voting behavior more generally favored Trump or Clinton through voting abstentions, changes in vote choice, or voting for a third party. The signs on the coefficients in each case are also negative, both for the count and binary measure, a result that would be inconsistent with a relationship of exposure being favorable to Trump. It is also worth noting that none of the other explanatory variables (with the exception of sex in some models) used as controls appear to be statistically significant predictors of the change in voting preferences
As the researchers conclude:
Taking our analyses together, it would appear unlikely that the Russian foreign influence campaign on Twitter could have had much more than a relatively minor influence on individual-level attitudes and voting behavior for four related reasons. First, we find that exposure to posts from Russian foreign influence accounts was concentrated among a small group of users, with only 1% of users accounting for 70% of all exposures. Second, exposure to Russian foreign influence tweets was overshadowed by the amount of exposure to traditional news media and US political candidates. Third, respondents with the highest levels of exposure to posts from Russian foreign influence accounts were those arguably least likely to need influencing: those who identified themselves as highly partisan Republicans, who were already likely favorable to Donald Trump. Fourth, we did not detect any meaningful relationships between exposure to posts from Russian foreign influence accounts and changes in respondents’ attitudes on the issues, political polarization, or voting behavior. Each of these findings is not independently dispositive. Jointly, however, we find concordant evidence between exposure to Russian disinformation—which is both lower and more concentrated than one might expect to be impactful—and the absence of a relationship to changes in attitudes and voting behavior.
The researchers do note some limitations to their research (it focused just on tweets, and just on identified Russian influence campaigns), but it does seem noteworthy.
This is a really useful addition to the research out there, though it’s not going to stop the, ahem, disinformation that social media magically impacted the election from continuing to spread. Even if that’s disinformation about disinformation.
For several years we’ve noted how most of the calls to ban TikTok are bad faith bullshit made by a rotating crop of characters that not only couldn’t care less about consumer privacy, but are directly responsible for the privacy oversight vacuum TikTok (and everybody else) exploits.
The bill (pdf), according to its sponsors, vaguely attempts to “block and prohibit all transactions from any social media company in, or under the influence of, China, Russia, and several other foreign countries of concern.” It comes on the heels of numerous state bills attempting to ban state government employees from using TikTok on their personal devices.
Rubio’s new federal bill attempts to leverage the authority of the International Emergency Economic Powers Act (IEEPA) to ban TikTok from operating domestically here in the States, despite the fact that judges have ruled several times now that the IEEPA doesn’t grant such authority. Violating the act would result in criminal penalties of up to a $1 million fine and 20 years in prison.
Rubio trots out the now familiar argument that we simply must ban the hugely popular social media app because the Chinese could use it to propagandize children or spy on Americans:
“The federal government has yet to take a single meaningful action to protect American users from the threat of TikTok. This isn’t about creative videos — this is about an app that is collecting data on tens of millions of American children and adults every day. We know it’s used to manipulate feeds and influence elections. We know it answers to the People’s Republic of China. There is no more time to waste on meaningless negotiations with a CCP-puppet company. It is time to ban Beijing-controlled TikTok for good.”
So there are always two underlying claims when it comes to justifying a ban on TikTok. One is that the Chinese could use the app to propagandize children, for which there’s been zero meaningful evidence at any coordinated scale. The other, more valid but overstated concern is that TikTok owner ByteDance will simply funnel U.S. consumer data to the Chinese government for ambiguous surveillance purposes.
Here’s the thing though: for decades the GOP (and more than a few Democrats) have worked tirelessly to erode FTC privacy enforcement authority and funding, while fighting tooth and nail against absolutely any meaningful privacy legislation for the Internet era. That opened the door for countless app makers, data brokers, telecoms, and bad actors from all over the world (including TikTok) to repeatedly exploit an environment free of accountability and oversight.
For years, all you had to do to dodge any scrutiny was claim that the data you’re collecting is “anonymized,” a gibberish term with absolutely no meaning. Most anonymized users can be easily identified with just a smattering of additional datasets, allowing companies all around the globe to build detailed profiles of nearly every aspect of consumer behavior, from shopping and browsing habits to real-world movement and behavior patterns. Not even your health or mental health data is safe, really.
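The reason “anonymized” is a gibberish term is the linkage attack: strip the names from a dataset, and a handful of quasi-identifiers (ZIP code, birth date, sex) is usually enough to join it back to a public dataset that still has the names. A toy sketch, with synthetic records and hypothetical field names, of how trivially that join works:

```python
# Toy linkage attack (synthetic data, hypothetical field names).
# Names were stripped from the first dataset, but the quasi-identifiers
# (zip, birth date, sex) remain, and they also appear in public data.

# "Anonymized" records: names removed, sensitive attribute kept.
anonymized = [
    {"zip": "02138", "birth": "1945-07-29", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "60622", "birth": "1990-03-14", "sex": "M", "diagnosis": "asthma"},
]

# Public auxiliary data (think: a voter roll) that includes names.
voter_roll = [
    {"name": "Jane Doe", "zip": "02138", "birth": "1945-07-29", "sex": "F"},
    {"name": "John Roe", "zip": "60622", "birth": "1990-03-14", "sex": "M"},
]

def reidentify(anon_rows, aux_rows):
    """Join the two datasets on the (zip, birth, sex) quasi-identifier triple."""
    index = {(r["zip"], r["birth"], r["sex"]): r["name"] for r in aux_rows}
    matches = []
    for row in anon_rows:
        key = (row["zip"], row["birth"], row["sex"])
        if key in index:
            # The "anonymous" record now has a name attached again.
            matches.append({"name": index[key], **row})
    return matches

for m in reidentify(anonymized, voter_roll):
    print(m["name"], "->", m["diagnosis"])
```

Real-world re-identification research works the same way at scale, just with bigger auxiliary datasets; the point is that removing direct identifiers does almost nothing when the remaining columns are unique enough to act as a fingerprint.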
Bluntly, all of this is because we spent two decades prioritizing making money over consumer safety or market health. The check is long overdue, and you see the impact every time you turn around, in the form of another hack, breach, or privacy scandal.
Of course, this free-for-all was abused by foreign governments; it was never in question that corruption and a lack of market oversight would be exploited. If you actually care about national security, holding all companies and data brokers accountable for privacy abuses should be your priority. A basic, helpful, well-written privacy law should be your priority. A working, staffed, properly funded FTC should be your priority.
The GOP (and several Democrats) aren’t doing that because U.S. companies might lose some money. Instead, they’re pretending that banning a single app somehow fixes the entirety of a much bigger problem. A problem they genuinely helped create by opposing pretty much any meaningful oversight for any data-hoovering operation, provided they pinky swore they weren’t doing anything dodgy with it.
As we’ve noted several times now, you could ban TikTok immediately and the Chinese government could simply buy this (and more) data from a rotating crop of dodgy data brokers and assorted middlemen. As such, banning TikTok doesn’t actually fix any of the problems here, no matter how many times FCC Commissioner Brendan Carr claims otherwise on TV.
You can also ban TikTok if you genuinely think it helps, but if you’re not doing the other stuff, you’re not actually doing anything. Another TikTok will simply spring up in its place, because you haven’t done anything, in any way, about the underlying conditions that opened the door to U.S. consumer data abuse by foreign governments. You’ve just put on a dumb play.
If you were genuinely concerned about national security and privacy, you’d take the time to actually study the bigger problem. Vaguely pretending you’re standing up to the dastardly Chinese helps agitate and excite an often xenophobic GOP base, but unless you take meaningful, broader action, what you’re actually doing amounts to little more than some hand waving and a few farts.
I tend to think the real motivation here is actually just the usual: money. The GOP wants to force ByteDance to offload TikTok to an American billionaire of its choice. If you recall, Trump’s big “solution” for the “TikTok problem” was to sell the entire app to his buddies over at Walmart and Oracle, the latter with a long track record of its own various privacy abuses.
I’d wager this entire performance about TikTok is the lobbying off-gassing of some company that either doesn’t want to compete with TikTok directly (Facebook lobbyists can often be found trying to cause DC moral panics around TikTok), or some company or companies that hope to leverage phony privacy concerns to force ByteDance to sell them one of the most popular apps in tech history.
This is context you’ll find largely omitted from most press coverage of the story. Instead, you can watch as most press outlets unquestioningly frame politicians with an abysmal track record on consumer privacy (Brendan Carr or Marsha Blackburn quickly come to mind) as good faith champions of consumer privacy, despite the documented fact that they’re directly responsible for the very problem they’re pretending to fix.