I’m not sure we should welcome our new AI-powered robot overlords as the arbiters of elections just yet.
The media keeps telling me that deep fakes and generative AI are going to throw all of the important elections this year into upheaval. And maybe it’s true, but to date, we’ve seen very little evidence of anything serious. There are a lot of questions this year about the impact that generative AI tools will have on elections, but predictions of these tools’ power remain greatly exaggerated.
China will attempt to disrupt elections in the US, South Korea and India this year with artificial intelligence-generated content after making a dry run with the presidential poll in Taiwan, Microsoft has warned.
The US tech firm said it expected Chinese state-backed cyber groups to target high-profile elections in 2024, with North Korea also involved, according to a report by the company’s threat intelligence team published on Friday.
“As populations in India, South Korea and the United States head to the polls, we are likely to see Chinese cyber and influence actors, and to some extent North Korean cyber actors, work toward targeting these elections,” the report reads.
Microsoft said that “at a minimum” China will create and distribute through social media AI-generated content that “benefits their positions in these high-profile elections”.
And, I mean, anything’s possible, and it’s certainly good for companies and individuals alike to be on the lookout, but remember, one of the most important elections for China already happened earlier this year. The election in Taiwan. And it didn’t turn out the way that China wanted. At all.
That doesn’t mean China won’t continue to try to interfere in foreign elections, because of course it will. But it should, at the very least, lead to questions about just how effective these kinds of campaigns to manipulate elections can be.
I mean, part of Microsoft’s announcement was that China tried to use AI to influence the Taiwanese election, and it didn’t seem to have much of an impact.
Microsoft said in the report that China had already attempted an AI-generated disinformation campaign in the Taiwan presidential election in January. The company said this was the first time it had seen a state-backed entity using AI-made content in a bid to influence a foreign election.
A Beijing-backed group called Storm 1376, also known as Spamouflage or Dragonbridge, was highly active during the Taiwanese election. Its attempts to influence the election included posting fake audio on YouTube of the election candidate Terry Gou – who had bowed out in November – endorsing another candidate. Microsoft said the clip was “likely AI generated”. YouTube removed the content before it reached many users.
The Beijing-backed group pushed a series of AI-generated memes about the ultimately successful candidate, William Lai – a pro-sovereignty candidate opposed by Beijing – that levelled baseless claims against Lai accusing him of embezzling state funds. There was also an increased use of AI-generated TV news anchors, a tactic that has also been used by Iran, with the “anchor” making unsubstantiated claims about Lai’s private life including fathering illegitimate children.
Looking at Microsoft’s actual announcement, there’s surprisingly little discussion of why the attempts in Taiwan failed. It certainly talks about increased efforts, but not the rate of success.
There’s no reason not to be careful and to be thinking about these threats. But it seems like a much more interesting bit of research would have been to look at why this was so ineffective in the Taiwanese election, and if there were lessons to learn from that, rather than just hyping up the fear, uncertainty, and doubt about future elections.
The episode is brought to you with financial support from the Future of Online Trust & Safety Fund, and by our launch sponsor Modulate, the prosocial voice technology company making online spaces safer and more inclusive. In our Bonus Chat at the end of the episode, Modulate’s Director of Marketing Mark Nolan chats with Mike about the recent launch of the Gaming Safety Coalition, why it’s important for Modulate to work with other companies to create stronger gaming environments, and the importance of a hybrid approach to T&S.
It seems that state legislators across the country cannot help but introduce the absolute craziest, obviously unconstitutional bullshit, and then seem shocked when people suggest the bills are bad.
The latest comes from California state Senator Steve Padilla, who recently proposed a ridiculous bill, SB 1228, to end anonymity for “influential” accounts on social media. (I saw some people online confusing him with Alex Padilla, who is the US Senator from California, but they’re different people.)
This bill would require a large online platform, as defined, to seek to verify the name, telephone number, and email address of an influential user, as defined, by a means chosen by the large online platform and would require the platform to seek to verify the identity of a highly influential user, as defined, by asking to review the highly influential user’s government-issued identification.
This bill would require a large online platform to note on the profile page of an influential or highly influential user, in type at least as large and as visible as the user’s name, whether the user has been authenticated pursuant to those provisions, as prescribed, and would require the platform to attach to any post of an influential or highly influential user a notation that would be understood by a reasonable person as indicating that the user is authenticated or unauthenticated, as prescribed.
First off, this is unconstitutional. The First Amendment has been (rightly) read to protect anonymity in most cases, especially regarding election-related information. That’s the whole point of McIntyre v. Ohio. It’s difficult to know what Padilla is thinking, especially given his blatant admission that this bill seeks to target speech regarding elections. There are exceptions to the right to be anonymous, but they are limited to pretty specific scenarios. Cases like Dendrite lay out a pretty strict test for de-anonymizing a person (limited as precedent, though adopted by other courts), and even then only after a plaintiff demonstrates to a court that the underlying speech is actionable under the law. Not, as in this bill, because the speech is “influential.”
Padilla’s bill recognizes none of that, and almost gleefully makes it clear that he is either ignorant of the legal precedents here, or he doesn’t care. As he lays out in his own press release about the bill, he wants platforms to “authenticate” users because he’s worried about misinformation online about elections (again, that’s exactly what the McIntyre case said you can’t target this way).
“Foreign adversaries hope to harness new and powerful technology to misinform and divide America this election cycle,” said Senator Steve Padilla. “Bad actors and foreign bots now have the ability to create fake videos and images and spread lies to millions at the touch of a button. We need to ensure our content platforms protect against the kind of malicious interference that we know is possible. Verifying the identities of accounts with large followings allows us to weed out those that seek to corrupt our information stream.”
That’s an understandable concern, but an unconstitutional remedy. Anonymous speech, especially political speech, is a hallmark of American freedom. Hell, the very Constitution that this law violates was adopted, in part, due to “influential” anonymous pamphlets.
The bill is weird in other ways as well. It seems to be trying to attack both anonymous influential users and AI-generated content in the same bill, and does so sloppily. It defines an “influential user” as someone where:
“Content authored, created, or shared by the user has been seen by more than 25,000 users over the lifetime of the accounts that they control or administer on the platform.”
This is odd on multiple levels. First, “over the lifetime of the account” means a ridiculously large number of accounts will, at some point, reach that threshold. Basically, you make ONE SINGLE viral post, and the social media site has to collect your data and you can no longer be anonymous. Second, does Senator Padilla really think it’s wise to require social media sites to track “lifetime” views of content? Because that could be a bit of a privacy nightmare.
And then it adds in a weird AI component. This also counts as an “influential user”:
Accounts controlled or administered by the user have posted or sent more than 1,000 pieces of content, whether text, images, audio, or video, that are found to be 90 percent or more likely to contain content generated by artificial intelligence, as assessed by the platform using state-of-the-art tools and techniques for detecting AI-generated content.
So, first, posting 1,000 pieces of AI-generated content hardly makes an account “influential.” There are plenty of AI-posting bots that have little to no followings. Why should they have to be “verified” by platforms? Second, I have a real problem with the whole “if ‘state-of-the-art tools’ identify your content as mostly AI, then you lose your rights to anonymity,” when there’s zero explanation of why, or whether or not these “state-of-the-art tools” are even reliable (hint: they’re not!). Has Padilla run an analysis of these tools?
There are higher thresholds that designate someone as “highly influential”: 100,000 lifetime user views and 5,000 potentially AI-created pieces of content. Under these terms, I would be legally designated “highly influential” on a few platforms (my parents will be so proud). But then, “large online platforms” would be required to “verify” the “influential users’” identity, including the user’s name, phone number, and email, and would be required to “seek” government-issued IDs from “highly influential” users.
There is no fucking way I’m giving ExTwitter my government ID, but under the bill, Elon Musk would be required to ask me for it. No offense, Senator Padilla, but I’m taking the state of California to court for violating my rights long before I ever hand my driver’s license over to Elon Musk at your demand.
While the bill only says that the platforms “shall seek” this info, it would then require them to add a tag “at least as large and as visible as the user’s name” to their profile designating them “authenticated” or “unauthenticated.”
It would then further require that any site allow users to block all content from “unauthenticated influential or highly influential” users.
It even gets down to the level of product management, telling “large online platforms” how they have to handle showing content from “unauthenticated” influential users:
(1) A large online platform shall attach to any post of an influential or highly influential user a notation that would be understood by a reasonable person as indicating that the user is authenticated or unauthenticated.
(2) For a post from an unauthenticated influential or highly influential user, the notation required by paragraph (1) shall be visible for at least two seconds before the rest of the post is visible and then shall remain visible with the post.
Again, there is so much that is problematic about this bill. Anyone who knows anything about anonymity would know this is so far beyond what the Constitution allows that it should be an embarrassment for Senator Padilla, who should pull this bill.
And, on top of anything else, this would become a massive target for anyone who wants to identify anonymous users. Companies are going to get hit with a ton of subpoenas or other legal demands for information on people, which they’ll have collected, because someone had a post go viral.
Senator Padilla should be required to read Jeff Kosseff’s excellent book, “The United States of Anonymous,” as penance, and to publish a book report that details the many ways in which his bill is an unconstitutional attack on free speech and anonymity.
Yes, it’s reasonable to be concerned about manipulation and a flood of AI content. But, we don’t throw out basic constitutional principles based on such concerns. Tragically, Senator Padilla failed at this basic test of constitutional civics.
There are a lot of elections worldwide, and these events invariably raise significant concerns regarding potential manipulation, particularly in light of emerging technologies such as generative AI. To help address these concerns, we are reintroducing our innovative “election threatcasting” game, Threatcast 2024, which has been designed to help users anticipate and counteract such threats.
In 2019, Mozilla commissioned us to develop Threatcast 2020, a unique election simulation game. The game was crafted to empower groups of participants, under our facilitation, to delve into the intricate dynamics of new technologies, disinformation campaigns, and public manipulation tactics that could influence the 2020 election. We successfully ran in-person sessions late in 2019. With the onset of the pandemic, our team worked to adapt the game for online play while preserving the interactive, facilitated experience. We then hosted numerous sessions online, which proved to be valuable in the lead-up to the 2020 election, providing useful, actionable insights into the electoral process and its vulnerabilities.
Now that it’s 2024, with a significant number of pivotal elections on the horizon, we are making it available again in updated form. The updates focus on keeping up with the times: exploring new technologies like generative AI, and a social media ecosystem very different from four years ago. We’re also expanding the scope: the 2020 game was heavily focused on disinformation, while the new version covers many other types of election manipulation as well.
If you’re interested in hiring us to run a Threatcast 2024 session for your company, organization, event, or any other purpose, let us know. It’s a great (and very fun) way to explore potential threats that may emerge during this critical election period. Whether you’re an internet platform seeking to build up your defenses against emerging risks, or you’re looking for a unique and insightful team-building exercise, our Threatcast 2024 is the perfect tool to equip and enlighten your team.
The game immerses players in diverse roles, allowing them to test out an array of strategies—and corresponding countermeasures—to gain an in-depth understanding of how manipulation tactics could unfold across a multitude of scenarios.
The game is currently designed to reflect the intricate dynamics of US elections and is ideally suited for groups of 15 to 30 participants. However, we pride ourselves on our flexibility and are fully prepared to tailor the game to accommodate the unique needs of specific groups. Whether it involves adjusting to different group sizes, incorporating particular constraints, or adapting the game for election scenarios outside of the US, we are committed to providing a personalized experience. Additionally, we offer the versatility of facilitating the game both in person and online to ensure accessibility and convenience for all participants.
They say that if you stand for nothing, you’ll fall for anything. So today, I’m drawing a line in the sand and standing up for free speech. Let every enemy of freedom know, let every would-be tyrant be warned, and let every petty dictator take notice: If you want Twitter to censor its users, just send me an email.
From the very beginning of Elon Musk’s foray into being a social media magnate, we pointed out that he had no fucking clue what it meant to support free speech on such a site. Supporting free speech does not mean simply “allowing troll accounts I like that were suspended for violating the rules back online.” But that seems to be Musk’s entire understanding of free speech.
For example, we’ve also noted, repeatedly, that this tweet a year ago from Musk shows someone who has not actually thought about what it means to stand up for actual free speech:
Because, that means that you’re willing to bow down to any censorial authoritarian country — something that the old Twitter (the one Musk insists did not support free speech) regularly fought back against.
And, so far, Musk has shown a willingness to bow down to authoritarian censors. Every time he’s had a chance to take a stand, he’s folded. Whereas old Twitter refused to take down any tweets from activists and journalists in India, filed a lawsuit against the government, and publicly resisted demands that it pull down criticism of President Modi, Elon caved immediately and blocked some content from activists and journalists worldwide, not just in India.
The latest is yet another example of that. Just as the Turkish election was about to take place, the government demanded that Twitter censor content critical of authoritarian strongman, Gollum-lookalike, and world’s most thin-skinned leader, Recep Erdogan. And Elon caved.
Now, the old Twitter actually had a history of pushing back against such demands, and even took the Turkish government to court after the government tried to fine the company for refusing to take down content. That wasn’t the only time. We had another story of the old Twitter refusing to block a newspaper’s feed, despite demands from the Turkish government. Back in 2014, Erdogan got so mad at Twitter that he officially blocked it from the entire country, but the citizenry got so angry that the ban was quickly reversed.
In other words, the old Twitter fought regularly over this stuff and went to court.
And Elon just folded.
And when people called him out on this, he (as per usual) got childish and defensive. Here he is insulting Matt Yglesias over this:
Yglesias is actually making a good point here. For all the talk of the Twitter Files, which Musk promised us would show the US government demanding Twitter censor people (when it showed nothing of the sort), here’s an example of a literal government demanding literal censorship, and Musk just rolls right over.
Musk’s response is nonsense. Again, the old Twitter had a long history of fighting exactly these cases as linked above. This is why we’ve pointed out over and over again that the old Twitter was one of the staunchest defenders of actual free speech and that Musk (on day one) fired the people who were the most avid free speech defenders at the company. They might have been able to tell him how to better deal with these situations.
And it’s not like people didn’t try to warn him. This issue was literally “Level Nine” of the speed run lesson plan I gave Elon. Except, even then, I thought that Elon would have the principles to first try to stand up against such authoritarian censors, but apparently I overestimated his willingness to actually fight for free speech.
Wikipedia’s Jimmy Wales highlighted this as well, noting how Wikipedia had received similar orders, but fought them (and won):
Also, note the contrast when some other governments told Elon to remove Russian propagandists. Then he refused, claiming to be a free speech absolutist. Why is this different?
And, of course, Musk’s loudest fans are defending this move, because they have no principles at all. Free speech means having principles and pushing back when governments demand you pull down content that does not violate your policies. It means standing up to governments, not bowing down to them, and letting them push you around.
So, let me ask those defending this move by Musk: are you really suggesting that caving to authoritarian threats to censor content does more for free speech than fighting back against those threats? If you say, as Musk does above, that allowing some speech in Turkey is better than being blocked entirely, then how does that same argument not apply to other actions by Twitter to remove some content (such as abusive and harassing content) that might otherwise drive users away?
With this latest move, Musk has screamed loud and clear to any censorial government out there that they just need to threaten to block Twitter and he’ll fold like a cheap suit. Meanwhile, he’ll lie and insist that the US government was censoring content, even as the Twitter Files only showed reports about accounts that might have actually violated Twitter’s policies, and the company regularly pushed back on those and refused to remove the accounts.
But for some reason he was up in arms about that, whereas here he thinks someone’s “brain fell out of their head” for simply wondering when we’ll see the “Twitter Files” for Musk’s negotiations with the Turkish government.
Once again, don’t let anyone get away with suggesting that Musk supports free speech. He clearly does not. He supports accounts that he likes being able to use a website he owns. That’s it.
So here’s the deal. If you think the Twitter Files are still something legit or telling or powerful, watch this 30 minute interview that Mehdi Hasan did with Matt Taibbi (at Taibbi’s own demand):
Hasan came prepared with facts. Lots of them. Many of which debunked the core foundation on which Taibbi and his many fans have built the narrative regarding the Twitter Files.
We’ve debunked many of Matt’s errors over the past few months, and a few of the errors we’ve called out (though not nearly all, as there are so, so many) show up in Hasan’s interview, while Taibbi shrugs, sighs, and makes it clear he’s totally out of his depth when confronted with facts.
Since the interview, Taibbi has been scrambling to claim that the errors Hasan called out are small side issues, but they’re not. They’re literally the core pieces on which he’s built the nonsense framing that Stanford, the University of Washington, some non-profits, the government, and social media have formed an “industrial censorship complex” to stifle the speech of Americans.
The errors that Hasan highlights matter a lot. A key one is Taibbi’s claim that the Election Integrity Partnership flagged 22 million tweets for Twitter to take down in partnership with the government. This is flat out wrong. The EIP, which was focused on studying election interference, flagged less than 3,000 tweets for Twitter to review (2,890 to be exact).
And they were quite clear in their report on how all this worked. EIP was an academic project to track election interference information and how it flowed across social media. The 22 million figure shows up in the report, but it was just a count of how many tweets they tracked in trying to follow how this information spread, not seeking to remove it. And the vast majority of those tweets weren’t even related to the ones they did explicitly create tickets on.
In total, our incident-related tweet data included 5,888,771 tweets and retweets from ticket status IDs directly, 1,094,115 tweets and retweets collected first from ticket URLs, and 14,914,478 from keyword searches, for a total of 21,897,364 tweets.
Tracking how information spreads is… um… not a problem now is it? Is Taibbi really claiming that academics shouldn’t track the flow of information?
Either way, Taibbi overstated the number of tweets that EIP reported by 21,894,474 tweets. In percentage terms, the actual number of reported tweets was 0.013% of the number Taibbi claimed.
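The arithmetic here is straightforward enough to check yourself. A quick sketch, using only the figures from the EIP report quoted above:

```python
# Figures from the EIP report quoted earlier.
tracked_total = 5_888_771 + 1_094_115 + 14_914_478  # tweets EIP merely *tracked*
flagged = 2_890                                     # tweets EIP actually flagged for review

overstatement = tracked_total - flagged
pct_of_claim = flagged / tracked_total * 100

print(tracked_total)           # 21897364 -- matches the report's stated total
print(overstatement)           # 21894474
print(round(pct_of_claim, 3))  # 0.013 -- flagged tweets as a % of the tracked total
```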
Okay, you say, but STILL, if the government is flagging even 2,890 tweets, that’s still a problem! And it would be if it was the government flagging those tweets. But it’s not. As the report details, basically all of the tickets in the system were created by non-government entities, mainly from the EIP members themselves (Stanford, University of Washington, Graphika, and Digital Forensics Lab).
This is where the second big error that Taibbi makes knocks down a key pillar of his argument. Hasan notes that Taibbi falsely turned the non-profit Center for Internet Security (CIS) into the government agency the Cybersecurity and Infrastructure Security Agency (CISA). Taibbi did this by assuming that when someone at Twitter noted information came from CIS, they must have meant CISA, and therefore he appended the A in brackets as if he was correcting a typo:
Taibbi admits that this was a mistake and has now tweeted a correction (though this point was identified weeks ago, and he claims he only just learned about it). I’ve seen Taibbi and his defenders claim that this is no big deal, that he just “messed up an acronym.” But, uh, no. Having CISA report tweets to Twitter was a key linchpin in the argument that the government was sending tweets for Twitter to remove. But it wasn’t the government, it was an independent non-profit.
The thing is, this mistake also suggests that Taibbi never even bothered to read the EIP report on all of this, which lays out extremely clearly where the flagged tweets came from, noting that CIS (which was not an actual part of the EIP) sent in 16% of the total flagged tweets. It even pretty clearly describes what those tweets were:
Compared to the dataset as a whole, the CIS tickets were (1) more likely to raise reports about fake official election accounts (CIS raised half of the tickets on this topic), (2) more likely to create tickets about Washington, Connecticut, and Ohio, and (3) more likely to raise reports that were about how to vote and the ballot counting process—CIS raised 42% of the tickets that claimed there were issues about ballots being rejected. CIS also raised four of our nine tickets about phishing. The attacks CIS reported used a combination of mass texts, emails, and spoofed websites to try to obtain personal information about voters, including addresses and Social Security numbers. Three of the four impersonated election official accounts, including one fake Kentucky election website that promoted a narrative that votes had been lost by asking voters to share personal information and anecdotes about why their vote was not counted. Another ticket CIS reported included a phishing email impersonating the Election Assistance Commission (EAC) that was sent to Arizona voters with a link to a spoofed Arizona voting website. There, it asked voters for personal information including their name, birthdate, address, Social Security number, and driver’s license number.
In other words, CIS was raising pretty legitimate issues: people impersonating election officials, and phishing pages. This wasn’t about “misinformation.” These were seriously problematic tweets.
There is one part that perhaps deserves some more scrutiny regarding government organizations, as the report does say that a tiny percentage of reports came from the GEC, which is a part of the State Department, but the report suggests that this was probably less than 1% of the flags. 79% of the flags came from the four organizations in the partnership (not government). Another 16% came from CIS (contrary to Taibbi’s original claim, not government). That leaves 5%, which came from six different organizations, mostly non-profits. Though it does list the GEC as one of the six organizations. But the GEC is literally focused entirely on countering (not deleting) foreign state propaganda aimed at destabilizing the US. So, it’s not surprising that they might call out a few tweets to the EIP researchers.
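To put that sliver in perspective, here is a rough back-of-the-envelope estimate (a sketch, not figures from the report itself, which gives only percentages):

```python
flagged_total = 2_890  # tweets EIP flagged for Twitter to review

# Shares of flags by source, per the percentages quoted above.
partnership_pct = 79   # the four EIP member orgs -- not government
cis_pct = 16           # Center for Internet Security -- a non-profit, not CISA
remainder_pct = 100 - partnership_pct - cis_pct  # six other orgs, GEC among them

# Even if GEC filed the full ~1%, that is on the order of a few dozen tweets.
gec_rough_ceiling = round(flagged_total * 0.01)

print(remainder_pct)      # 5
print(gec_rough_ceiling)  # 29 -- roughly the most tweets GEC could have flagged
```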
Okay, okay, you say, but even so this is still problematic. It was still, as a Taibbi retweet suggests, these organizations who are somehow close to the government trying to silence narratives. And, again, that would be bad if true. But, that’s not what the information actually shows. First off, we already discussed how some of what they targeted was just out and out fraud.
But, more importantly, regarding the small number of tweets that EIP did report to Twitter… it never suggested what Twitter should do about them, and Twitter left the vast majority of them up. The entire purpose of the EIP program, as laid out in everything that the EIP team has made clear from before, during, and after the election, was just to be another set of eyes looking out for emerging trends and documenting how information flows. In the rare cases (again less than 1%) where things looked especially problematic (phishing attempts, impersonation) they might alert the company, but made no effort to tell Twitter how to handle them. And, as the report itself makes clear, Twitter left up the vast majority of them:
We find, overall, that platforms took action on 35% of URLs that we reported to them. 21% of URLs were labeled, 13% were removed, and 1% were soft blocked. No action was taken on 65%. TikTok had the highest action rate: actioning (in their case, their only action was removing) 64% of URLs that the EIP reported to their team.
They don’t break it out by platform, but across all platforms no action was taken on 65% of the reported content. And considering that TikTok seemed quite aggressive in removing 64% of flagged content, that means that all of the other platforms, including Twitter, took action on way less than 35% of the flagged content. And then, even within the “took action” category, the main action taken was labeling.
In other words, the top two main results of EIP flagging this content were:
Nothing
Adding more speech
The report also notes that the category of content that was most likely to get removed was the out and out fraud stuff: “phishing content and fake official accounts.” And given that TikTok appears to have accounted for a huge percentage of the “removals” this means that Twitter removed significantly less than 13% of the tweets that EIP flagged for them. So not only is it not 22 million tweets, it’s that EIP flagged less than 3,000 tweets, and Twitter ignored most of them and removed probably less than 10% of them.
When looked at in this context, basically the entire narrative that Taibbi is pushing melts away.
The EIP is not part of the “industrial censorship complex.” It’s a mostly academic group that was tracking how information flows across social media, which is a legitimate area of study. During the election they did exactly that. In the tiny percentage of cases where they saw stuff they thought was pretty worrisome, they’d simply alert the platforms with no push for the platforms to take any action, and (indeed) in most cases the platforms took no action whatsoever. In a few cases, they added more speech.
In a tiny, tiny percentage of the already tiny percentage, when the situation was most extreme (phishing, fake official accounts) then the platforms (entirely on their own) decided to pull down that content. For good reason.
That’s not “censorship.” There’s no “complex.” Taibbi’s entire narrative turns to dust.
There’s a lot more that Taibbi gets wrong in all of this, but the points that Hasan got him to admit he was wrong about are literally core pieces in the underlying foundation of his entire argument.
At one point in the interview, Hasan also does a nice job pointing out that the posts that the Biden campaign (note: not the government) flagged to Twitter were of Hunter Biden’s dick pics, not anything political (we’ve discussed this point before) and Taibbi stammers some more and claims that “the ordinary person can’t just call up Twitter and have something taken off Twitter. If you put something nasty about me on Twitter, I can’t just call up Twitter…”
Except… that’s wrong. In multiple ways. First off, it’s not just “something nasty.” It’s literally non-consensual nude photos. Second, actually, given Taibbi’s close relationship with Twitter these days, uh, yeah, he almost certainly could just call them up. But, most importantly, the claim about “the ordinary person” not being able to have non-consensual nude images taken off the site? That’s wrong.
You can. There’s a form for it right here. And I’ll admit that I’m not sure how well staffed Twitter’s trust & safety team is to handle those reports today, but it definitely used to have a team of people who would review those reports and take down non-consensual nude photos, just as they did with the Hunter Biden images.
As Hasan notes, Taibbi left out this crucial context to make his claims seem way more damning than they were. Taibbi’s response is… bizarre. Hasan asks him if he knew that the URLs were nudes of Hunter Biden and Taibbi admits that “of course” he did, but when Hasan asks him why he didn’t tell people that, Taibbi says “because I didn’t need to!”
Except, yeah, you kinda do. It’s vital context. Without it, the original Twitter Files thread implied that the Biden campaign (again, not the government) was trying to suppress political content or embarrassing content that would harm the campaign. The context that it’s Hunter’s dick pics is totally relevant and essential to understanding the story.
And this is exactly what the rest of Hasan’s interview (and what I’ve described above) lays out in great detail: Taibbi isn’t just sloppy with facts, which is problematic enough. He leaves out the very important context that highlights how the big conspiracy he’s reporting is… not big, not a conspiracy, and not even remotely problematic.
He presents it as a massive censorship operation, targeting 22 million tweets, with takedown demands from government players, seeking to silence the American public. When you look through the details, correct Taibbi’s many errors, and put it all in context, you see that it was an academic operation to study information flows, one that sent the more blatant issues it came across to Twitter with no suggestion that Twitter do anything about them, and the vast majority of which Twitter ignored. In a minority of cases, Twitter added its own speech to give more context to some of the tweets, and in a very small number of cases, where it found phishing attempts or people impersonating election officials (clear terms of service violations, and potentially actual crimes), it removed them.
There remains no there there. It’s less than a Potemkin village. There isn’t even a façade. This is the Emperor’s New Clothes for a modern era. Taibbi is pointing to a naked emperor and insisting that he’s clothed in all sorts of royal finery, whereas anyone who actually looks at the emperor sees he’s naked.
I wrote last week about the bizarrely bad House Oversight hearing that was supposed to expose how Twitter, the deep state, and the, um, “Biden Crime Family” conspired to suppress the NY Post’s story about Hunter Biden’s laptop. Of course, wishful thinking does not make facts, and we already know that story is totally false. The hearing not only reconfirmed that the GOP’s fantasy scenario never happened, it also revealed that the Trump White House actually demanded that tweets insulting the President get taken down, and that Twitter bent over backwards to give Trump more leeway, even after he broke clear rules. It was something of a disaster hearing for the GOP.
But, one of the craziest bits of the hearing came from new Congressional Rep. Anna Paulina Luna, who worked for Turning Point USA and PragerU before being elected. Her five minutes has garnered some extra attention for being even crazier than either Reps. Lauren Boebert or Marjorie Taylor Greene, both of whom had pretty crazy rants.
In particular, Rep. Luna (who has been facing some interesting news reporting of late) made some claims about there being a conspiracy between Twitter and the government to communicate via “the private cloud server”… Jira.
Of course, as anyone with even the slightest bit of understanding about, well, anything, would tell you, Jira is issue- and project-tracking software, normally used for things like bug tracking. Luna claimed this was a violation of the 1st Amendment, because she apparently hasn’t the slightest clue how the 1st Amendment actually works.
From the transcript (helpfully provided by Tech Policy Press, though we’ve corrected it based on the video), you can see former Twitter exec Yoel Roth’s confusion over all this. Anyone who understands this stuff can see why Roth is confused: he realizes that she’s completely misconstruing Jira and what it does. But Rep. Luna seems to think she’s caught Roth out in a giant conspiracy.
Rep. Anna Luna (R-FL):
Mr. Roth. Mr. Roth, have you communicated with government officials ever on a platform called Jira? Yes or no? Real quick answer, we’re on the clock, yes or no?
Yoel Roth:
Not to the best of my recollection.
Rep. Anna Luna (R-FL):
Not to your recollection. Great. Have, if you did in the event, communicate who would’ve had access to this platform.
Yoel Roth:
That’s the nature of my confusion. JIRA’s…
Rep. Anna Luna (R-FL):
Okay. Did you ever speak to government officials on Jira regarding taking down social media posts?
Yoel Roth:
Again, not to the best of my recollection.
Rep. Anna Luna (R-FL):
Can you explain to me why the federal government would ever have interest in communicating through Jira? Mind you, a private cloud server with social media companies without oversight to censor American voices? I wanna let you know that this is a violation of the First Amendment and the federal government is colluding with social media companies to censor Americans. Mr. Chairman, I ask for unanimous consent to submit these graphics into record. And Mr. Roth, I’m gonna refresh your memory for you this flow chart.
Rep. James Comer (R-KY):
Without objection so ordered.
Rep. Anna Luna (R-FL):
Thank you chair. This flow chart shows the following Federal agency’s social media companies, Twitter, leftist, nonprofits, and organizations communicating regarding their version of misinformation using Jira, a private cloud server. On this chart, I wanna annotate that the Department of Homeland Security, which has a following branches, cybersecurity and infrastructure security agency, also known as CISA Countering Foreign Intelligence Task Force, now known as the Misinfo, Disinfo and Mal-information, MDM, this was again, used against the American people. The Election Partnership Institute or Election Integrity Partnership, EIP, which includes the following, Stanford Internet Observatory, University of Washington Center for Informed Public, Graphika and Atlantic Council’s Digital Forensic Research Lab. And potentially according to what we found on the final report by EIP, the DNC, the Center for Internet Security, CIS- a nonprofit funded by DHS, the National Association of Secretaries of State, also known as NASS and the National Association of State Election Directors, NASED.
And in this case, because there are other social media companies involved, Twitter, what do all of these groups though, have in common? And I’m going to refresh your memory. They were all communicating on a private cloud server known as Jira. Now, the screenshot behind me, which is an example of one of thousands shows on November 3rd, 2020, that you, Mr. Roth, a Twitter employee, were exchanging communications on Jira, a private cloud server with CISA, NASS, NASED, and Alex Stamos, who now works at Stanford and is a former security of security officer at Facebook to remove a posting. Do you now remember communicating on a private cloud server to remove a posting? Yes or no?
Yoel Roth:
I wouldn’t agree with the characteristics.
Rep. Anna Luna (R-FL):
I don’t care if you agree. Do you, this is, this is your stuff, yes or no? Did you communicate with a private entity, the government agency on a private cloud server? Yes or no?
Yoel Roth:
The question was, if I…
Rep. Anna Luna (R-FL):
Yes or no? Yeah, I’m on time. Yes or no?
Yoel Roth:
Ma’am, I don’t believe I can give you a yes or no.
Rep. Anna Luna (R-FL):
Well, I’m gonna tell you right now that you did and we have proof of it. This ladies and gentlemen, is joint action between the federal government and a private company to censor and violate the First Amendment. This is also known, and I’m so glad that there’s many attorneys on this panel, joint state actors, it’s highly illegal. You are all engaged in this action, and I want you to know that you will be all held accountable. Ms. Gadde, are you still on CISA’s Cybersecurity Advisory Council? Yes or no?
Vijaya Gadde:
Yes, I am.
Rep. Anna Luna (R-FL):
Okay. For those who have said that this is a pointless hearing, and I just wanna let you guys all know, we found that Twitter was indeed communicating with the federal government to censor Americans. I’d like to remind you that this was all in place before January 6th. So, to say that these mechanisms weren’t in place, and to make it about January 6th, I wanna let you know that you guys were actually in control of all of the content and clearly have proof of that. Now, if you don’t think that this is important to your constituents and the American people from those saying that this was a pointless hearing, I suggest you find other jobs. Chairman, I yield my time.
If you actually want to watch all this play out, it’s at 5 hours and 31 minutes in this video (the link should take you to that point). You can see how proud Luna is of herself as she thinks she’s proven “joint state action” and found the secret “Jira private cloud server” where social media and government actors colluded to censor people.
The problem, of course, is that none of this is even remotely true. Whether Luna knows it’s not true, has very stupid staffers who told her something false, or just doesn’t care because it sounds good… I don’t know. I do know that Luna has continued to take a victory lap on this nonsense, including claiming on Steve Bannon’s podcast that she caught Roth “lying” under oath to a member of Congress, and she insisted that the panelists’ stunned faces were not because they were realizing just how confused Luna was about all this, but (she said) because they all wanted to immediately text their lawyers about how in trouble they were.
So, let’s debunk all of this nonsense. And I won’t even bother digging into the fact that at the time of this supposed smoking gun, Trump was in office, and his hand-picked director ran CISA. There’s so much other dumb stuff that I don’t have time to dwell on that point.
Now, once again, Jira is a ticketing system, and a widely used one. It is not a “private cloud server” for “communicating.”
All of the details of what’s going on here were totally public already. The Election Integrity Partnership, a private project run by the Stanford Internet Observatory, the UW Center for an Informed Public, Graphika, and the Digital Forensic Research Lab, has been quite open and public about what it did to try to track and monitor election mis- and dis-information.
They released a big report in 2021, called The Long Fuse, that details how they used Jira to track possible election disinfo vectors. They used it internally, but were also able to “tag in” different organizations if they thought it was necessary. This is described pretty clearly and publicly in the report on pages 18 and 19:
To illustrate the scope of collaboration types discussed above, the following case study documents the value derived from the multistakeholder model that the EIP facilitated. On October 13, 2020, a civil society partner submitted a tip via their submission portal about well-intentioned but misleading information in a Facebook post. The post contained a screenshot (See Figure 1.4).

In their comments, the partner stated, “In some states, a mark is intended to denote a follow-up: this advice does not apply to every locality, and may confuse people. A local board of elections has responded, but the meme is being copy/pasted all over Facebook from various sources.” A Tier 1 analyst investigated the report, answering a set of standardized research questions, archiving the content, and appending their findings to the ticket. The analyst identified that the text content of the message had been copied and pasted verbatim by other users and on other platforms. The Tier 1 analyst routed the ticket to Tier 2, where the advanced analyst tagged the platform partners Facebook and Twitter, so that these teams were aware of the content and could independently evaluate the post against their policies. Recognizing the potential for this narrative to spread to multiple jurisdictions, the manager added in the CIS partner as well to provide visibility on this growing narrative and share the information on spread with their election official partners. The manager then routed the ticket to ongoing monitoring. A Tier 1 analyst tracked the ticket until all platform partners had responded, and then closed the ticket as resolved.
According to two different people I spoke to at the EIP, this Tier 2 setup, where companies got tagged in, happened rarely. Instead, these tickets were mostly just used internally for EIP’s own research efforts. But, either way, note the issue: this is not government employees telling social media to take down posts. This is the EIP, basically a bunch of disinformation researchers, conducting research and escalating issues to companies to be “independently evaluated against their policies.”
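For readers who have never touched an issue tracker, the ticket lifecycle the EIP report describes can be sketched in a few lines of code. This is a purely illustrative toy, a minimal sketch assuming the workflow described above; the class and method names are my own invention, not EIP’s actual setup or any real Jira API:

```python
from dataclasses import dataclass, field

@dataclass
class Ticket:
    """Toy model of an issue-tracker ticket, like the EIP's Jira tickets."""
    summary: str
    status: str = "open"          # open -> monitoring -> resolved
    tier: int = 1                 # Tier 1 triage; Tier 2 escalation
    findings: list = field(default_factory=list)
    tagged_partners: set = field(default_factory=set)
    responses: set = field(default_factory=set)

    def append_finding(self, note):
        # Tier 1 analyst archives content and appends research findings
        self.findings.append(note)

    def escalate(self):
        # Route the ticket to Tier 2 for wider visibility
        self.tier = 2

    def tag_partner(self, name):
        # Tag a platform or partner so *they* can evaluate the content
        # against their own policies; no takedown demand is involved
        self.tagged_partners.add(name)

    def record_response(self, name):
        self.responses.add(name)
        # Close only once every tagged partner has responded
        if self.tagged_partners <= self.responses:
            self.status = "resolved"

# Walk through the case study from the report
t = Ticket("Misleading 'check your ballot marks' meme")
t.append_finding("Text copied verbatim across users and platforms")
t.escalate()
for partner in ("Facebook", "Twitter", "CIS"):
    t.tag_partner(partner)
t.status = "monitoring"
for partner in ("Facebook", "Twitter", "CIS"):
    t.record_response(partner)
print(t.status)  # prints: resolved
```

The point of the sketch is how mundane this is: a ticket records research, gets routed between analysts, tags in outside parties for awareness, and closes when everyone has responded. Nothing in the structure gives anyone tagged in the power to do anything, which is why calling it a “private cloud server” for censorship makes no sense.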
Now, as for the “smoking gun” with which Luna claimed she’d proven “state action”: it’s very blurry and impossible to see in the C-SPAN video, and she didn’t tweet it either. Perhaps because it kinda debunks her entire argument.
The screenshot also isn’t anything secret. It was part of EIP’s own presentation explaining how the EIP worked! In this 12-minute video, Stanford’s Alex Stamos explains the whole process, and at 4 minutes and 14 seconds, he shows a specific example, which appears to be the blurry example that Luna claimed was her smoking gun. Except when you look at it, you see it’s actually an item where (1) EIP (not government officials) found and highlighted actual election disinfo (someone claiming to be a poll worker burning ballots for anyone who voted for Trump); (2) they tagged in Yoel Roth from Twitter, who, rather than just taking it down, actually pushed back, asking “Is there any evidence establishing that this was a hoax”; (3) EIP then reached out to the relevant election board to see if they had any proof that it was a hoax; and (4) they got back a press release from the Election Board saying it was a hoax.
That is… not the government colluding to censor Americans. Nor is it Yoel Roth communicating with government officials. It’s EIP (not a gov’t org) raising a potential issue that clearly violates Twitter’s policies, but rather than immediately taking it down, Roth wants actual evidence. That then causes EIP to reach out to other orgs who can speak to the government officials and find out if there’s any further evidence.
In other words, nothing shown in the screenshot is Yoel communicating with government officials (only with EIP). Nothing shown is government officials demanding Twitter censor anyone. Instead, it shows private actors flagging some potentially consequential election disinfo. Finally, nothing in it shows that Twitter was quick to censor content based on these requests; rather, it shows Yoel’s sole communication in the chain pushing back on what seems to be pretty clear disinfo, demanding actual evidence that it’s false before he is willing to take action. Also, none of it was secret! EIP literally posted it themselves to brag about how their system worked to share useful information about election disinfo.
Once again, America, I beg you: elect better people.
Much like the company’s dedication to women, AT&T’s dedication to not funding people eager to overthrow democracy appears to be somewhere between inconsistent and nonexistent. And the company certainly isn’t alone.
Shortly after January 6 a number of companies, including telecom giants like AT&T, publicly crowed about how they’d be ceasing all funding to politicians who supported the attack on the Capitol and the overturning of, you know, democracy. Of course that promise was never worth all that much, given that the umbrella lobbying orgs companies like AT&T used never really stopped financing terrible people.
Initially, AT&T made a big stink about how it had suspended funding to all 147 Republicans who voted to overturn the 2020 election. But not only did AT&T not actually suspend funding via its numerous policy and lobbying tendrils, it didn’t even really ever stop funding insurrectionists directly.
A more recent breakdown of campaign financing by Bloomberg found several things. For one, big telecom, which has largely been forgotten during the myopic, multi-year DC policy fixation on “big tech,” was among the biggest donors to insurrectionists and election conspiracy theorists:
Bloomberg found that companies like Comcast not only didn’t pause donations to election deniers for long, they ramped up funding of those candidates right before the midterms, even as the industry was working tirelessly to keep the FCC mired permanently in partisan gridlock so it can’t do any of the things the public wants (restoring net neutrality, restoring media consolidation rules).
As is usually the case, Comcast didn’t much want to talk about why it throws money at people trying to destroy democracy:
By the end of 2021, the Philadelphia-based cable giant had not only resumed giving to those candidates, but increased its contributions throughout 2022 to $365,500, becoming the second-biggest donor to election deniers among the tech and telecom firms. Comcast didn’t respond to a request for comment.
The mainstream press being, well, the mainstream press, Bloomberg chooses to inform its readers that telecom companies are lobbying radical anti-democracy insurrectionists and conspiracy theorists to “boost broadband rollout”:
Conversely, telecom companies and semiconductor makers want more government aid for programs to boost broadband rollout and domestic manufacturing. That requires developing relationships with newly empowered Republicans.
Most of the federal COVID and infrastructure broadband funding has already been assigned. And who gets that money is mostly being determined at the state level, not the federal level. Bloomberg just somehow forgets to mention that AT&T and Comcast have been working tirelessly, arm in arm with the GOP, to keep the FCC mired in partisan gridlock in perpetuity.
All told, most of the claims by corporations that they’d be more discerning about their campaign financing in the wake of January 6 wound up being bullshit. And any interest in campaign finance reform in the wake of this giant middle finger to democracy appears similarly hollow.
The Brazilian government — under the “leadership” of Donald Trump Mutual Admiration Society member Jair Bolsonaro — has been steadily cracking down on free speech under the guise of saving the public from “fake news” and other misinformation.
Over the past few years, it has ramped up efforts to eradicate content and reporting that it calls “fake news,” a term that refers to criticism of the ruling party, criticism of the ruling party’s efforts, punching holes in official narratives, or debunking the ruling party’s favored conspiracy theories.
In early 2018, it handed over the job of policing social media platforms to the actual police. The federal police were given permission to bring guns to a word fight to ensure compliance with demands that anything the government declared “fake” be removed as close to immediately as possible. The federal police seemed to relish this new directive, stating that it would continue to police social media whether or not the proposed censorship law was passed by Brazil’s government.
Since then, even more mandates have been handed down to social media services to make it easier for the government to track and trace critics and dissidents. A 2020 “fake news” proposal would have forced service providers to collect and retain a ton of data and metadata indefinitely for examination by the government (which means the federal police) whenever it felt something was “fake” and/or (even more vaguely) threatened national security.
In 2021, the legislation was altered to remove logging requirements and the collection of users’ national ID information before allowing them to open accounts. While that aspect of the proposed legislation got a bit better, the rest of the “fake news” law got much, much worse. It mandated unmasking of users by social media services, granted the government permission to simply shut down troublesome parts of the internet to quell dissent, and allowed the government to pretend IP addresses alone were capable of accurately identifying users who spread so-called “fake news.”
The Superior Electoral Court (TSE) unanimously approved rules to maintain the integrity of the upcoming electoral process by fighting against the spread of misinformation that may compromise the fierce presidential campaign between far-right incumbent Jair Bolsonaro and leftist challenger Luiz Inacio Lula da Silva, as well as the elections overall.
The president of the TSE, Minister Alexandre de Moraes, declared that once the collegiate decides that a particular post contains disinformation content, it will be removed, together with all other identical publications. He emphasized that after “[v]erifying that that content has been repeated, there will be no need for a new representation or judicial decision, there will be an extension and immediate withdrawal of these fraudulent news.”
While the involvement of the court suggests an impartial review of alleged “fake news,” the increasing focus on what President Bolsonaro believes is fake news suggests something else. The court is here to serve the laws that are in place, rather than simply protect the citizens of Brazil from government overreach.
There is no carrot for social media services. Only a very expensive stick. Content the court declares illegal needs to be removed within two hours. Services face fines of $19,000 for every hour (I assume pro-rated fines are also in place) the content remains visible past that point.
The laws Bolsonaro thought might deter criticism of him and his party are now being used against him, which is its own form of justice, I guess. But it is also limiting political debate and appears to be restricting journalists from reporting on the candidates’ sordid pasts/presents.
The TSE has already ordered some disinformation videos to be taken down, including ones that say Lula consorts with Satan and Bolsonaro embraces cannibalism. The campaigns have also been ordered by the court to pull online ads saying the leftist will legalize abortion and the incumbent entertains pedophilia.
[…]
The Bolsonaro camp has complained that the TSE has told it not to run ads calling Lula “corrupt” and a “thief” because bribery convictions that put him in jail were later annulled by the Supreme Court.
Brazilian broadcasters have also said they have been prohibited from using the words “ex-convict,” “thief” or “corrupt” when speaking about Lula. The broadcaster lobby ABERT protested that such decisions were interfering with freedom of expression.
By contrast, Bolsonaro allies complain that the TSE has not stopped opponents from accusing the president of “genocide” for his handling of the COVID-19 pandemic that killed 680,000 Brazilians.
Fun stuff all around. An authoritarian is learning what terrible laws can do when you’re forced to allow your political opponents to avail themselves of the same (dubious) protections. Unfortunately, it’s not just the party in power or the party planning to take power by accusing opponents of cannibalism that is being constrained here. The fines and additional scrutiny are likely provoking proactive content moderation by platforms, which means content that isn’t technically illegal is being buried because it cuts too close to the vague language of the law. And journalists are finding it more difficult to report on candidates because the court has declared some words off limits.
Even if Bolsonaro is finding himself a bit hamstrung by his own legal mandates, he at least has to be happy it’s resulted in a pretty effective chilling effect on social media services and journalism outlets. Not every win is a blowout. But a win, no matter how ugly, is still a win.
Perhaps Trump was as surprised by his victory as millions of Americans were. But millions of Americans simply went on with their lives, hoped for the best, started dying in droves, and then took him for a ride to the farm (er, golf course) in the 2020 election. An attempted insurrection followed and somehow millions of Americans still believe the best thing for the country is Grover Cleveland 2.0. And I mean that pretty much literally.
Cleveland, like a growing number of Northerners and nearly all white Southerners, saw Reconstruction as a failed experiment, and was reluctant to use federal power to enforce the 15th Amendment of the U.S. Constitution, which guaranteed voting rights to African Americans…
Although Cleveland had condemned the “outrages” against Chinese immigrants, he believed that Chinese immigrants were unwilling to assimilate into white society. Secretary of State Thomas F. Bayard negotiated an extension to the Chinese Exclusion Act, and Cleveland lobbied the Congress to pass the Scott Act, written by Congressman William Lawrence Scott, which prevented the return of Chinese immigrants who left the United States. The Scott Act easily passed both houses of Congress, and Cleveland signed it into law on October 1, 1888.
The sorest winner in presidential history still thinks a federal court should force Hillary Clinton to pay him actual money for allegedly conspiring against him to… allow him to ascend to the Oval Office following the Electoral College vote.
Long story short, Trump sued Hillary Clinton over an election he won. His allegations were, well, seemingly incomprehensible. Presumably, Trump’s lawyers are being paid per word or per ream of paper. Either way, the court is mostly unable to figure out whether Trump’s complaining contains any actionable complaints. From the decision [PDF]:
Plaintiff’s theory of this case, set forth over 527 paragraphs in the first 118 pages of the Amended Complaint, is difficult to summarize in a concise and cohesive manner. It was certainly not presented that way.
This is followed up by a judicial sigh of resignation.
Nevertheless, I will attempt to distill it here.
So shall I. Trump alleged Clinton “colluded with a hostile foreign entity” to elicit “spurious opposition research” (referring to the Christopher Steele dossier) and another report claiming to tie Trump hotels to Russian financiers via DNS records (a report that was immediately debunked by everyone with an understanding of how internet traffic works).
According to Trump and his lawyers, this is irrefutable evidence the Clinton campaign conspired to prevent him from winning an election he won. The court is far less certain this is evidence of anything. When it comes to legal claims, quality is preferable to quantity — something Trump’s lawyers clearly don’t understand.
Plaintiff’s Amended Complaint is 193 pages in length, with 819 numbered paragraphs. It contains 14 counts, names 31 defendants, 10 “John Does” described as fictitious and unknown persons, and 10 “ABC Corporations” identified as fictitious and unknown entities. Plaintiff’s Amended Complaint is neither short nor plain, and it certainly does not establish that Plaintiff is entitled to any relief.
Brevity isn’t always wit, but in this case, brevity would be preferable to sprawling narratives with no cognizable claims. Longer is not better. Amending a complaint to make it longer, but no better, doesn’t do much but waste the court’s time. And the length of the complaints is only part of the problem. The real problem is the complete lack of anything actionable.
More troubling, the claims presented in the Amended Complaint are not warranted under existing law. In fact, they are foreclosed by existing precedent, including decisions of the Supreme Court.
Tossing in a belated, unsupported wire fraud claim in an attempt to salvage doomed RICO claims only made things worse for Trump. (Emphasis in the original.)
Not only does Plaintiff lack standing to complain about an alleged scheme to defraud the news media, but his lawyers ignore the Supreme Court’s holdings that the federal wire fraud statute prohibits only deceptive schemes to deprive the victim of money or property. It is necessary to show not only that a defendant engaged in deception, but that an object of the fraud was property.
An election is not “property.” While it might be possible to insinuate that depriving Trump of the presidency deprived him of money, the dozens of pages and hundreds of paragraphs failed to show how this alleged conspiracy deprived Trump of anything. He still won the election. He retained his (frequently overstated) wealth, and he did not lose any of his property. Instead, he gained a brand new, rent-free address located in one of the most upscale neighborhoods of the nation’s capital.
It’s just one benchslap after another for Trump and his legal reps.
Many of the Amended Complaint’s characterizations of events are implausible because they lack any specific allegations which might provide factual support for the conclusions reached.
Back to criticizing the convolutions and length of the complaints:
Plaintiff has annotated the Amended Complaint with 293 footnotes containing references to various public reports and findings. He is not required to annotate his Complaint; in fact, it is inconsistent with Rule 8’s requirement of a short and plain statement of the claim. But if a party chooses to include such references, it is expected that they be presented in good faith and with evidentiary support. Unfortunately, that is not the case here.
Citing a DOJ Inspector General’s report on alleged election interference? Possibly good. Misstating its conclusions? Definitely bad.
Plaintiff and his lawyers are of course free to reject the conclusion of the Inspector General. But they cannot misrepresent it in a pleading.
Never a subheading anyone filing a complaint wants to see in a court’s response to a motion to dismiss.
Shotgun Pleading
Doubling down on bad pleadings is even worse.
To say that Plaintiff’s 193-page, 819-paragraph Amended Complaint is excessive in length would be putting things mildly. And to make matters worse, the Amended Complaint commits the “mortal sin” of incorporating by reference into every count all the general allegations and all the allegations of the preceding counts.
This subheading? Also seriously bad news for unserious lawyers filing unserious complaints on behalf of an extremely unserious person.
Fictitious Defendants
Much more discussion follows, mainly because the sprawling complaints have forced the court to address a ton of facially invalid arguments. Among those is the statute of limitations for RICO claims, which, at four years, gave the Trump people until October 2021, at the latest, to file. They chose not to do so until 2022. His lawyers claimed Trump should have rolling tolling, applicable until the end of his presidency in January 2021.
The court says this simply isn’t the case. The president was or should have been aware of the incidents and reporting underlying this federal complaint since October 2017. That he chose to sue several months after the statute of limitations had expired is on him. The court can’t (and won’t) save him.
And so it continues for another 25 pages. There is nothing in here Trump can continue to pursue. It’s a shutout. The entire thing (both the original and amended complaints) is dismissed with prejudice as far as the non-federal defendants are concerned. The lawsuit is still (barely) alive (dismissed without prejudice) in terms of federal defendants. But, given Trump’s inability to amend a complaint cohesively, concisely, or comprehensibly, it will only be a matter of time before those claims are dismissed definitively by this same court.
Trump won. Maybe he should just enjoy the win, rather than claim people conspired against him to keep him out of an office he ultimately held for four deeply regrettable years.