It will come as no surprise to any regular reader here when I say that Nintendo is perhaps the most annoyingly draconian protector of IP in the video game space. At this point, Techdirt posts discussing Nintendo’s copyright and trademark antics are legion. Notable among those posts for the purposes of this discussion are several online gaming tournaments that Nintendo has allowed to exist, often without a license, but which Nintendo has still been willing to shut down over the use of third-party tools that make it possible to better stream older games on current hardware and over the internet. Those shutdowns over the use of tools that have nothing to do with copyright infringement might seem ridiculous to you, but then you simply don’t know just how iron-fisted Nintendo likes to be when it comes to controlling anything that has to do with its products.
But what Nintendo just did to the Smash World Tour is a whole different animal. SWT has always operated as an unlicensed tournament with hundreds of events, at which Nintendo has averted its legal gaze. In 2021, Nintendo announced that a company called Panda Global had become Nintendo’s officially licensed partner for Super Smash Bros. tournaments. SWT reached out to Nintendo asking if that meant it had to shut down, but was told last year that the Panda Global deal was not exclusive. With that, SWT attempted to apply for its own license to continue its tournament.
While licensing discussions continued in early 2022, organizers say the 2022 Smash World Tour was launched without an official license, partly because “we did not have the full scope of our proposal sorted with Nintendo in advance.” But the organizers say they did seek a license for the December championships, submitting an application in April.
Meanwhile, Smash Tour organizers say the CEO of Panda Global started trying to undermine their tour by “tell[ing] organizers we were definitely not coming back in 2022, and if we did, we’d get shut down shortly after announcement.” After Panda Global initially demanded exclusivity for any individual events associated with them, many tournaments operated jointly as part of both the licensed Panda Cup and the unlicensed Smash World Tour in 2022 (Panda Global has not responded to a request for comment from Ars Technica).
During most of this year, while all of that was happening, SWT was still attempting to get licensed through Nintendo, but the talks hit a wall when Nintendo basically stopped responding. Finally, the two sides got back together this past fall and continued talks about getting licensed.
And then, well…
Then, last Wednesday, they said Nintendo told them in no uncertain terms that they would not be getting a commercial license and that the days of Nintendo tolerating their operation without one “were now over.”
In a statement provided to Kotaku late Tuesday, Nintendo said that despite “continuous conversations” and “deep consideration,” the company was “unable to come to an agreement with SWT for a full circuit in 2023.” That said, Nintendo contends that it “did not request any changes to or cancelation of remaining events in 2022, including the 2022 Championship event, considering the negative impact on the players who were already planning to participate.”
That 2022 championship was slated to take place in December. SWT has since canceled it. As part of the communications around the cancellation, SWT organizers are also calling bullshit on Nintendo’s claim that it never requested the 2022 championship event be shut down.
In a follow-up statement, though, Smash World Tour cites a written statement from Nintendo saying that tournaments are “expected to secure such a license well in advance of any public announcement” and that the company “will not be able to grant a license for the Smash World Tour Championship 2022 or any Smash World Tour activity in 2023.”
So, where does that leave us? Well, to be clear, Nintendo can keep unlicensed tournaments from happening if it so chooses. It can also make decisions on when to let unlicensed tournaments slide as capriciously as it likes, from year to year.
But as always seems to be the case with this company, Nintendo went about it in roughly as haphazard a manner as possible, and with a completely blind eye toward the timeline and the people it was impacting with its decisions. It could have licensed 2022 for free or very cheaply, just to get this one tourney off as a farewell. It could have communicated better with SWT and gotten further down the licensing route than it did. It could have offered clear guidance to any tourney organizers on what it takes to get licensed.
But Nintendo didn’t do any of that. Instead, it simply told SWT out of the blue to shut it all down and then said publicly that it didn’t really do that. And that’s about as Nintendo as it gets.
Musk and company claim they’re working on upgraded satellites that are less obtrusive to scientists, but it’s Musk, so who knows if those solutions actually materialize. Musk isn’t alone in his low-orbit satellite ambitions. Numerous other companies, including Jeff Bezos’ Blue Origin, are planning to fling tens of thousands of these low-orbit satellites into the heavens as “megaconstellations.”
One 2020 paper argued that the approval of these low-orbit satellites by the FCC technically violated the environmental law embedded in the 1970 U.S. National Environmental Policy Act (NEPA). Scientific American notes how the FCC has thus far sidestepped NEPA’s oversight, thanks to a “categorical exclusion” the agency was granted in 1986 — long before LEO satellites were a threat.
Last week yet another study emerged from the U.S. Government Accountability Office (GAO, full study here), recommending that the FCC at least revisit the issue:
“We think they need to revisit [the categorical exclusion] because the situation is so different than it was in 1986,” says Andrew Von Ah, a director at the GAO and one of the report’s two lead authors. The White House Council on Environmental Quality (CEQ) recommends that agencies “revisit things like categorical exclusions once every seven years,” Von Ah says. But the FCC “hasn’t really done that since 1986.”
Despite the fact that low-Earth orbit solutions like Starlink generally lack the capacity to meaningfully disrupt the country’s broadband monopolies, and are, so far, too expensive to address one of the biggest obstacles to adoption (high prices due to said monopolies), the FCC has generally adopted a “we’re too bedazzled by the innovation to bother” mindset until recently.
The FCC did this year roll back nearly a billion dollars in Trump-era subsidies for Starlink (in part because the company misled regulators about coverage, but also because the FCC doubted it would be able to deliver promised speeds and coverage). And the FCC did recently enact rules tightening up requirements for discarding older, failed satellites to address “space junk.”
But taking a tougher stand here would require the FCC to take a bold stance on whether or not NEPA actually applies to the “environment” of outer space and low-Earth orbit, which remains a matter of debate. This is an agency that can’t even be bothered to publicly declare with any confidence that telecom monopolies exist or are a problem, so it seems pretty unlikely it would want to wade into such controversy.
Like a lot of Musk efforts (see the potentially fatal consequences of misrepresented “full self-driving” technology), the issue has been simplistically framed as one of innovation versus mean old pointless government bureaucracy. This simplistic distortion has resulted in zero meaningful oversight as problems mount, something that impacts not just the U.S. (where most launches occur), but every nation on the planet:
“Our society needs space,” says Didier Queloz, an astronomer and Nobel laureate at the University of Cambridge. “I have no problem with space being used for commercial purposes. I just have a problem that it’s out of control. When we started to see this increase in satellites, I was shocked that there are no regulations. So I was extremely pleased to hear that there has been an awareness that it cannot continue like that.”
I’d expect this issue to get punted into the bowels of agency policy purgatory. Even if the agency does act, it will be years from now, and unlikely to apply to the satellite licenses already doled out to companies like Starlink and Amazon. And while there are several bills aimed at tightening up restrictions in the space, it seems unlikely any of them will survive a dysfunctional and corrupt Congress.
That means that the light pollution caused by LEO satellites will continue to harm scientific researchers, who’ve been forced to embrace expensive, temporary solutions to the problem that are very unlikely to scale effectively as even more LEO companies set their sights on the heavens.
Hello! Someone has referred you to this post because you’ve said something quite wrong about Twitter and how it handled something to do with Hunter Biden’s laptop. If you’re new here, you may not know that I’ve written a similar post for people who are wrong about Section 230. If you’re being wrong about Twitter and the Hunter Biden laptop, there’s a decent chance that you’re also wrong about Section 230, so you might want to read that too! Also, these posts are using a format blatantly swiped from lawyer Ken “Popehat” White, who wrote one about the 1st Amendment. Honestly, you should probably read that one too, because there’s some overlap.
Now, to be clear, I’ve explained many times before, in other posts, why people who freaked out about how Twitter handled the Hunter Biden laptop story are getting confused, but it’s usually been a bit buried. I had already started a version of this post last week, since people keep bringing up Twitter and the laptop, but then on Friday, Elon (sorta) helped me out by giving a bunch of documents to reporter Matt Taibbi.
So, let’s review some basics before we respond to the various wrong statements people have been making. Since 2016, there have been concerns raised about how foreign nation states might seek to interfere with elections, often via the release of hacked or faked materials. It’s no secret that websites have been warned to be on the lookout for such content in the leadup to the election — not with demands to suppress it, but just to consider how to handle it.
Partly in response to that, social media companies put in place various policies on how they were going to handle such material. Facebook set up a policy to limit certain content from trending in its algorithm until it had been reviewed by fact-checkers. Twitter put in place a “hacked materials” policy, which forbade the sharing of leaked or hacked materials. There were — clearly! — some potential issues with that policy. In fact, in September of 2020 (a month before the NY Post story) we highlighted the problems of this very policy, including somewhat presciently noting the fear that it would be used to block the sharing of content in the public interest and could be used against journalistic organizations (indeed, that case study highlights how the policy was enforced to ban DDOSecrets for leaking police chat logs).
The morning the NY Post story came out there was a lot of concern about the validity of the story. Other news organizations, including Fox News, had refused to touch it. NY Post reporters refused to put their name on it. There were other oddities, including the provenance of the hard drive data, which apparently had been in Rudy Giuliani’s hands for months. There were concerns about how the data was presented (specifically how the emails were converted into images and PDFs, losing their header info and metadata).
The fact that, much later on, many elements of the laptop’s history and provenance were confirmed as legitimate (with some open questions) is important, but it does not change the simple fact that the morning the NY Post story came out, the story’s validity was extremely unclear (in either direction) to everyone except extreme partisans in both camps.
Based on that, both Twitter and Facebook reacted somewhat quickly. Twitter implemented its hacked materials policy in exactly the manner that we had warned might happen a month earlier: blocking the sharing of the NY Post link. Facebook implemented other protocols, “reducing its distribution” until it had gone through a fact check. Facebook didn’t ban the sharing of the link (like Twitter did), but rather limited the ability for it to “trend” and get recommended by the algorithm until fact checkers had reviewed it.
To be clear, the decision by Twitter to do this was, in our estimation, pretty stupid. It was exactly what we had warned about just a month earlier regarding this exact policy. But this is the nature of trust & safety. People need to make very rapid decisions with very incomplete information. That’s why I’ve argued ever since then that while the policy was stupid, it was no giant scandal that it happened, and given everything, it was not a stretch to understand how it played out.
Also, importantly, the very next day Twitter realized it fucked up, admitted so publicly, and changed the hacked materials policy saying that it would no longer block links to news sources based on this policy (though it might add a label to such stories). The next month, Jack Dorsey, in testifying before Congress, was pretty transparent about how all of this went down.
All of this seemed pretty typical for any kind of trust & safety operation. As I’ve explained for years, mistakes in content moderation (especially at scale) are inevitable. And, often, the biggest reason for those mistakes is the lack of context. That was certainly true here.
Yet, for some reason, the story has persisted for years now that Twitter did something nefarious, engaging in election interference that was possibly at the behest of “the deep state” or the Biden campaign. For years, as I’ve reported on this, I’ve noted that there was literally zero evidence to back any of that up. So, my ears certainly perked up last Friday when Elon Musk said that he was about to reveal “what really happened with the Hunter Biden story suppression.”
Certainly, if there was evidence of something nefarious behind closed doors, that would be important and worth covering. If it were true that, in the discussions I’ve had with dozens of Twitter employees over the past few years, every single one of them lied about what happened, well, that would also be useful for me to know.
And then Taibbi revealed… basically nothing of interest. He revealed a few internal communications that… simply confirmed everything that was already public in statements made by Twitter, Jack Dorsey’s Congressional testimony, and in declarations made as part of a Federal Elections Commission investigation into Twitter’s actions. There were general concerns about foreign state influence campaigns, including “hack and leak” in the lead up to the election, and there were questions about the provenance of this particular data, so Twitter made a quick (cautious) judgment call and implemented a (bad) policy. Then it admitted it fucked up and changed things a day later. That’s… basically it.
And, yet, the story has persisted over and over and over again. Incredibly, even after the details of Taibbi’s Twitter thread revealed nothing new, many people started pretending that it had revealed something major, with even Elon Musk insisting that this was proof of some massive 1st Amendment violation:
Now, apparently more files are going to be published, so something may change, but so far it’s been a whole lot of utter nonsense. But when I say that both here on Techdirt and on Twitter, I keep seeing a few very, very wrong arguments being made. So, let’s get to the debunking:
1. If you said Twitter’s decision to block links to the NY Post was election interference…
You’re wrong. Very much so. First off, there was, in fact, a complaint to the FEC about this very point, and the FEC investigated and found no election interference at all. It didn’t even find evidence of it being an “in-kind” contribution. It found no evidence that Twitter engaged in politically motivated decision-making, but rather found that Twitter handled this in a non-partisan manner consistent with its business objectives:
Twitter acknowledges that, following the October 2020 publication of the New York Post articles at issue, Twitter blocked users from sharing links to the articles. But Twitter states that this was because its Site Integrity Team assessed that the New York Post articles likely contained hacked and personal information, the sharing of which violated both Twitter’s Distribution of Hacked Materials and Private Information Policies. Twitter points out that although sharing links to the articles was blocked, users were still permitted to otherwise discuss the content of the New York Post articles because doing so did not directly involve spreading any hacked or personal information. Based on the information available to Twitter at the time, these actions appear to reflect Twitter’s stated commercial purpose of removing misinformation and other abusive content from its platform, not a purpose of influencing an election.
All of this is actually confirmed by the Twitter Files from Taibbi/Musk, even as both seem to pretend otherwise. Taibbi revealed some internal emails in which various employees (going increasingly up the chain) discussed how to handle the story. Not once in anything Taibbi revealed does anyone suggest anything even remotely politically motivated. There was legitimate internal debate about whether it was correct to block the NY Post story, which makes sense, because employees were (correctly) worried about making a decision that went too far. I mean, honestly, the discussion is not only without political motive, but shows that the trust & safety apparatus at Twitter was focused on getting this right, including employees questioning whether these were legitimately “hacked materials” and whether other news stories on the hard drive should get the same treatment.
There are more discussions of this nature, with people questioning whether or not the material was really “hacked” and initially deciding on taking the more cautious approach until they knew more. Twitter’s Yoel Roth notes that “this is an emerging situation where the facts remain unclear. Given the SEVERE risks here and lessons of 2016, we’re erring on the side of including a warning and preventing this content from being amplified.”
Again, exactly as has been noted, given the lack of clarity, Twitter reasonably decided to pump the brakes until more was known. There was some useful back-and-forth among employees — the kind that happens in any company around major trust & safety decisions — in which Twitter’s then-VP of comms questioned whether this was the right decision. This shows a productive discussion, not anything along the lines of pushing for a politically motivated outcome.
And then deputy General Counsel Jim Baker (more on him later, trust me…) chimes in to again highlight exactly what everyone has been saying: that this is a rapidly evolving situation, and it makes sense to be cautious until more is known. Baker’s message is important:
I support the conclusion that we need more facts to assess whether the materials were hacked. At this stage, however, it is reasonable for us to assume that they may have been and that caution is warranted. There are some facts that indicate that the materials may have been hacked, while there are others indicating that the computer was either abandoned and/or the owner consented to allow the repair shop to access it for at least some purposes. We simply need more information.
Again, all of this is… exactly what everyone has said ever since the day after it happened. This was an emerging story. The provenance was unclear. There were some sketchy things about it, and so Twitter enacted the policy because they just weren’t sure and didn’t have enough info yet. It turned out to be a bad call, but in content moderation, you’re going to make some bad calls.
What is missing entirely is any evidence that politics entered this discussion at all. Not even once.
2. But Twitter’s decision to “suppress” the story was a big deal and may have swung the election to Biden!
I’m sorry, but there remains no evidence to support that silly claim either. First off, Twitter’s decision actually seemed to get the story a hell of a lot more attention. Again, as noted above, Twitter did nothing to stop discussion of the story. It only blocked links to one story in the NY Post, and only for that one day. And the very fact that Twitter did this (and Facebook took other action) caused a bit of a Streisand Effect (hey!) which got the underlying story a lot more attention because of the decisions by those two companies.
The reality, though, is that the story just wasn’t that big of a deal for voters. Hunter Biden wasn’t the candidate. His father was. Everyone already pretty much knew that Hunter is a bit of a fuckup and clearly personally profiting off of the situation, but there was no actual big story in the revelations (I mean, yeah, there are still some people who insist there are, but they’re the same people who misunderstood the things we’re debunking here today). And, if we’re going to talk about kids of Presidents profiting off of their last name, well, there’s a pretty long list to go down….
But don’t take my word for it, let’s look at the evidence. As reporter Philip Bump recently noted, there’s actual evidence in Google search trends that Twitter and Facebook’s decision really did generate a lot more interest in the story. It was well after both companies took action that searches on Google for Hunter Biden shot upward:
Also, soon after, Twitter reversed its policy, and there was widespread discussion of the laptop in the next three weeks leading up to the election. The brief blip in time in which Twitter and Facebook limited the story seemed to have only fueled much more interest in it, rather than “suppressing” it.
Indeed, another document in the “Twitter Files” highlights how a Democratic member of the House, Ro Khanna, actually reached out to Twitter to point this out and to question Twitter’s decision (if this was really a big Democratic conspiracy, you’d think he’d be supportive of the move, rather than critical of it, but the reverse was true.) Rep. Khanna’s email to Twitter noted:
I say this as a total Biden partisan and convinced he didn’t do anything wrong. But the story has now become more about censorship than relatively innocuous emails and it’s become a bigger deal than it would have been.
So again, the evidence actually suggests that the story wasn’t suppressed at all. It got more attention. It didn’t swing the election, because most people didn’t find the story particularly revealing.
3. The government pressured Twitter/Facebook to block this story, and that’s a huge 1st Amendment violation / treason / crime of the century / etc.
Yeah, so, that’s just not true. I’ve spent years calling out government pressure on speech, from Democrats (and more Democrats) to Republicans (and more Republicans). So I’m pretty focused on watching when the government goes over the line — and quick to call it out. And there remains no evidence at all of that happening here. At all. Taibbi admits this flat out:
Incredibly, I keep seeing people on Twitter claim that Taibbi said the exact opposite. And you have people like Glenn Greenwald who insist that Taibbi only meant “foreign” governments here, despite all the evidence to the contrary. If he had found evidence that there was US government pressure here… why didn’t he post it? The answer: because it almost certainly does not exist.
Some people point to Mark Zuckerberg’s appearance over the summer on Joe Rogan’s podcast as “proof” that the FBI directed both companies to suppress the story, but that’s not at all what Zuckerberg said if you listened to his actual comments. Zuckerberg admits that they make mistakes, and that it feels terrible when they do. He goes into a pretty detailed explanation of some of how trust & safety works in determining whether or not a user is authentic. Then Rogan asks about the laptop story, and Zuckerberg says:
So, basically, the background here, is the FBI basically came to us, some folks on our team, and were like “just so you know, you should be on high alert, we thought there was a lot of Russian propaganda in the 2016 election, we have it on notice, basically, that there’s about to be some kind of dump that’s similar to that. So just be vigilant.”
This does not say that the FBI came to Facebook and said “suppress the Hunter Biden laptop story.” It was just a general warning that the FBI had intelligence that there might be some foreign influence operations, and to “be vigilant.”
This is nearly identical to what Twitter’s then head of “site integrity,” Yoel Roth, noted in his declaration in the FEC case discussed above:
“law enforcement agencies communicated that they expected ‘hack-and-leak operations’ by state actors might occur in the period shortly before the 2020 presidential election . . . . I also learned in these meetings that there were rumors that a hack-and-leak operation would involve Hunter Biden.”
Basically the FBI is saying, in general, they have some intelligence that this kind of attack may happen, so be careful. It did not say to censor the info. It didn’t involve any threats. It wasn’t specifically about the laptop story.
And, in fact, as of earlier this week, we now have the FBI’s version of these events as well! That’s because of the somewhat silly lawsuit that Missouri and Louisiana filed against the Biden administration over Twitter’s decision to block the NY Post story. Just this week, Missouri released the deposition of FBI agent Elvis Chan, who is often found at the center of conspiracy theories regarding “government censorship.”
And Chan tells basically the same story with a few slight differences, mostly in terms of framing. Specifically, Chan says that he never told the companies to “expect” a hack and leak attack, but rather to be aware of the possibility, slightly contradicting Roth’s declaration:
A. Yeah, I don’t know what Mr. Roth meant, but what I’m letting you know is that from my recollection — I don’t believe we would have worded it so strongly to say that we expected there to be hacks. I would have worded it to say that there was the potential for hacks, and I believe that is how anyone from our side would have framed the comment.
And the reason I believe that is because I and the FBI, for that matter the U.S. intelligence community, was not aware of any successful hacks against political organizations or political campaigns.
Q. You don’t think that intelligence officials described it in the way that Mr. Roth does here in this sentence in the affidavit?
A. Yeah, I would not have — I do not believe that the intelligence community would have expected it. I said that they would have been concerned about the potential for it.
In the deposition, Chan repeats (many, many times) that he wouldn’t have used the language saying such an effort would be “expected” but that it was something to look out for.
He also doesn’t recall Hunter Biden’s name even coming up, though he does say they warned them to be on the lookout for discussions on “hot button” issues, and notes that the companies themselves would often ask about certain scenarios:
So from my recollection, the social media companies, who include Twitter, would regularly ask us, “Hey, what kind of content do you think the nation state actors, the Russians would post,” and then they would provide examples. Like, “Would it be X” or “Would it be Y” or “Would it be Z.” And then we — I and then the other FBI officials would say, “We believe that the Russians will take advantage of any hot-button issue.” And we — I do not remember us specifically saying “Hunter Biden” in any meeting with Twitter.
Later on he says:
A. Yeah, in my estimation, we never discussed Hunter Biden specifically with Twitter. And so the way I read that is that there are hack-and-leak operations, and then at the time — at the time I believe he flagged one of the potential current events that were happening ahead of the elections.
Q. You believe that he, Yoel Roth, flagged Hunter Biden in one of these meetings?
A. No. I believe — I don’t believe he flagged it during one of the meetings. I just think that — so I don’t know. I cannot read his mind, but my assessment is because I don’t remember discussing Hunter Biden at any of the meetings with Twitter, that we didn’t discuss it.
Q. So this would have been something that he would have just thought of as a hot-button issue on his own that happened in October.
He goes into great detail about meeting with tons of companies, but notes that mostly he’d talk to them about cybersecurity threats, not disinformation. He talks a bit about Russian disinformation campaigns, highlighting the well-known Internet Research Agency, which specialized in pushing divisive messaging on US social media platforms. However, he basically confirms that he never discussed the laptop with anyone at any of these companies, and the deposition makes it pretty clear that if anyone at the FBI had done so, it either would have been Chan himself or done with Chan’s knowledge.
As for the NY Post story, and the laptop itself, he notes he found out about it through the media, just like everyone else. And then he says that he didn’t talk with anyone at Twitter or Facebook about it, despite being their main contact on these kinds of issues.
Q. It’s your testimony that those news articles are the first time that you became aware that — you became aware of Hunter Biden’s laptop in any connection?
A. Yes. I don’t remember if it was a New York Post article or if it was another media outlet, but it was on multiple media outlets, and I can’t remember which article I read.
Q. And before that day, October 14th, 2020, were you aware — were you aware of Hunter Biden — had anyone ever mentioned Hunter Biden’s laptop to you?
Q. Do you know if anyone at Twitter reached out to anyone at the FBI to check or verify anything about the Hunter Biden story?
A. I am not aware of any communications between Yoel Roth and the FBI about this topic.
Q. Are you aware of any communications between anyone at Twitter and anyone in the federal government about the decision to suppress content relating to the Hunter Biden laptop story once the story had broken?
A. I am not aware of Mr. Roth’s discussions with any other federal agency. As I mentioned, I am not aware of any discussions with any FBI employees about this topic as well. But I only know who I know. So I don’t — he may have had these conversations, but I was not aware of it.
Q. You mentioned Mr. Roth. How about anyone else at Twitter, did anyone else at Twitter reach out, to your knowledge, to anyone else in the federal government?
A. So I can only answer for the FBI. To my knowledge, I am not aware of any Twitter employee reaching out to any FBI employee regarding this topic.
Q. How about Facebook, other than that meeting you referred to where an analyst asked the FBI to comment on the Hunter Biden investigation, are you aware of any communications between anyone at Facebook and anyone at the FBI related to the Hunter Biden laptop story?
Q. How about any other social media platform?
Q. How about Apple or Microsoft?
Basically, the exact same story emerges no matter how you look at it. The FBI, along with CISA, would have various meetings with internet companies mainly to warn them about cybersecurity (i.e., hacking) threats, but also generally mentioned the possibility of hack-and-leak attempts, with a general warning to be on the lookout for such things and that they might touch on “hot button” social and news topics. Nowhere is there any indication of pressure or attempts to tell the companies what to do, or how they should handle it. Just straight-up information sharing.
When you look at all three statements — Zuckerberg’s, Roth’s, and Chan’s — basically the same not-very-interesting story emerges. The US government held some general meetings, of the kind it has with lots of big companies, to warn them about various potential cybersecurity threats, and the issue of hack-and-leak campaigns came up as a general possibility, with no real specifics and no warnings.
And no one communicated with the companies directly about the NY Post story.
Given all that, I honestly don’t see how there’s any reasonable concern here. There’s certainly no clear 1st Amendment concern. There appears to be zero in the way of government involvement or pressure. There’s no coercion or even implied threats. There’s literally nothing at all (no matter how Missouri’s Attorney General completely misrepresents it).
Indeed, the only thing revealed so far that might be concerning regarding the 1st Amendment is that Taibbi claimed that the Trump administration allegedly made demands of Twitter.
If the Trump administration actually had sent requests to “remove” tweets (as Taibbi claims in an earlier tweet) that would most likely be a 1st Amendment issue. However, Taibbi reveals no such requests, which is really quite remarkable. It is also possible that Taibbi is overselling these claims, because this is part of a discussion we’ll get to in the next section, regarding Twitter’s flagging tools, which anyone (including you or me) can use to flag content for Twitter to review against the company’s terms of service. While there are certainly some concerns about the government’s use of such tools, unless there’s some sort of threat or coercion, and as long as Twitter is free to judge the content for itself and determine how to handle it under its own terms, there’s probably no 1st Amendment issue.
Indeed, some people have highlighted the fact that the government gets “special treatment” in having its flags reviewed. But, from people I’ve spoken to, that actually cuts against the “1st Amendment violation!” argument: many social media companies set up special systems for government agents not to enable “moar censorship!” but because they know they have to be extra vigilant in reviewing those requests, so as not to mistakenly take down content based on a government request.
So, sorry, so far there appears to be no government intrusion, and certainly no 1st Amendment violation.
4. The Biden campaign / Democrats demanded Twitter censor the NY Post! And that’s a 1st Amendment violation / treason / the crime of the century / etc.
So, again, the only way that there’s a 1st Amendment violation is if the government issued the demand. And in October of 2020, the Biden campaign and the Democratic National Committee… were not the government. The 1st Amendment does not restrict their ability, as private citizens (even while campaigning for public office) to flag content for Twitter to review against its policies. Hilariously, Elon Musk seems kinda confused about how time works. That tweet we screenshotted above about the “1st Amendment” violation is in response to an internal email that Taibbi revealed about what Taibbi (misleadingly) says are “requests from connected actors to delete tweets” followed by a screenshot of Twitter employees listing out some tweets saying “more to review from the Biden team” and someone responding “handled these.”
The next tweet showed a similar list of flagged tweets, this time sent over from the Democratic National Committee (as compared to the Biden campaign in the first one). This includes a tweet from the actor James Woods, which the Twitter team calls special attention to for being “high profile.”
Except, as a few enterprising folks discovered when looking up those tweets listed, they were… basically Hunter Biden nude images that were found on the laptop hard drive, which clearly violated Twitter’s terms of service (and likely violated multiple state laws regarding the sharing of nonconsensual nude images). This includes the James Woods tweet, which included a fake Biden campaign ad that showed a naked picture of Hunter Biden lying on a bed with his (only slightly blurred) penis quite visible. I’m not going to share a link to the image.
A good investigative reporter might have looked up what was in those tweets before posting a conspiratorial post implying that these were attempts by the campaign to remove the NY Post story or some other important information. But Taibbi did not. Nor has he commented on it since.
On top of that, while Taibbi claims that these were “requests to delete,” as the Twitter email quite clearly says, these are for Twitter to “review.” In other words, these were flagged for Twitter to review if they violate Twitter’s policies as the naked images clearly do.
So, there’s clearly no 1st Amendment concern here because, despite Musk’s understanding of the space-time continuum, the Biden administration was not in the White House in October of 2020. Second, even if we’re concerned about political campaigns asking for content to be deleted, flagging content for companies to review to see if they violate policies is not (in any way) the same as demanding it be deleted. Anyone can flag content. And then the company reviews it and makes a determination.
Even more importantly, nothing revealed so far suggests that the campaign had anything to say to Twitter regarding the NY Post story or any story regarding the laptop. Literally the only concerns raised were about the naked pictures.
Finally, as noted above, the only other Democrat mentioned so far in the Twitter files is Rep. Ro Khanna, who told Twitter it was wrong to stop the links to the NY Post article, and urged them to rescind the decision in the name of free speech. That does not sound like the Democrats secretly pressuring the company to block the story. It kinda sounds like the exact opposite.
So despite what everyone keeps yelling on Twitter (including Elon Musk) this still doesn’t appear to be evidence of “censorship” or even “suppression of the Hunter Biden laptop story.” It’s just focused on the nonconsensual sharing of Hunter’s naked images.
As a side note, Woods has now said he’s going to sue over this, though for the life of me I have no idea what sort of claim he thinks he has, or how it’s going to go over in court when he claims his rights were violated when he was unable to share Hunter’s dick pic.
5. But Jim Baker! He worked for the FBI! And he was in charge of the Twitter files! Clearly he’s covering up stuff!
Here we are ripping from the stupidity headlines. This one came out just last night as Taibbi added a “supplement” to the Twitter files, again seemingly confused about how basically anything works. According to Taibbi, in a very unclear and awkwardly worded thread, he and Bari Weiss (another opinion columnist with whom Musk has decided to share the files) were having some sort of “complication” in accessing the files. Taibbi claims that Twitter’s Deputy General Counsel, Jim Baker, was reviewing the files, and somehow this was a problem (he does not explain why or how, though there’s a lot of conjecture).
Baker is, in fact, the former General Counsel at the FBI. It made news when he was hired.
Baker was subject to a bunch of conspiracy theory stuff a few years ago regarding the FBI and some of the sillier theories regarding the Trump campaign, including the Steele Dossier and the even sillier “Alfa Bank” story (which had always been silly and lots of people, including us, had mocked when it came out).
But despite all that, there’s really little evidence that Baker has done anything particularly noteworthy here. The stuff about his actions while at the FBI is totally overblown partisan hackery. People talk about the so-called “criminal investigation” he faced for his work looking into Russian interference in the 2016 election, but that appears to be something mostly cooked up by extreme Trumpists in the House and appears to have gone nowhere. And, yes, he was a witness at the Michael Sussman trial, which was sorta connected to the Alfa Bank stuff, but his testimony supported John Durham, not Michael Sussman, in that he claimed that Sussman made a false statement to him, which the entire case hinged on (and, for what it’s worth, the trial ended in acquittal).
In other words, almost all of the FBI-related accusations against Baker are entirely “guilt by association” type claims, with nothing at all legitimate to back them up.
As for Twitter, we already highlighted Baker’s email that Taibbi revealed, which shows a normal, thoughtful, cautious discussion of a normal trust & safety debate, with nothing even remotely political.
The latest claims from Taibbi and Weiss also don’t make much sense. Elon Musk has told his company to hand over a bunch of internal documents to reporters. Any corporate lawyer would naturally do a fairly standard document review before doing so to make sure that they’re not handing over any private information or something else that might create legal issues for Musk. And since a large chunk of the legal team has left the company, it wouldn’t be all that surprising if the task ended up on Baker’s desk.
Now, you can argue (as Taibbi and others now imply) that there’s some massive conflict of interest here, but, uh… that’s not at all clear, and not really how conflict of interest works. And, again, there’s little indication that Baker had a major role here at all, beyond being one of many who weighed in on this matter (and did so in a perfectly reasonable manner).
Honestly, Baker not reviewing the documents first would have potentially put him in legal jeopardy for not doing the very basic function of his job in making sure the company he worked for didn’t put itself in serious legal jeopardy by revealing things that might create huge liabilities for Musk and the company.
Either way, late Tuesday, Musk announced that Baker had “exited” from the company, and when asked by a random Twitter user if he had been “asked to explain himself first” Musk claimed that Baker’s “explanation was… unconvincing.”
And perhaps there’s something more here that will be revealed by Weiss now that the shackles have been removed. But, based on what’s been stated so far, a perfectly plausible explanation is that Musk confronted Baker wanting to know why he was holding back the files and what his role was in “suppressing” the NY Post story. And Baker told him, truthfully, that his role was exactly as was revealed in the email (giving his general thoughts on the proper approach to handling the story) and that he was reviewing documents because that’s his job, and Musk got mad and fired him.
Somewhat incredibly, Musk also seemed to imply he only learned of Baker’s involvement on Sunday.
Some people are claiming that Musk is saying he only discovered that Baker worked for him on Sunday, which is possible but seems unlikely. Conspiracy theorists had pointed out Baker’s role at the company to Musk as far back as April. A more charitable explanation is that Musk only discovered that Baker was handling the document review on Sunday. And I guess that’s plausible but, again, really only reflects extremely poorly on Musk.
If he’s going to reveal internal documents to reporters, especially ones that Musk himself keeps claiming implicate him in potential criminal liability (yes, it happened before his time, but Musk purchased the liabilities of the company as well), it’s not just perfectly normal, but kinda necessary to have lawyers do some document review. Again, as a more charitable explanation, perhaps Musk just wanted a different lawyer to do the review, and my only answer there is maybe he shouldn’t have gotten rid of so many lawyers from the legal team. Might have helped.
So, look, there could be a possible issue here, but given how much has been totally misrepresented throughout this whole process, without any actual evidence to support the “Jim Baker mastermind” theory, it’s difficult to take it even remotely seriously when there’s a perfectly normal, non-nefarious explanation to how all of this went down.
The absence of evidence is not evidence that there’s a coverup. It might just be evidence that you’re prone to believing in unsubstantiated conspiracy theories, though.
6. Still, all this proved that Twitter is “illegally” biased towards Democrats!
Taibbi made a big deal out of the fact that Twitter employees overwhelmingly donated to Democrats in their political contributions, which is not exactly new or surprising. Musk commented on this as well, suggesting sarcastically it was proof of bias at Twitter, but left out that among the companies in the chart he was commenting on… was also Tesla, where over 90% of employee donations went to Democrats.
But, more importantly, it’s not surprising in the least. Employees of many companies lean left. Executives (who donate way more money) tend to lean right. I mean, you can look at a similar chart of executive donations that shows they overwhelmingly go to Republicans. Neither is illegal, or even a problem. It’s just reality.
And companies making editorial decisions are… in fact… allowed to have bias in their political viewpoints. I would bet that if you looked at donations by employees at the NY Post or Fox News, they would generally favor Republicans. Indeed, imagine what would happen if someone took over Fox News and suddenly started revealing (1) communications between Fox News execs and Republican politicians and campaigns and (2) internal editorial meeting notes regarding what to promote. Don’t you think it would be way more biased than what the Twitter files revealed?
Here’s the important point on that: Fox News’ clear bias is not illegal either. And, indeed, if Democrats in Congress held hearings on “Fox News’ bias” and demanded that its top executives appear and explain their editorial decision making in promoting GOP talking points, people should be outraged over the clear intimidation factor, which would obviously be problematic from a 1st Amendment angle. Yet I don’t expect people to get all that worked up about the same thing happening to Twitter, even though it’s actually the same issue.
Companies are allowed to be biased. But the amazing thing revealed in the Twitter files is just how little evidence there is that any bias was a part of the debate on how to handle this stuff. Everything appeared to be about perfectly reasonable business decisions.
And… that’s it. I fear that this story is going to live on for years and years and years. And the narrative full of nonsense is already taking shape. However, I like to work off of actual facts and evidence, rather than fever dreams and misinterpretations. And I hope that you’ll read this and start doing the same.
Update: So we had this post about SF supervisors approving the killer robots in their initial vote, and had a note at the end that it still needed one more round of approvals by the Supervisors… and apparently widespread protests last night convinced the board to drop the proposal! The original (mostly obsolete) post is below.
For a while, the city of San Francisco appeared to be on the cutting edge of civil rights. It responded to the exponential growth of the facial recognition tech industry by banning use of the unproven, often-biased tech by government agencies, including the San Francisco Police Department.
This progressive take on policing was short-lived. The 2019 ban is no longer making headlines. Instead, a move towards a West Coast police state dominates reporting about the city and its legislators, who have apparently decided that because crime exists, freedoms and liberties need to be back-burnered for the time being.
The first indication that things were sliding extremely off the rails in San Francisco was the city’s decision to give the SFPD on-demand access to live feeds from privately owned security cameras. This intrusion on personal property was justified by a blog post from Mayor London Breed, who claimed it only made sense because crimes were still happening. Apparently, “exigent circumstances” were no longer enough. To “protect public safety responsibly,” San Francisco cops needed to be able to ride piggyback on private feeds whenever they deemed it necessary to do so.
Because that just wasn’t totalitarian enough, city legislators proposed another increase in police powers. Killer robots, they said, seemingly unaware of the public’s everlasting opposition to government-deployed automatons armed with deadly weapons. Literally every dystopian bit of popular culture says this is a bad idea.
Supervisors in San Francisco voted Tuesday to give city police the ability to use potentially lethal, remote-controlled robots in emergency situations — following an emotionally charged debate that reflected divisions on the politically liberal board over support for law enforcement.
The vote was 8-3, with the majority agreeing to grant police the option despite strong objections from civil liberties and other police oversight groups.
Those aligning themselves with Terminators 0-1000 had their excuses.
Supervisor Connie Chan, a member of the committee that forwarded the proposal to the full board, said she understood concerns over use of force but that “according to state law, we are required to approve the use of these equipments. So here we are, and it’s definitely not a easy discussion.”
Wait a minute. State law says city supervisors must approve non-human deployment of deadly force? That seems… well, incredibly unlikely. This sounds like someone trying to wash their hands of the whole issue, but with the blood of city residents rather than anything that would actually make their hands less dirty.
The SFPD also “understands” the concerns of citizens. And it promises residents will not be shot to death by its city-approved killer robots. They’ll only be blown the fuck up.
The San Francisco Police Department said it does not have pre-armed robots and has no plans to arm robots with guns. But the department could deploy robots equipped with explosive charges “to contact, incapacitate, or disorient violent, armed, or dangerous suspect” when lives are at stake, SFPD spokesperson Allison Maxie said in a statement.
Huh. It looks like the SFPD misspelled “kill” at least three times in its statement. I’m not sure how you “contact” someone with an explosive, but when the Unabomber did it, it was a federal crime. “Incapacitate” is just another way to pronounce “kill.” And “disorient” only makes sense if it means the explosives will make someone incapable of orienting themselves… you know, like when they’re reduced to chunks of flesh that require a mop-up team using actual mops.
This is supposed to make people feel better about allowing armed killers with zero calculable feelings to roll up on crime scenes with a metal fistful of C-4.
Supervisors amended the proposal Tuesday to specify that officers could use robots only after using alternative force or de-escalation tactics, or concluding they would not be able to subdue the suspect through those alternative means. Only a limited number of high-ranking officers could authorize use of robots as a deadly force option.
Oh. OK. So the “amendment” shifts almost everything to the discretion of officers who will always claim they tried to de-escalate the hell out of the scene and got the shift commander on the horn before sending in a deadly blend of CPUs and explosives to “subdue” the suspect into a bloody paste incapable of alleging civil rights violations. If it’s found none of the things cops asserted prior to disintegrating a suspect are true, they’ll still be able to ask for immunity. At worst, they’ll be indemnified by the city — the same city that said killer robots are definitely something that’s needed as the city (despite some recent spikes in certain crime) enjoys historical lows in crime rates.
Here’s the thing: if you don’t want cops to get in trouble by deploying new deadly force methods without clear justification, the best thing you can do is NOT GIVE THEM THAT OPTION. Allowing cops to use remote-controlled bombs to, um, defuse situations will only result in a whole lot of post-facto forgiveness requests — pleas for mercy after they’ve already rendered someone incapable of being identified by their loved ones. There’s no way any police department in the nation can say it’s earned the trust to use something like this responsibly. Until officers can stop murdering people on the regular, the last thing they should be given access to is more ways to kill.
That said, this proposal isn’t the law just yet. The Supervisors need to vote on this again before it heads to Mayor Breed’s desk for signature.
iScanner turns your device into a powerful digital office and more. It makes high-quality scans of documents, educational materials, and to-do lists, and helps you edit, mark up, and share them. The scanner app can also count similar objects and solve math problems and equations. Scan anything you need using your iPhone or iPad. It’s on sale for $40.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Phew. As we’ve noted, there’s been a big push by some in Congress over the last couple of weeks to sneak in some really terrible bills, among them JCPA, KOSA, INFORM, and SHOP SAFE. We’ve covered the problems with each of these bills, and the very serious problem with trying to slip them into year-end “must pass” bills like the NDAA, often skipping over several levels of congressional process along the way.
Last night Congress came to an agreement on the NDAA and released a 4,400-page draft. And, somewhat amazingly, none of the bills we talked about ended up making it in! Much of this was due to people speaking out and calling their Senators and Representatives.
It’s a stupid, stupid process, but because of the nature of it, Congress will often try to slip in “non-controversial” bills just to get them over the finish line. All the talk and buzz over the last few weeks about these bills was really Congress “testing the waters” to see if they could sneak the bills through this way. People speaking up made it clear that including them would create controversy, and thus helped keep them out of this bill.
Of course, I’m sure there’s a lot of other garbage in the bill as well (there always is), but for the moment, the worst bills that we were most concerned with seem to have been kept out.
That said, this congressional session isn’t over yet, and there’s still the other big year end “must pass” bill: the omnibus spending bill. That one is also prone to adding questionable laws like these. Hopefully the controversy from this past week about them will help keep them out of the next bill as well… but we can’t be sure until the bill is finally released.
Anker, the popular maker of device chargers and the Eufy smart camera line, proudly proclaims on its website that user data will be stored locally, “never leaves the safety of your home,” footage only gets transmitted with “end-to-end” military-grade encryption, and that the company will only send that footage “straight to your phone.”
Yeah, about that.
Security researcher Paul Moore and a hacker named Wasabi have discovered that few, if any, of those claims are true, and that it’s possible to stream video from a Eufy camera, from across the country, with no encryption at all, simply by connecting to a unique address at Eufy’s cloud servers using the free VLC Media Player.
When we asked Anker point-blank to confirm or deny that, the company categorically denied it. “I can confirm that it is not possible to start a stream and watch live footage using a third-party player such as VLC,” Brett White, a senior PR manager at Anker, told me via email.
Except it’s not only possible, it’s been repeatedly proven (though there’s no evidence yet of this having been exploited in the wild, and it only works on cameras that are in an awakened state). An attacker really only needs a camera’s serial number, which can be read off the box or sometimes guessed. An attacker could also pull serial numbers from cameras donated to Goodwill or other thrift stores.
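The structural weakness here is that an identifier printed on the box is effectively acting as the only credential. As a rough illustration of why that fails (a generic back-of-the-envelope calculation, not Eufy’s actual serial format, which isn’t public), compare the search space of a truly random identifier against one where most characters are predictable:

```python
import math

def keyspace_bits(alphabet_size: int, random_chars: int) -> float:
    """Bits of entropy in an identifier whose `random_chars` characters
    are each drawn uniformly from `alphabet_size` possible symbols."""
    return random_chars * math.log2(alphabet_size)

# A fully random 16-character hex identifier would be effectively unguessable:
full = keyspace_bits(16, 16)   # 64 bits

# But if a serial encodes model, factory, and date, only a few characters
# actually vary -- say 4 -- leaving a space an attacker can simply enumerate:
weak = keyspace_bits(16, 4)    # 16 bits: only 2**16 = 65,536 candidates

print(full, weak, 2 ** int(weak))
```

The specific numbers are hypothetical; the point is that anything readable off a product box or enumerable in a small search space can’t serve as an access-control secret, which is why researchers treat “you need the serial number” as no protection at all.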
The discovery comes after a decade of “smart” hardware device makers having a fairly abysmal track record on security and privacy despite websites that routinely claim the opposite. From TVs that fail to encrypt your home conversations to refrigerators that leak your email credentials, the sector is rife with problems that somehow still don’t get the kind of scrutiny they deserve.
Despite Anker being a China-based company, you won’t hear any of the same national security hyperventilation over these kinds of issues, routinely found in this and other Chinese-made “smart” home technologies. Those kinds of freak-outs are, apparently, reserved exclusively for social media services like TikTok, and only when such complaints can get you on television.