Musk’s fans have been (hilariously, frankly) trying to defend these decisions by (1) claiming this is somehow “different” because it’s about “safety” — an argument we cleanly debunked this morning — and (2) saying it’s okay because the “liberal” media are now screaming about censorship and free speech, so everyone is supposedly switching positions and it’s all hilarious. Except I haven’t seen much of that supposed “switch.” Lots of people are pointing out that the stated reasons for these suspensions have been silly. Many more are highlighting how hypocritical Musk’s statements and decisions are. But most people readily recognize that he has every right to make dumb and hypocritical decisions.
There are a few, however, who do seem to be taking it further. And they should stop, because it’s nonsense. First up we have the EU, where the VP of the European Commission, Vera Jourova, is warning Musk that there will be consequences.
That’s her saying:
News about arbitrary suspension of journalists on Twitter is worrying. EU’s Digital Services Act requires respect of media freedom and fundamental rights. This is reinforced under our #MediaFreedomAct. @elonmusk should be aware of that. There are red lines. And sanctions, soon.
But being banned from private property doesn’t impact “media freedom or fundamental rights.” And it’s silly for Jourova to claim otherwise. No one has a “right” to be on Twitter. And even if the journalism bans are pathetic and silly (and transparently vindictive and petty) that doesn’t mean he’s violated anyone’s rights.
Some in the US are making similar claims, even though the 1st Amendment (backed up by Section 230) clearly protects Musk’s ability to ban whoever he wants for any reason whatsoever. Yet Jason Kint, the CEO of Digital Content Next, a trade organization of “digital media companies” — but which, in practice, often seems notably aligned with the desires of Rupert Murdoch’s news organizations — demanded Congressional hearings if Musk did not “fix this within an hour” (referencing the journalist suspensions).
But that’s silly. Again, his decisions are protected by the 1st Amendment. It’s his property. He can kick anyone out. Just like Fox News can choose not to put anyone on air who would call bullshit on “the big lie” or Rupert Murdoch. That’s their editorial freedom.
And I’d bet that if Congress hauled Lachlan Murdoch in for a hearing to demand he explain to them his editorial decision making practices for Fox News, Kint would be highlighting the massive 1st Amendment-connected chilling effects this would have on any of his member news organizations.
We can mock Musk’s decisions. We can highlight how nonsensical they are. We can pick apart his excuses and the ramblings of his fans and point out how inconsistent they are. But Musk has every right to do this, and that’s exactly how it should be. Getting government involved with editorial decisions leads down a dangerous road.
We’ve written a bunch of posts concerning KOSA, the Kids Online Safety Act, which is one of those moral panic kinds of bills that politicians and the media love to get behind, without really understanding what they mean, or the damage they’d do. We’ve covered how it will lead to greater surveillance of children (which doesn’t seem likely to make them safer), how the vague language in the bill will put kids at greater risk, how the “parental tools” provision will be used to harm children, and a variety of other problems with the bill as well. There’s a reason why over 90 different organizations asked Congress not to slip it into a year-end must pass bill.
And while it didn’t make it into the NDAA bill, there are still some efforts to put it in the year end omnibus spending bill. Indeed, the sponsors of the bill quietly released a new version a few days ago that actually does fix some of the most egregious problems of the original. But… it’s still a mess, as TechFreedom’s Ari Cohn explained in a thread on Mastodon.
As his thread notes, there are still concerns about knowing which users are teenagers. The original bill would have effectively mandated age verification, which comes with massive privacy concerns. The new version changes it to cases where a site knows or should know that a user is under 18. But what constitutes knowledge in that case, and what trips the standard for “should know?” The end result will still be a strong incentive for dodgy age verification, just so sites don’t need to go through the litigation hassle of proving that they didn’t know, or shouldn’t have known, the age of their users.
But, the much bigger problem is that the bill still has a “duty of care” component. This was core to the original bill so it’s no surprise that it remains in place. As we’ve discussed for years, the “duty of care” is a “friendly sounding” way of violating the 1st Amendment. In this context, the bill requires sites to magically know if a kid is going to come to some harm from accessing some sort of content on their website. And, given the litigious nature of the US, as soon as any harm comes to anyone under the age of 18, websites will get sued (no matter how loosely they were connected to the actual harm), and they will have to litigate over and over again whether or not they met their “duty of care.”
The end result, most likely, is that websites basically start blocking any kind of controversial content, no matter how legal — and we’re right back to the issue of Congress trying to turn the internet into Disneyland, which is not healthy, takes away both parental and child autonomy, and does not prepare people for the real world.
The new KOSA tries to claim this won’t happen, because it says that nothing in the bill should be “construed to require a covered platform to prevent or preclude any minor from deliberately and independently search for, or specifically requesting, content.” But that won’t make any difference at all if, under the duty of care, minors find that content and can later tie some future harm back to it. So the real-world effect of the law is absolutely going to be to stifle that legal content.
Even worse, lawyers will always stretch things as far as possible to make it possible to sue big pockets, even if they’re very distant from the actual harm. And, as Cohn notes, the bill is so vaguely worded that the “harm” that can be sued over doesn’t even have to be connected to a minor accessing that content on the site. Rather… under the law as written, it appears that if there are minors on the site and, separately, some harm occurs related to some content… KOSA is triggered. So even as the bill is supposedly about protecting children, as written, it can be used if it’s adults who are harmed in some manner, loosely tied to content on the site.
There’s a lot more in Ari’s thread that shows just how dangerous this bill is, even as its backers pretend they’ve fixed all the problems. And yet, Senators Richard Blumenthal and Marsha Blackburn are pleading with their colleagues to put it into the must pass omnibus spending bill.
This is a bill that will give them headlines, allowing them to pretend they’re helping kids, but which will actually do tremendous harm to kids, parents, free speech and the internet. Sneaking it into a must-pass bill suggests, yet again, that they know the bill is too weak to survive the normal process. The rest of Congress should not allow it to pass in this current state.
The US Supreme Court has a big year ahead with lots of weighty matters to consider in 2023. But the seriousness of their job doesn’t mean we can’t celebrate each justice’s special day! If you would like to know when to fill your heart with warm birthday wishes for your favorite justice, here are all their birthdays in this handy convenient form.
First up next year is the Chief Justice, with Chief Justice Roberts celebrating his birthday on January 27. Being born in 1955, he will turn 68. Then, hot on his heels, the very next day is Justice Barrett’s birthday, with her, a 1972 baby, turning 51. It falls on a Saturday, though, so perhaps there will be cupcakes in chambers to celebrate both on Friday?
Then, while we are celebrating Lincoln’s birthday, it will also be time to celebrate Justice Kavanaugh’s; having been born in 1965, he will be turning 58. After that, April Fool’s! Because back in 1950 Justice Alito was born on April 1, and now he will be 73.
At the end of the month we’ll fete the 63rd birthday of Justice Kagan, honoring her birth on April 28, 1960. After that, the justices will all be hard at work, penning many of their most important decisions from the term, hopefully wrapping up in time for Justice Thomas and Justice Sotomayor to enjoy their 75th and 69th birthdays without stress on June 23 and June 25, respectively, with him having been born in 1948 and her in 1954 (again, one falls on the weekend, so the best day for cupcakes definitely seems to be that Friday).
But it’s possible that Justices Gorsuch and Jackson will have to go without cupcakes at court, what with their 56th and 53rd birthdays falling during the summer, before the justices reconvene to hear cases for the next term. Justice Gorsuch celebrates his birth on August 29, 1967, and Justice Jackson celebrates hers on September 14, 1970.
In any case, if you would like to know the best days to exude festive birthday vibes towards the Supreme Court, now you do. But you’d better copy down this information now, because Congress is poised to make this post illegal.
The problem is the DAJSPA (Daniel Anderl Judicial Security and Privacy Act), which is back and currently glued onto this year’s must-pass NDAA bill, where Congress likes to put lots of bad bills that would never pass scrutiny if their colleagues actually had a chance to think about and separately vote on them. Because if they did stop to think about this bill, they might then notice the glaring First Amendment problem in how it prohibits the sharing of truthful and otherwise lawful information that the public is entitled to know, and may even need to know. (As far as this post is concerned, see Sec. 5933(1)(A), which makes judges’ information subject to a prohibition against sharing, (4)(A) making the bill reach Supreme Court justices, and (2)(A)(viii) covering the full date of birth as data no one will be allowed to share. See also Sec. 5934(d)(1)(B)(i) with the basic prohibition against sharing this information, and (f) supplying penalties, if one does anyway, although note that some of this language may be in flux, but the core prohibition so far is not.)
It is true, of course, that the motivation behind this bill comes from the genuine and serious concern of wanting to make sure our federal judges and their families are safe. Our constitutional order depends on them being able to dispense justice without fear of harm, and there is absolutely no quarrel with Congress generally wanting to put policy measures in place to make sure the judiciary can’t be disrupted by threats of violence. The issue is with the specific measure chosen, which is neither constitutional nor effective.
This law attempts to forbid the sharing of publicly available, truthful information, which is not a prohibition the First Amendment can tolerate. It is anathema to the Constitution to restrict discussion of public information, including and especially about government officials. Yet that’s what this bill does: hobble civilian oversight over public officials by taking away access to the information needed to do it. And it would do so without delivering any measurable increase in safety, because security via obscurity only creates the illusion of security – those determined to do harm will still be able to discover what they need to do it. (Including because this bill only makes it illegal to share this information online, which is also an unconstitutional distinction between online and offline speech, which is all supposed to be protected. Although it’s not like banning the sharing of this information in any form would make this bill any better.)
Beyond that, some of our judges and justices are among the most significant public figures in American life, with tremendous influence and power over millions of lives. Surely the idea that a law could prevent us from tweeting or tooting about their birthdays in a way that gives away their ages (which is, at minimum, relevant for senior status) should serve as a pretty clear indicator of the significant problems with this bill. Because if legislation stands to produce this sort of absurd by-product, then it’s inevitably producing a lot more ill-considered consequences we also can’t afford, as any law that tries to prevent the sharing of truthful information always will.
To forestall immediate disaster this bill, at minimum, needs to be removed from any must-pass legislation that Congress intends to get through without further consideration before the end of the year. But it is doubtful that even any further consideration will ever be able to produce language that can successfully avoid the huge problems it portends, because at its very core what this bill intends to do is such a direct affront to what the First Amendment protects, and why.
Hilarious Update: A few minutes ago, the @ElonJet account returned to Twitter, but that came about 20 minutes after Elon himself justified the ban, saying it violated the company’s (new) doxxing policies (see the original update to this story at the bottom). Hilarious Update 2: And, a couple hours later, the account was suspended a second time. Anyway, in the meantime, the original post is here:
Earlier this week, we wrote about how Elon Musk had secretly applied the strongest visibility filter (what some people insist on calling “shadowbanning”) to the ElonJet account on Twitter (which automatically noted where Elon’s private plane was flying), which he had promised not to ban due to his apparent “commitment to free speech.”
The guy behind ElonJet had revealed, based on word from a Twitter insider whistleblower, that the new head of trust and safety, Ella Irwin, had demanded the heaviest “visibility filter” be applied to ElonJet (the “VF” in the leaked message stands for “visibility filter”).
On Wednesday morning, despite all of this the @ElonJet account was suspended:
Later in the day, the guy behind it, Jack Sweeney, announced that all of his accounts, including his personal account, had been banned as well:
Over on Mastodon, Sweeney shared the message Twitter sent him to explain why he was banned. It… seems like made up bullshit.
If you can’t see that it says:
Your account, JxckSweeney has been suspended for violating the Twitter Rules.
Specifically, for:
Violating our rules against platform manipulation and spam.
Now, as with any content moderation decision, we do like to keep reminding people that decisions are often made for reasons that aren’t entirely public or clear. So perhaps there’s some other explanation for this. Or it was a mistake. Or somehow Elon’s big new plans to stop spam (which banned tons of legitimate accounts earlier this week) are responsible for this.
But, on the whole, it once again raises some pretty obvious concerns about how the “new management” is handling these kinds of things — and the pure hypocrisy of Elon.
Indeed, the fact that anyone linking to the ElonJet account on Facebook or Instagram is being warned that it “may be unsafe” indicates that this is deliberate suppression:
Of course, all this has done in practice is call way, way, way more attention to Elon’s hypocrisy on this topic. Tons of major media publications are covering the story. And even the “Community Notes” feature, which Elon keeps talking up (and renamed from the former “Birdwatch”), has now attached a note to Elon’s old tweet promising not to remove the account:
So, hey, when do we get the “Twitter Files” with the internal discussion on this removal? C’mon Matt Taibbi and Bari Weiss. Do some real reporting for once.
Update: So… apparently yesterday, Elon’s Twitter quietly updated its policy against sharing private info to explicitly block sharing live location info:
It does not appear this policy is particularly well thought out, as it would appear to ban any tweet of anyone’s live location information. So, reporting on the location of, say, the President. Or posting a selfie with some friends. Or just noting that you saw so-and-so at this or that location. All now violate Twitter’s policies. Oh yeah, and so does Twitter’s built-in functionality that allows users to post their own location.
In the old days, at least, policy changes like this would have a team of people thinking through the consequences. But these days, it seems to just be based on Elon’s whims and little else.
Constitutional scholar Steve Vladeck has a short, but useful article over at MSNBC highlighting that Musk’s “misunderstanding of free speech is a problem for us all.” And it raises an issue that deserves some discussion. Because, before this, I’d basically just thought his misunderstanding of free speech mostly just meant that he was making a mess of things for himself. But I’m now realizing it’s a much bigger problem for everyone else.
Much of the article just covers ground that we’ve covered before, on how little Musk actually understands about free speech. And only at the end does it get into why this is a larger problem:
It’s not exactly news that an eccentric billionaire with no formal legal training has no idea how the First Amendment works. But there are two problems in this case that make it newsworthy. First, Musk himself has claimed that one of his goals for Twitter is to increase “free speech” on the platform. If his definition of “free speech” is radically different from the current state of general public (and constitutional) discourse, it sure would behoove him to explain how — and why. And second, because of his impact and influence, Musk’s patent misunderstandings of what the First Amendment does and doesn’t protect (and to whom it does and doesn’t apply), and the actions he wrongly takes (or doesn’t take) in response, perpetuate misunderstandings among those who believe he has some special claim to expertise on the subject. As the Twitter Files document dumps continue, it’s clear that Musk not only lacks such expertise, he also seems wholly uninterested in developing one.
I think this is mostly correct. The fact that Musk has such a completely fucked up understanding of free speech means that no one can really say what he’s planning to do with the platform he now owns. To date, he’s shown no real inclination to define free speech.
But, I’d argue the problems go beyond what Vladeck has laid out here. As I’ve detailed in the past, Twitter was one of the strongest defenders of internet free speech in courts and in discussions with regulators and policymakers. The company was more willing to stand on principle than almost any other large internet company I can think of. The company would fight on for free speech where others caved.
It is not at all clear if Musk’s very confused definition of free speech still includes any of that.
So it’s not just that Musk is confused and contributing to the miseducation of the public, but that those of us who have spent decades fighting for actual free speech online have lost a very important ally in that fight.
A deadly fire in an Urumqi apartment complex has led to something rarely seen in China: massive protests across the nation against the Chinese government’s actually draconian COVID restrictions. Most of the city of Urumqi is on lockdown, with residents banned from leaving their homes. These restrictions may have contributed to the death toll. Witnesses (and one video) claimed lockdown barriers prevented fire trucks from getting to the scene of the fire.
Elsewhere in the country, people have been forced to sleep at work due to quarantine conditions. Others have been bused from their homes to quarantine facilities. Meanwhile, COVID numbers continue to climb, suggesting the recently instituted “zero COVID” policies aren’t actually addressing the problem.
Starting in Urumqi, protests soon spread across the country. Faced with open expressions of anger, the Chinese government is reacting the way it always reacts when it is faced with dissent: by increasing the footprint of its jackboot.
Internet and phone use is heavily regulated (and heavily surveilled) in China. Whatever was already working is being intensified. And whatever hasn’t been applied yet is being put into motion. No longer will it take creating or sharing content the government doesn’t like to earn police visits, criminal charges, or both. Now, as CNN reports, it will only take a nearly passive sign of approval directed at content the Chinese government dislikes to attract the government’s negative attention.
Internet users in China will soon be held liable for liking posts deemed illegal or harmful, sparking fears that the world’s second largest economy plans to control social media like never before.
China’s internet watchdog is stepping up its regulation of cyberspace as authorities intensify their crackdown on online dissent amid growing public anger against the country’s stringent Covid restrictions.
The new rules come into force from Dec. 15, as part of a new set of guidelines published by the Cyberspace Administration of China (CAC) earlier this month.
The Chinese government would prefer an airtight stranglehold, and this is just some expected tightening of its grip. As the government has certainly noticed, the more it tries to censor, the more creative citizens are when circumventing the efforts. Rotated videos, screenshots of content, coded language, unexpected communication platforms… all of these help keep citizens one step ahead of the censors.
So, the rules continue to roll out. And they get more extreme with every iteration.
The regulation is an updated version of one previously published in 2017. For the first time, it states that “likes” of public posts must be regulated, along with other types of comments. Public accounts must also actively vet every comment under their posts.
However, the rules didn’t elaborate on what kind of content would be deemed illegal or harmful.
This vagueness is a feature, not a bug. You’ll know you’ve violated the new rules when uniformed officers swing by the house to inform you that you’ve violated them. The solution is to stop liking other people’s posts: winning by not playing.
But there’s an upside to China’s ever-expanding censorship programs, especially when they’re trailing ever-expanding dissent. Even China’s massive surveillance apparatus can’t possibly hope to catch them all.
However, analysts also questioned how practical it would be to carry out the newest rules, given that public anger is widespread and strict enforcement of these censorship requirements would consume significant resources.
“It is almost impossible to stop the spread of protest activities as the dissatisfaction continues to spread. The angry people can come up with all sorts of ways to communicate and express their feelings,” Cheng said.
The Chinese government has the power. But it also has well over a billion people to keep an eye on. Dissent will never be completely silenced. And as long as that’s true, there’s still hope for the nation.
It’s beginning to get quite comical watching various rightwing folks and groups bang on constantly about how pro-free-speech they are, often talking about situations that have nothing to do with free speech, only to engage in anti-speech behavior themselves. The new owner of a mountain of debt that is called Twitter has become something of an emblem for this sort of thing, shouting into the void about free speech only to turn on his heel the moment he gets an ounce of criticism.
The point is that these people are largely nothing but hypocrites at best or, at worst, folks with no understanding of what free speech is, does, and means. And because this sort of thing is contagious, we’ve begun to see this hypocrisy propagate throughout a certain wing of the political spectrum in America.
According to this email, the person who issued the copyright strike is from Censored TV, a right-leaning video platform that recently interviewed Kanye West. The specific claim came from Censored TV CTO Ray Aguilar, who says he struck Hasan’s Twitch account because one of Censored TV’s videos was featured in his latest stream.
Of course, Hasan has criticized the decision via his personal Twitch account, as he believes the copyright strike was issued to silence criticism of the far right. In a tweet earlier today, Hasan claims that Aguilar is using Twitch’s copyright system to “takedown someone covering and criticizing their silly ideas” and labels the move “absolutely pathetic.”
So, what’s going on here? Well, Hasan hosted a Twitch stream in which he covered a variety of topics, most of them political in nature. As part of the stream, he played the Ye interview from Censored TV — oh, the goddamned irony — and then built upon it with commentary. And if that sounds like something very specific to you, that something would be fair use.
Do the Censored TV folks know about fair use? They certainly should. And, whatever your thoughts on that or your political leanings, stuff like this should make your blood boil.
Laughing about getting perfectly protected speech chopped down with a fraudulent copyright claim is not exactly the commitment to free speech that’s been espoused.
And so you should keep this sort of thing in mind when you encounter anyone, regardless of political leanings, claiming to want something dumb, horrible, or otherwise undesirable on private internet platforms in the name of free speech. Some of us actually keep to that commitment.
In writing online about a case about online expression, I’ll open with a reference to some more online expression: the popular meme featuring the caption, “The worst person you know just made a great point.” And that’s where we are with this case just heard by the US Supreme Court: 303 Creative v. Elenis, where a homophobic website designer does not want to be forced by Colorado law to have to make websites for same sex weddings.
And she should not be forced to, because no matter what one thinks of her, or her views on gay marriage, the First Amendment should prevent anyone from ever being forced by the government to make a website they don’t want to make. It should prohibit such compelled speech regardless of the views implicated, their political popularity, or their social, moral, or ethical merit. And by “should” it is not merely a question of what the First Amendment ought to do, but, as discussed below, what the Supreme Court has already found it to do, and therefore should continue to find it to do.
So hers should not be a hard case to resolve in her favor. Unfortunately, as oral argument revealed, the heightened emotions surrounding her specific views, both for and against, are making it a hard case. Because it is being treated as a referendum on gay rights, the First Amendment analysis is ending up unduly complicated, entangled in questions about other constitutional rights, including several others found within the First Amendment itself, even though they only serve to obscure the otherwise obvious constitutional problem with the Colorado law. And one danger with this case is that, if the justices don’t tease apart the different analytical threads successfully, they could do some real doctrinal damage to all the rights the First Amendment protects, especially if they are motivated, as they adjudicate her constitutional challenge, to craft a result that tries to vindicate (or repudiate) the substance of this particular web designer’s views.
For instance, the subject of gay marriage often tends to invoke the First Amendment right of freedom of religion. But this right would be a bad basis upon which to resolve this case. Part of the reason it would be a bad idea is because the Supreme Court has, of late, already upended long-standing establishment clause doctrine to effectively preference certain religious views over others. A ruling in her favor for religious freedom reasons would continue that practice and produce an unstable result (and, at this point, so would a ruling against her, also on this basis). It would also be unnecessary to involve freedom of religion because there are far more compelling constitutional reasons to find in her favor. The only thing religion has to do with this case is that it turns out to be what informs her expressive views. But none of her views, or what informs them, is actually relevant. Because any law that targets these particular views (anti-gay marriage) could just as easily target any other views (for example, pro-gay marriage), regardless of what motivation informed them, religious or otherwise. And even if the court were to say that laws can’t target views informed by religion, it would still be a problem if laws could target views informed by any other reason. Even framed just in terms of online expression, either people are free to choose what websites to make, or they are not. If we instead hinge that freedom to choose what websites to make, or what views to express within them, on why people would make those expressive choices, then the constitutional right for people to choose what they want to say, whether through their websites or via any other expressive means, will already have been lost.
Meanwhile, another analytical red herring for the Court to resist here is the First Amendment right of freedom of association. It may however at first seem relevant because this right is often implicated where there are questions of discrimination, particularly in the offering of goods and services, because the freedom of association, which essentially is the right to discriminate, is often in tension with the right not to be discriminated against. The Court spent much of the oral argument exploring whether a professional web designer could refuse to provide web design services, but this line of analysis, important though it is in other contexts, is an irrelevant distraction here. One key reason it doesn’t belong here is because there appears to be evidence that the web designer in fact does provide design services to gay clientele; the issue in this case is only that she does not want to code websites celebrating gay weddings she does not wish to celebrate (or, indeed, potentially any even heterosexual wedding she also does not wish to celebrate), and regardless of the sexual orientation (or race, or religion, etc.) of the party contracting for her services. In other words, this case is only about the Colorado law trying to require her to create certain online expression she doesn’t want to create – it has nothing to do with her not wanting to serve any clientele she wasn’t inclined to serve, at which point contemplating the bounds of her right of association would be more salient. But here the issue is not about whom she wants to associate with but what she wants to say for them. While the Supreme Court might have the appetite to address now how public accommodations law needs to behave in the shadow of this First Amendment associative right, especially with respect to the provision of personal services, it is important for it to resist that temptation here. It is only the issue of mandated expression that is worth the Court’s attention and ripe for adjudication.
And it is indeed ripe: while some have criticized the pre-enforcement challenge of the Colorado law, because ordinarily a plaintiff can only challenge a law that has already caused an injury, standing doctrine has long recognized how constitutionally untenable it would be to allow laws to create expressive injury and then have the courts say “oops.” When it comes to free expression, pre-enforcement challenges are often necessary and therefore permitted. While some justices fretted at oral argument that the pre-enforcement challenge left only a sparse record for review, it is independently important for the Supreme Court not to be deterred by the posture of this case and to reaffirm the ability to bring pre-enforcement challenges of laws that threaten free expression.
Especially because, at its core, this case is only a speech case. Ultimately the constitutional admonition to “make no law… abridging the freedom of speech” is the only part of the First Amendment that should be operative here, to prohibit the Colorado law and protect the speech rights of any web developer anywhere. Whether the Court zeroes in on it, however, will depend on whether it can recognize the speech issues implicated by website design and how expressive the act of coding a website is, both in authoring it as a vehicle for conveying certain expression and in literally writing the code that conveys it. And we all need to hope that it does, because one of the other significant dangers with this case is that if the Court does not acknowledge the speech impingements at its heart, or sees coding a website as somehow a less expressive activity than, say, typesetting a newspaper, it would leave online expression much less constitutionally protected than offline expression (including for the online version of the newspaper). Per the Court in Reno v. ACLU, online expression is not supposed to be less protected. But unless the Court sees the Colorado law as the equivalent of the old Florida law that once tried to force newspapers to run op-eds they did not want to run – which the Supreme Court in Miami Herald v. Tornillo found unconstitutional – it will be. (And perhaps worse, such a ruling could turn Tornillo on its head and open the door to a state law forcing the Miami Herald to run op-eds celebrating gay marriage. And if a state could do that, it could potentially also pass laws requiring op-eds condemning gay marriage.)
Thus no matter how worthy the pro-marriage view, the bottom line in this case is that it is a view the law would be trying to force people to express, and it is that forcing that should be anathema to the First Amendment’s free expression clause. For good reason, too, because a less robust First Amendment doesn’t just hurt the views we should detest; it also leaves the best views vulnerable. It is why the Supreme Court was right to find the First Amendment protected the right of Nazis to march in Skokie despite their odious views, and it is why it should also find the First Amendment protects the right of the web designer to maintain her bigotry here. Even when the First Amendment is found in a particular case to protect the expression of a hateful view, what it is actually doing is ensuring that everyone remains equally free to stand against it. After all, this case isn’t just about this web designer and her intolerant views; it is about making sure that no one ever need fear being forced by law to express any message with which they disagree, including hateful ones they don’t share.
And so we cannot let our opinion of the litigants who have brought these constitutional concerns before the Court outweigh our concern for the constitutional principles at stake. Nor can the Court itself, regardless of whether any particular justice favors or disfavors the web designer’s particular animus. Instead the Court needs to keep its focus on how laws like Colorado’s impinge on the rights of free expression that everyone depends on, no matter what their views.
Hello! Someone has referred you to this post because you’ve said something quite wrong about Twitter and how it handled something to do with Hunter Biden’s laptop. If you’re new here, you may not know that I’ve written a similar post for people who are wrong about Section 230. If you’re being wrong about Twitter and the Hunter Biden laptop, there’s a decent chance that you’re also wrong about Section 230, so you might want to read that too! Also, these posts are using a format blatantly swiped from lawyer Ken “Popehat” White, who wrote one about the 1st Amendment. Honestly, you should probably read that one too, because there’s some overlap.
Now, to be clear, I’ve explained many times before, in other posts, why people who freaked out about how Twitter handled the Hunter Biden laptop story are getting confused, but it’s usually been a bit buried. I had already started a version of this post last week, since people keep bringing up Twitter and the laptop, but then on Friday, Elon (sorta) helped me out by giving a bunch of documents to reporter Matt Taibbi.
So, let’s review some basics before we respond to the various wrong statements people have been making. Since 2016, there have been concerns raised about how foreign nation states might seek to interfere with elections, often via the release of hacked or faked materials. It’s no secret that websites have been warned to be on the lookout for such content in the lead-up to the election — not with demands to suppress it, but just to consider how to handle it.
Partly in response to that, social media companies put in place various policies on how they were going to handle such material. Facebook set up a policy to limit certain content from trending in its algorithm until it had been reviewed by fact-checkers. Twitter put in place a “hacked materials” policy, which forbade the sharing of leaked or hacked materials. There were — clearly! — some potential issues with that policy. In fact, in September of 2020 (a month before the NY Post story) we highlighted the problems of this very policy, including somewhat presciently noting the fear that it would be used to block the sharing of content in the public interest and could be used against journalistic organizations (indeed, that case study highlights how the policy was enforced to ban DDOSecrets for leaking police chat logs).
The morning the NY Post story came out there was a lot of concern about the validity of the story. Other news organizations, including Fox News, had refused to touch it. NY Post reporters refused to put their name on it. There were other oddities, including the provenance of the hard drive data, which apparently had been in Rudy Giuliani’s hands for months. There were concerns about how the data was presented (specifically how the emails were converted into images and PDFs, losing their header info and metadata).
The fact that, much later on, many elements of the laptop’s history and provenance were confirmed as legitimate (with some open questions) is important, but it does not change the simple fact that the morning the NY Post story came out, the story’s legitimacy was extremely unclear (in either direction) to everyone except extreme partisans in both camps.
Based on that, both Twitter and Facebook reacted somewhat quickly. Twitter implemented its hacked materials policy in exactly the manner that we had warned might happen a month earlier: blocking the sharing of the NY Post link. Facebook implemented other protocols, “reducing its distribution” until it had gone through a fact check. Facebook didn’t ban the sharing of the link (like Twitter did), but rather limited the ability for it to “trend” and get recommended by the algorithm until fact checkers had reviewed it.
To be clear, the decision by Twitter to do this was, in our estimation, pretty stupid. It was exactly what we had warned about just a month earlier regarding this exact policy. But this is the nature of trust & safety. People need to make very rapid decisions with very incomplete information. That’s why I’ve argued ever since then that while the policy was stupid, it was no giant scandal that it happened, and given everything, it was not a stretch to understand how it played out.
Also, importantly, the very next day Twitter realized it fucked up, admitted as much publicly, and changed the hacked materials policy, saying that it would no longer block links to news sources based on this policy (though it might add a label to such stories). The next month, Jack Dorsey, in testifying before Congress, was pretty transparent about how all of this went down.
All of this seemed pretty typical for any kind of trust & safety operation. As I’ve explained for years, mistakes in content moderation (especially at scale) are inevitable. And, often, the biggest reason for those mistakes is the lack of context. That was certainly true here.
Yet, for some reason, the story has persisted for years now that Twitter did something nefarious, engaging in election interference that was possibly at the behest of “the deep state” or the Biden campaign. For years, as I’ve reported on this, I’ve noted that there was literally zero evidence to back any of that up. So, my ears certainly perked up last Friday when Elon Musk said that he was about to reveal “what really happened with the Hunter Biden story suppression.”
Certainly, if there was evidence of something nefarious behind closed doors, that would be important and worth covering. And if it turned out that every single one of the dozens of Twitter employees I’ve spoken with over the past few years had lied about what happened, well, that would also be useful for me to know.
And then Taibbi revealed… basically nothing of interest. He revealed a few internal communications that… simply confirmed everything that was already public in statements made by Twitter, Jack Dorsey’s Congressional testimony, and in declarations made as part of a Federal Elections Commission investigation into Twitter’s actions. There were general concerns about foreign state influence campaigns, including “hack and leak” in the lead up to the election, and there were questions about the provenance of this particular data, so Twitter made a quick (cautious) judgment call and implemented a (bad) policy. Then it admitted it fucked up and changed things a day later. That’s… basically it.
And, yet, the story has persisted over and over and over again. Incredibly, even after the details of Taibbi’s Twitter thread revealed nothing new, many people started pretending that it had revealed something major, with even Elon Musk insisting that this was proof of some massive 1st Amendment violation:
Now, apparently more files are going to be published, so something may change, but so far it’s been a whole lot of utter nonsense. But when I say that both here on Techdirt and on Twitter, I keep seeing a few very, very wrong arguments being made. So, let’s get to the debunking:
1. If you said Twitter’s decision to block links to the NY Post was election interference…
You’re wrong. Very much so. First off, there was, in fact, a complaint to the FEC about this very point, and the FEC investigated and found no election interference at all. It didn’t even find evidence of it being an “in-kind” contribution. It found no evidence that Twitter engaged in politically motivated decision making, but rather handled this in a non-partisan manner consistent with its business objectives:
Twitter acknowledges that, following the October 2020 publication of the New York Post articles at issue, Twitter blocked users from sharing links to the articles. But Twitter states that this was because its Site Integrity Team assessed that the New York Post articles likely contained hacked and personal information, the sharing of which violated both Twitter’s Distribution of Hacked Materials and Private Information Policies. Twitter points out that although sharing links to the articles was blocked, users were still permitted to otherwise discuss the content of the New York Post articles because doing so did not directly involve spreading any hacked or personal information. Based on the information available to Twitter at the time, these actions appear to reflect Twitter’s stated commercial purpose of removing misinformation and other abusive content from its platform, not a purpose of influencing an election.
All of this is actually confirmed by the Twitter Files from Taibbi/Musk, even as both seem to pretend otherwise. Taibbi revealed some internal emails in which various employees (going increasingly up the chain) discussed how to handle the story. Not once does anyone in what Taibbi revealed suggest anything even remotely politically motivated. There was legitimate concern internally about whether or not it was correct to block the NY Post story, which makes sense, because they were (correctly) concerned about making a decision that went too far. I mean, honestly, the discussion is not only without political motive, but shows that the trust & safety apparatus at Twitter was concerned with getting this correct, including employees questioning whether or not these were legitimately “hacked materials” and questioning whether other news stories on the hard drive should get the same treatment.
There are more discussions of this nature, with people questioning whether or not the material was really “hacked” and initially deciding on taking the more cautious approach until they knew more. Twitter’s Yoel Roth notes that “this is an emerging situation where the facts remain unclear. Given the SEVERE risks here and lessons of 2016, we’re erring on the side of including a warning and preventing this content from being amplified.”
Again, exactly as has been noted, given the lack of clarity Twitter reasonably decided to pump the brakes until more was known. There was some useful back-and-forth among employees — the kind that happens in any company around major trust & safety decisions — in which Twitter’s then VP of comms questioned whether this was the right decision. This shows a productive discussion, not anything along the lines of pushing for a politically motivated outcome.
And then deputy General Counsel Jim Baker (more on him later, trust me…) chimes in to again highlight exactly what everyone has been saying: that this is a rapidly evolving situation, and it makes sense to be cautious until more is known. Baker’s message is important:
I support the conclusion that we need more facts to assess whether the materials were hacked. At this stage, however, it is reasonable for us to assume that they may have been and that caution is warranted. There are some facts that indicate that the materials may have been hacked, while there are others indicating that the computer was either abandoned and/or the owner consented to allow the repair shop to access it for at least some purposes. We simply need more information.
Again, all of this is… exactly what everyone has said ever since the day after it happened. This was an emerging story. The provenance was unclear. There were some sketchy things about it, and so Twitter enacted the policy because they just weren’t sure and didn’t have enough info yet. It turned out to be a bad call, but in content moderation, you’re going to make some bad calls.
What is missing entirely is any evidence that politics entered this discussion at all. Not even once.
2. But Twitter’s decision to “suppress” the story was a big deal and may have swung the election to Biden!
I’m sorry, but there remains no evidence to support that silly claim either. First off, Twitter’s decision actually seemed to get the story a hell of a lot more attention. Again, as noted above, Twitter did nothing to stop discussion of the story. It only blocked links to one story in the NY Post, and only for that one day. And the very fact that Twitter did this (and Facebook took other action) caused a bit of a Streisand Effect (hey!), which got the underlying story a lot more attention than it would otherwise have received.
The reality, though, is that the story just wasn’t that big of a deal for voters. Hunter Biden wasn’t the candidate. His father was. Everyone already pretty much knew that Hunter is a bit of a fuckup and clearly personally profiting off of the situation, but there was no actual big story in the revelations (I mean, yeah, there are still some people who insist there are, but they’re the same people who misunderstood the things we’re debunking here today). And, if we’re going to talk about kids of Presidents profiting off of their last name, well, there’s a pretty long list to go down….
But don’t take my word for it, let’s look at the evidence. As reporter Philip Bump recently noted, there’s actual evidence in Google search trends that Twitter and Facebook’s decision really did generate a lot more interest in the story. It was well after both companies took action that searches on Google for Hunter Biden shot upward:
Also, soon after, Twitter reversed its policy, and there was widespread discussion of the laptop in the next three weeks leading up to the election. The brief blip in time in which Twitter and Facebook limited the story seemed to have only fueled much more interest in it, rather than “suppressing” it.
Indeed, another document in the “Twitter Files” highlights how a Democratic member of the House, Ro Khanna, actually reached out to Twitter to point this out and to question Twitter’s decision (if this was really a big Democratic conspiracy, you’d think he’d be supportive of the move, rather than critical of it, but the reverse was true.) Rep. Khanna’s email to Twitter noted:
I say this as a total Biden partisan and convinced he didn’t do anything wrong. But the story has now become more about censorship than relatively innocuous emails and it’s become a bigger deal than it would have been.
So again, the evidence actually suggests that the story wasn’t suppressed at all. It got more attention. It didn’t swing the election, because most people didn’t find the story particularly revealing.
3. The government pressured Twitter/Facebook to block this story, and that’s a huge 1st Amendment violation / treason / crime of the century / etc.
Yeah, so, that’s just not true. I’ve spent years calling out government pressure on speech, from Democrats (and more Democrats) to Republicans (and more Republicans). So I’m pretty focused on watching when the government goes over the line — and quick to call it out. And there remains no evidence at all of that happening here. At all. Taibbi admits this flat out:
Incredibly, I keep seeing people on Twitter claim that Taibbi said the exact opposite. And you have people like Glenn Greenwald who insist that Taibbi only meant “foreign” governments here, despite all the evidence to the contrary. If he had found evidence that there was US government pressure here… why didn’t he post it? The answer: because it almost certainly does not exist.
Some people point to Mark Zuckerberg’s appearance over the summer on Joe Rogan’s podcast as “proof” that the FBI directed both companies to suppress the story, but that’s not at all what Zuckerberg said if you listened to his actual comments. Zuckerberg admits that they make mistakes, and that it feels terrible when they do. He goes into a pretty detailed explanation of some of how trust & safety works in determining whether or not a user is authentic. Then Rogan asks about the laptop story, and Zuckerberg says:
So, basically, the background here, is the FBI basically came to us, some folks on our team, and were like “just so you know, you should be on high alert, we thought there was a lot of Russian propaganda in the 2016 election, we have it on notice, basically, that there’s about to be some kind of dump that’s similar to that. So just be vigilant.”
This does not say that the FBI came to Facebook and said “suppress the Hunter Biden laptop story.” It was just a general warning that the FBI had intelligence that there might be some foreign influence operations, and to “be vigilant.”
This is nearly identical to what Twitter’s then head of “site integrity,” Yoel Roth, noted in his declaration in the FEC case discussed above:
“[F]ederal law enforcement agencies communicated that they expected ‘hack-and-leak operations’ by state actors might occur in the period shortly before the 2020 presidential election . . . . I also learned in these meetings that there were rumors that a hack-and-leak operation would involve Hunter Biden.”
Basically the FBI is saying, in general, they have some intelligence that this kind of attack may happen, so be careful. It did not say to censor the info. It didn’t involve any threats. It wasn’t specifically about the laptop story.
And, in fact, as of earlier this week, we now have the FBI’s version of these events as well! That’s because of the somewhat silly lawsuit that Missouri and Louisiana filed against the Biden administration over Twitter’s decision to block the NY Post story. Just this week, Missouri released the deposition of FBI agent Elvis Chan, who is often found at the center of conspiracy theories regarding “government censorship.”
And Chan tells basically the same story with a few slight differences, mostly in terms of framing. Specifically, Chan says that he never told the companies to “expect” a hack and leak attack, but rather to be aware of the possibility, slightly contradicting Roth’s declaration:
Yeah, I don’t know what Mr. Roth meant or meant, but what I’m letting you know is that from my recollection — I don’t believe we would have worded it so strongly to say that we expected there to be hacks. I would have worded it to say that there was the potential for hacks, and I believe that is how anyone from our side would have framed the comment.
And the reason I believe that is because I and the FBI, for that matter the U.S. intelligence community, was not aware of any successful hacks against political organizations or political campaigns.
You don’t think that intelligence officials described it in the way that Mr. Roth does here in this sentence in the affidavit?
Yeah, I would not have — I do not believe that the intelligence community would have expected it. I said that they would have been concerned about the potential for it.
In the deposition, Chan repeats (many, many times) that he wouldn’t have used the language saying such an effort would be “expected” but that it was something to look out for.
He also doesn’t recall Hunter Biden’s name even coming up, though he does say they warned them to be on the lookout for discussions on “hot button” issues, and notes that the companies themselves would often ask about certain scenarios:
So from my recollection, the social media companies, who include Twitter, would regularly ask us, “Hey, what kind of content do you think the nation state actors, the Russians would post,” and then they would provide examples. Like, “Would it be X” or “Would it be Y” or “Would it be Z.” And then we — I and then the other FBI officials would say, “We believe that the Russians will take advantage of any hot-button issue.” And we — I do not remember us specifically saying “Hunter Biden” in any meeting with Twitter.
Later on he says:
Yeah, in my estimation, we never discussed Hunter Biden specifically with Twitter. And so the way I read that is that there are hack-and-leak operations, and then at the time — at the time I believe he flagged one of the potential current events that were happening ahead of the elections.
You believe that he, Yoel Roth, flagged Hunter Biden in one of these meetings?
No. I believe — I don’t believe he flagged it during one of the meetings. I just think that — so I don’t know. I cannot read his mind, but my assessment is because I don’t remember discussing Hunter Biden at any of the meetings with Twitter, that we didn’t discuss it.
So this would have been something that he would have just thought of as a hot-button issue on his own that happened in October.
He goes into great detail about meeting with tons of companies, but notes that mostly he’d talk to them about cybersecurity threats, not disinformation. He talks a bit about Russian disinformation campaigns, highlighting the well known Internet Research Agency, which specialized in pushing divisive messaging on US social media platforms. However, he basically confirms that he never discussed the laptop with anyone at any of these companies, and the deposition makes it pretty clear that if anyone at the FBI would have done so, it either would have been Chan himself or done with Chan’s knowledge.
As for the NY Post story, and the laptop itself, he notes he found out about it through the media, just like everyone else. And then he says that he didn’t talk with anyone at Twitter or Facebook about it, despite being their main contact on these kinds of issues.
Q. It’s your testimony that those news articles are the first time that you became aware that — you became aware of Hunter Biden’s laptop in any connection?
Yes. I don’t remember if it was a New York Post article or if it was another media outlet, but it was on multiple media outlets, and I can’t remember which article I read.
And before that day, October 14th, 2020, were you aware — were you aware of Hunter Biden — had anyone ever mentioned Hunter Biden’s laptop to you?
No.
[….]
Do you know if anyone at Twitter reached out to anyone at the FBI to check or verify anything about the Hunter Biden story?
I am not aware of any communications between Yoel Roth and the FBI about this topic.
Are you aware of any communications between anyone at Twitter and anyone in the federal government about the decision to suppress content relating to the Hunter Biden laptop story once the story had broken?
I am not aware of Mr. Roth’s discussions with any other federal agency. As I mentioned, I am not aware of any discussions with any FBI employees about this topic as well. But I only know who I know. So I don’t — he may have had these conversations, but I was not aware of it.
You mentioned Mr. Roth. How about anyone else at Twitter, did anyone else at Twitter reach out, to your knowledge, to anyone else in the federal government?
So I can only answer for the FBI. To my knowledge, I am not aware of any Twitter employee reaching out to any FBI employee regarding this topic.
[….]
How about Facebook, other than that meeting you referred to where an analyst asked the FBI to comment on the Hunter Biden investigation, are you aware of any communications between anyone at Facebook and anyone at the FBI related to the Hunter Biden laptop story?
No.
How about any other social media platform?
No.
How about Apple or Microsoft?
No.
Basically, the exact same story emerges no matter how you look at it. The FBI, along with CISA, would have various meetings with internet companies mainly to warn them about cybersecurity (i.e., hacking) threats, but also generally mentioned the possibility of hack and leak attempts with a general warning to be on the lookout for such things, and that they may touch on “hot button” social and news topics. Nowhere is there any indication of pressure or attempts to tell the companies what to do, or how they should handle it. Just straight up information sharing.
When you look at all three statements — Zuckerberg’s, Roth’s, and Chan’s — basically the same not-very-interesting story emerges. The US government held some general meetings, of the kind it holds with lots of big companies, to warn them about various potential cybersecurity threats, and the issue of hack-and-leak campaigns came up as a general possibility, with no real specifics and no specific warnings.
And no one communicated with the companies directly about the NY Post story.
Given all that, I honestly don’t see how there’s any reasonable concern here. There’s certainly no clear 1st Amendment concern. There appears to be zero in the way of government involvement or pressure. There’s no coercion or even implied threats. There’s literally nothing at all (no matter how Missouri’s Attorney General completely misrepresents it).
Indeed, the only thing revealed so far that might be concerning regarding the 1st Amendment is Taibbi’s claim that the Trump administration made demands of Twitter.
If the Trump administration actually had sent requests to “remove” tweets (as Taibbi claims in an earlier tweet) that would most likely be a 1st Amendment issue. However, Taibbi reveals no such requests, which is really quite remarkable. It is also possible that Taibbi is overselling these claims, because this is a part of a discussion that we’ll get to in the next section, regarding Twitter’s flagging tools, which anyone (including you or me) can use to flag content for Twitter to review to see if it violates the company’s terms of service. While there are certainly some concerns about the government’s use of such tools, unless there’s some sort of threat or coercion, and as long as Twitter is free to judge the content for itself and determine how to handle it under its own terms, there’s probably no 1st Amendment issue.
Indeed, some people have highlighted the fact that the government gets “special treatment” in having its flags reviewed. But, from people I’ve spoken to, that actually goes against the “1st Amendment violation!” argument, because many social media companies set up special systems for government agents not to enable “moar censorship!” but because they know they have to be extra vigilant in reviewing those requests so as not to take down content mistakenly based on a government request.
So, sorry, so far there appears to be no government intrusion, and certainly no 1st Amendment violation.
4. The Biden campaign / Democrats demanded Twitter censor the NY Post! And that’s a 1st Amendment violation / treason / the crime of the century / etc.
So, again, the only way that there’s a 1st Amendment violation is if the government issued the demand. And in October of 2020, the Biden campaign and the Democratic National Committee… were not the government. The 1st Amendment does not restrict their ability, as private citizens (even while campaigning for public office), to flag content for Twitter to review against its policies. Hilariously, Elon Musk seems kinda confused about how time works. The tweet that we screenshotted above about the “1st Amendment” violation is in response to an internal email that Taibbi revealed about what Taibbi (misleadingly) calls “requests from connected actors to delete tweets,” followed by a screenshot of Twitter employees listing out some tweets, saying “more to review from the Biden team,” and someone responding “handled these.”
Then there was the next tweet, showing a similar set of requests, this time sent over from the Democratic National Committee (as compared to the Biden campaign in the first one). This set includes a tweet from the actor James Woods, which the Twitter team calls special attention to for being “high profile.”
Except, as a few enterprising folks discovered when looking up those tweets listed, they were… basically Hunter Biden nude images that were found on the laptop hard drive, which clearly violated Twitter’s terms of service (and likely violated multiple state laws regarding the sharing of nonconsensual nude images). This includes the James Woods tweet, which included a fake Biden campaign ad that showed a naked picture of Hunter Biden lying on a bed with his (only slightly blurred) penis quite visible. I’m not going to share a link to the image.
A good investigative reporter might have looked up what was in those tweets before posting a conspiratorial post implying that these were attempts by the campaign to remove the NY Post story or some other important information. But Taibbi did not. Nor has he commented on it since.
On top of that, while Taibbi claims that these were “requests to delete,” as the Twitter email quite clearly says, these were for Twitter to “review.” In other words, the tweets were flagged for Twitter to review for whether they violated Twitter’s policies, as the naked images clearly did.
So, there’s clearly no 1st Amendment concern here. First, despite Musk’s understanding of the space-time continuum, the Biden administration was not in the White House in October of 2020. Second, even if we’re concerned about political campaigns asking for content to be deleted, flagging content for companies to review to see if it violates their policies is not (in any way) the same as demanding it be deleted. Anyone can flag content. And then the company reviews it and makes a determination.
Even more importantly, nothing revealed so far suggests that the campaign had anything to say to Twitter regarding the NY Post story or any story regarding the laptop. Literally the only concerns raised were about the naked pictures.
Finally, as noted above, the only other Democrat mentioned so far in the Twitter files is Rep. Ro Khanna, who told Twitter it was wrong to stop the links to the NY Post article, and urged them to rescind the decision in the name of free speech. That does not sound like the Democrats secretly pressuring the company to block the story. It kinda sounds like the exact opposite.
So despite what everyone keeps yelling on Twitter (including Elon Musk) this still doesn’t appear to be evidence of “censorship” or even “suppression of the Hunter Biden laptop story.” It’s just focused on the nonconsensual sharing of Hunter’s naked images.
As a side note, Woods has now said he’s going to sue over this, though for the life of me I have no idea what sort of claim he thinks he has, or how it’s going to go over in court when he claims his rights were violated when he was unable to share Hunter’s dick pic.
5. But Jim Baker! He worked for the FBI! And he was in charge of the Twitter files! Clearly he’s covering up stuff!
Here we are ripping from the stupidity headlines. This one came out just last night as Taibbi added a “supplement” to the Twitter files, again seemingly confused about how basically anything works. According to Taibbi, in a very unclear and awkwardly worded thread, he and Bari Weiss (another opinion columnist who Musk has decided to share the files with) were having some sort of “complication” in accessing the files. Taibbi claims that Twitter’s Deputy General Counsel, Jim Baker, was reviewing the files, and that somehow this was a problem (he does not explain why or how, though there’s a lot of conjecture).
Baker is, in fact, the former General Counsel at the FBI. It made news when he was hired.
Baker was subject to a bunch of conspiracy theory stuff a few years ago regarding the FBI and some of the sillier theories regarding the Trump campaign, including the Steele Dossier and the even sillier “Alfa Bank” story (which had always been silly, and which lots of people, including us, mocked when it came out).
But despite all that, there’s really little evidence that Baker has done anything particularly noteworthy here. The stuff about his actions while at the FBI is totally overblown partisan hackery. People talk about the so-called “criminal investigation” he faced for his work looking into Russian interference in the 2016 election, but that appears to be something mostly cooked up by extreme Trumpists in the House, and appears to have gone nowhere. And, yes, he was a witness at the Michael Sussman trial, which was sorta connected to the Alfa Bank stuff, but his testimony supported John Durham, not Michael Sussman, in that he claimed that Sussman made a false statement to him, which the entire case hinged on (and, for what it’s worth, the trial ended in acquittal).
In other words, almost all of the FBI-related accusations against Baker are entirely “guilt by association” type claims, with nothing at all legitimate to back them up.
As for Twitter, we already highlighted Baker’s email that Taibbi revealed, which shows a normal, thoughtful, cautious discussion of a normal trust & safety debate, with nothing even remotely political.
The latest claims from Taibbi and Weiss also don’t make much sense. Elon Musk has told his company to hand over a bunch of internal documents to reporters. Any corporate lawyer would naturally do a fairly standard document review before doing so to make sure that they’re not handing over any private information or something else that might create legal issues for Musk. And since a large chunk of the legal team has left the company, it wouldn’t be all that surprising if the task ended up on Baker’s desk.
Now, you can argue (as Taibbi and others now imply) that there’s some massive conflict of interest here, but, uh… that’s not at all clear, and not really how conflict of interest works. And, again, there’s little indication that Baker had a major role here at all, beyond being one of many who weighed in on this matter (and did so in a perfectly reasonable manner).
Honestly, had Baker not reviewed the documents first, he could have faced legal jeopardy himself for failing to perform a very basic function of his job: making sure the company he worked for didn’t expose itself to serious liability by revealing things that might create huge problems for Musk and the company.
Either way, late Tuesday, Musk announced that Baker had “exited” from the company, and when asked by a random Twitter user if he had been “asked to explain himself first” Musk claimed that Baker’s “explanation was… unconvincing.”
And perhaps there’s something more here that will be revealed by Weiss now that the shackles have been removed. But, based on what’s been stated so far, a perfectly plausible explanation is that Musk confronted Baker wanting to know why he was holding back the files and what his role was in “suppressing” the NY Post story. And Baker told him, truthfully, that his role was exactly as was revealed in the email (giving his general thoughts on the proper approach to handling the story) and that he was reviewing documents because that’s his job, and Musk got mad and fired him.
Somewhat incredibly, Musk also seemed to imply he only learned of Baker’s involvement on Sunday.
Some people are claiming that Musk is saying he only discovered that Baker worked for him on Sunday, which is possible but seems unlikely. Conspiracy theorists had pointed out Baker’s role at the company to Musk as far back as April. A more charitable explanation is that Musk only discovered that Baker was handling the document review on Sunday. And I guess that’s plausible but, again, really only reflects extremely poorly on Musk.
If he’s going to reveal internal documents to reporters, especially ones that Musk himself keeps claiming implicate him in potential criminal liability (yes, it happened before his time, but Musk purchased the liabilities of the company as well), it’s not just perfectly normal, but kinda necessary to have lawyers do some document review. Again, as a more charitable explanation, perhaps Musk just wanted a different lawyer to do the review, and my only answer there is maybe he shouldn’t have gotten rid of so many lawyers from the legal team. Might have helped.
So, look, there could be a possible issue here, but given how much has been totally misrepresented throughout this whole process, without any actual evidence to support the “Jim Baker mastermind” theory, it’s difficult to take it even remotely seriously when there’s a perfectly normal, non-nefarious explanation to how all of this went down.
The absence of evidence is not evidence that there’s a coverup. It might just be evidence that you’re prone to believing in unsubstantiated conspiracy theories, though.
6. Still, all this proved that Twitter is “illegally” biased towards Democrats!
Taibbi made a big deal out of the fact that Twitter employees overwhelmingly donated to Democrats in their political contributions, which is not exactly new or surprising. Musk commented on this as well, sarcastically suggesting it was proof of bias at Twitter, but left out that among the companies in the chart he was commenting on… was also Tesla, where over 90% of employee donations went to Democrats.
But, more importantly, it’s not surprising in the least. Employees of many companies lean left. Executives (who donate way more money) tend to lean right. I mean, you can look at a similar chart of executive donations that shows they overwhelmingly go to Republicans. Neither is illegal, or even a problem. It’s just reality.
And companies making editorial decisions are… in fact… allowed to have bias in their political viewpoints. I would bet that if you looked at donations by employees at the NY Post or Fox News, they would generally favor Republicans. Indeed, imagine what would happen if someone took over Fox News and suddenly started revealing (1) communications between Fox News execs and Republican politicians and campaigns and (2) internal editorial meeting notes regarding what to promote. Don’t you think it would be way more biased than what the Twitter files revealed?
Here’s the important point on that: Fox News’ clear bias is not illegal either. And, indeed, if Democrats in Congress held hearings on “Fox News’ bias” and demanded that its top executives appear and explain their editorial decision making in promoting GOP talking points, people should be outraged over the clear intimidation factor, which would obviously be problematic from a 1st Amendment angle. Yet I don’t expect people to get all that worked up about the same thing happening to Twitter, even though it’s actually the same issue.
Companies are allowed to be biased. But the amazing thing revealed in the Twitter files is just how little evidence there is that any bias was a part of the debate on how to handle this stuff. Everything appeared to be about perfectly reasonable business decisions.
And… that’s it. I fear that this story is going to live on for years and years and years. And the narrative full of nonsense is already taking shape. However, I like to work off of actual facts and evidence, rather than fever dreams and misinterpretations. And I hope that you’ll read this and start doing the same.
We’ve talked about the mess that is the UK’s Online Safety Bill a few times now, focusing mostly on the extremely serious concerns over requiring websites to take down “legal but harmful” speech, a ridiculous and impossible-to-meet standard that would lead to massive over-blocking of perfectly reasonable content. Many people, including activists pushing for this bill, seem to think that there’s some magic wand that can be waved to determine what content is “harmful” and then magically remove it.
That’s not how any of this works. There are a ton of different judgment calls that need to be made, often lacking the relevant context. Rules against “harmful” speech often run into all sorts of problems, including the removals of friends joking around with each other, or people calling out abuses by others.
So it’s good to see that the current UK government has responded to the concerns raised by many that the bill would lead to censorship. The part about “legal but harmful” speech has been removed from the bill. While, as you can see in that article, this is leading to some angry complaints from censorial activists, it’s the correct move.
That said, none of this magically makes the bill acceptable. It still has tremendous problems, including with overly broad censorship via some of its rules around “protecting children.” Like California’s similar Age Appropriate Design Code (which supporters claim was modeled on the already existing UK AADC, but was really more modeled on the Online Safety Bill), it creates some impossible standards to try to force websites to magically figure out what harms might occur, and magically stop them.
That means that sites will still need to make use of dangerous and intrusive (and privacy violating) age verification tools, which will do real damage to people.
Indeed, you could argue that the bill appears to both require and prohibit age verification technology. It requires it by demanding that websites understand if children (including teenagers) are using their site. It prohibits it by telling websites to carefully analyze any new feature that might cause harm and seek to prevent the harm. The only way to do that with age verification is… to not use it.
I don’t see how any site can comply with this law since the law itself is self-contradictory.
It sure would be nice if parents, politicians, and the media stopped blaming websites for anything bad that happens, including parental failings. Sometimes bad stuff happens. Blaming tech companies for that is not just a cop out, it’s actively avoiding looking inward at where the real problems came from.