Soon after Elon Musk took over Twitter, he insisted that stopping child sexual abuse material (CSAM) was his top priority. Some of his fans insisted that he had magically done so, but the fact is that he fired nearly the entire team that was handling that issue, meaning that CSAM was running rampant on the site, and the company seemed to be doing little about it.
I’m guessing that all of the stories about this resulted in the folks at the Stanford Internet Observatory (SIO) researching how well Twitter was handling known CSAM images. As you may know, the “standard” for most big sites is to use a tool managed by Microsoft called PhotoDNA, which has hashes of a large database of known CSAM images, as determined by the National Center for Missing & Exploited Children (NCMEC). PhotoDNA has its issues, but the one thing it’s generally pretty good at is catching and stopping attempts to reupload images in its database.
So, SIO ran an experiment in which it hooked up a PhotoDNA system to scan Twitter and see if it found any such known images. The SIO team never viewed the images themselves; any matches were sent directly to NCMEC.
Making sure you have no PhotoDNA matches on your site is basically table stakes for any decently large internet platform that hosts images or video. If you can’t stop images in PhotoDNA, you’re failing, badly. And Twitter failed badly.
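The matching the researchers relied on can be sketched roughly as a hash lookup against a database of known images. To be clear, PhotoDNA itself is proprietary and uses a *perceptual* hash that survives resizing and re-encoding; the sketch below substitutes a plain SHA-256 exact hash as a stand-in, and all names here are illustrative, not the real API:

```python
import hashlib

# Illustrative stand-in for the hash database a platform gets via NCMEC.
# (Real PhotoDNA hashes are perceptual; SHA-256 only catches byte-identical
# re-uploads, which is why it's just a stand-in here.)
KNOWN_HASHES = {"...": "example-entry"}

def image_hash(data: bytes) -> str:
    # Placeholder exact hash; a perceptual hash function would go here.
    return hashlib.sha256(data).hexdigest()

def scan_upload(data: bytes) -> bool:
    """Return True (i.e., block the upload and report it) on a database match."""
    return image_hash(data) in KNOWN_HASHES
```

The point of the "table stakes" framing is that this check is cheap and runs at upload time, which is why known-image matches appearing in public tweets signals a broken pipeline rather than a hard problem.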
In just over two months, from March 12 to May 20, the researchers’ system detected more than 40 images posted to Twitter that were previously flagged as child sexual abuse material, based on a data set of roughly 100,000 tweets, said David Thiel, chief technologist of the Stanford Internet Observatory and a co-author of the report.
The appearance of the images on Twitter was striking because they had been previously flagged as child sexual abuse material, or CSAM, and were part of databases companies can use to screen content posted to their platforms, the researchers said. “This is one of the most basic things you can do to prevent CSAM online, and it did not seem to be working,” Thiel said.
Dealing with CSAM beyond PhotoDNA is a much bigger challenge, but the fact that the company couldn’t even do the basics correctly is terrifying.
In a thread on Bluesky, Renee DiResta, who worked on the research, noted that they tried to reach out to Twitter to alert them that their PhotoDNA setup was missing things, but initially couldn’t find anyone to talk to. That’s another strike against Elon’s trust & safety team, as basically every mid-to-large internet company has at least someone who knows people at SIO. It’s bad if a company doesn’t.
Eventually, the SIO team had to find a “third-party intermediary” to reintroduce them to Twitter, and somehow that finally got someone at the company to pay attention and fix the issue.
Having no remaining Trust and Safety contacts at Twitter, we approached a third-party intermediary to arrange a briefing. Twitter was informed of the problem, and the issue appears to have been resolved as of May 20.
Again, there are reasons why you have a strong trust & safety department, and that includes being able to deal with illegal content like CSAM. Yet, despite Musk claiming it was the company’s top priority, they completely fell down on the job.
On Monday of this week, Linda Yaccarino officially became CEO of Twitter. Of course, basically no one believes that she’s actually the CEO. Everyone knows that Elon Musk, who in the past has mocked the “CEO” title anyway, is still in charge. He still owns the company and is executive chairman, meaning that he can fire Yaccarino whenever he wants. And he’s still managing the company and its products. Any honest look at what’s happening here would recognize that Yaccarino’s role is somewhere between “VP of Marketing & Advertising” and “the person we send to meetings when Elon has pissed off someone important.” At best, she’s poised atop that glass cliff, awaiting the inevitable shove from Elon.
Still, Yaccarino, who had just interviewed Elon on stage in front of advertisers a few weeks before she was hired, spent years leading NBC’s ad sales. She has tons of relationships in the ad business, and it seems obvious that her entire role is to try to sell ads. Because it’s obvious by now that Elon’s best skill at Twitter is scaring away advertisers by being completely clueless and ignorant of how to be a good human being. We’ve written about how Elon himself has admitted to driving away 40% of the company’s advertisers. But more recently he claimed the advertisers were mostly back.
Discussing Twitter’s finances, Mr Musk said the company is now “roughly breaking even”, as most of its advertisers have returned.
Except… that’s bullshit. First of all, the company was “roughly breaking even” before he took over, but at a much higher revenue run rate (approximately $5 billion a year). Elon came in and added approximately $1.3 billion per year in debt interest payments, and then drove away a huge number of advertisers. He did cut costs by firing basically anyone who knew how to do anything, but that’s hardly a recipe for success.
As for the advertisers returning? Nope. On Monday the NY Times got its hands on an internal document saying that Twitter’s ad revenue had dropped 59% year over year. And even its internal sales projections are falling short, meaning that it can’t even meet the lowered expectations the company is setting for itself:
But Twitter’s U.S. advertising revenue for the five weeks from April 1 to the first week of May was $88 million, down 59 percent from a year earlier, according to an internal presentation obtained by The New York Times. The company has regularly fallen short of its U.S. weekly sales projections, sometimes by as much as 30 percent, the document said.
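As a rough back-of-envelope on those figures (the $88 million and the 59% drop come from the Times’ reporting; the extrapolations below are my own rough arithmetic, not the Times’):

```python
# Figures from the NY Times report; annualization is a rough extrapolation.
revenue_5wk = 88_000_000                 # US ad revenue, Apr 1 through first week of May
decline = 0.59                           # year-over-year drop

prior_5wk = revenue_5wk / (1 - decline)  # same five weeks a year earlier: ~ $214.6M
annualized = revenue_5wk * 52 / 5        # ~ $915M/yr if that pace held (US ads only)
```

Even granting that this is US ad revenue only, an annualized pace under a billion dollars is a long way from the roughly $5 billion run rate the company had before the takeover, which is the context for the “struggle to break even” admission below.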
And, a few hours after that story came out, Elon effectively admitted that he had just lied to the BBC in claiming the advertisers came back. In trying to make himself sound noble during his ridiculous discussion with conspiracy theory nonsense peddler RFK Jr., Elon admitted that advertisers haven’t actually been coming back.
In a Twitter Spaces conversation with long-shot presidential candidate and anti-vaccine activist Robert F. Kennedy Jr., the Twitter CEO said it was “frankly a struggle for Twitter to break even” because of the loss of ad dollars since he took the helm. More than half of Twitter’s top advertisers suspended ads this winter, but this is the first time Musk has publicly acknowledged the extent of the damage….
Musk tried to frame this as the “cost” of supporting free speech, but as we’ve shown over and over again, Musk doesn’t care one bit about “free speech.” He cares about spreading the speech he supports. Also, the advertisers aren’t leaving because of Elon’s “commitment to free speech.” They’re leaving because he’s a liability to their brand.
The NY Times report notes that even the big advertisers who are still on Twitter are spending a lot less:
Some of Twitter’s biggest advertisers — including Apple, Amazon and Disney — have been spending less on the platform than last year, three former and current Twitter employees said. Large specialized “banner” ads on Twitter’s trends page, which can cost $500,000 for 24 hours and are almost always bought by large brands to promote events, shows or movies, are often going unfilled, they said.
Twitter has also run into public relations snafus with big advertisers like Disney. In April, Twitter mistakenly gave a gold check mark — a badge meant to signify a paying advertiser — to the @DisneyJuniorUK account, which Disney doesn’t own. The account posted racial slurs, leading Disney officials to demand from Twitter an explanation and assurances that it wouldn’t happen again, two people with knowledge of the situation said.
And of course it doesn’t help that the person at Twitter who had been in charge of “brand safety and ad quality” quit last week.
The report also notes that, in Elon’s desperate need for revenue, the company is truly scraping the bottom of the barrel of advertisers:
In one week last month, four of Twitter’s top 10 U.S. advertisers were online gambling and fantasy sports betting companies, according to one presentation. Twitter has also started allowing ads for cannabis accessories, including “bongs, vapes, rolling paper,” as well as erectile dysfunction products and services, according to internal emails.
Adult content, which is permitted on Twitter, has become a concern among the company’s sales staff. When some employees tried to drum up interest from advertisers for Mother’s Day, they found that potential sponsored search terms, like “MomLife,” surfaced pornographic videos, according to two people familiar with the conversations.
Again, literally none of this had to go this way. Every single bit of this is due to terrible decisions by Elon, starting with overpaying for the site in the first place, then trying to back out of the deal, then saddling his new company with $13 billion in debt (which, by the way, now almost equals the amount that Fidelity claims the entire company is worth). Elon didn’t need to fire everyone. He didn’t need to make irrational, spontaneous decisions that pushed the worst content to the forefront of his site and drove advertisers away. He didn’t need to undermine the safety of the site’s highest profile users and advertisers.
But he did all of those things.
And now he thinks Linda Yaccarino will clean up the mess he’s still making, a mess he still clearly doesn’t realize is entirely due to his own terrible decisions.
It is amazing the degree to which some people will engage in confirmation bias and believe absolute nonsense, even as the facts show the opposite is true. Over the past few months, we’ve gone through the various “Twitter Files” releases, and pointed out over and over again how the explanations people gave for them simply don’t match up with the underlying documents.
To date, not a single document revealed has shown what people now falsely believe: that the US government and Twitter were working together to “censor” people based on their political viewpoints. Literally none of that has been shown at all. Instead, what’s been shown is that Twitter had a competent trust & safety team that debated tough questions around how to apply policies for users on their platform and did not seem at all politically motivated in their decisions. Furthermore, while various government entities sometimes did communicate with the company, there’s little evidence of any attempt by government officials to compel Twitter to moderate in any particular way, and Twitter staff regularly and repeatedly rebuffed any attempt by government officials to go after certain users or content.
Now, as you may recall, two years ago, a few months after Donald Trump was banned from Twitter, Facebook, and YouTube, he sued the companies, claiming that the banning violated the 1st Amendment. This was hilariously stupid for many reasons, not the least of which is because at the time of the banning Donald Trump was the President of the United States, and these companies were very much private entities. The 1st Amendment restricts the government, not private entities, and it absolutely does not restrict private companies from banning the President of the United States should the President violate a site’s rules.
As expected, the case went poorly for Trump, leading to it being dismissed. It is currently on appeal. However, in early May, Trump’s lawyers filed a motion to effectively try to reopen the case at the district court, arguing that the Twitter Files changed everything, and that now there was proof that Trump’s 1st Amendment rights were violated.
In October of 2022, after the entry of this Court’s Judgment, Twitter was acquired by Elon Musk. Shortly thereafter, Mr. Musk invited several journalists to review Twitter’s internal records. Allowing these journalists to search for evidence that Twitter censored content that was otherwise compliant with Twitter’s “TOS”, the journalists disclosed their findings in a series of posts on Twitter collectively known as the Twitter Files. As set out in the attached Rule 60 motion, the Twitter Files confirm Plaintiffs’ allegations that Twitter engaged in a widespread censorship campaign that not only violated the TOS but, as much of the censorship was the result of unlawful government influence, violated the First Amendment.
I had been thinking about writing this up as a story, but things got busy. Then, last week, Twitter (which, again, is now owned by Elon Musk, who has repeatedly made ridiculously misleading statements about what the Twitter Files showed) filed its response, saying (with the risk of sanctions on the line) that this is all bullshit and nothing in the Twitter Files says what Trump (and Elon, and a bunch of his fans) claim it says. That’s pretty fucking damning for anyone who believed the nonsense Twitter Files narrative.
The new materials do not plausibly suggest that Twitter suspended any of Plaintiffs’ accounts pursuant to any state-created right or rule of conduct. As this Court held, Lugar’s first prong requires a “clear,” government-imposed rule. Dkt. 165 at 6. But, as with Plaintiffs’ Amended Complaint, the new materials contain only a “grab-bag” of communications about varied topics, none establishing a state-imposed rule responsible for Plaintiffs’ challenged content-moderation decisions. The new materials cover topics ranging, for example, from Hunter Biden’s laptop, Pls.’ Exs. A.14 & A.27-A.28, to foreign interference in the 2020 election, Pls.’ Exs. A.13 at, e.g., 35:15-41:4, A.22, A.37, A.38, to techniques used in malware and ransomware attacks, Pls.’ Ex. A.38. As with the allegations in the Amended Complaint, “[i]t is … not plausible to conclude that Twitter or any other listener could discern a clear state rule” from such varied communications. Dkt. 165 at 6. The new materials would not change this Court’s dismissal of Plaintiffs’ First Amendment claims for this reason alone.
Moreover, a rule of conduct is imposed by the state only if backed by the force of law, as with a statute or regulation. See Sutton v. Providence St. Joseph Med. Ctr., 192 F.3d 826, 835 (9th Cir. 1999) (regulatory requirements can satisfy Lugar’s first prong). Here, nothing in the new materials suggests any statute or regulation dictating or authorizing Twitter’s content-moderation decisions with respect to Plaintiffs’ accounts. To the contrary, the new materials show that Twitter takes content-moderation actions pursuant to its own rules and policies. As attested to by FBI Agent Elvis Chan, when the FBI reported content to social media companies, they would “alert the social media companies to see if [the content] violated their terms of service,” and the social media companies would then “follow their own policies” regarding what actions to take, if any. Pls.’ Ex. A.13 at 165:9-22 (emphases added); accord id. at 267:19-23, 295:24-296:4. And general calls from the Biden administration for Twitter and other social media companies to “do more” to address alleged misinformation, see Pls.’ Ex. A.47, fail to suggest a state-imposed rule of conduct for the same reasons this Court already held the Amended Complaint’s allegations insufficient: “[T]he comments of a handful of elected officials are a far cry from a ‘rule of decision for which the State is responsible’” and do not impose any “clear rule,” let alone one with the force of law. Dkt. 165 at 6. The new materials thus would not change this Court’s determination that Plaintiffs have not alleged any deprivation caused by a rule of conduct imposed by the State.
Later on it goes further:
Plaintiffs appear to contend (Pls.’ Ex. 1 at 16-17) that the new materials support an inference of state action in Twitter’s suspension of Trump’s account because they show that certain Twitter employees initially determined that Trump’s January 2021 Tweets (for which his account was ultimately suspended) did not violate Twitter’s policy against inciting violence. But these materials regarding Twitter’s internal deliberations and disagreements show no governmental participation with respect to Plaintiffs’ accounts. See Pls.’ Exs. A.5.5, A-49-53.5
Plaintiffs are also wrong (Ex. 1 at 15-16) that general calls from the Biden administration to address alleged COVID-19 misinformation support a plausible inference of state action in Twitter’s suspensions of Cuadros’s and Root’s accounts simply because they “had their Twitter accounts suspended or revoked due to Covid-19 content.” For one thing, most of the relevant communications date from Spring 2021 or later, after Cuadros and Roots’ suspensions in 2020 and early 2021, respectively, see Pls.’ Ex. A.46-A.47; Am. Compl. ¶¶124, 150. Such communications that “post-date the relevant conduct that allegedly injured Plaintiffs … do not establish [state] action.” Federal Agency of News LLC v. Facebook, Inc., 432 F. Supp. 3d 1107, 1125-26 (N.D. Cal. 2020). Additionally, the new materials contain only general calls on Twitter to “do more” to address COVID-19 misinformation and questions regarding why Twitter had not taken action against certain other accounts (not Plaintiffs’). Pls.’ Exs. A.43-A.48. Such requests to “do more to stop the spread of false or misleading COVID-19 information,” untethered to any specific threat or requirement to take any specific action against Plaintiffs, is “permissible persuasion” and not state action. Kennedy v. Warren, 66 F.4th 1199, 1205, 1207-12 (9th Cir. 2023). As this Court previously held, government actors are free to “urg[e]” private parties to take certain actions or “criticize” others without giving rise to state action. Dkt. 165 at 12-13. Because that is the most that the new materials suggest with respect to Cuadros and Root, the new materials would not change this Court’s dismissal of their claims.
Twitter’s filing is like a beat-by-beat debunking of the conspiracy theories pushed by the dude who owns Twitter. It’s really quite incredible.
First, the simple act of receiving information from the government, or of deciding to act upon that information, does not transform a private actor into a state actor. See O’Handley, 62 F.4th at 1160 (reports from government actors “flagg[ing] for Twitter’s review posts that potentially violated the company’s content-moderation policy” were not state action). While Plaintiffs have attempted to distinguish O’Handley on the basis of the repeated communications reflected in the new materials, (Ex. 1 at 13), O’Handley held that such “flag[s]” do not suggest state action even where done “on a repeated basis” through a dedicated, “priority” portal. Id. The very documents on which Plaintiffs rely establish that when governmental actors reported to social media companies content that potentially violated their terms of service, the companies, including Twitter, would “see if [the content] violated their terms of service,” and, “[i]f [it] did, they would follow their own policies” regarding what content-moderation action was appropriate. Pls.’ Ex. A.13 at 165:3-17; accord id. at 296:1-4 (“[W]e [the FBI] would send information about malign foreign influence to specific companies as we became aware of it, and then they would review it and determine if they needed to take action.”). In other words, Twitter made an independent assessment and acted accordingly.
Moreover, the “frequen[t] [] meetings” on which Plaintiffs rely heavily in attempting to show joint action fall even farther short of what was alleged in O’Handley because, as discussed supra at 7, they were wholly unrelated to the kinds of content-moderation decisions at issue here.
Second, contrary to Plaintiffs’ contention (Ex. 1 at 11-12), the fact that the government gave certain Twitter employees security clearance does not transform information sharing into state action. The necessity for security clearance reflects only the sensitive nature of the information being shared— i.e., efforts by “[f]oreign adversaries” to “undermine the legitimacy of the [2020] election,” Pls.’ Ex. A.22. It says nothing about whether Twitter would work hand-in-hand with the federal government. Again, when the FBI shared sensitive information regarding possible election interference, Twitter determined whether and how to respond. Pls.’ Ex. A.13 at 165:3-17, 296:1-4.
Third, Plaintiffs are also wrong (Ex. 1 at 12-13) that Twitter became a state actor because the FBI “pay[ed] Twitter millions of dollars for the staff [t]ime Twitter expended in handling the government’s censorship requests.” For one thing, the communication on which Plaintiffs rely in fact explains that Twitter was reimbursed $3 million pursuant to a “statutory right of reimbursement for time spent processing” “legal process” requests. Pls.’ Ex. A.34 (emphasis added). The “statutory right” at issue is that created under the Stored Communications Act for costs “incurred in searching for, assembling, reproducing, or otherwise providing” electronic communications requested by the government pursuant to a warrant. 18 U.S.C. § 2706(a), see also id. § 2703(a). The reimbursements were not for responding to requests to remove any accounts or content and thus are wholly irrelevant to Plaintiffs’ joint-action theory
And, in any event, a financial relationship supports joint action only where there is complete “financial integration” and “indispensability.” Vincent v. Trend W. Tech. Corp., 828 F.2d 563, 569 (9th Cir. 1987) (quotation marks omitted). During the period in which Twitter recovered $3 million (late 2019 through early 2021), the company was valued at approximately $30 billion. Even Plaintiffs do not argue that a $3 million payment would be indispensable to Twitter.
I mean, if you read Techdirt, you already knew about all this, because we debunked the nonsense “government paid Twitter to censor” story months ago, even as Elon Musk was falsely tweeting exactly that. And now, Elon’s own lawyers are admitting that the company’s owner is completely full of shit or too stupid to actually read any of the details in the Twitter files. It’s incredible.
It goes on. Remember how Elon keeps insisting that the government coerced Twitter to make content moderation decisions? Well, Twitter’s own lawyers say that’s absolute horseshit. Indeed, much of the following is basically what my Techdirt posts have explained:
The new materials do not evince coercion because they contain no threat of government sanction premised on Twitter’s failure to suspend Plaintiffs’ accounts. As this Court already held, coercion requires “a concrete and specific government action, or threatened action” for failure to comply with a governmental dictate. Dkt. 165 at 11. Even calls from legislators to “do something” about Plaintiffs’ Tweets (specifically, Mr. Trump’s) do not suggest coercion absent “any threatening remark directed to Twitter.” Id. at 7. The Ninth Circuit has since affirmed the same basic conclusion, holding in O’Handley that “government officials do not violate the First Amendment when they request that a private intermediary not carry a third party’s speech so long as the officials do not threaten adverse consequences if the intermediary refuses to comply.” 62 F.4th at 1158. Like the Amended Complaint, the new materials show, at most, attempts by the government to persuade and not any threat of punitive action, and thus would not alter the Court’s dismissal of Plaintiffs’ First Amendment claims.
FBI Officials. None of the FBI’s communications with Twitter cited by Plaintiffs evince coercion because they do not contain a specific government demand to remove content—let alone one backed by the threat of government sanction. Instead, the new materials show that the agency issued general updates about their efforts to combat foreign interference in the 2020 election. For example, one FBI email notified Twitter that the agency issued a “joint advisory” on recent ransomware tactics, and another explained that the Treasury department seized domains used by foreign actors to orchestrate a “disinformation campaign.” Pls.’ Ex. A.38. These informational updates cannot be coercive because they merely convey information; there is no specific government demand to do anything—let alone one backed by government sanction.
So too with respect to the cited FBI emails flagging specific Tweets. The emails were phrased in advisory terms, flagging accounts they believed may violate Twitter’s policies—and Twitter employees received them as such, independently reviewing the flagged Tweets. See, e.g., Pls.’ Exs. A.30 (“The FBI San Francisco Emergency Operations Center sent us the attached report of 207 Tweets they believe may be in violation of our policies.”), A.31, A.40. None even requested—let alone commanded—Twitter to take down any content. And none threatened retaliatory action if Twitter did not remove the flagged Tweets. As in O’Handley, therefore, the FBI’s “flags” cannot amount to coercion because there was “no intimation that Twitter would suffer adverse consequences if it refused.” 62 F.4th at 1158. What is more, unlike O’Handley, not one of the cited communications contains a request to take any action whatsoever with respect to any of Plaintiffs’ accounts.6
Plaintiffs’ claim (Ex. 1 at 14) that the FBI’s “compensation of Twitter for responding to its requests” had coercive force is meritless. As a threshold matter, as discussed supra at 10, the new materials demonstrate only that Twitter exercised its statutory right—provided to all private actors—to seek reimbursement for time it spent processing a government official’s legal requests for information under the Stored Communications Act, 18 U.S.C. § 2706; see also id. § 2703. The payments therefore do not concern content moderation at all—let alone specific requests to take down content. And in any event, the Ninth Circuit has made clear that, under a coercion theory, “receipt of government funds is insufficient to convert a private [actor] into a state actor, even where virtually all of the [the party’s] income [i]s derived from government funding.” Heineke, 965 F.3d at 1013 (quotation marks omitted) (third alteration in original). Therefore, Plaintiffs’ reliance on those payments does not evince coercion.
What about the pressure from Congress? That too is garbage, admits Twitter:
Congress. The new materials do not contain any actionable threat by Congress tied to Twitter’s suspension of Plaintiffs’ accounts. First, Plaintiffs place much stock (Ex. 1 at 14-15) in a single FBI agent’s opinion that Twitter employees may have felt “pressure” by Members of Congress to adopt a more proactive approach to content moderation, Pls.’ Ex. A13 at 117:15-118:6. But a third-party’s opinion as to what Twitter’s employees might have felt is hardly dispositive. And in any event, “[g]enerating public pressure to motivate others to change their behavior is a core part of public discourse,” and is not coercion absent a specific threatened sanction for failure to comply….
White House Officials. The new materials do not evince any actionable threat by White House officials either. Plaintiffs rely (Ex. 1 at 16) on a single statement by a Twitter employee that “[t]he Biden team was not satisfied with Twitter’s enforcement approach as they wanted Twitter to do more and to deplatform several accounts,” Pls.’ Ex. A.47. But those exchanges took place in December 2022, id.— well after Plaintiffs’ suspensions, and so could not have compelled Twitter to suspend their accounts. Furthermore, the new materials fail to identify any threat of government sanction arising from the officials’ “dissatisfaction”; indeed, Twitter was only asked to join “other calls” to continue the dialogue
Basically, Twitter’s own lawyers are admitting in a court filing that the guy who owns their company is spewing utter nonsense about what the Twitter Files revealed. I don’t think I’ve ever seen anything quite like this.
Guy takes over company because he’s positive that there are awful things happening behind the scenes. Gives “full access” to a bunch of very ignorant journalists who are confused about what they find. Guy who now owns the company falsely insists that they proved what he believed all along, leading to the revival of a preternaturally stupid lawsuit… only to have the company’s lawyers basically tell the judge “ignore our stupid fucking owner, he can’t read or understand any of this.”
As everyone continues to demand that social media companies pay news orgs for the crime of sending them traffic, it’s becoming clear that fewer and fewer people are using social media for news, and that social media sites simply are not a major driver of traffic to news orgs anyway.
The PressGazette has had a series of stories lately highlighting how social media is increasingly less relevant in driving traffic to media orgs. After looking at where traffic to media orgs is coming from, the PressGazette finds that, in basically every case, social media is sending less and less traffic to media sites. And that’s especially true for the social media sites most commonly associated with news.
And while this decline clearly predates Elon Musk, it is notable that he still insists that Twitter is an important site for the media. Based on the data in these articles, that doesn’t really appear to be true.
But, still, the larger point is that the whole concept being pushed in these link tax bills, such as the CJPA here in California, is that social media companies are somehow unfairly stealing revenue from news orgs. Yet, from what we can see, social media companies don’t much care about news; it’s not driving much usage at all.
It’s possible this is why the media orgs are so desperate for these corrupt link tax government handouts, but it really suggests that the reasoning behind them, that social media is unfairly “profiting” from news, is simply not supported by the data at all.
Back when I wrote the blog post detailing the basic content moderation learning curve speedrun, I actually thought that, like most sites that go through it, Elon might actually learn from it. Yet, it appears he still has trouble processing lessons from basically any of the mistakes he makes. Or he seems to be trying to leverage his own nonsense into helping his friends.
Yesterday was a weird one on Twitter, and that’s saying something given how weird and pointless the site has been of late.
On Thursday morning, the CEO of a nonsense-peddling website that doesn’t deserve to be named took to Twitter to whine that Twitter was suppressing “conservative” speech. Apparently, that website had worked out a deal with Twitter to host a very silly excuse for a documentary that serves no purpose other than to push forward a hateful, harassing culture war. The documentary came out last year, and got exactly the kind of bad attention its creators wanted, which is why we see no reason to name it here either. If you don’t know what it is, trust me, it’s exactly the kind of nonsense you think it is, focused on driving mockery and hatred towards people based on their identity.
As part of Elon’s big new push to host video (which has resulted in lots of infringing movies uploaded to the site, and a surprising lack of lawsuits from Hollywood so far), Twitter and the nonsense-peddling website had agreed to post the full documentary to Twitter, with some unclear promises of promotion. However, after the team at Twitter viewed a screener of the movie, they told the nonsense peddler that while the film could still be hosted on Twitter, they would limit its reach while labeling it (accurately) as “hateful conduct.”
To some extent, this was bound to happen. Remember, so much of this mess today is because a bunch of Trumpist crybabies insisted that basic moderation was ideological “censorship” of conservatives, even though actual studies showed that Twitter went out of their way to promote conservatives over others, and to let them avoid punishment for breaking the rules. But the Trumpist crew must, at all times, play the snowflake victim. They have no actual policy principles, so all they have is “these other people are trying to oppress us” despite that not being even remotely true. Hell, the whole movie at issue here is more of that very same thing. The underlying premise is that because some people ask you to treat them with respect, “the libs” are trying to oppress you. It’s nonsense.
Either way, there was, just briefly, this moment where it looked like maybe Twitter staff recognized that posting such whiny, hate-inspiring content wasn’t good for business. After all, just last month, the company had updated its “Hateful Conduct policy” which still includes rules against promoting “hostility and malice against others” based on a number of categories, including “gender identity.” And the policy makes it clear that this includes video content as well.
As such, it’s not hard to see how the film in question would violate that policy.
Elon Musk, however, weighed in directly on Twitter:
This was a mistake by many people at Twitter. It is definitely allowed.
Whether or not you agree with using someone’s preferred pronouns, not doing so is at most rude and certainly breaks no laws.
I should note that I do personally use someone’s preferred pronouns, just as I use someone’s preferred name, simply from the standpoint of good manners.
However, for the same reason, I object to rude behavior, ostracism or threats of violence if the wrong pronoun or name is used.
While he’s correct that it does not violate any laws in the US (in some countries it might), Twitter’s written policy says nothing at all about content needing to break the law to get visibility filtering.
And, again, remember that Musk himself keeps talking about “freedom of speech, not freedom of reach” and the company has said repeatedly that it will limit the visibility of content they believe violates their policy. And it appears that’s exactly what was happening here. The trust & safety team (what little is left of it) determined that this film violated the policies on promoting hostility and malice towards people for their gender identity, and, in response, allowed the film to still be posted on Twitter, but with limited reach.
All of that is clearly within Twitter’s stated policies under Elon Musk (all of those policies have been updated in the last two months under Musk).
So I’m not at all clear how Musk can be claiming that this was a “mistake.” Part of the problem is that he seems to think (incorrectly) that Twitter said the film wasn’t allowed at all, rather than just visibility filtered. But then… he basically says it shouldn’t be filtered either. Because someone pointed out that when a clip from the film was posted to Twitter, it had a label about visibility filtering, saying that the content may violate Twitter’s rules against Hateful Conduct, and Elon said it was “being fixed.”
But then things got even odder. After first claiming it was a mistake and was “being fixed,” a little while later he seemed to double back again and admit that the original designation was correct, and that it would be “advertising-restricted” which would “impact reach to some degree.”
A little later, after another nonsense peddler whined that the film was still being visibility filtered, Elon said that “we’re updating the system tomorrow so that those who follow” the nonsense peddler website “will see this in their feed, but it won’t be recommended to non-followers (nor will any advertising be associated with it).”
Which, uh, sounds exactly like what the nonsense peddler website was told originally, and which Elon had originally said “was a mistake by many people at Twitter,” despite it (1) clearly following the policies that Elon himself had previously agreed on and (2) matching his claimed “freedom of speech, not freedom of reach” concept. So, which is it? Is it just Elon talking out of both sides of his mouth yet again?
Or… the alternative, which some people are suggesting: Elon thinks that pretending to “suppress” this film would drive more views of it. Which seems to be supported by him claiming that “The Streisand Effect on this will set an all-time record!”
As the person who coined the Streisand Effect in the first place, I can assure you, this is not how any of this works. But either way the whole thing is stupid (and also why we’re not naming the film or the website, because if this is all a stupid attempt to create a fake Streisand Effect, there’s no reason we should help).
And, either way, this morning Elon insisted that all the visibility filtering had been lifted and the only limitation would be whether or not advertising would appear next to it.
He later tweeted a direct link to the film itself, promoting a tweet from the nonsense peddling website insisting (little fucking snowflakes that they are) that it’s the film “they don’t want you to see.”
Basically, a manufactured martyrdom controversy, combined with Twitter pretending to stand up to encouraging hatred, only for Musk to double down that hate has a comfy, welcoming home on Twitter.
Of course, in the midst of all this, the news came out that Ella Irwin, who had been leading trust & safety since relatively early in the Elon Musk reign, and who had been on Twitter through Wednesday directly responding to trust & safety requests, had resigned and was no longer at the company. It’s unclear if her resignation had anything to do with this mess, but the timing does seem notable.
Still, given all of this, is it really any wonder that advertisers like Ben & Jerry’s have announced that they’re ending all paid advertising on the site in response to the proliferation of hate speech?
There has been a lot said about Gonzalez v. Google, the first Supreme Court Section 230 case in 22 years. Of course, in those 2+ decades Section 230’s “twenty-six words that created the internet” have generated their fair share of courtroom and political controversy. But even given 230’s lightning-rod status for free speech and the internet, interest in the Gonzalez case was extreme. Experts and interest groups filed a total of 78 different amici in Gonzalez alone, totaling 236,471 (!!!) words for Google and 470,002 words total. In light of the volume, the Court extended oral arguments to 70 minutes and then still blew through that time limit by an hour and 34 minutes.
With so much to say, one might think there was much to be said. That is, until two weeks ago, when the Court dismissed the case in a perfunctory 2.5-page per curiam opinion that was brief enough to fit into a 15-tweet thread.
Doesn’t that make you go, “hmmm”?
That sure is a lot of words and time for… not a lot of words and time. Indeed, I think FAR more interesting than anything you could read about the legal issues discussed in the thousands of pages of briefing, or ignored in the breviloquent final decision, is the delta in word count between them, which tells the story of a two-year-long arc of legal realism.
Let me take you back to April 2021
The story, such as it is, begins in April of 2021, when Justice Clarence Thomas issued a very odd concurrence on a procedural dismissal of the case that had successfully challenged then-President Trump’s ability to block users on Twitter under the First Amendment. That case (confusingly called Knight v. Biden in the dismissal because of the change in administrations, but originally called Knight v. Trump) had since been rendered moot when Trump lost the election and ceased to be a government official.
With no live issue, the Court had little to do except clear the matter from the docket, which it did: granting cert, vacating the judgment, and remanding for dismissal in three tidy sentences — which made Thomas’s multi-page concurrence attached to it all the weirder.
The concurrence had the vibe of an unprovoked rant, with Thomas harnessing many of the far-right grievances about censorship by Big Tech companies, championing the controversial idea of applying common carriage doctrine to internet platforms, and attacking Section 230. But perhaps most concerning for internet law experts who disagreed was that Thomas seemed to be essentially putting out a call for cases. “It’s an invitation for plaintiff’s lawyers to bring cases challenging Section 230,” Jeff Kosseff, an internet law professor and author of the authoritative book on the controversial law, said at the time. “And I would not be surprised if we would start seeing more states passing laws that attempted to regulate content moderation.”
Skip forward to one year later in April 2022
Kosseff turned out to be prescient on both fronts. Less than a year after Thomas’s writing, in early April 2022, lawyers for the plaintiff in Gonzalez filed a petition for a writ of certiorari in the Supreme Court, challenging the application of platform immunity under Section 230.
But few law and technology experts had the time to take note of the case because Kosseff’s second prediction had also come true: both Florida and Texas had passed laws in 2021 putting in place must-carry-like provisions for social media.
So in the Spring of 2022, just as Gonzalez and its companion case Taamneh v. Twitter wound their way to the Supreme Court, almost no one was looking. Instead, all eyes were on the 11th and 5th Circuit Courts, which were issuing dramatically divergent opinions on the Florida and Texas laws under the First Amendment. These cases — Netchoice v. Moody (Florida) and Netchoice v. Paxton (Texas) — were not only raising big constitutional issues, they had generated a circuit split, which made Supreme Court review both high stakes and highly probable.
October 2022: Everyone’s hair is on fire with the Netchoice cases and then the Supreme Court sets their feet on fire with Gonzalez and Taamneh
So when the Court announced on October 3, 2022 that it was granting cert in two relatively unknown, low-profile internet tort cases, internet law experts were caught on their back foot. “When we were surprised by the cert grant, there was a sense that we [the internet law experts] might have just really misunderstood or underestimated the strength of these two cases,” said Mike Godwin, an internet law expert who filed an amicus brief in Gonzalez. But as many dropped everything to get up to speed on Gonzalez and Taamneh, another possibility emerged. It was not that the legal strength or the facts of these cases had been underestimated. Instead, “as we dug in, we could see that the cases didn’t seem likely to provide the Court an easy way to reinterpret Section 230,” Godwin recounts, “unless the Court was dead-set on reaching that result regardless of what the underlying issues might be.” And that possibility, given the specter of Thomas’s activist concurrence, made the concern far greater.
It’s worth noting here that internet law lawyers don’t spend a lot of time in the U.S. Supreme Court. As I mentioned above, the last major case heard by the Court was Reno v. ACLU in 1997, which struck down all but Section 230 of the Communications Decency Act and set the stage, for better or worse, for the next two decades. Now, suddenly, in the span of a few weeks, there were two cases granted and two more likely to be granted in the coming months.
“There’s a very turbulent legal landscape ahead,” Daphne Keller, an internet lawyer at Stanford Cyber Policy Center summarized in an interview at the time. “It’s like Dobbs, in that everyone feels the law is up for grabs, that justices will act on their political convictions and would be willing to disregard precedent.”
The issues in the Netchoice cases were huge and complex — First Amendment, dormant commerce clause, and federalism — but the threats of Gonzalez and Taamneh were now direct and imminent. Over the next several weeks, lawyers and advocates working on these issues scrambled to weigh in. The five-alarm fire translated into a huge flood of amicus briefing — 47 briefs in support of affirmance or Google, 18 in favor of reversal or Gonzalez, and 13 supporting neither party. (Full disclosure: I signed onto a brief with other law professors and law and tech experts in favor of Google.)
The Farcical February Oral Arguments
By the time oral arguments rolled around in late February of this year, there was a mix of collective exhaustion and massive pessimism. Though I knew many who trekked to DC to wait in line for 19 hours in the cold to attend oral argument in person, most of us were relegated to listening to the public audio feed provided by the Court. I organized a group of experts to listen and weigh in via liveblog at the Rebooting Social Media Institute at Harvard, where I was a fellow — most people were excited for the camaraderie, but more than one declined, reasoning that “given the odds we’ll be witnessing firsthand the demise of the internet” they preferred to be alone.
Supreme Court oral arguments are historically not a great predictor of the outcome of a case. Topics that come up at great length in argument sometimes never appear in the final opinion, and the moods of the justices are hard to read and often change. So the group of us that assembled — a mix of lawyers and legal types who had followed the case closely or filed briefs in it — gathered that morning with low expectations of learning anything new, and with a sense of gallows humor. If the internet was going to die that day, at least we’d be hanging out in Slack together making memes when it happened.
But as arguments began, it was clear that something very far outside the normal was happening. As the plaintiff, Gonzalez’s attorney went first, and at the close of his opening statement, unsurprisingly given his presumed interest in the case, the first question was from Thomas. But oddly the question from Thomas seemed somewhat hostile to the plaintiff’s arguments, urging him to make a better case: “I think you have to give us a clearer example of what your point is exactly,” the Justice stated, offering a few examples of what results one might get from asking a YouTube algorithm for a recipe of “rice pilaf from Uzbekistan… you don’t want pilaf from some other place, say, Louisiana.”
But whether from nerves or ineptitude, the plaintiff’s attorney seemed unable to clarify his arguments to the satisfaction of Thomas or really any of the justices, even as they tried to offer him help in doing so. As they went through their questions, it seemed not a single justice could tell why they were hearing this case or what the plaintiff was arguing for… and they were confused in nine different ways:
“Does your position send us down the road such that 230 really can’t mean anything at all?” asked Justice Kagan.
“I — I don’t know where you’re drawing the line. That’s the problem,” said Alito, later adding “I’m afraid I’m completely confused by whatever argument you’re making at the present time.”
“Can we back up a little bit and try to at least help me get my mind around your argument about how we should read the text of the statute?” offered Justice Jackson.
“Can I break down your complaint a moment?” asked Justice Sotomayor, summarizing the plaintiff’s main claim, then, in a remark that would turn any attorney’s blood cold, adding, “I think, as I’m listening to you today, you seem to have abandoned that.”
Over at our little live blog, the funereal mood had turned into an almost jocular one. Slowly a new explanation for the Court’s decision to hear arguments in Gonzalez and Taamneh emerged: maybe the Court had wanted to take an internet law tort case, but they’d picked badly, and now they knew it.
Resolving these cases in favor of the plaintiffs wouldn’t just radically re-interpret Section 230, it would also dramatically re-write what “aiding and abetting” means under the law — and that would have ramifications for all of tort law, not just social media. In other words, maybe the Court had been willing to be a little activist, but not this activist.
How do you solve a problem like a Mistaken Grant of Cert?
So over on the live blog we started contemplating: maybe this wasn’t the end times for the internet, maybe it was just an accident. What do you do with a Supreme Court case that you f*cked up in hearing? Well, if the Court wanted to resolve its little accident with as little damage as possible, it had two good options. One thing a Court can do when it grants cert seemingly in error is to “dismiss as improvidently granted.” As the arguments continued to deteriorate, Supreme Court expert and law professor Steve Vladeck floated this as a possibility on Twitter.
The other option, as Ben Wittes of Dog Shirt Daily pointed out in our liveblog and on the Lawfare Podcast, was to rule for Twitter in the companion case to Gonzalez, and then use that resolution to avoid having to rule in Gonzalez at all. That, at least, would save the Court face, to the same general effect.
And that is precisely what happened.
In Taamneh, a unanimous Court wrote a relatively brief 30-page opinion upholding the dismissal of a case against social media companies. That decision paved the way for the less-than-600-word per curiam in Gonzalez, where the Court “declined to address” the issues raised in the more than a quarter-million words of amicus briefs written on the case.
And that, bunnies, is how the Supreme Court turns a quarter-million words into 565
With such an unceremonious conclusion, many might see all those words from friends of the Court as wasted. After all, most people consider amicus briefs successful when they are cited in a majority opinion, not when they collectively generate a barely three-page dismissal. But as part of this broader story of the case, it’s the best possible outcome.
“All the amicus work was vindicated,” Keller texted me minutes after the decision was released, “precisely by being made to seem so unnecessary.”
Republished with permission from the Klonickles (which you should subscribe to).
Elon Musk has insisted that “transparency is the key to trust” in rebuilding Twitter in his image. He says it all the time. But, of course, under Musk, Twitter has been significantly less transparent, choosing to skip its transparency reports, and generally close itself off. But one of the key methods for transparency on Twitter has long been its willingness to allow academic researchers to access its API and do research around Twitter and its users.
This is how, for example, we were able to learn that (contrary to widespread belief), Twitter’s moderation efforts actually favored conservatives (rather than suppressed them), and that the “bias” in its moderation efforts was against misinformation, not any political ideology.
Of course, Musk’s desperate efforts to poke the bird he saddled with massive debt until it makes money meant that he turned off nearly everyone’s access to Twitter’s API (including ours) and demanded a minimum of $42,000 per month from academics. That’s half a million dollars a year. For access to one company’s data. This is… not the kind of money that academic institutions have to spend.
The whole thing seems deliberately designed to cut academics off from Twitter’s data and to be as opaque as possible, rather than transparent.
As if to put an exclamation point on that thinking, the latest is that Twitter is telling academic institutions that haven’t paid (i.e., basically all of them that used to use Twitter’s data for research) that they are required to delete all the data they collected in the past by the end of this month.
But in recent weeks, the company has been contacting researchers, asking them to pay $42,000 a month to access 0.3% of all the tweets posted to the platform – something researchers have previously said is totally unaffordable. Previous contracts for access to the data were set as low as a couple of hundred dollars a month.
An email, seen by the i, says researchers who don’t sign the new contract “will need to expunge all Twitter data stored and cached in your systems”. Researchers will be required to post screenshots “that showcase evidence of removal”. They have been given 30 days after their agreement expires to complete the process.
Now, in talking to people (both former Twitter employees and academic researchers) about this, they do say that the Twitter API contract has long had a clause regarding data deletion. But also, that it has never been used in this manner (only in cases where there were claims of misuse of the data), and that the demand to prove the data has been deleted is particularly egregious and petty.
But, really, it just highlights how little Elon is willing to have outside experts look into the details of how Twitter is working. It’s the opposite of transparency.
And, thus, Elon himself is effectively telling you that you should never trust Twitter.
On top of that, it seems particularly ironic that Twitter is demanding proof of deletion the very same week that Twitter itself began accidentally putting back tweets that Twitter had told people had been deleted. So Twitter is now being less transparent, demanding proof of deletion, at the very same time that it can’t delete things it promised it had deleted.
A whole bunch of media articles are noting that Twitter users who deleted tweets have noticed in recent weeks that the deleted tweets have magically returned. There seems to be little rhyme or reason for which deleted tweets have returned, but it’s definitely happening to many users. In some cases, people said they had deleted tens of thousands of tweets, only to find them all come back.
Twitter has said nothing, and people are generally guessing what happened. A former Twitter employee says that maybe some servers were moved between data centers, and that they “didn’t properly adjust the topology before reinserting them into the network, leading to stale data becoming revived.”
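The former employee’s theory, stale replicas being reinserted without accounting for deletions, maps onto a well-known failure mode in distributed storage. Here is a minimal, hypothetical sketch (none of this is Twitter’s actual code or architecture) of why merging a stale snapshot back into a live store can “revive” deleted records unless deletions are recorded explicitly, for example with tombstone markers:

```python
# Hypothetical illustration: a key/value store of tweet_id -> text.

def merge_naive(live: dict, stale: dict) -> dict:
    """Union a stale replica's records into the live store.

    A plain key/value store keeps no record that a key was deliberately
    deleted, so any key present on the stale replica but missing from the
    live store looks like data that needs to be restored.
    """
    merged = dict(stale)
    merged.update(live)  # live values win on conflict, but stale-only keys survive
    return merged

def merge_with_tombstones(live: dict, stale: dict, tombstones: set) -> dict:
    """Same merge, but skip any key that carries a deletion marker."""
    merged = {k: v for k, v in stale.items() if k not in tombstones}
    merged.update({k: v for k, v in live.items() if k not in tombstones})
    return merged

# A user posts two tweets, then deletes tweet 1.
stale_replica = {1: "old tweet", 2: "kept tweet"}   # snapshot taken before the delete
live_store = {2: "kept tweet"}                      # state after the delete
deleted = {1}                                       # tombstones for deleted tweet IDs

revived = merge_naive(live_store, stale_replica)            # tweet 1 comes back
safe = merge_with_tombstones(live_store, stale_replica, deleted)  # tweet 1 stays gone
```

This is why systems like Cassandra record deletions as tombstones: a replica that missed the delete can then be reconciled without resurrecting the data. Reinsert a node whose data predates the tombstones (or the topology that tracks them) and you get exactly the undeleted-tweets symptom described above.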
This is the kind of thing that happens when you kick nearly all of the institutional knowledge that held your newly owned website together out the door.
Anyway, for most people this isn’t that big of a deal, but there is real potential for harm. There are many reasons why people might delete old tweets, and some of them may be to protect themselves. There could be legal reasons to delete a tweet. Or reasons to protect against harassment.
Having such tweets come back to life (without notification) creates a real risk that actual harm could occur. This is the kind of thing that a good engineering team, working with a good trust & safety team would, you know, strive to prevent, in order to keep the users of a platform safe.
But, it’s been made abundantly clear that this is not something that Elon Musk cares about. Putting people in danger is fine, just so long as he continues to be the center of attention.
Over the last few years, there’s been a lot of fretting among the media, politicians, and others about how “deep fakes” would have a major impact on events, with faked imagery, audio, and video creating havoc on news events and political campaigns. Back in 2019, we had published a story suggesting that people calm down a little. As we noted, similar fears had come about before, including in the early 1990s with the introduction of Photoshop. Similar predictions were made about how disastrous this would be for “truth.”
But… that never really came to be.
So it is interesting to see the story this week about a fake photo (most likely created using a generative AI program) showing what appears to be an explosion near the Pentagon. The image was fake, but it was shared by a bunch of accounts on Twitter who had paid Elon his $8 fee, enabling a blue checkmark to appear next to their name (while some people call them “verified” accounts, they’re not actually verified, so they shouldn’t be called that). But, still, it was shared, and people believed it because it was made to look like it came from Bloomberg News.
The image, which bears all the hallmarks of being generated by artificial intelligence, was shared by numerous verified accounts with blue check marks, including one that falsely claimed it was associated with Bloomberg News.
“Large explosion near the Pentagon complex in Washington DC. – initial report,” the account posted, along with an image purporting to show black smoke rising near a large building.
And, from there, some in the media reported it as real:
The false reports of the explosion also made their way to air on a major Indian television network. Republic TV reported that an explosion had taken place, showing the fake image on its air and citing reports from the Russian news outlet RT. It later retracted the report when it became clear the incident had not taken place.
And… from there, it impacted the stock market:
In the moments after the image began circulating on Twitter, the US stock market took a noticeable dip. The Dow Jones Industrial Average fell about 80 points between 10:06 a.m. and 10:10 a.m., fully recovering by 10:13 a.m. Similarly, the broader S&P 500 went from up 0.02% at 10:06 a.m. to down 0.15% at 10:09 a.m. By 10:11 a.m., the index was positive again.
The fact that it was debunked and the stock market recovered quickly suggests, again, that the “threat” of faked content is still at least somewhat limited. But, in those five minutes, some people likely lost a lot of money (and others likely made a lot). So it did have an impact.
However, it does seem notable that this is the first story I can recall of such a faked image actually having such an impact, unlike the predictions from years ago that this would be a regular occurrence. Now, it may come to pass that this happens more often, but, if anything, this seems to reinforce our story from a few years ago that it’s pretty difficult to pull off a full scale faking that has any real impact.
It is still notable that the main vector that made it possible for this image to have even the slight (and temporary) effect that it had was Musk’s ridiculous decision to turn “verification” into a profit center/asshole signaling system, rather than an actual verification plan. That is still allowing malicious actors to abuse this system to try to pretend to be more legitimate. And that was a key piece to the puzzle here. Without that faux “verification” it seems unlikely that any of this would have worked.
Indeed, Bloomberg’s own report of the story, notes that the image was actually first posted to Facebook, but didn’t get much traction until “verified” Twitter accounts tweeted it, including conspiracy theory nonsense peddler ZeroHedge and a fake “Bloomberg News” feed:
The fake photo, which first appeared on Facebook, showed a large plume of smoke that a Facebook user claimed was near the US military headquarters in Virginia.
It soon spread on Twitter accounts that reach millions of followers, including the Russian state-controlled news network RT and the financial news site ZeroHedge, a participant in the social-media company’s new Twitter Blue verification system.
It’s no secret that Twitter isn’t paying many of its bills, including the rent for its headquarters. That was rumored last fall, but became much more clear when the landlords sued the company in January.
Now a new lawsuit, filed last week by six former employees, provides a lot more details on Elon’s view of, you know, paying for things he is contractually obligated to pay for. The employees, many of whom were high level, note that Musk and his circle of advisors (known by existing Twitter employees as “the goons”) made it clear to Twitter employees that they were to break all sorts of contracts:
Led by Musk and the cadres of sycophants who were internally referred to as the “transition team,” Twitter’s new leadership deliberately, specifically, and repeatedly announced their intentions to breach contracts, violate laws, and otherwise ignore their legal obligations.
The employees filing the lawsuit note that they were constantly ordered to violate their own legal obligations, often in the most obnoxious ways.
“Elon doesn’t pay rent,” one member of the transition team told Hawkins. Another member of the transition team put it more bluntly to Killian: “Elon told me he would only pay rent over his dead body.”
The crux of the lawsuit itself is that Musk hasn’t paid them the required severance he owes them per their contracts. But they used the opportunity to reveal a lot of what happened within Twitter. Enough that apparently the city of San Francisco has opened an investigation based on the claims in the complaint.
And while the complaint details various city building codes that Musk ordered employees to violate, the decision not to pay the rent is especially interesting. One of the plaintiffs, Tracy Hawkins, was Twitter’s VP of Real Estate and Workplace, and (as the complaint notes) if she had followed Elon’s orders, it would have destroyed her reputation in the real estate world.
The complaint paints quite a story:
On or about October 30, 2022, Hawkins attended a meeting with Steve Davis, Jared Burchall, and many of Twitter’s global leaders.
In that meeting, Davis announced several changes that boded ill for Hawkins’ team and her role at Twitter.
First, he announced that Twitter’s Sourcing and Procurement team should handle all lease negotiations from that point forward, despite lacking both personnel and experience sufficient to handle this task.
Next, he announced that the company would no longer be working with brokers to procure and negotiate leases.
This choice ran in conflict with every established standard and practice of commercial real estate management, and stood to further increase the burden on the in-house staff substantially.
The meeting gave no opportunities for feedback or discussion – it was merely a series of nonsensical pronouncements.
The only justification given for the changes was “Elon wants this.”
Very soon thereafter, Davis informed Hawkins that Twitter needed to find five hundred million dollars in annual savings.
To accomplish this, each Global Lead was given a massive spreadsheet that had to be filled out every single day, identifying possible savings opportunities.
Hawkins’ spreadsheet covered thirty locations and upwards of fifty leases.
The pressure to fill in the spreadsheet on time was immense. Expectations from above made it clear that compliance was prioritized above accuracy.
Nonetheless, Hawkins and her team strove to deliver reliable, well-contextualized information.
For example, Twitter instructed Hawkins to identify leases for cancellation.
When she identified potential sites and leases that could be terminated for cost savings, Hawkins and her team took the time to document the risk factors involved in downsizing or terminating these leases, such as large termination fees.
However, when the time came to present their conclusions, this added context was not well received.
When informed of the risks of termination fees during a meeting on November 3, 2022, Steve Davis said “well, we just won’t pay those. We just won’t pay landlords.”
Davis also told Hawkins “We just won’t pay rent.”
Those are direct quotes of Davis per Hawkins’s best recollection; to the extent that they are not word-for-word accurate they are an extremely tight paraphrase.
Hawkins was shocked.
Twitter specifically directed Hawkins to breach its leases, whether by terminating without any good faith justification under the terms of the applicable lease, or by simply stealing from the landlords by intentionally remaining on the premises without any intention of paying amounts Twitter knew and believed were its legal obligation to pay.
Unwilling to be involved in (let alone responsible for) such thefts, Hawkins resigned the next day.
Later in the complaint it notes:
In effect, if she had done what Twitter was asking her to do, Hawkins would have become permanently unemployable in her field.
Perhaps even crazier is the story of Joseph Killian, Twitter’s Global Head of Construction and Design, who was given the role of taking over Hawkins’ responsibilities after she had quit. You have to read the following because it is absolutely incredible:
After Hawkins left Twitter, Killian, who was Twitter’s Global Head of Construction and Design, was immediately assigned Hawkins’s duties and given responsibility for managing Twitter’s portfolio of nearly fifty leases.
Killian worked directly with the Transition Team to effect the transition from Twitter 1.0 (pre-Musk) to Twitter 2.0 (post-Musk) and bring Twitter in line with Musk’s standard business practices.
Killian was directed in these activities by Steve Davis and Liz Jenkins, who worked for the Boring Company, and Pablo Mendoza, a venture capitalist who invested with Musk.
Killian was also directed in these activities by Nicole Hollander.
On information and belief, Hollander was not employed by any of Musk’s companies.
On information and belief, Hollander is Steve Davis’s girlfriend and the mother of his child.
On information and belief, Hollander was living at Twitter headquarters with Davis and their infant child, who was a month old.
Despite not being employed by any of Musk’s companies, Hollander nonetheless had full instructional authority over Killian and the rest of his team with regards to the transition.
Almost immediately, Musk’s “zero-cost basis” policy reared its head:
Killian was informed by the Transition Team that he would have to justify his spend to Musk personally, and that if Musk was not convinced that the expenses were necessary, he would simply default on his contractual obligations and let the expenses go unpaid.
In early December, Davis sent a 3:00 a.m. email to 15 or 20 managers complaining about Twitter’s rent obligations, which totaled $130M annually.
In this email, Davis specifically compared Twitter’s rent obligations to SpaceX’s, noting that Twitter had 1/10th as many employees as SpaceX but paid five times as much rent annually.
Of course, Twitter had significantly more employees when it first incurred its rent obligations.
Killian quickly became concerned that Musk intended to stop paying rent on Twitter’s outstanding leases, breaching the contracts and placing the company at risk of being evicted.
Indeed, Musk’s attorney, Alex Spiro, loudly opined that it was unreasonable for Twitter’s landlords to expect Twitter to pay rent, since San Francisco was a “shithole.”
So, Alex Spiro is a big-time lawyer. One of the biggest. But, I’m pretty sure that not paying your contractually obligated rent because the city is a “shithole” is not how anything fucking works.
Either way, I guess this means that Spiro and Musk approve of squatting in “shithole” cities? Power to the people! But I think this also means that Spiro doesn’t think anyone should pay Twitter anything, because it, too, has become quite the shithole.
In any case, the complaint then details how Davis ordered Killian to stop paying rent. And, also, to… install bathrooms all over Twitter HQ, telling him not to get permits (in violation of Twitter’s lease) and to hire unlicensed plumbers to do the work. And when Killian emailed his concerns about this, Davis’s apparent girlfriend (who, again, was not employed there, but was living in Twitter HQ) told Killian never to put such concerns in writing:
Musk announced via the Transition Team that he was going to be installing “hotel rooms” at Twitter HQ.
Killian was initially told that the “hotel rooms,” soon renamed to “sleeping rooms” to avoid triggering the suspicions of the city inspectors, were just being installed to give exhausted and overworked employees a place to nap.
Though the changes had initially been simple, if unorthodox – removing a conference table and installing a bed – Davis instructed Killian to begin planning for and implementing the addition of features like en-suite bathrooms and other changes to the physical plant.
Concerned about how city inspectors would react to Twitter’s plans, Killian emailed the Transition Team to note that the changes they had made thus far were limited to ‘just furniture’ and therefore were code compliant, but that Twitter’s future planned changes would require permits and more complicated code compliance.
In response, Hollander visited him in person and emphatically instructed him to never put anything about the project in writing again.
Hollander appeared surprised and distressed that Killian did not inherently understand that this was not a project for which Musk and the Transition Team wanted a written record.
Hollander specifically conveyed that Davis in particular was upset that Killian had sent the email.
On Friday, Elon mocked the reports that came out of the lawsuit regarding the “wrong locks” on doors:
But… the details in the lawsuit are not just about “wrong locks on doors” but about the real risk that people would fucking die. Which puts Elon’s comment in a different light:
Killian was instructed to install space heaters in the “hotel rooms” in further violation of Twitter’s lease.
Killian was also instructed to place locks on the “hotel room” doors – a request that betrayed the lie that these were intended to be temporary rest spaces for exhausted Tweeps.
California code requires locks that automatically disengage when the building’s fire suppression systems are triggered.
Killian was repeatedly told that compliant locks were too expensive and instructed to immediately install cheaper locks that were not compliant with life safety and egress codes.
Again, Killian protested that no licensed tradesperson would perform work that violated the building code.
Killian protested that installing these locks would put lives at risk – that in case of an earthquake or fire (the latter of which was made dramatically more likely by the noncompliant electrical work and the presence of the space heaters he had been instructed to install), these locks would remain locked, blocking first responders from being able to access the rooms and the Tweeps within.
Nobody cared.
On information and belief, the non-compliant locks were in fact eventually installed – but not by Killian.
Killian quit that day.
Yikes.
I mean, it’s pretty clear that Elon does not care one bit for the lives and wellbeing of those who work for him. But, this really takes it to another level.
And all this because Elon, who didn’t understand literally anything about social media, got suckered into massively overpaying for the company, saddled it with a huge, unnecessary debt, and then decided that the way to deal with that was to put people at risk?