So, last Friday, the 5th Circuit released its opinion in the appeal of an absolutely ridiculous Louisiana federal court ruling that insisted large parts of the federal government were engaged in some widespread censorial conspiracy with social media, and barred large parts of the government from talking to social media companies and even academic researchers.
The 5th Circuit massively trimmed back the district court’s injunction: it threw out 9 of the 10 listed “prohibitions,” removed a bunch of the defendants (including CISA and Anthony Fauci’s NIAID, noting that there was no evidence they had done anything improper), and chopped the one remaining prohibition back to something close to meaningless (basically “don’t coerce the companies”).
I thought the 5th Circuit was right to use the tests that the 2nd and 9th Circuits used for “coercion,” but found the actual application of those tests to be… at best weird, and at worst potentially extremely problematic (especially in the case of the CDC defendant, where the ruling made no sense at all). That confused application of the facts to the test presented a real challenge, as it arguably provided zero useful guidance on how the administration could avoid violating the injunction. And that’s because the court laid out no coherent or understandable way of applying the test. It kinda made stuff up as it went along and said “that’s coercion,” even though it wasn’t clear what was actually coercive.
Even when the 5th Circuit highlighted, for example, quotes from the administration to social media companies, it never provided the context or details. In fact, it would provide tiny fragments (a few-word phrases) without any indication of who said what, which websites in particular they were talking about, and what it actually meant in context. And that was a real problem, especially as the lower court took many quotes so out of context as to reverse their meaning (and in one case, added in words to make a quote say the opposite of what it really said).
That said, I still wondered if the Biden administration would actually ask the Supreme Court to review it, because the final ruling was pretty limited in scope, and there’s a real risk that this Supreme Court, which has become so political in nature, would make a decision that was much, much worse and much, much more problematic for the administration.
Apparently, the White House felt differently, and it has rushed to ask the Supreme Court to review things on the shadow docket. Justice Alito has now put a stay on the injunction and asked for filings by this coming Wednesday to review the issue.
The White House’s application is worth reading. First, they challenge the standing of the plaintiffs in the case (five people who were moderated on social media, along with the states Louisiana and Missouri). The White House notes that even if you argue that the individuals who were moderated have standing, they faced moderation before the White House said anything (i.e., it was independent decisions by the companies):
The Fifth Circuit held that they have standing because their posts have been moderated by social-media platforms. But respondents failed to show that those actions were fairly traceable to the government or redressable by injunctive relief. To the contrary, respondents’ asserted instances of moderation largely occurred before the allegedly unlawful government actions. The Fifth Circuit also held that the state respondents have standing because they have a “right to listen” to their citizens on social media. App., infra, 204a. But the court cited no precedent for that boundless theory, which would allow any state or local government to challenge any alleged violation of any constituent’s right to speak.
The larger point, though, is the 1st Amendment arguments regarding the jawboning questions, with the White House pointing out that these rulings take away the government’s bully pulpit, where it is allowed to advocate for positions, it just can’t threaten or punish people for their speech:
Second, the Fifth Circuit’s decision contradicts fundamental First Amendment principles. It is axiomatic that the government is entitled to provide the public with information and to “advocate and defend its own policies.” Board of Regents v. Southworth, 529 U.S. 217, 229 (2000). A central dimension of presidential power is the use of the Office’s bully pulpit to seek to persuade Americans — and American companies — to act in ways that the President believes would advance the public interest. President Kennedy famously persuaded steel companies to rescind a price increase by accusing them of “ruthless[ly] disregard[ing]” their “public responsibilities.” John F. Kennedy Presidential Library & Museum, News Conference 30 (Apr. 11, 1962), perma.cc/M7DL-LZ7N. President Bush decried “irresponsible” subprime lenders that shirked their “responsibility to help” distressed homeowners. The White House, President Bush Discusses Homeownership Financing (Aug. 31, 2007), perma.cc/DQ8B-JWN4. And every President has engaged with the press to promote his policies and shape coverage of his Administration. See, e.g., Graham J. White, FDR and the Press (1979).
Of course, the government cannot punish people for expressing different views. Nor can it threaten to punish the media or other intermediaries for disseminating disfavored speech. But there is a fundamental distinction between persuasion and coercion. And courts must take care to maintain that distinction because of the drastic consequences resulting from a finding of coercion: If the government coerces a private party to act, that party is a state actor subject “to the constraints of the First Amendment.” Manhattan Cmty. Access Corp. v. Halleck, 139 S. Ct. 1921, 1933 (2019). And this Court has warned against expansive theories of state action that would “eviscerate” private entities’ “rights to exercise editorial control over speech and speakers on their properties or platforms.” Id. at 1932.
The Fifth Circuit ignored those principles. It held that officials from the White House, the Surgeon General’s office, and the FBI coerced social-media platforms to remove content despite the absence of even a single instance in which an official paired a request to remove content with a threat of adverse action — and despite the fact that the platforms declined the officials’ requests routinely and without consequence. Indeed, the Fifth Circuit suggested that any request from the FBI is inherently coercive merely because the FBI is a powerful law enforcement agency. And the court held that the White House, the FBI, and the CDC “significantly encouraged” the platforms’ content-moderation decisions — and thus transformed those decisions into state action — on the theory that officials were “entangled” in the platforms’ decisions. App., infra, 235a. The court did not define that novel standard, but found it satisfied primarily because platforms requested and relied upon CDC’s guidance on matters of public health.
Of course, this is the entire debate about jawboning in a nutshell. Where is the line between persuasion and coercion? The White House is correct that the 5th Circuit’s ruling doesn’t lay out a clear test or application, and leaves things muddled, but part of the problem is that where that line is has always been kinda muddled.
And I’m not at all sure that this Supreme Court will properly construe that line.
However, as the White House notes (and I would agree), the discussion with regard to the CDC in particular is kind of unworkable:
The implications of the Fifth Circuit’s holdings are startling. The court imposed unprecedented limits on the ability of the President’s closest aides to use the bully pulpit to address matters of public concern, on the FBI’s ability to address threats to the Nation’s security, and on the CDC’s ability to relay public-health information at platforms’ request. And the Fifth Circuit’s holding that platforms’ content-moderation decisions are state action would subject those private actions to First Amendment constraints — a radical extension of the state-action doctrine.
The White House also points out that the unclear nature of the remaining injunction creates a burden on federal government employees:
Third, the lower courts’ injunction violates traditional equitable principles. An injunction must “be no more burdensome to the defendant than necessary to provide complete relief to the plaintiffs.” Califano v. Yamasaki, 442 U.S. 682, 702 (1979). Here, however, the injunction sweeps far beyond what is necessary to address any cognizable harm to respondents: Although the district court declined to certify a class, the injunction covers the government’s communications with all social-media platforms (not just those used by respondents) regarding all posts by any person (not just respondents) on all topics. And it forces thousands of government officials and employees to choose between curtailing their interactions with (and public statements about) social-media platforms or risking contempt should the district court conclude that they ran afoul of the Fifth Circuit’s novel and ill-defined concepts of coercion and significant encouragement.
I don’t necessarily disagree with any of that. The ruling (mainly in how it applies the test for coercion) is a mess, and the final injunction (while massively slimmed down from the lower court’s) is confusing and unclear.
But, still, given how much of a partisan political football this is, I can easily see the Supreme Court making things way, way worse.
It looks like there will be a quick turnaround on the shadow docket issue, which I’m guessing may lead to a further stay of the injunction, as the White House said it intends to file a full, normal cert petition in October, allowing the Supreme Court to hear the full case this term. So it would be easy for Alito to stay the injunction until the case is fully briefed and heard.
Again, I get where the White House is coming from. The 5th Circuit ruling has real issues, but it struck me as way less damaging than whatever else might come out of this process. But, I guess, in the long run, it’s better to have a full ruling on this issue from the Supreme Court. I’m just scared of what this particular Supreme Court will say.
We’re going to go slow on this one, because there’s a lot of background and details and nuance to get into in Friday’s 5th Circuit appeals court ruling in Missouri v. Biden, the case that initially resulted in a batshit crazy 4th of July ruling regarding the US government “jawboning” social media companies. The reporting on the 5th Circuit ruling has been kinda atrocious, perhaps because the end result of the ruling is this:
The district court’s judgment is AFFIRMED with respect to the White House, the Surgeon General, the CDC, and the FBI, and REVERSED as to all other officials. The preliminary injunction is VACATED except for prohibition number six, which is MODIFIED as set forth herein. The Appellants’ motion for a stay pending appeal is DENIED as moot. The Appellants’ request to extend the administrative stay for ten days following the date hereof pending an application to the Supreme Court of the United States is GRANTED, and the matter is STAYED.
Affirmed, reversed, vacated, modified, denied, granted, and stayed. All in one. There’s… a lot going on in there, and a lot of reporters aren’t familiar enough with the details, the history, or the law to figure out what’s going on. Thus, they report just on the bottom line, which is that the court is still limiting the White House. But it’s at a much, much, much lower level than the district court did, and this time it’s way more consistent with the 1st Amendment.
The real summary is this: the appeals court ditched nine out of the ten “prohibitions” that the district court put on the government, and massively narrowed the only remaining one, bringing it down to a reasonable level (telling the U.S. government that it cannot coerce social media companies, which, uh, yes, that’s exactly correct).
But then the 5th Circuit applied its own (perhaps surprisingly, very good) analysis in a slightly weird way. And it also seems to contradict the [checks notes] 5th Circuit in a different case. But we’ll get to that in another post.
Much of the reporting on this suggests it was a big loss for the Biden administration. The reality is that it’s a mostly appropriate slap on the wrist that hopefully will keep the administration from straying too close to the 1st Amendment line again. It basically threw out 9.5 out of 10 “prohibitions” placed by the lower court, and even on the half a prohibition it left, it said it didn’t apply to the parts of the government that the GOP keeps insisting were the centerpieces of the giant conspiracy they made up in their minds. The court finds that CISA, Anthony Fauci’s NIAID, and the State Department did not do anything wrong and are no longer subject to any prohibitions.
The details: the state Attorneys General of Missouri and Louisiana sued the Biden administration with some bizarrely stupid theories about the government forcing websites to take down content it disagreed with. The case was brought in a federal court district with a single Trump-appointed judge. The case was allowed to move forward by that judge, turning it into a giant fishing expedition into all sorts of government communications to the social media companies, which were then presented to the judge out of context and in a misleading manner. The original nonsense theories were mostly discarded (because they were nonsense), but by quoting some emails out of context, the states (and a few nonsense peddlers they added as plaintiffs to have standing) were able to convince the judge that something bad was going on.
As we noted in our analysis of the original ruling, they did turn up a few questionable emails from White House officials who were stupidly trying to act tough about disinformation on social media. But even then, things were taken out of context. For example, I highlighted this quote from the original ruling and called it out as obviously inappropriate by the White House:
Things apparently became tense between the White House and Facebook after that, culminating in Flaherty’s July 15, 2021 email to Facebook, in which Flaherty stated: “Are you guys fucking serious? I want an answer on what happened here and I want it today.”
Except… if you look at it in context, the email has nothing to do with content moderation. The White House had noticed that the @potus Instagram account was having some issues, and Meta told the White House that “the technical issues that had been affecting follower growth on @potus have been resolved.” A WH person received this and asked for more details. Meta responded with “it was an internal technical issue that we can’t get into, but it’s now resolved and should not happen again.” Someone then cc’d Rob Flaherty, and the quote above was in response to that. That is, it was about a technical issue that had prevented the @potus account from getting more followers, and he wanted details about how that happened.
So… look, I’d still argue that Flaherty was totally out of line here, and his response was entirely inappropriate from a professional standpoint. But it had literally nothing to do with content moderation issues or pressuring the company to remove disinformation. So it’s hard to see how it was a 1st Amendment violation. Yet, Judge Terry Doughty presented it in his ruling as if that line was about the removal of COVID disinfo. It is true that Flaherty had, months earlier, asked Facebook for more details about how the company was handling COVID disinfo, but those messages do not come across as threatening in any way, just asking for info.
The only way to make them seem threatening was to then include Flaherty’s angry message from months later, eliding entirely what it was about, and pretending that it was actually a continuation of the earlier conversation about COVID disinfo. Except that it wasn’t. Did Doughty not know this? Or did he pretend? I have no idea.
Doughty somehow framed this and a few other questionable, out-of-context things as “a far-reaching and widespread censorship campaign.” As we noted in our original post, he literally inserted words that did not exist in a quote by Renee DiResta to make this argument. He claimed the following:
According to DiResta, the EIP was designed to “get around unclear legal authorities, including very real First Amendment questions” that would arise if CISA or other government agencies were to monitor and flag information for censorship on social media.
Except, if you read DiResta’s quote, “get around” does not actually show up anywhere. Doughty just added that out of thin air, which makes me think that perhaps he also knew he was misrepresenting the context of Flaherty’s comment.
Either way, Doughty’s quote from DiResta is a judicial fiction. He inserted words she never used to change the meaning of what was said. What DiResta is actually saying is that they set up EIP as a way to help facilitate information sharing, not to “get around” the “very real First Amendment questions,” and also not to encourage removal of information, but to help social media companies and governments counter and respond to disinformation around elections (which they did for things like misleading election procedures). That is, the quote here is about respecting the 1st Amendment, not “getting around” it. Yet, Doughty added “get around” to pretend otherwise.
He then issued a wide-ranging list of 10 prohibitions that were so broad I heard from multiple people within tech companies that the federal government canceled meetings with them on important cybersecurity issues, because they were afraid that any such meeting might violate the injunction.
So the DOJ appealed, and the case went to the 5th Circuit, which has a history of going… nutty. However, this ruling is mostly not nutty. It’s actually a very thorough and careful analysis of the standards for when the government steps over the line, violating 1st Amendment rights by pressuring companies to suppress speech. As we’ve detailed for years, the line is whether or not the government was being coercive. The government is very much allowed to use its own voice to persuade. But when it is coercive, it steps over the line.
The appeals court analysis on this is very thorough and right on, as it borrows the important and useful precedents from other circuits that we’ve talked about for years, agreeing with all of them. Where is the line between persuasion and coercion?
Next, we take coercion—a separate and distinct means of satisfying the close nexus test. Generally speaking, if the government compels the private party’s decision, the result will be considered a state action. Blum, 457 U.S. at 1004. So, what is coercion? We know that simply “being regulated by the State does not make one a state actor.” Halleck, 139 S. Ct. at 1932. Coercion, too, must be something more. But, distinguishing coercion from persuasion is a more nuanced task than doing the same for encouragement. Encouragement is evidenced by an exercise of active, meaningful control, whether by entanglement in the party’s decision-making process or direct involvement in carrying out the decision itself. Therefore, it may be more noticeable and, consequently, more distinguishable from persuasion. Coercion, on the other hand, may be more subtle. After all, the state may advocate—even forcefully—on behalf of its positions
It points to the key case that all of these cases always lead back to, the important Bantam Books v. Sullivan case that is generally seen as the original case on “jawboning” (government coercion to suppress speech):
That is not to say that coercion is always difficult to identify. Sometimes, coercion is obvious. Take Bantam Books, Inc. v. Sullivan, 372 U.S. 58 (1963). There, the Rhode Island Commission to Encourage Morality—a state-created entity—sought to stop the distribution of obscene books to kids. Id. at 59. So, it sent a letter to a book distributor with a list of verboten books and requested that they be taken off the shelves. Id. at 61–64. That request conveniently noted that compliance would “eliminate the necessity of our recommending prosecution to the Attorney General’s department.” Id. at 62 n.5. Per the Commission’s request, police officers followed up to make sure the books were removed. Id. at 68. The Court concluded that this “system of informal censorship,” which was “clearly [meant] to intimidate” the recipients through “threat of legal sanctions and other means of coercion” rendered the distributors’ decision to remove the books a state action. Id. at 64, 67, 71–72. Given Bantam Books, not-so-subtle asks accompanied by a “system” of pressure (e.g., threats and follow-ups) are clearly coercive.
But, the panel notes, that level of coercion is not always present, but it doesn’t mean that other actions aren’t more subtly coercive. Since the 5th Circuit doesn’t currently have a test for figuring out if speech is coercive, it adopts the same tests that were recently used in the 2nd Circuit with the NRA v. Vullo case, where the NRA went after a NY state official who encouraged insurance companies to reconsider issuing NRA-endorsed insurance policies. The 2nd Circuit ran through a test and found that this urging was an attempt at persuasion and not coercive. The 5th Circuit also cites the 9th Circuit, which even more recently tossed out a case claiming that Elizabeth Warren’s comments to Amazon regarding an anti-vaxxer’s book were coercive, ruling they were merely an attempt to persuade. Both cases take a pretty thoughtful approach to determining where the line is, so it’s good to see the 5th Circuit adopt a similar test.
For coercion, we ask if the government compelled the decision by, through threats or otherwise, intimating that some form of punishment will follow a failure to comply. Vullo, 49 F.4th at 715. Sometimes, that is obvious from the facts. See, e.g., Bantam Books, 372 U.S. at 62–63 (a mafiosi-style threat of referral to the Attorney General accompanied with persistent pressure and follow-ups). But, more often, it is not. So, to help distinguish permissible persuasion from impermissible coercion, we turn to the Second (and Ninth) Circuit’s four-factor test. Again, honing in on whether the government “intimat[ed] that some form of punishment” will follow a “failure to accede,” we parse the speaker’s messages to assess the (1) word choice and tone, including the overall “tenor” of the parties’ relationship; (2) the recipient’s perception; (3) the presence of authority, which includes whether it is reasonable to fear retaliation; and (4) whether the speaker refers to adverse consequences. Vullo, 49 F.4th at 715; see also Warren, 66 F.4th at 1207.
So, the 5th Circuit adopts a strong test to say when a government employee oversteps the line, and then looks to apply it. I’m a little surprised that the court then finds that some defendants probably did cross that line, mainly the White House and the Surgeon General’s office. I’m not completely surprised by this, as it did appear that both had certainly walked way too close to the line, and we had called out the White House for stupidly doing so. But… if that’s the case, the 5th Circuit should really show how those defendants crossed the line, and it does not do a very good job. It admits that the White House and the Surgeon General are free to talk to platforms about misinformation and even to advocate for positions:
Generally speaking, officials from the White House and the Surgeon General’s office had extensive, organized communications with platforms. They met regularly, traded information and reports, and worked together on a wide range of efforts. That working relationship was, at times, sweeping. Still, those facts alone likely are not problematic from a First-Amendment perspective.
So where does it go over the line? When the White House threatened to hit the companies with Section 230 reform if they didn’t clean up their sites! The ruling notes that even pressuring companies to remove content in strong language might not cross the line. But threatening regulatory reforms could:
That alone may be enough for us to find coercion. Like in Bantam Books, the officials here set about to force the platforms to remove metaphorical books from their shelves. It is uncontested that, between the White House and the Surgeon General’s office, government officials asked the platforms to remove undesirable posts and users from their platforms, sent follow-up messages of condemnation when they did not, and publicly called on the platforms to act. When the officials’ demands were not met, the platforms received promises of legal regime changes, enforcement actions, and other unspoken threats. That was likely coercive.
Still… here the ruling is kinda weak. The panel notes that even with what’s said above the “officials’ demeanor” matters, and that includes their “tone.” To show that the tone was “threatening,” the panel… again quotes Flaherty’s demand for answers “immediately,” repeating Doughty’s false idea that that comment was about content moderation. It was not. The court does cite to some other “tone” issues, but again provides no context for them, and I’m not going to track down every single one.
Next, the court says we can tell that the White House’s statements were coercive because: “When officials asked for content to be removed, the platforms took it down.” Except, as we’ve reported before, that’s just not true. The transparency reports from the companies show how they regularly ignored requests from the government. And the EIP reporting system that was at the center of the lawsuit, and which many have insisted was the smoking gun, showed that the tech companies “took action” on only 35% of items. And even that number is too high, because TikTok was the most aggressive company covered, and they took action on 64% of reported URLs, meaning Facebook, Twitter, etc., took action on way less than 35%. And even that exaggerates the amount of influence because “take action” did not just mean “take down.” Indeed, the report said that only 13% of reported content was “removed.”
So, um, how does the 5th Circuit claim that “when officials asked for content to be removed, the platforms took it down”? The data simply doesn’t support that claim, unless they’re talking about some other set of requests.
One area where the court does make some good points is calling out — as we ourselves did — just how stupid it was for Joe Biden to claim that the websites were “killing people.” Of course, the court leaves out that three days later, Biden himself admitted that his original words were too strong, and that “Facebook isn’t killing people.” Somehow, only the first quote (which was admittedly stupid and wrong) makes it into the 5th Circuit opinion:
Here, the officials made express threats and, at the very least, leaned into the inherent authority of the President’s office. The officials made inflammatory accusations, such as saying that the platforms were “poison[ing]” the public, and “killing people.”
So… I’m a bit torn here. I wasn’t happy with the White House making these statements and said so at the time. But they didn’t strike me as anywhere near going over the coercive line. This court sees it differently, but seems to take a lot of commentary out of context to do so.
The concern about the FBI is similar. The court seems to read things totally out of context:
Fourth, the platforms clearly perceived the FBI’s messages as threats. For example, right before the 2022 congressional election, the FBI warned the platforms of “hack and dump” operations from “state-sponsored actors” that would spread misinformation through their sites. In doing so, the FBI officials leaned into their inherent authority. So, the platforms reacted as expected—by taking down content, including posts and accounts that originated from the United States, in direct compliance with the request.
But… that is not how anyone has described those discussions. I’ve seen multiple transcripts and interviews of people at the platforms who were in the meetings where “hack and dump” were discussed, and the tenor was more “be aware of this, as it may come from a foreign effort to spread disinfo about the election,” coming with no threat or coercion — just simply “be on the lookout” for this. It’s classic information sharing.
And the platforms had reason to be on the lookout for such things anyway. If the FBI came to Twitter and said “we’ve learned of a zero day hack that can allow hackers into your back end,” and Twitter responded by properly locking down their systems… would that be Twitter “perceiving the messages as threats,” or Twitter taking useful information from the FBI and acting accordingly? Everything I’ve seen suggests the latter.
Even stranger is the claim that the CDC was coercive. The CDC has literally zero power over the platforms. It has no regulatory power over them and no law enforcement power. So I can’t see how it was coercive at all. Here, the 5th Circuit just kinda wings it. After admitting that the CDC lacked any sort of power over the sites, it basically says “but the sites relied on info from the CDC, so it must have been coercive.”
Specifically, CDC officials directly impacted the platforms’ moderation policies. For example, in meetings with the CDC, the platforms actively sought to “get into policy stuff” and run their moderation policies by the CDC to determine whether the platforms’ standards were “in the right place.” Ultimately, the platforms came to heavily rely on the CDC. They adopted rule changes meant to implement the CDC’s guidance. As one platform said, they “were able to make [changes to the ‘misinfo policies’] based on the conversation [they] had last week with the CDC,” and they “immediately updated [their] policies globally” following another meeting. And, those adoptions led the platforms to make moderation decisions based entirely on the CDC’s say-so—“[t]here are several claims that we will be able to remove as soon as the CDC debunks them; until then, we are unable to remove them.” That dependence, at times, was total. For example, one platform asked the CDC how it should approach certain content and even asked the CDC to double check and proofread its proposed labels.
So… one interpretation of that is that the CDC was controlling site moderation practices. But another, more charitable (and frankly, from conversations I’ve had, way more accurate) interpretation was that we were in the middle of a fucking pandemic where there was no good info, and many websites decided (correctly) that they didn’t have epidemiologists on staff, and therefore it made sense to ask the experts what information was legit and what was not, based on what they knew at the time.
Note that in the paragraph above, the one that the 5th Circuit uses to claim that the platforms’ policies were controlled by the CDC, it admits that the sites were reaching out to the CDC themselves, asking for info. That… doesn’t sound coercive. That sounds like trust & safety teams recognizing that they’re not the experts in a very serious and rapidly changing crisis… and asking the experts.
Now, there were perhaps reasons that websites should have been less willing to just go with the CDC’s recommendations, but would you rather ask expert epidemiologists, or the team that most recently was trying to stop spam on your platform? It seems kinda logical to ask the CDC, and wait until it confirmed that something was false before taking action. But alas.
Still, even with those three parts of the administration deemed to have crossed the line, most of the rest of the opinion is good. Despite all of the nonsense conspiracy theories about CISA, which were at the center of the case according to many, the 5th Circuit finds no evidence of any coercion there, and releases it from any of the restrictions.
Finally, although CISA flagged content for social-media platforms as part of its switchboarding operations, based on this record, its conduct falls on the “attempts to convince,” not “attempts to coerce,” side of the line. See Okwedy, 333 F.3d at 344; O’Handley, 62 F.4th at 1158. There is not sufficient evidence that CISA made threats of adverse consequences—explicit or implicit—to the platforms for refusing to act on the content it flagged. See Warren, 66 F.4th at 1208–11 (finding that senator’s communication was a “request rather than a command” where it did not “suggest that compliance was the only realistic option” or reference potential “adverse consequences”). Nor is there any indication CISA had power over the platforms in any capacity, or that their requests were threatening in tone or manner. Similarly, on this record, their requests—although certainly amounting to a non-trivial level of involvement—do not equate to meaningful control. There is no plain evidence that content was actually moderated per CISA’s requests or that any such moderation was done subject to non-independent standards.
Ditto for Fauci’s NIAID and the State Department (both of which were part of nonsense conspiracy theories). The Court says they didn’t cross the line either.
So I think the test the 5th Circuit used is correct (and matches other circuits). I find its application of the test to the White House kinda questionable, but it actually doesn’t bother me that much. With the FBI, the justification seems really weak, but frankly, the FBI should not be involved in any content moderation issues anyway, so… not a huge deal. The CDC part is the only part that seems super ridiculous as opposed to just borderline.
But saying CISA, NIAID and the State Department didn’t cross the line is good to see.
And then, even for the parts the court said did cross the line, the 5th Circuit so incredibly waters down the injunction from the massive, overbroad list of 10 “prohibited activities,” that… I don’t mind it. The court immediately kicks out 9 out of the 10 prohibited activities:
The preliminary injunction here is both vague and broader than necessary to remedy the Plaintiffs’ injuries, as shown at this preliminary juncture. As an initial matter, it is axiomatic that an injunction is overbroad if it enjoins a defendant from engaging in legal conduct. Nine of the preliminary injunction’s ten prohibitions risk doing just that. Moreover, many of the provisions are duplicative of each other and thus unnecessary.
Prohibitions one, two, three, four, five, and seven prohibit the officials from engaging in, essentially, any action “for the purpose of urging, encouraging, pressuring, or inducing” content moderation. But “urging, encouraging, pressuring” or even “inducing” action does not violate the Constitution unless and until such conduct crosses the line into coercion or significant encouragement. Compare Walker, 576 U.S. at 208 (“[A]s a general matter, when the government speaks it is entitled to promote a program, to espouse a policy, or to take a position.”), Finley, 524 U.S. at 598 (Scalia, J., concurring in judgment) (“It is the very business of government to favor and disfavor points of view . . . .”), and Vullo, 49 F.4th at 717 (holding statements “encouraging” companies to evaluate risk of doing business with the plaintiff did not violate the Constitution where the statements did not “intimate that some form of punishment or adverse regulatory action would follow the failure to accede to the request”), with Blum, 457 U.S. at 1004, and O’Handley, 62 F.4th at 1158 (“In deciding whether the government may urge a private party to remove (or refrain from engaging in) protected speech, we have drawn a sharp distinction between attempts to convince and attempts to coerce.”). These provisions also tend to overlap with each other, barring various actions that may cross the line into coercion. There is no need to try to spell out every activity that the government could possibly engage in that may run afoul of the Plaintiffs’ First Amendment rights as long as the unlawful conduct is prohibited.
The eighth, ninth, and tenth provisions likewise may be unnecessary to ensure Plaintiffs’ relief. A government actor generally does not violate the First Amendment by simply “following up with social-media companies” about content-moderation, “requesting content reports from social-media companies” concerning their content-moderation, or asking social media companies to “Be on The Lookout” for certain posts.23 Plaintiffs have not carried their burden to show that these activities must be enjoined to afford Plaintiffs full relief.
The 5th Circuit, thankfully, calls for an extra special smackdown of Judge Doughty’s ridiculous prohibition on any officials collaborating with the researchers at Stanford and the University of Washington who study disinformation, noting that this prohibition itself likely violates the 1st Amendment:
Finally, the fifth prohibition—which bars the officials from “collaborating, coordinating, partnering, switchboarding, and/or jointly working with the Election Integrity Partnership, the Virality Project, the Stanford Internet Observatory, or any like project or group” to engage in the same activities the officials are proscribed from doing on their own— may implicate private, third-party actors that are not parties in this case and that may be entitled to their own First Amendment protections. Because the provision fails to identify the specific parties that are subject to the prohibitions, see Scott, 826 F.3d at 209, 213, and “exceeds the scope of the parties’ presentation,” OCA-Greater Houston v. Texas, 867 F.3d 604, 616 (5th Cir. 2017), Plaintiffs have not shown that the inclusion of these third parties is necessary to remedy their injury. So, this provision cannot stand at this juncture.
That leaves just a single prohibition: number six, which barred “threatening, pressuring, or coercing social-media companies in any manner to remove, delete, suppress, or reduce posted content of postings containing protected free speech.” But, the court rightly notes that even that one remaining prohibition clearly goes too far and would suppress protected speech, and thus cuts it back even further:
That leaves provision six, which bars the officials from “threatening, pressuring, or coercing social-media companies in any manner to remove, delete, suppress, or reduce posted content of postings containing protected free speech.” But, those terms could also capture otherwise legal speech. So, the injunction’s language must be further tailored to exclusively target illegal conduct and provide the officials with additional guidance or instruction on what behavior is prohibited.
So, the 5th Circuit changes that one prohibition to be significantly limited. The new version reads:
Defendants, and their employees and agents, shall take no actions, formal or informal, directly or indirectly, to coerce or significantly encourage social-media companies to remove, delete, suppress, or reduce, including through altering their algorithms, posted social-media content containing protected free speech. That includes, but is not limited to, compelling the platforms to act, such as by intimating that some form of punishment will follow a failure to comply with any request, or supervising, directing, or otherwise meaningfully controlling the social-media companies’ decision-making processes.
And that’s… good? I mean, it’s really good. It’s basically restating exactly what all the courts have been saying all along: the government can’t coerce companies regarding their content moderation practices.
The court also makes it clear that CISA, NIAID, and the State Department are excluded from this injunction, though I’d argue that the 1st Amendment already precludes the behavior in that injunction anyway, so they already can’t do those things (and there remains no evidence that they did).
So to summarize all of this, I’d argue that the 5th Circuit got this mostly right, and corrected most of the long list of terrible things that Judge Doughty put in his original opinion and injunction. The only aspect that’s a little wonky is that it feels like the 5th Circuit applied the test for coercion in a weird way with regards to the White House, the FBI, and the CDC, often by taking things dramatically out of context.
But the “harm” of that somewhat wonky application of the test is basically non-existent, because the court also wiped out all of the problematic prohibitions in the original injunction, leaving only one, which it then modified to basically restate the crux of the 1st Amendment: the government should not coerce companies in their moderation practices. Which is something that I agree with, and which hopefully will teach the Biden administration to stop inching up towards the line of threats and coercion.
That said, this also seems to wholly contradict the very same 5th Circuit’s decision in the NetChoice v. Paxton case, but that’s the subject of my next post. As for this case, I guess it’s possible that either side could seek Supreme Court review. It would be stupid for the DOJ to do so, as this ruling gives them almost everything they really wanted, and the probability that the current Supreme Court could fuck this all up seems… decently high. That said, the plaintiffs might want to ask the Supreme Court to review for just this reason (though, of course, that only reinforces the idea that the headlines that claimed this ruling was a “loss” for the Biden admin are incredibly misleading).
So we wrote about Judge Terry Doughty’s somewhat questionable ruling preventing the Biden White House from communicating with tech companies or researchers regarding certain areas of disinformation. As we noted, there were some good elements in the ruling, reminding government officials of the 1st Amendment restrictions on coercion in attempting to silence protected speech.
But there were also plenty of extremely problematic elements to the ruling, including the lack of any clear standard by which the government might determine what is allowed and what is forbidden. As we noted, the injunction bars the government from talking about some things, but has exceptions for a bunch of other things. Except, it seems pretty clear that every example that Doughty cited as a problematic example could easily fit into the exceptions he outlined. And that’s a recipe for serious chilling effects on protected speech.
Even worse, we noted that Doughty literally inserted words into a quote to make it say something it never said. He flat out falsified a quote from a Stanford researcher, pretending she said they had set up the Election Integrity Partnership to “get around” the 1st Amendment, when the actual quote from her does not say anything about “getting around” the 1st Amendment, but was literally a statement of fact regarding the 1st Amendment limits on the government’s ability to do things.
Also, I had highlighted how there were emails from Rob Flaherty in the White House that I felt went too far, in angrily demanding that tech companies “explain” certain decisions they had made. At no point should a government official demand an explanation from a media organization about its editorial choices. But, as others have pointed out, the context of Flaherty’s angry email was totally misrepresented by Doughty. His demand for an explanation was not (as implied in the filings) about why certain accounts hadn’t been actioned/removed/etc. but rather about a bug in Facebook’s recommendation engine that removed the President’s account, limiting its growth.
Now… I still think that Flaherty’s email was a massive overreach. The President’s account has no inherent right to be regularly recommended by any recommendation engine, but the context here shows that it had zero to do with trying to take down or moderate accounts. In the context of the judge’s decision, you’d never know that at all.
Either way, we’d already seen real world problems stemming from this decision as various government officials were cancelling important meetings with tech companies that had nothing whatsoever to do with content moderation or censorship, because of a fear that it would be seen to violate the law.
The DOJ quickly appealed the ruling, and asked Judge Doughty for a stay on the injunction until the appeal was heard. Granting such a stay is generally seen as standard practice. The plaintiffs in the case filed a brief opposing the stay, and even though the court told the plaintiffs that their filing was deficient (for a small technical reason), Judge Doughty issued his ruling rejecting the request for the stay before the plaintiffs even filed their corrected motion. You can see that the rejection is document number 301 in the docket, while the corrected opposition was document number 303, filed after the motion was already ruled on.
As with Doughty’s original ruling, the ruling rejecting the stay is filled with a lot of misleading and hyperbolic language. He insists that his ruling could not possibly cause harm, because of the exceptions he listed out (ignoring that every single example of speech he complained about easily and obviously fits into those exceptions):
The Preliminary Injunction also has several exceptions which list things that are NOT prohibited. The Preliminary Injunction allows Defendants to exercise permissible public government speech promoting government policies or views on matters of public concern, to inform social-media companies of postings involving criminal activity, criminal conspiracies, national security threats, extortion, other threats, criminal efforts to suppress voting, providing illegal campaign contributions, cyber-attacks against election infrastructure, foreign attempts to influence elections, threats against the public safety or security of the United States, postings intending to mislead voters about voting requirements, procedures, preventing or mitigating malicious cyber activity, and to inform social-media companies about speech not protected by the First Amendment.
Anyway, even the notoriously ridiculous 5th Circuit found Doughty’s move here to be a step too far, very quickly rejected his refusal to grant a stay, and did so in his stead. They also expedited the case to speed up the process.
IT IS ORDERED that this appeal is EXPEDITED to the next available Oral Argument Calendar.
IT IS FURTHER ORDERED that a temporary administrative stay is GRANTED until further orders of the court.
IT IS FURTHER ORDERED that Appellants’ opposed motion for stay pending appeal is deferred to the oral argument merits panel which receives this case.
That’s the entirety of the ruling, but basically the injunction is put on hold. For the time being, the government can again talk to social media companies and researchers. Of course, they cannot talk to them about “censorship” because that has always been barred by the 1st Amendment. At least for the time being, though, they should be free to talk to them about legitimate, non-problematic efforts towards harm reduction.
It will surprise nobody to learn that when politicians trumpet the First Amendment, they are generally referring only to expression that they agree with. But occasionally, they demonstrate their hypocrisy in a fashion so outrageously transparent that it shocks even the most cynical and jaded First Amendment practitioners. Last week, we were treated to just such an instance, courtesy of seven Republican Attorneys General. They deserve to be named, ignominiously: Todd Rokita (IN), Andrew Bailey (MO), Tim Griffin (AR), Daniel Cameron (KY), Raul Labrador (ID), Lynn Fitch (MS), and Alan Wilson (SC).
One of those names might stick out: Missouri AG Andrew Bailey. Last week, Bailey took a victory lap in Missouri’s lawsuit against the Biden administration: U.S. District Judge Terry Doughty engaged in some judicial theatrics, releasing a 155-page ruling on July 4 finding that an assortment of government actors likely violated the First Amendment by discussing content moderation with social media platforms.1
That ruling was a very mixed bag, and is outside the scope of this article (Mike Masnick has a good writeup here). The important thing to remember is that Missouri sued government officials, asserting that their pressure on social media platforms over content was unconstitutional—and a judge agreed.
The very next day, Bailey turned around and joined these other AGs in a ham-fisted, legally and factually inaccurate letter threatening Target over the sale of Pride Month merchandise and its support of an LGBT organization—all of which happens to be, you guessed it, protected expression. Let’s dig in.
It’s worth reviewing exactly what products the AGs complained about:
“Girls’ swimsuits with ‘tuck-friendly construction’ and ‘extra crotch coverage’ for male genitalia”
I’m going to stop them right here: The use of “girls” in this sentence is clearly intended to insinuate that the complained-of swimsuits are for children. But as it so (not surprisingly) happens, that was false: these swimsuits were available in adult sizes only.
“Merchandise by the self-declared ‘Satanist-Inspired’ brand Abprallen” which “include the phrases ‘We Bash Back’ with a heart-shaped mace in the trans-flag colors, ‘Transphobe Collector’ with a skull, and ‘Homophobe Headrest’ with skulls beside a pastel guillotine.”
“[P]roducts with anti-Christian designs such as pentagrams, horned skulls, and other Satanic products . . . [including] the phrase ‘Satan Respects Pronouns’ with a horned ram representing Baphomet—a half-human, half-animal, hermaphrodite worshipped by the occult.”
It would be difficult to come up with a clearer example of government targeting expression on the basis of viewpoint—the most fundamental First Amendment violation possible. You don’t see them going after “daddy’s little girl” shirts or “Jesus Calling” books, and I’d bet my life that they wouldn’t pursue the seller of a shirt that says “there are only two genders.” The AGs’ complaint is, by its own admission, directed at the messages contained within certain products.
You may not need reminding, but apparently these inept AGs do: the First Amendment’s protection is quite broad.
And it protects the sale, distribution, and reception of expression no less than the right to create the expression: the government cannot punish the seller of a book any more than it could prohibit writing it in the first place.
So What’s These AGs’ Problem, Exactly?
As a general matter, that’s a question better directed to their therapists—there’s probably a lot going on there.
But specific to these products, our merry band of hapless censors really had to heave an (entirely unconvincing) Hail Mary to try getting around the First Amendment:
Our concerns entail the company’s promotion and sale of potentially harmful products to minors [and] related interference with parental authority in matters of sex and gender identity.
State child-protection laws penalize the “sale or distribution . . . of obscene matter.” A matter is considered “obscene” if “the dominant theme of the matter . . . appeals to the prurient interest in sex,” including “material harmful to minors.” Indiana, as well as other states, have passed laws to protect children from harmful content meant to sexualize them and prohibit gender transitions of children.
Obscenity and “Harmful to Minors”
Threshold note: Obscenity doctrine is a complete mess, and for various reasons obscenity prosecutions are extremely difficult in this day and age. But historically, obscenity law has been a favorite tool of government actors seeking to suppress LGBT speech. These AGs are following in that ignoble, censorious, and bigoted tradition.
Let’s start with the definition of obscenity that Indiana AG Todd Rokita (who authored the letter) provides:
A matter is considered obscene “if the dominant theme of the matter . . . appeals to the prurient interest in sex,” including material harmful to minors.
First, Rokita actually gets his own state’s law wrong. Obscenity does not include “material harmful to minors” under Indiana law. The latter is its own separate category.2 Perhaps that’s a minor quibble, but if you’re going to issue bumptious threats under the color of law, you should at least describe the law correctly.
Second, Rokita conveniently leaves out the three other requirements for matter to be “harmful to minors”:
Sec. 2. A matter or performance is harmful to minors for purposes of this article if:
(1) it describes or represents, in any form, nudity, sexual conduct, sexual excitement, or sado-masochistic abuse;
(2) considered as a whole, it appeals to the prurient interest in sex of minors;
(3) it is patently offensive to prevailing standards in the adult community as a whole with respect to what is suitable matter for or performance before minors; and
(4) considered as a whole, it lacks serious literary, artistic, political, or scientific value for minors.
He leaves them out, of course, because it’s obvious that none of the products discussed describe or represent “nudity, sexual conduct, sexual excitement, or sado-masochistic abuse” and the inquiry properly ends at Step One.
But even under his truncated definition, you would have to be incompetent to stand trial—let alone practice law—to conclude that any merchandise the letter complains of, “considered as a whole . . . appeals to the prurient interest in sex of minors.” The Supreme Court defined “prurient interest” as “a shameful or morbid interest in nudity, sex, or excretion.” As with all Supreme Court attempts to define sex-related things, this definition is somewhat clunky and unsatisfying; yet it still demonstrates how asinine these sorry excuses for lawyers are.
Recall some of the products named in the letter:
LGBT-themed onesies, bibs, and overalls. The inclusion of “bibs” indicates to me that they’re referring to…clothes for infants? First of all, that very young child wearing their Pride bib over their Pride onesie while chucking Cheerios across the room from their highchair has no knowledge of “nudity, sex, or excretion,” let alone the capacity for a shameful interest in it. Second, if these AGs look at an infant wearing a Pride bib and their mind immediately goes to SEX, I would urge them to seek immediate mental health care and stay at least 1000 feet away from any child, ever.
I’m also curious how either of these insanely benign shirts (made for adults, by the way) could possibly appeal to the prurient interest of anyone:
Aha, they will say. What about the tuck-friendly swimwear? Set aside the fact that they were apparently only available in adult sizes. Do they appeal to a shameful interest in nudity? Considering that it’s clothing, quite the opposite. What about sex? No, not really: sex means sex acts or sexual behavior, not mere gender expression. If a statute defining “prurient interest” as “incit[ing] lasciviousness or lust” was held unconstitutionally overbroad, there is no question that defining gender expression as “a shameful interest in sex” is not going to work. Excretion? Well, unless you’re the type of person that pees in the pool and gets off on it (way to tell on yourselves), that’s not going to work either.
And obviously the “Satanist” and “anti-Christian” merchandise they complain about in such a delicate, snowflake-like fashion have absolutely nothing to do with sex.
The only possible way that the AGs could believe (other than by reason of sheer incompetence) that these products are legally “harmful to minors” is if they believe that anything LGBT-related is ipso facto sexual. That’s a belief that is both shockingly prejudiced, and so stupid that even the Fifth Circuit wouldn’t likely accept it. During oral arguments in the litigation over Texas’ content moderation law, Judge Andy Oldham found it “extraordinary” that social media platforms affirmed that under their view of the First Amendment, they could ban all pro-LGBT content if they so desired. If all such content is “harmful to minors,” I have a hard time believing he would have found the proposition so troubling.
None of these products are even close calls. They are emphatically and unquestionably protected by the First Amendment.
The AGs cite as another concern “potential interference with parental authority in matters of sex and gender identity.” Footnote 3 provides citations to a bevy of state laws about school libraries and gender-affirming care (several of which have been enjoined). Which, of course, have nothing to do with anything, as the footnote even acknowledges: “all of these laws may not be implicated by Target’s recent campaign.”
But even after acknowledging that these laws are irrelevant, the letter continues to say “they nevertheless demonstrate that our States have a strong interest in protecting children and the interests of parental rights.”
That’s great, I’m happy for them, but also…no. What they demonstrate is that your state legislatures passed some bills. What they don’t demonstrate is that you have the constitutionally valid interest you think you do. The merchandise is clearly protected by the First Amendment for both adults and minors. And “[s]peech that is neither obscene as to youths nor subject to some other legitimate proscription cannot be suppressed solely to protect the young from ideas or images that a legislative body thinks unsuitable for them.”
California, too, tried the “parental rights” argument when it banned the sale of violent video games to minors. The Supreme Court was not impressed:
Such laws do not enforce parental authority over children’s speech . . . they impose governmental authority, subject only to a parental veto. In the absence of any precedent for state control, uninvited by the parents, over a child’s speech . . . and in the absence of any justification for such control that would satisfy strict scrutiny, those laws must be unconstitutional.
The law is clear: government may not place limits on (or punish) the distribution of constitutionally protected materials to minors by shouting “parental rights.” Parents are free to parent, but the government is not free to enforce its version of “good parenting” (guffaw) on everyone by law.
Target’s Donations to GLSEN
If you thought that was the end of the stupidity, buckle up. The AGs also complain about Target’s donations to GLSEN, an LGBT education advocacy group which the letter, for no apparent reason, instructs readers on how to pronounce (“glisten,” if you’re curious). Because GLSEN advocates that educators should not reveal students’ gender identity to their parents without consent, the AGs claim that the donations “raise concerns” under “child-protection and parental-rights laws.”
First things first: GLSEN has a First Amendment right to advocate for what it believes school policies should be,3 no matter what a state’s law says. The AGs’ insinuation that advocacy against their states’ laws is somehow unlawful is startling and dangerous.
Second, Target has a First Amendment right to support GLSEN through its partnership. This thinly-veiled threat that Target could face prosecution if it doesn’t stop donating to advocacy that government officials don’t like is wholly beneath contempt, and should be repulsive to every American. I’m not sure how much there is to say about this; it’s a dark sign that the attorneys general of seven states would so readily declare their opposition to fundamental liberties.
“But this speech we don’t like”
Simply put, the government “is not permitted to employ threats to squelch the free speech of private citizens.” Backpage.com, 807 F.3d at 235. “The mere fact that [the private party] might have been willing to act without coercion makes no difference if the government did coerce.” Mathis, 891 F.2d at 1434. “[S]uch a threat is actionable and thus can be enjoined even if it turns out to be empty…. But the victims in this case yielded to the threat.” Backpage.com, 807 F.3d at 230-31. Further, even a vaguely worded threat can constitute government coercion. See Okwedy, 333 F.3d at 341-42. But here, the threats have been repeated and explicit, and “the threats ha[ve] worked.” Backpage.com, 807 F.3d at 232.
The threats in this case . . . include a threat of criminal prosecution . . . Even an “implicit threat of retaliation” can constitute coercion, Okwedy, 333 F.3d at 344, and here the threats are open and explicit.
You could be forgiven for thinking that this came from a draft complaint or motion for a preliminary injunction aimed at the attorneys general who signed this letter.
But in fact, it is from Missouri’s own motion for a preliminary injunction in Missouri v. Biden, arguing that the federal government coerced social media platforms into censoring users.
What was the “threat of criminal prosecution” so explicit and coercive, in Missouri’s view, to render the government responsible for platforms’ content moderation decisions? Then-candidate Biden
threatened that Facebook CEO Mark Zuckerberg should be subject to civil liability, and possibly even criminal prosecution, for not censoring core political speech: “He should be submitted to civil liability and his company to civil liability…. Whether he engaged in something and amounted to collusion that in fact caused harm that would in fact be equal to a criminal offense, that’s a different issue. That’s possible. That’s possible – it could happen.”
So, according to Missouri, the blustering of a candidate who, if elected, would not himself even have the power to actually prosecute is sufficiently explicit and coercive. And that’s in a case about whether the government can be held responsible for private action against third-party speech.
This argument leaves precisely no room for the notion that a letter from states’ top prosecutors, citing various criminal statutes, to the speaker of the targeted, protected speech itself, is anything but an even more obvious First Amendment violation. It would be so even had Missouri not made this argument. But the rank hypocrisy here is so brazen that it cannot escape notice.
Spaghetti at the Wall
In the second half of the letter, the AGs shift gears to say they are also writing as the representatives of their states in their capacity as shareholders of Target. They allege that Target’s management “may have acted negligently” in its Pride campaign, due to the backlash and falling stock price. They write:
Target’s management has no duty to fill stores with objectionable goods, let alone endorse or feature them in attention-grabbing displays at the behest of radical activists. However, Target management does have fiduciary duties to its shareholders to prudently manage the company and act loyally in the company’s best interests. Target’s board and its management may not lawfully dilute their fiduciary duties to satisfy the Board’s (or left-wing activists’) desires to foist contentious social or political agendas upon families and children at the expense of the company’s hard-won good will and against its best interests.
They aren’t even trying to hide their perverse inversion of the First Amendment, turning the company’s right to decide what expressive products to sell into a threat of liability for deciding to sell the expressive products they disfavor.
Perhaps the AGs think that framing it as a “shareholder” concern makes the First Amendment magically go away. They are wrong.
Regardless of how they try to obfuscate it, the AGs are using the coercive authority of the state to silence views they disagree with. Whether the states are shareholders is irrelevant, and I suspect Missouri would have said as much had the federal government defendants in Missouri v. Biden been daft enough to attempt this argument.
Dig into the investments of FERS, the U.S. Railroad Retirement Board, etc., and I’ll bet good money that you’ll find investments in companies that own social media platforms. If the federal government communicated concerns as a “shareholder” of those companies, threatening that they may be breaching their fiduciary duty/duty of care by not removing noxious content, what do you suppose the reaction from the Right would be? You know exactly what it would be.
To paraphrase the Supreme Court, very recently: “When a state [business regulation] and the Constitution collide, there can be no question which must prevail.” U.S. Const., Art. VI, cl. 2. Purporting to write as government “shareholders” is not an invisibility cloak against the First Amendment: state governments cannot simply purchase stock in a company and declare that they now have the right to threaten the company over its protected expression.
Implicitly Condoning Violence Against Speech (Provided it’s Against the People We Don’t Like)
To round off its unrelenting hypocrisy, the letter concludes by warning Target to “not yield” to “threats of violence.” But only some threats, apparently:
Some activists have recently pressured Target [to backtrack on its removal/relocation of Pride merchandise] by making threats of violence . . . Target’s board and management should not use such threats as a pretext . . . to promote collateral political and social agendas.
“You hear that, Target? You better not use anything as an excuse to say things we don’t like!”
Conspicuously absent is any mention of the fact that it was threats of violence against Target employees that caused the merchandise to be removed or relocated in the first place. That, perhaps unsurprisingly, doesn’t seem to bother them so much—the violent threats, and Target caving to them, are just fine if these AGs agree with the perpetrators of the violence. Because for them, the First Amendment is about their own power, and nothing else.
Whatever one thinks of Target’s decisions, having even the slightest shred of honesty and principle when it comes to the First Amendment should leave you thoroughly disgusted by this letter.
But these AGs are not principled, honest, ethical, or competent attorneys (I’d wager that they aren’t those things as people either), and they deserve neither respect nor the offices they hold despite their manifest unfitness.
They are con-artists engaging in the familiar ploy of using the First Amendment as a partisan cudgel to claim expression they like is being censored, while actively working to censor speech they disagree with. Their view of the First Amendment is clear and pernicious: you can say whatever they think you should be allowed to say.
It’s nothing new, of course. But it’s always worthy of scorn and condemnation. And maybe a lawsuit or two.
1 It also bears mentioning that five of these seven state AGs’ offices also signed on to an amicus brief asking the Fifth Circuit to uphold Texas’ content moderation law, arguing that platforms do not have a First Amendment right to decide for themselves what content to allow on their services.
2 Rokita also pulls the “dominant theme” language from the obscenity statute rather than the “harmful to minors” statute, so that’s another strike against his having a firm grasp on his own state’s law, but I suppose “considered as a whole” does similar (though not exactly the same) work.
3 In their zeal to glom on to culture war nonsense, the AGs also failed to recognize that this advocacy is contained in GLSEN’s model policy. That is, the ideal policy that they provide on their website for any school, anywhere to use or adapt.
One has to think that Donald Trump judicial appointee Judge Terry Doughty deliberately waited until July 4th (when the courts are closed) to release his ruling on the requested preliminary injunction preventing the federal government from communicating with social media companies. The results of the ruling are not a huge surprise, given Doughty’s now recognized pattern of being willing to bend over backwards as a judge in support of Trumpist culture war nonsense in multiple cases in his short time on the bench. But, even so, there are some really odd things about the ruling.
As you’ll recall, Missouri and Louisiana sued the Biden administration, arguing that it had violated the 1st Amendment by having Twitter block the NY Post story about the Hunter Biden laptop. But that happened before Joe Biden took office, and it’s also completely false. While it remains a key Trumpist talking point that this happened, every bit of evidence from the Twitter Files has revealed that the government had zero communications with Twitter regarding the NY Post’s story.
Still, Doughty does what Doughty does, and in March rejected the administration’s motion to dismiss with a bonkers, conspiracy-theory laden ruling. Given that, it wasn’t surprising that he would then grant the motion for a preliminary injunction. But, even so, there are some surprising bits in there that deserve attention.
There are elements of the ruling that are good and could be useful, some that are bad, and some that are just depressingly ugly. Let’s break them down, bit by bit.
There are legitimate concerns about government intrusions into private companies and their 1st Amendment protected decisions. I still think that the best modern ruling on this is Backpage v. Dart, in which then appeals court Judge Richard Posner smacked Cook County Sheriff Thomas Dart around for his threats to credit card companies that resulted in them refusing to accept transactions for Backpage.com. There are some elements of that kind of ruling here, but the main difference is that in that case, Dart’s coercive acts were clear, while here, many (but not all) of the allegedly coercive elements are made-up fantasyland stuff.
There were some examples in the lawsuit that did seem likely to cross the line, including having officials in the White House complaining about certain tweets and even saying “wondering if we can get moving on the process of having it removed ASAP.” That’s definitely inappropriate. Most of the worst emails seemed to come from one guy, Rob Flaherty, the former “Director of Digital Strategy,” who seemed to believe his job in the White House made it fine for him to be a total jackass to the companies, constantly berating them for moderation choices he disliked.
I mean, this is just totally inappropriate for a government official to say to a private company:
Things apparently became tense between the White House and Facebook after that, culminating in Flaherty’s July 15, 2021 email to Facebook, in which Flaherty stated: “Are you guys fucking serious? I want an answer on what happened here and I want it today.”
So having a ruling that highlights that the government should not be pressuring websites over speech is good to see.
Also, the ruling highlights that lawmakers threatening to revoke or modify Section 230 as part of the process of working the refs at these social media companies is a form of retaliation. This is a surprising finding, but a good one. We’ve highlighted in the past that politicians threatening to punish companies with regulatory changes in response to speech should be seen as a 1st Amendment violation, and had people yell at us (on both sides) about that. But here, Judge Doughty agrees, and highlights 230 reform as an example (though he’s far too credulous in assuming that Republican and Democratic efforts to reform Section 230 are aligned).
With respect to 47 U.S.C. § 230, Defendants argue that there can be no coercion for threatening to revoke and/or amend Section 230 because the call to amend it has been bipartisan. However, Defendants combined their threats to amend Section 230 with the power to do so by holding a majority in both the House of Representatives and the Senate, and in holding the Presidency. They also combined their threats to amend Section 230 with emails, meetings, press conferences, and intense pressure by the White House, as well as the Surgeon General Defendants. Regardless, the fact that the threats to amend Section 230 were bipartisan makes it even more likely that Defendants had the power to amend Section 230. All that is required is that the government’s words or actions “could reasonably be interpreted as an implied threat.” Cuomo, 350 F. Supp. 3d at 114. With the Supreme Court recently making clear that Section 230 shields social-media platforms from legal responsibility for what their users post, Gonzalez v. Google, 143 S. Ct. 1191 (2023), Section 230 is even more valuable to these social-media platforms. These actions could reasonably be interpreted as an implied threat by the Defendants, amounting to coercion.
Cool. So, government folks, both in Congress and in the White House, should stop threatening to remove Section 230 as punishment for disagreeing with the moderation choices of private companies. That’s good and it’s nice to have that in writing, even if I’d be hard pressed to believe that most of the discussions on 230 are actual threats.
Doughty seems incredibly willing to include perfectly reasonable conversations about how to respond to actually problematic content as “censorship” and “coercion,” despite there being little evidence of either in many cases (again, in some cases, it does appear that some folks in the administration crossed the line).
For example, it’s public information (as we’ve discussed) that various parts of the government would meet with social media not for “censorship” but to share information, such as about foreign trolls seeking to disrupt elections with false information, or about particular dangers. These meetings were not about censorship, but just making everyone aware of what was going on. But conspiracy-minded folks have turned those meetings into something they most definitely are not.
Yet Doughty assumes all these meetings are nefarious.
In doing so, Doughty often fails to distinguish perfectly reasonable speech by government actors that is not about suppressing speech, but rather debunking or countering false information — which is traditional counterspeech. Now, again, when government actors are doing it, their speech is actually less protected (Posner’s ruling in the Dart case details this point), but so long as their speech is not focused on silencing other speech, it’s perfectly reasonable. For example, the complaint detailed some efforts by social media companies to deboost the promotion of the Great Barrington Declaration. One of the points in the lawsuit was that Francis Collins had emailed Anthony Fauci about how much attention it was getting, saying “there needs to be a quick and devastating published take down of its premises.” And Fauci responded:
The same day, Dr. Fauci wrote back to Dr. Collins stating, “Francis: I am pasting in below a piece from Wired that debunks this theory. Best, Tony.”
Doughty ridiculously interprets Collins saying “there needs to be a… take down of its premises” to mean “we need to get this taken off of social media.”
However, various emails show Plaintiffs are likely to succeed on the merits through evidence that the motivation of the NIAID Defendants was a “take down” of protected free speech. Dr. Francis Collins, in an email to Dr. Fauci told Fauci there needed to be a “quick and devastating take down” of the GBD—the result was exactly that.
But that’s clearly not what Collins meant in context. By a “quick and devastating published take down” he clearly meant a response. That is: more speech, debunking the claims that Collins worried were misleading. That’s why he said a “published take down.” Note that Doughty excises “published” from his quote in order to falsely imply that Collins was telling Fauci they needed to censor information.
And then Fauci continued to talk publicly about his concerns about the GBD, not urging any kind of censorship. And Doughty repeats all of those points, and still pretends the plan was “censorship”:
Dr. Fauci and Dr. Collins followed up with a series of public media statements attacking the GBD. In a Washington Post story run on October 14, 2020, Dr. Collins described the GBD and its authors as “fringe” and “dangerous.” Dr. Fauci consulted with Dr. Collins before he talked to the Washington Post. Dr. Fauci also endorsed these comments in an email to Dr. Collins, stating “what you said was entirely correct.”
On October 15, 2020, Dr. Fauci called the GBD “nonsense” and “dangerous.” Dr. Fauci specifically stated, “Quite frankly that is nonsense, and anybody who knows anything about epidemiology will tell you that is nonsense and very dangerous.” Dr. Fauci testified “it’s possible that” he coordinated with Dr. Collins on his public statements attacking the GBD.
Social-media platforms began censoring the GBD shortly thereafter. In October 2020, Google de-boosted the search results for the GBD so that when Google users googled “Great Barrington Declaration,” they would be diverted to articles critical of the GBD, and not to the GBD itself. Reddit removed links to the GBD. YouTube updated its terms of service regarding medical “misinformation,” to prohibit content about vaccines that contradicted consensus from health authorities. Because the GBD went against a consensus from health authorities, its content was removed from YouTube. Facebook adopted the same policies on misinformation based upon public health authority recommendations. Dr. Fauci testified that he could not recall anything about his involvement in seeking to squelch the GBD.
Nothing in that shows coercion. It shows Fauci expressing an opinion on the accuracy of the statements in the GBD. That social media companies later chose to remove some of those links is wholly disconnected from that.
Indeed, under this theory, if a social media company wants to get government officials in trouble, all it has to do is remove any speech that a government official tries to respond to, enabling a lawsuit to claim that it was removed because of that response. That… makes no sense at all.
I mean, the conversation about the CDC is just bizarre. Whatever you think of the CDC, the details show that social media companies chose to rely on the CDC to try to understand what was accurate and what was not regarding Covid and Covid vaccines. That’s because a ton of information was flying back and forth and lots of it was inaccurate. As social media companies were hoping for a way to understand what was legit and what was not, it’s reasonable to ask an entity like the CDC what it thought.
Much like the other Defendants, described above, the CDC Defendants became “partners” with social-media platforms, flagging and reporting statements on social media Defendants deemed false. Although the CDC Defendants did not exercise coercion to the same extent as the White House and Surgeon General Defendants, their actions still likely resulted in “significant encouragement” by the government to suppress free speech about COVID-19 vaccines and other related issues.
Various social-media platforms changed their content-moderation policies to require suppression of content that was deemed false by CDC and led to vaccine hesitancy
Yeah, the companies did this because they (correctly) figured that the CDC — whose entire role is about this very thing — is going to be better at determining what’s legit and what’s dangerous than their own content moderation team. That’s a perfectly rational decision, not “censorship”. But Doughty doesn’t care.
Similarly, regarding the Hunter Biden laptop story — which we’ve debunked multiple times here — it’s now well established that the government had no involvement in the decision by social media companies to lower the visibility of that story for a short period of time. Incredibly, Doughty argues that the real problem was that the FBI didn’t tell social media companies that their concerns were wrong. Really:
The FBI’s failure to alert social-media companies that the Hunter Biden laptop story was real, and not mere Russian disinformation, is particularly troubling. The FBI had the laptop in their possession since December 2019 and had warned social-media companies to look out for a “hack and dump” operation by the Russians prior to the 2020 election. Even after Facebook specifically asked whether the Hunter Biden laptop story was Russian disinformation, Dehmlow of the FBI refused to comment, resulting in the social-media companies’ suppression of the story. As a result, millions of U.S. citizens did not hear the story prior to the November 3, 2020 election. Additionally, the FBI was included in Industry meetings and bilateral meetings, received and forwarded alleged misinformation to social-media companies, and actually mislead social-media companies in regard to the Hunter Biden laptop story. The Court finds this evidence demonstrative of significant encouragement by the FBI Defendants.
So… despite so many parts of this lawsuit complaining about the government having contacts with social media, here the court says the real problem was that the FBI should have told the companies not to moderate this particular story? So, basically “don’t communicate with social media companies, except if your communication boosts the storylines that will help Donald Trump.”
Also, the idea that what social media companies did resulted in “millions of U.S. citizens” not hearing the story prior to the election is bullshit. As we’ve covered in the past, actual analysis showed that the attempts by Facebook and Twitter to deboost that story (very briefly — only for one day in the case of Twitter) actually created a Streisand Effect that got the story more attention than it was likely to get otherwise.
Over and over again in the ruling, Doughty highlights how the social media companies often explained to White House officials that they would not remove or otherwise take action on various accounts because they did not violate policies. That is consistent with everything we’ve seen, showing that the companies did not feel coerced, and if anything, often mocked the government officials for over-reacting to things online.
Indeed, as we’ve detailed, the actual evidence shows that the companies very, very rarely did anything in response to these flags. The report from Stanford showed that they only took action on 35% of flagged content, and those numbers were skewed by TikTok being much more aggressive. So Twitter/Facebook/YouTube took action on way less than 35%. And, by “take action,” they mostly just added more context (i.e., more speech, not suppression). The only things that were removed were obviously problematic content like phishing and impersonation.
But Doughty basically ignores all that and insists there’s evidence of coercion, because some companies took action. And now he’s saying that the government basically can’t flag any of this info.
This also means that in situations where useful information sharing to prevent real harm could occur, this preliminary injunction now blocks it. And we’re already seeing some of that with the State Department canceling meetings with Facebook in response to this ruling (I’ve heard that other meetings between the government and companies have also been canceled, including ones that are deliberately focused on harm reduction, not on “censorship.”)
Again, so much of this seems to be based on a very, very broad misunderstanding of the nature of investigating the flow of mis- and disinformation online, and the role of government in dealing with that. As we’ve discussed repeatedly, much of the information sharing that was set up around these issues involved things where government involvement made total sense: helping to determine attempts to undermine elections through misinformation regarding the time and place of polling stations, phishing attempts, and other such nonsense.
But, this ruling seems to treat that kind of useful information sharing as a nefarious plan to “censor conservatives.”
Judge Doughty seems to believe every nonsense conspiracy theory regarding the culture war and false claims of social media deliberately stifling “conservatives.” This is despite multiple studies showing that the companies actually bent over backwards to allow conservatives to regularly break the rules to avoid claims of bias. I mean, this is just nonsense:
What is really telling is that virtually all of the free speech suppressed was “conservative” free speech. Using the 2016 election and the COVID-19 pandemic, the Government apparently engaged in a massive effort to suppress disfavored conservative speech. The targeting of conservative speech indicates that Defendants may have engaged in “viewpoint discrimination,” to which strict scrutiny applies.
First of all, this isn’t true. The court is only aware of such speech being moderated because that’s all the plaintiffs in this case highlighted (often through exaggeration). Second, many of the contested actions happened under the Trump administration, and it would make no sense that a Republican administration would be seeking to suppress “conservative” speech. Third, the whole issue is that the companies were choosing to hold back dangerous false information that they feared would lead to real world harms. If it was true that such speech came more frequently from so-called “conservatives,” that’s on them. Not the government.
And that results in the details of the injunction, which are just ridiculously broad and go way beyond reasonable limits on attempts by the government to impact social media content moderation efforts.
Again, here, Doughty twists reality by viewing it through a distorted, conspiracy-laden prism. Take, for example, the following:
According to DiResta, the EIP was designed to “get around unclear legal authorities, including very real First Amendment questions” that would arise if CISA or other government agencies were to monitor and flag information for censorship on social media.
So, this part is really problematic. DiResta DID NOT SAY that EIP was an attempt to “get around” unclear legal authorities. Her full quote does not say that at all.
So, as with pretending that Collins told Fauci they had to “take down” content, when he meant provide more info that responds to it, here Doughty has put words in DiResta’s mouth. Where she’s explaining the reasons why the government can’t be in the business of flagging content, as there are “very real First Amendment questions,” Doughty, falsely, claims she said this was an attempt to “get around” those questions. But it’s not.
This is actually showing that those involved were being careful not to violate the 1st Amendment and to be cognizant of the limits the Constitution placed on government actors. Given the “very real First Amendment questions” that would be raised by having government officials highlighting misinformation to social media companies, groups like Stanford IO could make their analysis and pass it along to social media companies without the natural concerns of that information coming from government actors. In other words, Stanford’s involvement was not as a “government proxy,” but rather to provide useful information to the companies without the problematic context of government (and, again, Stanford’s eventual report on this stuff showed that the companies took action on only a tiny percentage of flagged content, and most of those were things like phishing attempts and impersonation — not anything to do with political speech).
It’s not “getting around” anything. It’s recognizing what the government is forbidden from doing.
If you look at the full context of DiResta’s quote, she’s actually making it clear that the reason Stanford decided to set up the EIP project was because the government shouldn’t be in that business, and that it made more sense for an academic institution to be tracking and highlighting disinformation for the sake of responding to it (i.e., not suppress it, but respond to it).
Yet, Doughty goes off on some nonsense tangent, winding himself up about how this is just the tip of the iceberg of some giant censorship regime, which is just laughable:
Plaintiffs have put forth ample evidence regarding extensive federal censorship that restricts the free flow of information on social-media platforms used by millions of Missourians and Louisianians, and very substantial segments of the populations of Missouri, Louisiana, and every other State. The Complaint provides detailed accounts of how this alleged censorship harms “enormous segments of [the States’] populations.” Additionally, the fact that such extensive examples of suppression have been uncovered through limited discovery suggests that the censorship explained above could merely be a representative sample of more extensive suppressions inflicted by Defendants on countless similarly situated speakers and audiences, including audiences in Missouri and Louisiana. The examples of censorship produced thus far cut against Defendants’ characterization of Plaintiffs’ fear of imminent future harm as “entirely speculative” and their description of the Plaintiff States’ injuries as “overly broad and generalized grievance[s].” The Plaintiffs have outlined a federal regime of mass censorship, presented specific examples of how such censorship has harmed the States’ quasi-sovereign interests in protecting their residents’ freedom of expression, and demonstrated numerous injuries to significant segments of the Plaintiff States’ populations.
Basically everything in that paragraph is bullshit.
Anyway, all that brings us to the nature of the actual injunction. And… it’s crazy. It basically prevents much of the US government from talking to any social media company or to various academics and researchers studying how information flows or how foreign election interference works. Which is quite a massive restriction.
But, really, the most incredible part is that the injunction pretends that it can distinguish the kinds of information the government can share with social media companies from the kinds it can’t. So, for example, the following is prohibited:
specifically flagging content or posts on social-media platforms and/or forwarding such to social-media companies urging, encouraging, pressuring, or inducing in any manner for removal, deletion, suppression, or reduction of content containing protected free speech;
urging, encouraging, pressuring, or inducing in any manner social-media companies to change their guidelines for removing, deleting, suppressing, or reducing content containing protected free speech;
emailing, calling, sending letters, texting, or engaging in any communication of any kind with social-media companies urging, encouraging, pressuring, or inducing in any manner for removal, deletion, suppression, or reduction of content containing protected free speech;
But then, it says the government can communicate with social media companies over the following:
informing social-media companies of postings involving criminal activity or criminal conspiracies;
contacting and/or notifying social-media companies of national security threats, extortion, or other threats posted on its platform;
contacting and/or notifying social-media companies about criminal efforts to suppress voting, to provide illegal campaign contributions, of cyber-attacks against election infrastructure, or foreign attempts to influence elections;
informing social-media companies of threats that threaten the public safety or security of the United States;
exercising permissible public government speech promoting government policies or views on matters of public concern;
informing social-media companies of postings intending to mislead voters about voting requirements and procedures;
informing or communicating with social-media companies in an effort to detect, prevent, or mitigate malicious cyber activity;
But here’s the thing: nearly all of the examples actually discussed fall into this exact bucket, but the plaintiffs (AND JUDGE DOUGHTY) pretend they fall into the first bucket (which is now prohibited). So, is sharing details of some jackass posting fake ways to vote “informing social-media companies of postings intending to mislead voters about voting requirements and procedures” or is it “specifically flagging content or posts on social-media platforms and/or forwarding such to social-media companies urging, encouraging, pressuring, or inducing in any manner for removal, deletion, suppression, or reduction of content containing protected free speech”?
It seems abundantly clear that nearly all of the conversations were about legitimate information sharing, but nearly all of it is interpreted by the plaintiffs and the judge to be nefarious censorship. As such, the risk for anyone engaged in activities on the “not prohibited” list is that this judge will interpret them to be on the prohibited list.
And that’s why government officials are now calling off important meetings with these companies where they were sharing actual useful information that they can no longer share. I’ve even heard some government officials say they’re afraid to post to social media for fear that doing so would violate this injunction.
Also, this is completely fucked up. Among the prohibited activities is having people in the government talk to a wide variety of researchers who aren’t even parties to this lawsuit.
collaborating, coordinating, partnering, switchboarding, and/or jointly working with the Election Integrity Partnership, the Virality Project, the Stanford Internet Observatory, or any like project or group for the purpose of urging, encouraging, pressuring, or inducing in any manner removal, deletion, suppression, or reduction of content posted with social-media companies containing protected free speech
That should be a real concern, as (again) a key thing that the EIP did was connect with election officials who were facing bogus election claims, giving them the ability to share that info and move to debunk false information and provide more accurate information. But, under this ruling, that can’t happen.
If you wanted to set up a system that is primed to enable foreign interference in elections, you couldn’t have picked a better setup. Nice work, everyone.
Anyway, it’s no surprise that the US government has already moved to appeal this ruling. But, if you think the appeals court is going to save things, remember that Louisiana federal rulings go up to the 5th Circuit, which is the court that decided that Texas’s compelled speech law was just dandy.
Of course, in many ways, this ruling conflicts with that one, in that Texas’s social media law is actually a much more active attempt by government to force social media companies to moderate in the manner it wants. But the one way they are consistent is that both rulings support Trumpist delusions, meaning there’s a decent chance the 5th Circuit blesses the nonsense parts of this one.
Again, the good parts of the ruling shouldn’t be ignored. And many government officials do need a clear reminder of the boundaries between coercion and persuasion. But, all in all, this ruling goes way too far, interprets things in a nonsense manner, and creates an impossible-to-comply-with injunction that causes real harm not just for the users of social media, but actual 1st Amendment interests as well.
The internet has revolutionized communications, sales, and information distribution, and has enabled historic levels of porn consumption. These are all unequivocally good things. (Fight me.) What it has also done is revolutionize court precedent.
Prior to internet ubiquity, courts were sometimes more receptive to plaintiffs attempting to hold third parties responsible for content generated by their users. The Communications Decency Act somehow managed to prevent the internet from becoming a litigation playground for bad faith operators. The internet was still in its infancy, but certain legislators and justices recognized the harm posed by the addition of direct liability for sites that did nothing else but give users a place to congregate and converse.
For years, this wasn’t a problem. Lately though, it appears certain legislators believe the best thing to do is introduce platform liability, if only because their hideous, bigoted supporters keep getting themselves booted off of popular social media services.
For now, sanity (mostly) prevails. Outside of corrupt outliers like the shady-as-fuck Supreme Court justice Clarence Thomas, higher courts seem mostly unwilling to start holding tech companies directly responsible for content created by their users.
And, for the most part, courts are unwilling to entertain outlandish conspiracy theories that suggest any government official merely referencing unwanted content is the same thing as the federal government demanding (under the full force of law) said content be removed from these services.
Lawsuit after lawsuit after lawsuit alleging government interference in online interactions has failed. Most of them have been brought in the Trump era — a four-year period where anti-vaxxers, conspiracy theorists, and extremely hateful people found themselves unexpectedly supported (and echoed) by the most powerful political leader in the world.
Fortunately, the court system generally doesn’t care who’s in office. The person with the finger on The Button doesn’t matter. The law does. So people who thought a president that unexpectedly embraced their extreme views would lead to courtroom wins are being informed none of that rhetoric matters when it comes to matters of established law.
The losses continue to mount. We can hope people are getting smarter after all this time. But I guarantee you that’s not the case. We’ll be seeing lawsuits like this forever, especially when the most extreme outliers of the Republican party are allowed to say extremely stupid shit without fear of being corrected, much less censured (which is not censored, btw) by fellow party members.
Lose all you want. We’ll make more. That’s the credo of the dumbasses that keep lobbing lawsuits into federal court without any apparent knowledge of how the law works.
This case involves a book called “The Truth About COVID-19: Exposing the Great Reset, Lockdowns, Vaccine Passports, and the New Normal,” which includes a foreword from Robert F. Kennedy Jr. Sen. Warren wrote a letter to Amazon expressing “concerns” about the book and Amazon’s role in promoting it through its algorithms. The letter asked Amazon to review and publicly report on its algorithms. The book’s authors sued Sen. Warren for violating their First Amendment rights. The Ninth Circuit affirms the denial of a preliminary injunction.
The Ninth Circuit boots these claims to the curb, affirming the lower court’s ruling. It says Warren’s (apparently performative) letter to Amazon is not government interference in the authors’ free speech rights. (The fact that Warren never followed up on her demand for a report on Amazon’s algorithms strongly suggests the letter was sent to score political points, rather than to actually secure information on Amazon’s book-sorting methods.)
While the Ninth Circuit panel agrees [PDF] that Warren’s words in the letter and statement published on her site had the ability to cause reputational damage to the book’s author, it also recognizes that strong (even damaging) language is protected speech, even when used by politicians.
We must read the phrase “potentially unlawful” in context, not in isolation. Senator Warren’s letter began by noting that this was the second time she had written to Amazon in recent months. Her prior correspondence, she explained, expressed concern that the company was providing consumers with false or misleading information about unauthorized KN95 masks. In the next sentence, she wrote that “[t]his pattern and practice of misbehavior suggests that Amazon is either unwilling or unable to modify its business practices to prevent the spread of falsehoods or the sale of inappropriate products—an unethical, unacceptable, and potentially unlawful course of action from one of the nation’s largest retailers.” (Emphasis added.) Placed in proper perspective, the phrase “potentially unlawful” most likely refers to the “sale of inappropriate products,” such as the unauthorized KN95 masks. Such a business practice could potentially constitute unlawful consumer fraud. By contrast, the letter does not explain which law Amazon might be violating by selling The Truth About COVID-19 or any other book.
Even if we accept the plaintiffs’ reading of the letter, however, referencing potential legal liability does not morph an effort to persuade into an attempt to coerce.
On top of that, if this was government coercion, it was the most ineffectual coercion ever.
Finally, a full review requires us to analyze not only the tone of the letter but also the tenor of the overall interaction between Senator Warren and Amazon. An interaction will tend to be more threatening if the official refuses to take “no” for an answer and pesters the recipient until it succumbs. In Bantam Books, for instance, the Commission sent repeated notices and followed up with police visits. Here, the record contains no evidence that Senator Warren followed up on her letter in any fashion, even though Amazon continued to sell The Truth About COVID-19 on its platform.
The court goes on to point out that Senator Warren was completely incapable of directly punishing Amazon for carrying the book, something that would have required unified Congressional effort and perhaps even a change of law. This was just one Senator saying things about one book Amazon carried. And nothing on the record suggests it went any further than Warren’s original playing-to-the-base letter she sent to Amazon’s execs.
The requested injunction (which would be of limited usefulness this far past the heyday of the COVID pandemic) is denied. The lower court’s refusal to grant credence to these far-fetched legal arguments is affirmed. And, since it’s a published opinion, the denial carries precedential weight. Of course, legal precedent rarely deters idiotic litigators. But it does make it much, much easier to dismiss their bogus claims long before they start costing innocent parties actual money.
Well, this is unfortunate. Back in May of last year we wrote about how Missouri and Louisiana had sued the Biden administration, claiming “censorship” over social media based on a bunch of convoluted and nonsensical claims, most of which were about events that happened during the Trump administration.
We noted that, when viewed in the most forgiving light, the best we could make of the ridiculously poorly pleaded account was that they were trying to make a jawboning argument, saying that some of the administration’s comments (mostly about reforming or repealing Section 230) acted as a de facto threat to social media to get those companies to silence speech. As we’ve gone into great detail about before, the Biden administration has, at times, gone stupidly close to the 1st Amendment line, but we hadn’t seen evidence that it had gone past it. And the initial complaint was so poorly done, and so focused on being a political document (it was brought by then Missouri Attorney General Eric Schmitt, who happily used it to grandstand on his way to being elected a US Senator last year, which is his current job), that it didn’t come close to making this argument coherently.
Also, what’s weird about the argument is that, over the last few years, Republicans have been far angrier about Section 230 than Democrats, and far louder in their threats to repeal it.
Even worse, many of the examples the complaint claimed were proof of “censorship” by the Biden administration were issues like the false claims that it tried to censor the story about the Hunter Biden laptop (which even the Twitter Files confirmed was not blocked by Twitter on behalf of any request from either the government or the Biden campaign, which wasn’t even the government anyway). The complaint also talked about Twitter’s decision to block sharing regarding the (now considered more credible) “lab leak” theory, though again, that happened during the Trump administration, not the Biden one. (Update: it turns out this argument is even dumber than I thought, since it was Facebook, not Twitter, that banned discussions about a “lab leak” theory.)
Throughout the fall of last year, then AG/Senatorial candidate Schmitt used the case to release extremely misleading and misrepresented documents to bolster the still unproven claim that the Biden administration was conspiring with social media companies to silence speech. Indeed, some journalists even fell for it.
Still, as more and more papers were filed in the case, which now has a docket with well over 200 entries, it meant that perhaps the states would be able to drag the case out. And… that’s exactly what’s happened.
The ruling starts out badly, and then gets progressively more unhinged, taking conspiracy theories and nonsense claims that have been rejected in basically every other court, and saying “yup, sure, that sounds reasonable.”
Much of the ruling focuses on whether or not the two states even have standing to bring these claims. The court says they do, because they have “adequately” argued “injury-in-fact.” The reasons why are, frankly, boring and not worth getting into. This is also true of a few private plaintiffs who are involved in the lawsuit: in this case some well known peddlers of misleading information who were banned from Twitter, which they insist happened because of the Biden administration.
The White House pointed out (reasonably) that those still don’t qualify for standing because Twitter’s private moderation actions are not traceable to the White House because the White House had nothing to do with them. Here, the court gets, well, stupid. The judge more or less accepts conspiracy theory nonsense that the White House pressured Twitter to silence voices:
Here, however, Plaintiffs have alleged the full picture: a cohesive and coercive campaign by the Biden Administration and all of the Agency Defendants to threaten and persuade social media companies to more avidly censor so-called “misinformation.” Thus, while the Changizi plaintiffs may have left gaps in their pleadings, Plaintiffs in the current case have not. Plaintiffs have alleged, as described in detail above, a “ramping up” in censorship that directly coincides with the deboosting, shadow-banning, and account suspensions that are the subject of the Amended Complaint. And these are not mere generalizations: Plaintiffs made specific allegations showing a link between Defendants’ statements and the social-media companies’ censorship activities. While Plaintiffs acknowledge that some censorship existed before Defendants made the statements that are the subject of this case, they also allege in detail an increase in censorship, which is tied temporally to the Defendants’ actions. Thus, Plaintiffs here provide the allegations that may have been missing in the Changizi complaint.
Further, the Defendants’ reliance on Hart v. Facebook Inc., No. 22-CV-00737-CRB, 2022 WL 1427507 (N.D. Cal. May 5, 2022), is also misplaced. As in the above cases, the plaintiffs in Hart sought redress for censorship of their viewpoints on social-media platforms like Twitter and Facebook. However, the Hart court found that the plaintiff’s allegations were simply too “vague” and “implausible” to fairly connect the government officials to the actions of the social-media companies. Id. at 5. But as this Court has repeatedly noted, Plaintiffs’ Amended Complaint simply cannot be characterized as “vague.” Instead, Plaintiffs have carefully laid out the alleged scheme of censorship and how Defendants are specifically connected to and involved with it.
This reads like motivated reasoning by a judge very, very interested in justifying a result rather than showing any actual coercion.
Having said that the plaintiffs have standing, the court moves on to the 1st Amendment claims, and in a move not surprising given what’s said above, suggests that they’re legit. But does so in a weird way. After first running through the various precedents regarding jawboning, including the very recent 9th Circuit ruling that said government flagging content to Twitter is not coercive, Judge Doughty says the Biden administration’s public statements, which included no actual threats or hints at threats, were coercive!
Here, Plaintiffs have clearly alleged that Defendants attempted to convince social-media companies to censor certain viewpoints. For example, Plaintiffs allege that Psaki demanded the censorship of the “Disinformation Dozen” and publicly demanded faster censorship of “harmful posts” on Facebook. Further, the Complaint alleges threats, some thinly veiled and some blatant, made by Defendants in an attempt to effectuate its censorship program. One such alleged threat is that the Surgeon General issued a formal “Request for Information” to social-media platforms as an implied threat of future regulation to pressure them to increase censorship. Another alleged threat is the DHS’s publishing of repeated terrorism advisory bulletins indicating that “misinformation” and “disinformation” on social-media platforms are “domestic terror threats.” While not a direct threat, equating failure to comply with censorship demands as enabling acts of domestic terrorism through repeated official advisory bulletins is certainly an action social-media companies would not lightly disregard. Moreover, the Complaint contains over 100 paragraphs of allegations detailing “significant encouragement” in private (i.e., “covert”) communications between Defendants and social-media platforms.
The Complaint further alleges threats that far exceed, in both number and coercive power, the threats at issue in the above-mentioned cases. Specifically, Plaintiffs allege and link threats of official government action in the form of threats of antitrust legislation and/or enforcement and calls to amend or repeal Section 230 of the CDA with calls for more aggressive censorship and suppression of speakers and viewpoints that government officials disfavor. The Complaint even alleges, almost directly on point with the threats in Carlin and Backpage, that President Biden threatened civil liability and criminal prosecution against Mark Zuckerburg if Facebook did not increase censorship of political speech. The Court finds that the Complaint alleges significant encouragement and coercion that converts the otherwise private conduct of censorship on social media platforms into state action, and is unpersuaded by Defendants’ arguments to the contrary.
Again, at the time we noted that much of what the administration said was stupid, and it should stop the jawboning. But Judge Doughty’s reading of it as coercive seems… bizarrely wrong. I mean, if that’s accurate, then how do we judge Donald Trump’s much more aggressive threats to repeal Section 230 if social media websites didn’t moderate the way he wanted?
The Biden Administration notes that none of their public statements about disinformation included anything anywhere near a threat, but the judge doesn’t care.
Defendants argue that Plaintiffs allege only “isolated episodes in which federal officials engaged in rhetoric about misinformation on social media platforms” and that the Complaint is “devoid” of any “enforceable threat” to “prosecute.” Further, they argue that it “is unclear how the alleged comments about amending [Section 230 of the CDA] or bringing antitrust suits could be viewed as ‘threats’ given that no Defendant could unilaterally take such actions.” The Court is unpersuaded by these arguments for several reasons. First, as explained above, any suggestion that a threat must be enforceable in order to constitute coercive state action is clearly contradicted by the overwhelming weight of authority. Moreover, the Complaint alleges that the threats became more forceful once the Biden Administrative took office and gained control of both Houses of Congress, indicating that the Defendants could take such actions with the help of political allies in Congress. Additionally, the Attorney General, a position appointed by and removable by the President, could, through the DOJ, unilaterally institute antitrust actions against social-media companies.
Again, this seems almost certainly backwards as a matter of precedent. And, if it’s accurate, I can’t wait to see how these same courts judge cases in the next GOP administration that will almost certainly go much, much further.
The ruling then gets even dumber. Despite every other court laughing away any claim that seeks to make social media companies like Twitter “state actors,” here the Court says that in this case, there is “joint action” that makes them state actors. This is again, simply wrong. It’s backwards. It’s silly. Again, the judge points to the recent 9th Circuit case that gets it right, and says “but this is different because I say so.”
Recently, in O’Handley, the United States Court of Appeals for the Ninth Circuit found no joint action where government officials flagged certain tweets as misinformation. There, the plaintiff alleged the “conspiracy approach” to joint action which requires “the plaintiff to show a ‘meeting of the minds’ between the government and the private party to ‘violate constitutional rights.’” 2023 WL 2443073, at *7 (quoting Fonda v. Gray, 707 F.2d 435, 438 (9th Cir. 1983)). The court noted that, because the “only alleged interactions are communications between the OEC and Twitter in which the OEC flagged for Twitter’s review posts that potentially violated the company’s content-moderation policy,” the plaintiff “allege[d] no facts plausibly suggesting either that the OEC interjected itself into the company’s internal decisions to limit access to his tweets and suspend his account or that the State played any role in drafting Twitter’s Civic Integrity Policy.” Id. at *8. The court described the relationship between the state officials and Twitter as a permissible “arms-length” relationship. Id. at *8 (citing Mathis v. Pac. Gas & Elec. Co., 75 F.3d 498 (9th Cir. 1996)). For the reasons explained below, the allegations here are distinguishable from those in O’Handley.
Here, Plaintiffs have plausibly alleged joint action, entwinement, and/or that specific features of Defendants’ actions combined to create state action. For example, the Complaint alleges that “[o]nce in control of the Executive Branch, Defendants promptly capitalized on these threats by pressuring, cajoling, and openly colluding with social-media companies to actively suppress particular disfavored speakers and viewpoints on social media.” Specifically, Plaintiffs allege that Dr. Fauci, other CDC officials, officials of the Census Bureau, CISA, officials at HHS, the state department, and members of the FBI actively and directly coordinated with social-media companies to push, flag, and encourage censorship of posts the Government deemed “Mis, Dis, or Malinformation.”
These allegations, unlike those in O’Handley, demonstrate more than an “arms-length” relationship. Plaintiffs allege a formal government-created system for federal officials to influence social-media censorship decisions. For example, the Complaint alleges that federal officials set up a long series of formal meetings to discuss censorship, setting up privileged reporting channels to demand censorship, and funding and establishing federal-private partnership to procure censorship of disfavored viewpoints. The Complaint clearly alleges that Defendants specifically authorized and approved the actions of the social-media companies and gives dozens of examples where Defendants dictated specific censorship decisions to social-media platforms. These allegations are a far cry from the complained-of action in O’Handley: a single message from an unidentified member of a state agency to Twitter.
I mean, basically all of that is wrong. The discussions were not coordinating “censorship.” But, among the crowd of fools that are pushing this nonsense, it’s now taken as fact. Gullible fools suckered in by their own disinformation.
There’s also a lot of complete nonsense about Section 230 in the ruling, including this:
Plaintiffs’ injuries could be redressed by enjoining Defendants from engaging in the above-discussed “other factors” that have twisted Section 230 into a catalyst for government-sponsored censorship
But that makes a huge false assumption that Section 230 has been “a catalyst for government-sponsored censorship,” which remains not shown anywhere.
The judge also makes a hop, skip, and logical mental leap, to claim that because Twitter (a private company) engaged its own private property rights to remove certain content that it felt violated its rules… this is prior restraint:
Because Plaintiffs allege that Defendants are targeting particular views taken by speakers on a specific subject, they have alleged a clear violation of the First Amendment, i.e., viewpoint discrimination. Moreover, Plaintiffs allege that Defendants, by placing bans, shadow-bans, and other forms of restrictions on Plaintiffs’ social-media accounts, are engaged in de facto prior restraints, another clear violation of the First Amendment. Thus, the Court finds that Plaintiffs have plausibly alleged their First Amendment claims.
I mean, under this kind of ruling, any government would have massive, unchecked power to force any private property owner to host any speech they want, by publicly complaining about the content, because according to this judge, at that point, if the website chooses to moderate that speech, it must be because of state action.
The only part of the motion to dismiss that’s granted is a very narrow part requesting an injunction directly against President Biden. But everything else targeting the administration is allowed to stand. Of course, any appeal out of this court will go up to the 5th Circuit, which is somewhat famous for its motivated reasoning in cases like these. So there’s a decent chance this ruling stands.
Again, the White House never should have said what it said and shouldn’t have even suggested it was telling social media companies how to moderate. And I’m now doubly furious because if they’d just shut the fuck up, we wouldn’t have this terrible ruling on the books. But, now we do.
Of course, it’ll be fun when there’s another Trump or DeSantis administration and they find out they’re bound by the same rules, and merely commenting on content moderation choices is seen as coercive…
There’s been a lot of discussion of late, especially because of the various Twitter Files, regarding where the line is between governments simply flagging content for social media websites to vet against their own policies as compared to unconstitutional and impermissible suppression of speech in violation of the 1st Amendment.
As we’ve highlighted over and over again, the courts in these so-called “jawboning” cases have been pretty clear that there needs to be a coercive element to make it a 1st Amendment violation. Judge Posner’s ruling in the Backpage v. Dart case in the 7th Circuit laid it out pretty clearly back in 2015, citing back to the 2nd Circuit’s Okwedy v. Molinari case:
The difference between government expression and intimidation—the first permitted by the First Amendment, the latter forbidden by it—is well explained in Okwedy v. Molinari, 333 F.3d 339, 344 (2d Cir. 2003) (per curiam): “the fact that a public-official defendant lacks direct regulatory or decisionmaking authority over a plaintiff, or a third party that is publishing or otherwise disseminating the plaintiff’s message, is not necessarily dispositive … . What matters is the distinction between attempts to convince and attempts to coerce. A public-official defendant who threatens to employ coercive state power to stifle protected speech violates a plaintiff’s First Amendment rights, regardless of whether the threatened punishment comes in the form of the use (or, misuse) of the defendant’s direct regulatory or decision-making authority over the plaintiff, or in some less-direct form.”
A few years back, we highlighted what we thought was an interesting case regarding social media and jawboning by state officials brought by Shiva Ayyadurai. Despite Ayyadurai’s history of trying to destroy our own site, as well as some of the dubious claims that resulted in his own case, we thought he raised a potentially worthwhile 1st Amendment question regarding where the line was between “convince” and “coerce” when it came to state officials complaining to Twitter about taking down content. It wasn’t clear where things fell in that case, especially as the government official there admitted they were trying to get Shiva’s tweets taken down. For whatever reason (and there were a variety of procedural oddities in the case), Shiva dropped his case so we never got a direct ruling in that one.
However, a similar case was more recently filed in California (and we mentioned briefly), in which lawyer Rogan O’Handley, a 2020 election truther, lost his Twitter account for violating Twitter’s policies. It came out that his account was one that was flagged by the California Secretary of State’s Office as a “trusted” flagger. So O’Handley sued California’s Secretary of State, Shirley Weber, along with Twitter, and the National Association of Secretaries of State.
As always in these kinds of disputes, the specifics matter. O’Handley made a tweet alleging election fraud in California:
Audit every California ballot
Election fraud is rampant nationwide and we all know California is one of the culprits
Do it to protect the integrity of that state’s elections
The Secretary of State’s office flagged that tweet to Twitter via its Partner Support Portal saying the following:
Hi, We wanted to flag this Twitter post: https://twitter.com/DC_Draino/status/1237073866578096129 From user @DC_Draino. In this post user claims California of being a culprit of voter fraud, and ignores the fact that we do audit votes. This is a blatant disregard to how our voting process works and creates disinformation and distrust among the general public.
The lower court dismissed on a variety of grounds, noting that Twitter wasn’t a state actor, and that his being banned from Twitter was “not fairly traceable to the Secretary’s actions” among other things. Both were appealed.
The appeals court easily tosses the claims against Twitter noting (of course) that Twitter is not the government:
O’Handley’s claims falter at the first step. Twitter did not exercise a state-created right when it limited access to O’Handley’s posts or suspended his account. Twitter’s right to take those actions when enforcing its content-moderation policy was derived from its user agreement with O’Handley, not from any right conferred by the State. For that reason, O’Handley’s attempt to analogize the authority conferred by California Elections Code § 10.5 to the “procedural scheme” in Lugar is wholly unpersuasive. Id. at 941. Lugar involved a prejudgment attachment system, created by state law, that authorized private parties to sequester disputed property. Id. Section 10.5, by contrast, does not vest Twitter with any power and, under the terms of the user agreement to which O’Handley assented, no conferral of power by the State was necessary for Twitter to take the actions challenged here.
Nor did Twitter enforce a state-imposed rule when it limited access to O’Handley’s posts and suspended his account for “violating the Twitter Rules . . . about election integrity.” As the quoted message that Twitter sent to O’Handley makes clear, the company acted under the terms of its own rules, not under any provision of California law.
That’s pretty straightforward. Also, the 9th Circuit notes that it really doesn’t matter that most of the accounts flagged by the Secretary of State’s office were later pulled down:
That Twitter and Facebook allegedly removed 98 percent of the posts flagged by the OEC does not suggest that the companies ceded control over their content-moderation decisions to the State and thereby became the government’s private enforcers. It merely shows that these private and state actors were generally aligned in their missions to limit the spread of misleading election information. Such alignment does not transform private conduct into state action.
Correlation is not causation in legal form.
And then we start to get into the meatier question of whether or not there was any coercion, which (again) is the key to all of this (this is still part of the analysis regarding whether or not Twitter has been turned into a state actor). The Court recognizes that there is none here.
In this case, O’Handley has not satisfied the nexus test because he has not alleged facts plausibly suggesting that the OEC pressured Twitter into taking any action against him. Even if we accept O’Handley’s allegation that the OEC’s message was a specific request that Twitter remove his November 12th post, Twitter’s compliance with that request was purely optional. With no intimation that Twitter would suffer adverse consequences if it refused the request (or receive benefits if it complied), any decision that Twitter took in response was the result of its own independent judgment in enforcing its Civic Integrity Policy. As was true under the first step of the Lugar framework, the fact that Twitter complied with the vast majority of the OEC’s removal requests is immaterial. Twitter was free to agree with the OEC’s suggestions—or not. And just as Twitter could pay greater attention to what a trusted civil society group had to say, it was equally free to prioritize communications from state officials in its review process without being transformed into a state actor.
The court notes that basic information sharing between governments and private actors does not make the private actors into state actors.
The relationship between Twitter and the OEC more closely resembles the “consultation and information sharing” that we held did not rise to the level of joint action in Mathis, 75 F.3d at 504. In that case, PG&E decided to exclude one of its employees from its plant after conducting an undercover investigation in collaboration with a government narcotics task force. Id. at 501. The suspended employee then sued PG&E for violating his constitutional rights under a joint action theory. Id. We rejected his claim because, even though the task force engaged in consultation and information sharing during the investigation, the task force “wasn’t involved in the decision to exclude Mathis from the plant,” and the plaintiff “brought no evidence PG&E relied on direct or indirect support of state officials in making and carrying out its decision to exclude him.” Id. at 504.
The same is true here. The OEC reported to Twitter that it believed certain posts spread election misinformation, and Twitter then decided whether to take disciplinary action under the terms of its Civic Integrity Policy. O’Handley alleges no facts plausibly suggesting either that the OEC interjected itself into the company’s internal decisions to limit access to his tweets and suspend his account or that the State played any role in drafting Twitter’s Civic Integrity Policy. As in Mathis, this was an arm’s-length relationship, and Twitter never took its hands off the wheel.
As for the claims directly against the Secretary of State, the 9th Circuit does find that O’Handley has standing, but still rejects his claims. The key part, again, is that Twitter gets to make its own decisions and the fact that the Secretary of State’s office flagged the tweet in no way changes that:
Here, as discussed above, the complaint’s allegations do not plausibly support an inference that the OEC coerced Twitter into taking action against O’Handley. The OEC communicated with Twitter through the Partner Support Portal, which Twitter voluntarily created because it valued outside actors’ input. Twitter then decided how to respond to those actors’ recommendations independently, in conformity with the terms of its own content-moderation policy
O’Handley tried to argue (as I’ve seen others as well) that the mere fact that the information sharing was coming from the government creates implicit intimidation factors, but the court, correctly, notes that this is not how any of this works:
O’Handley argues that intimidation is implicit when an agency with regulatory authority requests that a private party take a particular action. This argument is flawed because the OEC’s mandate gives it no enforcement power over Twitter. See Cal. Elec. Code § 10.5. Regardless, the existence or absence of direct regulatory authority is “not necessarily dispositive.” Okwedy, 333 F.3d at 344. Agencies are permitted to communicate in a non-threatening manner with the entities they oversee without creating a constitutional violation. See, e.g., National Rifle Association of America v. Vullo, 49 F.4th 700, 714–19 (2d Cir. 2022).
The court also rejects the idea that this was “retaliation” for O’Handley’s speech, noting that it doesn’t match up with the standards there either:
The retaliation-based theory of liability fails as well. To state a retaliation claim, a plaintiff must show that: “(1) he engaged in constitutionally protected activity; (2) as a result, he was subjected to adverse action by the defendant that would chill a person of ordinary firmness from continuing to engage in the protected activity; and (3) there was a substantial causal relationship between the constitutionally protected activity and the adverse action.” Blair v. Bethel School District, 608 F.3d 540, 543 (9th Cir. 2010) (footnote omitted).
O’Handley’s claim falters on the second prong because he has not alleged that the OEC took any adverse action against him. “The most familiar adverse actions are exercise[s] of governmental power that are regulatory, proscriptive, or compulsory in nature and have the effect of punishing someone for his or her speech.” Id. at 544 (citation and internal quotation marks omitted). Flagging a post that potentially violates a private company’s content-moderation policy does not fit this mold. Rather, it is a form of government speech that we have refused to construe as “adverse action” because doing so would prevent government officials from exercising their own First Amendment rights. See Mulligan v. Nichols, 835 F.3d 983, 988–89 (9th Cir. 2016). California has a strong interest in expressing its views on the integrity of its electoral process. The fact that the State chose to counteract what it saw as misinformation about the 2020 election by sharing its views directly with Twitter rather than by speaking out in public does not dilute its speech rights or transform permissible government speech into problematic adverse action. See Hammerhead Enterprises, Inc. v. Brezenoff, 707 F.2d 33, 39 (2d Cir. 1983).
There is nothing surprising or out of the ordinary in the result of this case. It matches just fine with a large number of earlier “jawboning” style cases, including ones cited above like Okwedy, Bantam Books, and Backpage. However, since many seem eager to ignore all of this precedent, and because the facts are slightly different regarding social media and trusted flagging programs, it’s nice to see a clean ruling on these points.
Once again, the thing that matters is whether or not there is coercion. There may be cases where these programs or efforts tip over into coercion, and we should be vigilant in watching out for those scenarios. But mere information sharing, absent any form of coercion, cannot be a 1st Amendment violation.
This week, the NY Times had an article detailing how House Speaker Kevin McCarthy has formed a close bond with Rep. Marjorie Taylor Greene, a situation that many thought was impossible just a couple years ago when McCarthy seemed to see Greene as a shameful example of the modern Republican party’s infatuation with conspiracy theories, falsehoods, and nonsense.
The details of that article aren’t all that interesting for Techdirt, but there is one paragraph that certainly caught my attention:
Mr. McCarthy has gone to unusual lengths to defend Ms. Greene, even dispatching his general counsel to spend hours on the phone trying to cajole senior executives at Twitter to reactivate her personal account after she was banned last year for violating the platform’s coronavirus misinformation policy.
Later in the article, there are more details:
And by early 2022, Ms. Greene had begun to believe that Mr. McCarthy was willing to go to bat for her. When her personal Twitter account was shut down for violating coronavirus misinformation policies, Ms. Greene raced to Mr. McCarthy’s office in the Capitol and demanded that he get the social media platform to reinstate her account, according to a person familiar with the exchange.
Instead of telling Ms. Greene that he had no power to order a private company to change its content moderation policies, Mr. McCarthy directed his general counsel, Machalagh Carr, to appeal to Twitter executives. Over the next two months, Ms. Carr would spend hours on the phone with them arguing Ms. Greene’s case, and even helped draft a formal appeal on her behalf.
Now, let’s be clear: it is perfectly reasonable (as we’ve been describing) for politicians to state a case in favor of a certain course of action by platforms. It only reaches the problematic level when there is coercion involved.
But some folks, including in our comments, have been insisting that any interaction by any government official is automatically coercive. And, while I’m guessing they will argue here that “this is different,” because it was about reinstating an account rather than taking one down, the simple fact remains that it was government officials seeking to influence a moderation decision by a private company, effectively trying to sway that company’s own 1st Amendment-protected right to decide for itself how to moderate.
The simple fact is that politicians on both sides of the aisle regularly try to influence how moderation occurs (often in contrasting ways). They’re allowed to try to persuade companies to act how they want, so long as there are no coercive elements involved.
But, either way, this reinforces the idea that the “Twitter Files” are simply cherry-picking stories to suit their own political narrative, and apparently leaving out stories like this one, where it was a high-ranking Republican trying to influence a moderation decision.
Normally, there wouldn’t be much need to insert yourself into lawsuits involving seriously flawed claims about social media moderation. But these two lawsuits — both losses for plaintiffs claiming the Biden administration conspired to ban their social media accounts — are now in the hands of the Ninth Circuit Court of Appeals, which has delivered some unusual (and terrible) takes on Section 230 and intermediary liability recently.
One of the plaintiffs challenging her loss at the district level is “naturopath” Colleen Huber, who once sued someone for truthfully reporting that Huber’s cancer “cures” (intravenous baking soda, vitamin C, etc.) would likely kill anyone who mistook them for actual medical advice. Here, she sued the Biden administration because Twitter killed her account after she sent out too much COVID vaccine misinformation.
That lawsuit was tossed (with prejudice) by the lower court in March of this year. The California court says there was no credible evidence backing the allegations that the Biden administration’s meetings with social media heads and expressions of concern about the spread of misinformation formed a conspiracy between Twitter and the government to silence certain users. No First Amendment violation, no Fifth Amendment violation, and no cause of action.
The same thing happened to Rogan O’Handley, an (apparently non-practicing) attorney who saw his “DC_Draino” Twitter account permanently suspended following his continuous posting of election misinformation. That lawsuit alleged pretty much the same thing Huber’s did, only with O’Handley targeting California state officials, rather than the Biden administration. His lawsuit was dismissed with prejudice in January.
The EFF has filed briefs in both cases, asking the Ninth Circuit to look carefully at what’s being claimed here — a conspiracy between the government and social media services — and to recognize that the government expressing concerns about social media moderation is not the same thing as engaging directly in social media moderation. The government can — and often does — have some impact on moderation efforts by social media platforms. But only in narrow cases does that actually cross into something actionable.
“Jawboning,” or when the government influences content moderation policies, is common. We have argued that courts should only hold a jawboned social media platform liable as a state actor if: (1) the government replaces the intermediary’s editorial policy with its own, (2) the intermediary willingly cedes its editorial implementation of that policy to the government regarding the specific user speech, and (3) the censored party has no remedy against the government.
To ensure that the state action doctrine does not nullify social media platforms’ First Amendment rights, we recently filed two amicus briefs in the Ninth Circuit in Huber v. Biden and O’Handley v. Weber. Both briefs argued that these conditions were not met, and the courts should not hold the platforms liable under a state action theory.
In Huber’s case, the EFF points out that while the Biden administration may have voiced its concerns to Twitter about its handling of COVID misinformation, it did not insert itself into the moderation process by replacing Twitter’s policies with one of its own. Nor is there any evidence the government ever saw or discussed the tweets that got Huber banned.
O’Handley’s case is slightly different, in that California’s Office of Election Cybersecurity brought one of his tweets to the attention of Twitter. But that alone is not enough to plausibly allege that the government of California stepped in to engage in its own moderation, or that Twitter replaced its own policies with ones crafted by the state.
In both cases, the final prong of the EFF’s “jawboning” definition is still in play. Even if there’s a finding the government crossed the line in these cases, both plaintiffs are still capable of suing the government directly without bringing Twitter into it. If the Appeals Court decides anything can be revived in these two dead cases, it should leave Twitter out of it and allow the plaintiffs to pursue their (likely bogus) claims against the government entities they believe somehow stripped them of their social media accounts.
What the court definitely should not do is become the very thing these plaintiffs are suing over: an extension of the government that orders Twitter — via a decision that undercuts Section 230 protections or places limits on its moderation efforts — to carry content it would rather not carry. That would be the government inserting itself into moderation in a far more direct fashion than is actually alleged anywhere in these two ridiculous lawsuits.