Every year, the President lays out the administration’s major agenda in the State of the Union address. For those of us who cover tech policy, there’s always some fear that something dumb will be said. In the last couple of years, Biden pushed nonsense moral panics about the evils of the internet. So, in some ways, this year’s State of the Union was a little better because it barely mentioned tech at all, and only did so in the most confusing of ways. This was basically all he said:
Pass bipartisan privacy legislation to protect our children online.
Harness — harness the promise of AI to protect us from peril. Ban AI voice impersonations and more.
The first line is… weird? Because none of the “protect our children” online bills currently being discussed could accurately be described as “privacy legislation.” Indeed, most of those “kid safety” bills would become superfluous if Congress could get its act together and pass actual comprehensive privacy legislation that limited data brokers. But somehow Congress is incapable of doing that one simple thing.
As for the AI bit, that part is also kind of nonsensical. “Harness the promise of AI to protect us from peril?” Huh? And banning AI voice impersonation is an issue way more complicated than that line makes out. There are situations where AI impersonation should be perfectly fine, and others where it’s problematic.
Honestly, it felt like those lines were just last-minute add-ins to the speech, tossed in when someone realized there was no mention of the “boogey man” of “big tech” and something had to be said. If that means it’s not truly one of Biden’s priorities, I guess that’s an improvement over the nonsense of previous years.
However, along with the actual speech, Biden also released the White House’s official agenda on policy issues, and it has a lot more on tech policy, almost all of it problematic.
It starts out with him again mixing comprehensive privacy legislation with child safety. And, yes, it’s true that comprehensive privacy legislation would help child safety. It would actually help everyone’s safety, and it isn’t specific to children. That’s a good thing, because legislation that is specific to children ends up being more damaging to kids: it effectively mandates the collection of more private data in order to identify who is a kid.
Protecting Americans’ Privacy and Safety Online, Especially Our Kids. Consistent with his commitment to tackle the mental health crisis, President Biden has acted to address the compelling and growing evidence that social media and other tech platforms harm mental health and wellbeing of all Americans especially our kids. In each of his State of the Union Addresses, President Biden has called for strong federal protections for Americans’ privacy, including clear limits on how companies collect, use and share highly personal data – your internet history, your personal communications, your location, and your health, genetic and biometric data. Disclosure is not enough – President Biden believes much of that data should not be collected in the first place and that young people, who are especially vulnerable online, need even stronger protections. Last month, President Biden took the most significant federal action any President has ever taken to protect Americans’ data security. His Executive Order begins a process that will stop the large-scale transfer of this data—which includes intimate insights into Americans’ health, location, and finances—to countries like China and Russia. But Congress must act. Strong bipartisan legislation is necessary to regulate the types of data that is collected, protect kids online, and ensure the privacy of all Americans, including legislation that limits targeted advertising and bans it altogether for children.
Also, it’s simply incorrect that there is “growing evidence that social media and other tech platforms harm mental health and wellbeing.” We’ve pointed out repeatedly that the evidence is incredibly mixed, and there remains no serious research showing a causal link. Some research even suggests that the impact is the other way: that those with mental health challenges end up spending more time on social media because they don’t have access to other sources of help.
Hilariously, the agenda paragraph above links to the Surgeon General’s report on the phrase “growing evidence,” but as we explained, that report does not actually show any “growing evidence” of mental health harms from social media. Instead, it admits that social media is actually very useful for many people, but says we should act as if it does harm, just in case.
So the Biden administration is lying when it says that the evidence supports these harms. It does not, and it’s disappointing that the White House is so quick to misrepresent the data here.
From there, the agenda leads into an extremely misguided and mistargeted attack on Section 230:
Holding Companies Accountable for the Harms They Cause. President Biden believes that all companies – including technology companies – should be held accountable for the harms they cause, including the content they spread and the algorithms they use. For this reason, President Biden has long called on Congress for fundamental reform to Section 230 of the Communications Decency Act, which absolves tech companies from legal responsibility for content posted on their sites. The President has also called on Congress to stop tech platforms from being used for criminal conduct, including sales of dangerous drugs like fentanyl. The Biden Administration has also used all its authorities to crack down on algorithmic discrimination and algorithmic collusion and to bring more competition back to the tech sector. The President’s vision for our economy is one in which everyone – small and midsized businesses, mom-and-pop shops, entrepreneurs – can compete on a level playing field with the biggest companies, including and perhaps especially in the tech sector. That’s why he has also worked with Congress to pass bipartisan legislation to boost funding for federal antitrust enforcers.
Again, all of this is worded in a weird way. If you read the second half, it seems to be talking about competition policy. But, as our own research has shown, having Section 230 leads to greater competition, because without it, only the largest companies could afford the liability risks associated with hosting third-party content.
Also, the bit in the middle about calling on Congress “to stop tech platforms from being used for criminal conduct, including sales of dangerous drugs like fentanyl” is particularly bizarre. Criminal conduct is, by definition, the purview of law enforcement, not private tech companies. This is like saying President Biden is calling on Congress “to stop Walmart from being used for criminal conduct like shoplifting” or “to stop Ford from being used for criminal conduct like providing getaway cars.”
Criminal activity is a law enforcement issue. You should never pin the responsibility on private platforms that are not law enforcement. And, again, Section 230 ALREADY exempts federal criminal law. So if the administration thinks these companies are violating criminal law, the DOJ can go in and take action. The real question some reporter should ask the White House is: “If you think the companies are hiding behind 230 to avoid liability for criminal activities, why hasn’t the DOJ stepped in and taken action, since Section 230 places no limits on the DOJ?”
But somehow, no one asks that?
All President Biden is doing with these bullet points is misleading the American public. It’s a shame.
One frustrating thing about following everything that has happened in the case that started out as Missouri v. Biden, and is now Murthy v. Missouri at the Supreme Court, is that the case is full of lies. The whole case is kind of a mess for a variety of reasons. That includes the original plaintiffs (a mix of states and private actors, where it’s not clear why they’re all grouped together, nor that any of them have actual standing), as well as the framing and positioning of the case, which misrepresents various elements of reality.
In some ways, this case is an uncomfortable one. I’ve spent years explaining why government should stay the fuck out of any attempt to pressure companies to moderate in one way or another. I celebrated the Backpage v. Dart decision, as it gave a clear update to the Supreme Court’s Bantam Books case regarding coercing bookstores not to carry books. On top of that, I’ve found some of the actions by the Biden administration, in trying to convince companies to change their moderation practices, highly problematic. There were plenty of times they should have just shut up.
But it did not appear to me that anything they did crossed the line from persuasion and use of the bully pulpit (perfectly legal and expected) to coercion (a violation of the 1st Amendment). It could be argued that where you draw that line is complex, and people can draw the line in different places. Indeed, there would be an interesting Supreme Court case to be heard that looks at the proper place to draw such a line.
But this isn’t that case (nor is this that Supreme Court). And that’s mostly because the record in the lower courts is a total mess, full of made up fantasies that were accepted as real and accurate.
Just a few weeks ago, I had a good conversation with a very smart lawyer who comes down on the other side of this case than I do. I told him that the part that was most frustrating to me was that it felt like the administration was arguing this case as if one side (and some judges) hadn’t just made up a bunch of shit and insisted it was fact. This allowed people to suggest that there was actual evidence on the record of the White House crossing the line into coercion.
The problem is that the evidence isn’t really there.
And now, finally, the Biden administration has found its voice on this. Its reply brief leading up to the oral arguments later this month finally makes a pretty direct call out to the lies from below on the record.
As they did at the stay stage, respondents try to defend that startling result by invoking the district court’s factual findings—which they assert are “unrebutted,” Resp. Br. 2—to substantiate their allegations of widespread government censorship. But the government vigorously disputed the district court’s findings below, and the Fifth Circuit declined to rely on many of them— presumably because they are unsupported or demonstrably wrong. Gov’t Br. 9. Respondents’ presentation to this Court paints a profoundly distorted picture by pervasively relying on those debunked findings.
Respondents still have not identified any instance in which any government official sought to coerce a platform’s editorial decisions with a threat of adverse government action. Nor can respondents point to any evidence that the government ever imposed any sanction when the platforms declined to moderate content the government had flagged—as routinely occurred. Instead, respondents principally argue that government officials transformed private platforms into state actors subject to First Amendment constraints merely by speaking to the public on matters of public concern or seeking to influence or inform the platforms’ editorial decisions. The Court should reject that radical expansion of the state-action doctrine, which would “eviscerate certain private entities’ rights to exercise editorial control over speech and speakers on their properties or platforms.”
They even call out (finally!!!) the one email that keeps making the rounds: the email from Biden digital guy Rob Flaherty to Facebook. Like many others, when I first saw this email as presented by the district court, I thought it was an actual example of the White House overstepping its bounds, and said as much. But then, after looking at the more detailed record and context, I realized that the plaintiffs and the judge totally misrepresented the email. It was actually about a technical problem with the President’s own account, which Rob got angry about, but it was presented as him being angry about content moderation choices. In context, you realize this email (while intemperate) had nothing to do with coercing speech. It was venting about a technical glitch.
However, both the district court and the 5th Circuit falsely present it as being about content moderation, just as the plaintiffs did. And here, the White House finally calls bullshit on this (though in a footnote):
Although space does not permit a full treatment of the inaccuracies in respondents’ account of the White House’s communications, we offer one other example: As proof of supposedly “ominous and coercive” “threats,” respondents recount that in July 2021, “the White House emailed Facebook stating, ‘Are you guys fucking serious? I want an answer on what happened here and I want it today.’ ” Resp. Br. 8 (quoting J.A. 740). But that admittedly crude comment was asking for an answer about a “technical” problem affecting the President’s own Instagram account—it had nothing to do with moderating other users’ content.
It’s kinda frustrating that the case has gotten this far with that falsehood on the record.
The reply brief also seems to be targeting Justice Kavanaugh, whom you might expect to reflexively side with the states against Biden. The DOJ’s brief leans heavily on the ruling in Halleck, which Kavanaugh wrote:
Respondents ask this Court to rewrite the “constitutional boundary between the governmental and the private,” Manhattan Community Access Corp. v. Halleck, 139 S. Ct. 1921, 1928 (2019), by affirming a sweeping and unprecedented injunction based on sweeping and unprecedented understandings of Article III standing, the state-action doctrine, and the proper scope of equitable relief. Respondents insist that any person can establish standing to challenge any action affecting any speech by any third party merely by asserting a desire to hear it—a proposition that would effectively abolish Article III’s limits in free-speech cases. Respondents seek to transform private social-media platforms’ editorial choices into state action subject to the First Amendment. And respondents do not deny that the injunction installs the district court as the overseer of the Executive Branch’s communications with and about the platforms, muzzling senior officials’ speech to the public and exposing thousands of employees to contempt should the court conclude that their statements run afoul of the Fifth Circuit’s novel and vague standards.
The DOJ highlights the astounding weakness of the underlying record, which points to vague statements made by administration officials, followed by policy decisions made by tech companies, and simply insists the two are connected, without ever showing an actual connection. That should be seen as problematic.
Respondents assert (Br. 19-22) that they suffered “direct” injuries because the government purportedly caused platforms to moderate content respondents had posted. But the Fifth Circuit did not find that any particular government action caused a platform to do anything to any content posted by respondents that the platform would not have done in “its ‘broad and legitimate discretion’ as an independent company,” Changizi v. HHS, 82 F.4th 492, 497 (6th Cir. 2023) (citation omitted); see Gov’t Br. 17-18.
Seeking to plug that gap, respondents cite (Br. 19-21) various instances in which the platforms moderated their content—most of which involve COVID-19-related content posted at the height of the pandemic. But respondents make little effort to connect those acts by the platforms to any specific action by the government. They do not, for example, suggest that government officials specifically targeted their content. Instead, they urge a “birds-eye view” of traceability, Resp. Br. 19 (citation omitted), under which they presume that the relevant acts of content moderation are traceable to government officials merely because those officials made general statements about content moderation at around the same time, see id. at 21.
That generalized approach fails. The platforms have strong independent business incentives to moderate content, see C.A. ROA 18,445-18,453; the platforms actually did moderate respondents’ COVID-19-related content starting in 2020, long before the bulk of the government actions challenged here, see Gov’t Br. 18-19; and each cited moderation decision is consistent with the platforms’ independent application of their own policies, see, e.g., J.A. 787-794 (Hines); J.A. 797-801 (Hoft). Especially given that context, respondents’ bare timing-based speculation does not establish traceability
I’m almost wondering if the DOJ didn’t really take this case seriously until recently, which is why it feels like they’re finally coming out swinging.
Indeed, the filing admits what I’ve said all along: if the government actually did what the respondents claim, then absolutely this would be a First Amendment violation. The problem is that there’s no evidence that they actually did it. And that makes this a messy case. I’d like the Supreme Court to rule that the White House cannot take actions to coerce social media companies, because that’s the correct answer.
But how does a White House deal with an injunction that says “stop doing this stuff we insist you’re doing, even though you’re not”? The lack of clarity means that the White House’s only safe option is to curtail its activity far beyond anything the First Amendment actually prohibits, to avoid crossing a line drawn on the insistence that perfectly legitimate activity violates the First Amendment.
So, the brief admits that “yes, you should blame us if we had done all those awful things, but we didn’t.”
No one disputes that the government would have violated the First Amendment if it had used threats of adverse government action to coerce private social-media platforms into moderating content. See Bantam Books, Inc. v. Sullivan, 372 U.S. 58, 67-68 (1963); Gov’t Br. 23, 26-27. But no such threats occurred here.
The filing also calls out how the district court judge (repeatedly) inserted false quotes (or misattributed the quotes to make them seem worse):
Respondents repeat the district court’s assertion that the former White House Press Secretary made a “threat of ‘legal consequences’ if platforms do not censor misinformation more aggressively.” Resp. Br. 41 (quoting J.A. 111) (brackets omitted). But notwithstanding the internal quotation marks in that passage, the Press Secretary never uttered the words “legal consequences.” See C.A. ROA 23,764-23,791. Instead, the words the district court attributed to her came from respondents’ statement of facts. Id. at 26,476. Although we have highlighted this error before, see Gov’t C.A. Br. 30; Gov’t C.A. Reply Br. 9, respondents continue to repeat it.
The problem is not just the misquotation, but the absence of any statement in the relevant briefing that could plausibly be described as a threat of legal consequences. Respondents repeat the district court’s assertion that the Press Secretary “linked the threat of a ‘robust anti-trust program’ with” a purported “censorship demand.” Resp. Br. 40 (citation omitted). In fact, she did no such thing. When asked to respond to a Senator’s comment that “ ‘if the Big Tech oligarchs can muzzle the former President, what’s to stop them from silencing you?’,” the Press Secretary said (among other things) that the President “supports better privacy protections and a robust anti-trust program”—a natural response to a question about “ ‘oligarchs.’ ” C.A. ROA 609. Like the other press statements on which the Fifth Circuit relied, see Gov’t Br. 31-32, that response cannot plausibly be characterized as a threat of adverse action if the platforms failed to take specific acts of content moderation. Deeming such general comments about important matters of public policy coercive would make it impossible for the President and his senior advisors to communicate with the public—or even to respond to press questions—on policy matters involving the platforms.
The government also calls out the respondents’ concession that it is allowed to participate in the marketplace of ideas in the abstract, so long as it doesn’t advocate anything specific, a distinction that is both weird and unworkable:
Respondents cite no authority supporting their proposed dichotomy between “abstract” and “particular” advocacy in this context. Their reliance on Brandenburg v. Ohio, 395 U.S. 444 (1969) (per curiam), is misplaced because that decision holds that speech is unprotected under the First Amendment when it imminently incites particular unlawful acts. Even setting aside the fact that the government’s entitlement to speak is not rooted in the First Amendment, the Court in Brandenburg did not purport to ascribe constitutional significance to the level of specificity used to encourage otherwise lawful actions, such as private platforms’ content-moderation decisions.
Respondents’ novel distinction between abstract and specific speech is also unworkable. President Roosevelt lambasted not all journalism, but only the muckraking variety; President Wilson complained about stories on a particular topic (the alleged presence of troops in Turtle Bay); and President Biden condemned specific videos about Osama Bin Laden that were circulating online. Gov’t Br. 24, 49. Which of those statements were sufficiently “abstract” to pass muster? Conversely, why were all of the statements at issue here—including public comments by the President, the Surgeon General, and others about the general problem of COVID-19 falsehoods—too specific? Respondents do not provide any answers, and none are apparent.
The DOJ brief also cites our own amicus brief, which called out how the injunction is so far-reaching that it even precludes companies (of their own free will) reaching out to government officials to inquire about certain information, which is completely ridiculous and unworkable:
The injunction flouts traditional equitable principles because it extends relief far beyond that required to redress any cognizable harm to respondents, and its vague terms would irreparably harm the government and the public by chilling a host of legitimate Executive Branch communications. Gov’t Br. 45-50. It also would harm the platforms and their customers by precluding the companies from voluntarily seeking governmental input and collaboration to improve the products they offer. E.g., id. at 44, 49; cf. Floor64 Amicus Br. 5-16.
Anyway, I still fear that this is an easy case for the Supreme Court to screw up big time. Many of the amicus briefs in favor of the states were absolutely crazy (a few were more serious). But this is (finally) a strong brief from the White House explaining the many, many ways in which this particular case is just stupid.
Of course, that won’t stop the Supreme Court from issuing a dumb ruling, but maybe it’ll at least give a few justices enough pause to realize how stupid this could get if they accept as accurate the lies told down below.
So, last Friday, the 5th Circuit released its opinion in the appeal of an absolutely ridiculous Louisiana federal court ruling that insisted large parts of the federal government were engaged in some widespread censorial conspiracy with social media, and barred large parts of the government from talking to social media companies and even academic researchers.
The 5th Circuit massively trimmed back the district court’s injunction, throwing out 9 of the 10 listed “prohibitions,” removing a bunch of the defendants (including CISA and Anthony Fauci’s NIAID) after noting there was no evidence they had done anything improper, and chopping the one remaining prohibition back to near meaninglessness (basically “don’t coerce the companies”).
I thought the 5th Circuit was right to use the tests the 2nd and 9th Circuits used for “coercion,” but found the actual application of those tests to be… at best weird, and at worst extremely problematic (especially in the case of the CDC defendant, where the ruling made no sense at all). That confused application of the facts to the test presented a challenge for the administration, as it provided essentially zero useful guidance on how not to violate the injunction. The court laid out no coherent or understandable way of applying the test; it kinda made stuff up as it went along and declared “that’s coercion,” even though it wasn’t clear what was actually coercive.
Even when the 5th Circuit highlighted, for example, quotes from the administration to social media companies, it never provided the context or details. In fact, it would provide tiny fragments (phrases of a few words) without any indication of who said what, which websites in particular they were talking about, or what the quote actually meant in context. And that was a real problem, especially as the lower court took many quotes so far out of context as to reverse their meaning (and in one case, added words to make a quote say the opposite of what it really said).
That said, I still wondered if the Biden administration would actually ask the Supreme Court to review it, because the final ruling was pretty limited in scope, and there’s a real risk that this Supreme Court, which has become so political in nature, would make a decision that was much, much worse and much, much more problematic for the administration.
Apparently, the White House felt differently, and it has rushed to ask the Supreme Court to review things on the shadow docket. Justice Alito has now put a stay on the injunctions and asked for filings by this coming Wednesday to review the issue.
The White House’s application is worth reading. First, they challenge the standing of the plaintiffs in the case (five people who were moderated on social media, along with the states Louisiana and Missouri). The White House notes that even if you argue that the individuals who were moderated have standing, they faced moderation before the White House said anything (i.e., it was independent decisions by the companies):
The Fifth Circuit held that they have standing because their posts have been moderated by social-media platforms. But respondents failed to show that those actions were fairly traceable to the government or redressable by injunctive relief. To the contrary, respondents’ asserted instances of moderation largely occurred before the allegedly unlawful government actions. The Fifth Circuit also held that the state respondents have standing because they have a “right to listen” to their citizens on social media. App., infra, 204a. But the court cited no precedent for that boundless theory, which would allow any state or local government to challenge any alleged violation of any constituent’s right to speak.
The larger point, though, is the 1st Amendment argument regarding the jawboning questions, with the White House pointing out that these rulings take away the government’s bully pulpit, under which it is allowed to advocate for positions but cannot threaten or punish people for their speech:
Second, the Fifth Circuit’s decision contradicts fundamental First Amendment principles. It is axiomatic that the government is entitled to provide the public with information and to “advocate and defend its own policies.” Board of Regents v. Southworth, 529 U.S. 217, 229 (2000). A central dimension of presidential power is the use of the Office’s bully pulpit to seek to persuade Americans — and American companies — to act in ways that the President believes would advance the public interest. President Kennedy famously persuaded steel companies to rescind a price increase by accusing them of “ruthless[ly] disregard[ing]” their “public responsibilities.” John F. Kennedy Presidential Library & Museum, News Conference 30 (Apr. 11, 1962), perma.cc/M7DL-LZ7N. President Bush decried “irresponsible” subprime lenders that shirked their “responsibility to help” distressed homeowners. The White House, President Bush Discusses Homeownership Financing (Aug. 31, 2007), perma.cc/DQ8B-JWN4. And every President has engaged with the press to promote his policies and shape coverage of his Administration. See, e.g., Graham J. White, FDR and the Press (1979).
Of course, the government cannot punish people for expressing different views. Nor can it threaten to punish the media or other intermediaries for disseminating disfavored speech. But there is a fundamental distinction between persuasion and coercion. And courts must take care to maintain that distinction because of the drastic consequences resulting from a finding of coercion: If the government coerces a private party to act, that party is a state actor subject “to the constraints of the First Amendment.” Manhattan Cmty. Access Corp. v. Halleck, 139 S. Ct. 1921, 1933 (2019). And this Court has warned against expansive theories of state action that would “eviscerate” private entities’ “rights to exercise editorial control over speech and speakers on their properties or platforms.” Id. at 1932.
The Fifth Circuit ignored those principles. It held that officials from the White House, the Surgeon General’s office, and the FBI coerced social-media platforms to remove content despite the absence of even a single instance in which an official paired a request to remove content with a threat of adverse action — and despite the fact that the platforms declined the officials’ requests routinely and without consequence. Indeed, the Fifth Circuit suggested that any request from the FBI is inherently coercive merely because the FBI is a powerful law enforcement agency. And the court held that the White House, the FBI, and the CDC “significantly encouraged” the platforms’ content-moderation decisions — and thus transformed those decisions into state action — on the theory that officials were “entangled” in the platforms’ decisions. App., infra, 235a. The court did not define that novel standard, but found it satisfied primarily because platforms requested and relied upon CDC’s guidance on matters of public health.
Of course, this is the entire debate about jawboning in a nutshell. Where is the line between persuasion and coercion? The White House is correct that the 5th Circuit’s ruling doesn’t lay out a clear test or application, and leaves things muddled, but part of the problem is that where that line is has always been kinda muddled.
And I’m not at all sure that this Supreme Court will properly construe that line.
However, as the White House notes (and I would agree), the ruling’s treatment of the CDC in particular is kind of unworkable:
The implications of the Fifth Circuit’s holdings are startling. The court imposed unprecedented limits on the ability of the President’s closest aides to use the bully pulpit to address matters of public concern, on the FBI’s ability to address threats to the Nation’s security, and on the CDC’s ability to relay public-health information at platforms’ request. And the Fifth Circuit’s holding that platforms’ content-moderation decisions are state action would subject those private actions to First Amendment constraints — a radical extension of the state-action doctrine.
The White House also points out that the unclear nature of the remaining injunction creates a burden on federal government employees:
Third, the lower courts’ injunction violates traditional equitable principles. An injunction must “be no more burdensome to the defendant than necessary to provide complete relief to the plaintiffs.” Califano v. Yamasaki, 442 U.S. 682, 702 (1979). Here, however, the injunction sweeps far beyond what is necessary to address any cognizable harm to respondents: Although the district court declined to certify a class, the injunction covers the government’s communications with all social-media platforms (not just those used by respondents) regarding all posts by any person (not just respondents) on all topics. And it forces thousands of government officials and employees to choose between curtailing their interactions with (and public statements about) social-media platforms or risking contempt should the district court conclude that they ran afoul of the Fifth Circuit’s novel and ill-defined concepts of coercion and significant encouragement.
I don’t necessarily disagree with any of that. The ruling (mainly in how it applies the test for coercion) is a mess, and the final injunction (while massively slimmed down from the lower court’s) is confusing and unclear.
But, still, given how much of a partisan political football this is, I can easily see the Supreme Court making things way, way worse.
It looks like there will be a quick turnaround on the shadow docket issue, which I’m guessing may lead to a further stay of the injunction, as the White House said it intends to file a full, normal cert petition in October, allowing the Supreme Court to hear the full case this term. So it would be easy for Alito to stay the injunction until the case is fully briefed and heard.
Again, I get where the White House is coming from. The 5th Circuit ruling has real issues, but it struck me as way less damaging than whatever else might come out of this process. But, I guess, in the long run, it’s better to have a full ruling on this issue from the Supreme Court. I’m just scared of what this particular Supreme Court will say.
We’re going to go slow on this one, because there’s a lot of background, detail, and nuance to get into in Friday’s 5th Circuit appeals court ruling in the Missouri v. Biden case, the case that initially resulted in a batshit crazy 4th of July district court ruling regarding the US government “jawboning” social media companies. The reporting on the 5th Circuit ruling has been kinda atrocious, perhaps because the end result of the ruling is this:
The district court’s judgment is AFFIRMED with respect to the White House, the Surgeon General, the CDC, and the FBI, and REVERSED as to all other officials. The preliminary injunction is VACATED except for prohibition number six, which is MODIFIED as set forth herein. The Appellants’ motion for a stay pending appeal is DENIED as moot. The Appellants’ request to extend the administrative stay for ten days following the date hereof pending an application to the Supreme Court of the United States is GRANTED, and the matter is STAYED.
Affirmed, reversed, vacated, modified, denied, granted, and stayed. All in one. There’s… a lot going on in there, and a lot of reporters aren’t familiar enough with the details, the history, or the law to figure out what’s going on. Thus, they report just on the bottom line, which is that the court is still limiting the White House. But it’s at a much, much, much lower level than the district court did, and this time it’s way more consistent with the 1st Amendment.
The real summary is this: the appeals court ditched nine out of the ten “prohibitions” that the district court put on the government, and massively narrowed the only remaining one, bringing it down to a reasonable level (telling the U.S. government that it cannot coerce social media companies, which, uh, yes, that’s exactly correct).
But then, in applying its own (perhaps surprisingly, very good) analysis, the 5th Circuit did so in a slightly weird way. It also seems to contradict the [checks notes] 5th Circuit in a different case. But we’ll get to that in another post.
Much of the reporting on this suggests it was a big loss for the Biden administration. The reality is that it’s a mostly appropriate slap on the wrist that hopefully will keep the administration from straying too close to the 1st Amendment line again. It basically threw out 9.5 out of 10 “prohibitions” placed by the lower court, and even on the half a prohibition it left, it said it didn’t apply to the parts of the government that the GOP keeps insisting were the centerpieces of the giant conspiracy they made up in their minds. The court finds that CISA, Anthony Fauci’s NIAID, and the State Department did not do anything wrong and are no longer subject to any prohibitions.
The details: the state Attorneys General of Missouri and Louisiana sued the Biden administration with some bizarrely stupid theories about the government forcing websites to take down content it disagreed with. The case was brought in a federal court district with a single Trump-appointed judge. That judge allowed the case to move forward, turning it into a giant fishing expedition into all sorts of government communications with the social media companies, which were then presented to the judge out of context and in a misleading manner. The original nonsense theories were mostly discarded (because they were nonsense), but by quoting some emails out of context, the states (and a few nonsense peddlers they added as plaintiffs to get standing) were able to convince the judge that something bad was going on.
As we noted in our analysis of the original ruling, they did turn up a few questionable emails from White House officials who were stupidly trying to act tough about disinformation on social media. But even then, things were taken out of context. For example, I highlighted this quote from the original ruling and called it out as obviously inappropriate by the White House:
Things apparently became tense between the White House and Facebook after that, culminating in Flaherty’s July 15, 2021 email to Facebook, in which Flaherty stated: “Are you guys fucking serious? I want an answer on what happened here and I want it today.”
Except… if you look at it in context, the email has nothing to do with content moderation. The White House had noticed that the @potus Instagram account was having some issues, and Meta told the White House that “the technical issues that had been affecting follower growth on @potus have been resolved.” A WH staffer received this and asked for more details. Meta responded that “it was an internal technical issue that we can’t get into, but it’s now resolved and should not happen again.” Someone then cc’d Rob Flaherty, and the quote above was in response to that. That is, it was about a technical issue that had prevented the @potus account from getting more followers, and he wanted details about how that happened.
So… look, I’d still argue that Flaherty was totally out of line here, and his response was entirely inappropriate from a professional standpoint. But it had literally nothing to do with content moderation issues or pressuring the company to remove disinformation. So it’s hard to see how it was a 1st Amendment violation. Yet, Judge Terry Doughty presented it in his ruling as if that line was about the removal of COVID disinfo. It is true that Flaherty had, months earlier, asked Facebook for more details about how the company was handling COVID disinfo, but those messages do not come across as threatening in any way, just asking for info.
The only way to make them seem threatening was to then include Flaherty’s angry message from months later, eliding entirely what it was about, and pretending that it was actually a continuation of the earlier conversation about COVID disinfo. Except that it wasn’t. Did Doughty not know this? Or did he pretend? I have no idea.
Doughty somehow framed this and a few other questionable, out-of-context things as “a far-reaching and widespread censorship campaign.” As we noted in our original post, he literally inserted words that did not exist in a quote by Renee DiResta to make this argument. He claimed the following:
According to DiResta, the EIP was designed to “get around unclear legal authorities, including very real First Amendment questions” that would arise if CISA or other government agencies were to monitor and flag information for censorship on social media.
Except, if you read DiResta’s quote, “get around” does not actually show up anywhere. Doughty just added that out of thin air, which makes me think that perhaps he also knew he was misrepresenting the context of Flaherty’s comment.
Either way, Doughty’s quote from DiResta is a judicial fiction. He inserted words she never used to change the meaning of what was said. What DiResta is actually saying is that they set up EIP as a way to help facilitate information sharing, not to “get around” the “very real First Amendment questions,” and also not to encourage removal of information, but to help social media companies and governments counter and respond to disinformation around elections (which they did for things like misleading election procedures). That is, the quote here is about respecting the 1st Amendment, not “getting around” it. Yet, Doughty added “get around” to pretend otherwise.
He then issued a wide-ranging list of 10 prohibitions that were so broad I heard from multiple people within tech companies that the federal government canceled meetings with them on important cybersecurity issues, because they were afraid that any such meeting might violate the injunction.
So the DOJ appealed, and the case went to the 5th Circuit, which has a history of going… nutty. However, this ruling is mostly not nutty. It’s actually a very thorough and careful analysis of the standards for when the government steps over the line and violates the 1st Amendment by pressuring companies to suppress speech. As we’ve detailed for years, the line is whether or not the government was being coercive. The government is very much allowed to use its own voice to persuade. But when it is coercive, it steps over the line.
The appeals court analysis on this is very thorough and right on, as it borrows the important and useful precedents from other circuits that we’ve talked about for years, agreeing with all of them. Where is the line between persuasion and coercion?
Next, we take coercion—a separate and distinct means of satisfying the close nexus test. Generally speaking, if the government compels the private party’s decision, the result will be considered a state action. Blum, 457 U.S. at 1004. So, what is coercion? We know that simply “being regulated by the State does not make one a state actor.” Halleck, 139 S. Ct. at 1932. Coercion, too, must be something more. But, distinguishing coercion from persuasion is a more nuanced task than doing the same for encouragement. Encouragement is evidenced by an exercise of active, meaningful control, whether by entanglement in the party’s decision-making process or direct involvement in carrying out the decision itself. Therefore, it may be more noticeable and, consequently, more distinguishable from persuasion. Coercion, on the other hand, may be more subtle. After all, the state may advocate—even forcefully—on behalf of its positions
It points to the key case that all of these cases always lead back to, the important Bantam Books v. Sullivan case that is generally seen as the original case on “jawboning” (government coercion to suppress speech):
That is not to say that coercion is always difficult to identify. Sometimes, coercion is obvious. Take Bantam Books, Inc. v. Sullivan, 372 U.S. 58 (1963). There, the Rhode Island Commission to Encourage Morality—a state-created entity—sought to stop the distribution of obscene books to kids. Id. at 59. So, it sent a letter to a book distributor with a list of verboten books and requested that they be taken off the shelves. Id. at 61–64. That request conveniently noted that compliance would “eliminate the necessity of our recommending prosecution to the Attorney General’s department.” Id. at 62 n.5. Per the Commission’s request, police officers followed up to make sure the books were removed. Id. at 68. The Court concluded that this “system of informal censorship,” which was “clearly [meant] to intimidate” the recipients through “threat of [] legal sanctions and other means of coercion” rendered the distributors’ decision to remove the books a state action. Id. at 64, 67, 71–72. Given Bantam Books, not-so subtle asks accompanied by a “system” of pressure (e.g., threats and followups) are clearly coercive.
But, the panel notes, that level of coercion is not always present, but it doesn’t mean that other actions aren’t more subtly coercive. Since the 5th Circuit doesn’t currently have a test for figuring out if speech is coercive, it adopts the same tests that were recently used in the 2nd Circuit with the NRA v. Vullo case, where the NRA went after a NY state official who encouraged insurance companies to reconsider issuing NRA-endorsed insurance policies. The 2nd Circuit ran through a test and found that this urging was an attempt at persuasion and not coercive. The 5th Circuit also cites the 9th Circuit, which even more recently tossed out a case claiming that Elizabeth Warren’s comments to Amazon regarding an anti-vaxxer’s book were coercive, ruling they were merely an attempt to persuade. Both cases take a pretty thoughtful approach to determining where the line is, so it’s good to see the 5th Circuit adopt a similar test.
For coercion, we ask if the government compelled the decision by, through threats or otherwise, intimating that some form of punishment will follow a failure to comply. Vullo, 49 F.4th at 715. Sometimes, that is obvious from the facts. See, e.g., Bantam Books, 372 U.S. at 62–63 (a mafiosi-style threat of referral to the Attorney General accompanied with persistent pressure and follow-ups). But, more often, it is not. So, to help distinguish permissible persuasion from impermissible coercion, we turn to the Second (and Ninth) Circuit’s four-factor test. Again, honing in on whether the government “intimat[ed] that some form of punishment” will follow a “failure to accede,” we parse the speaker’s messages to assess the (1) word choice and tone, including the overall “tenor” of the parties’ relationship; (2) the recipient’s perception; (3) the presence of authority, which includes whether it is reasonable to fear retaliation; and (4) whether the speaker refers to adverse consequences. Vullo, 49 F.4th at 715; see also Warren, 66 F.4th at 1207.
So, the 5th Circuit adopts a strong test for when a government employee oversteps the line, and then looks to apply it. I’m a little surprised that the court then finds that some defendants probably did cross that line, mainly the White House and the Surgeon General’s office. I’m not completely surprised by this, as it did appear that both had certainly walked way too close to the line, and we had called out the White House for stupidly doing so. But… if that’s the case, the 5th Circuit should really show how they crossed it, and it does not do a very good job. It admits that the White House and the Surgeon General are free to talk to platforms about misinformation and even to advocate for positions:
Generally speaking, officials from the White House and the Surgeon General’s office had extensive, organized communications with platforms. They met regularly, traded information and reports, and worked together on a wide range of efforts. That working relationship was, at times, sweeping. Still, those facts alone likely are not problematic from a First-Amendment perspective.
So where does it go over the line? When the White House threatened to hit the companies with Section 230 reform if they didn’t clean up their sites! The ruling notes that even pressuring companies to remove content in strong language might not cross the line. But threatening regulatory reforms could:
That alone may be enough for us to find coercion. Like in Bantam Books, the officials here set about to force the platforms to remove metaphorical books from their shelves. It is uncontested that, between the White House and the Surgeon General’s office, government officials asked the platforms to remove undesirable posts and users from their platforms, sent follow-up messages of condemnation when they did not, and publicly called on the platforms to act. When the officials’ demands were not met, the platforms received promises of legal regime changes, enforcement actions, and other unspoken threats. That was likely coercive
Still… here the ruling is kinda weak. The panel notes that even with what’s said above the “officials’ demeanor” matters, and that includes their “tone.” To show that the tone was “threatening,” the panel… again quotes Flaherty’s demand for answers “immediately,” repeating Doughty’s false idea that that comment was about content moderation. It was not. The court does cite to some other “tone” issues, but again provides no context for them, and I’m not going to track down every single one.
Next, the court says we can tell that the White House’s statements were coercive because: “When officials asked for content to be removed, the platforms took it down.” Except, as we’ve reported before, that’s just not true. The transparency reports from the companies show how they regularly ignored requests from the government. And the EIP reporting system that was at the center of the lawsuit, and which many have insisted was the smoking gun, showed that the tech companies “took action” on only 35% of items. And even that number is too high, because TikTok was the most aggressive company covered, and they took action on 64% of reported URLs, meaning Facebook, Twitter, etc., took action on way less than 35%. And even that exaggerates the amount of influence because “take action” did not just mean “take down.” Indeed, the report said that only 13% of reported content was “removed.”
So, um, how does the 5th Circuit claim that “when officials asked for content to be removed, the platforms took it down”? The data simply doesn’t support that claim, unless they’re talking about some other set of requests.
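The weighted-average arithmetic behind that last point can be sketched quickly. This is purely a hypothetical illustration: the EIP report gives the 64% TikTok figure and the 35% overall figure, but the split of reported URLs between TikTok and the other platforms below is invented for the sake of the example.

```python
# Hypothetical illustration: if the overall "action" rate across all
# platforms was 35% and TikTok's rate was 64%, the remaining platforms'
# combined rate must have been well below 35%. The report counts used
# here are invented; only the two percentages come from the article.

def non_tiktok_rate(total_reports, tiktok_reports,
                    overall_rate=0.35, tiktok_rate=0.64):
    """Back out the implied action rate for the non-TikTok platforms."""
    other_reports = total_reports - tiktok_reports
    other_actions = overall_rate * total_reports - tiktok_rate * tiktok_reports
    return other_actions / other_reports

# If, say, TikTok accounted for half of 1,000 reported URLs:
rate = non_tiktok_rate(1000, 500)
print(f"{rate:.0%}")  # prints 6%, far below the 35% overall figure
```

The upshot is that the higher TikTok’s share of the reports, the lower the implied action rate for everyone else, which is exactly why “way less than 35%” follows from those two numbers.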
One area where the court does make some good points is calling out — as we ourselves did — just how stupid it was for Joe Biden to claim that the websites were “killing people.” Of course, the court leaves out that three days later, Biden himself admitted that his original words were too strong, and that “Facebook isn’t killing people.” Somehow, only the first quote (which was admittedly stupid and wrong) makes it into the 5th Circuit opinion:
Here, the officials made express threats and, at the very least, leaned into the inherent authority of the President’s office. The officials made inflammatory accusations, such as saying that the platforms were “poison[ing]” the public, and “killing people.”
So… I’m a bit torn here. I wasn’t happy with the White House making these statements and said so at the time. But they didn’t strike me as anywhere near going over the coercive line. This court sees it differently, but seems to take a lot of commentary out of context to do so.
The concern about the FBI is similar. The court seems to read things totally out of context:
Fourth, the platforms clearly perceived the FBI’s messages as threats. For example, right before the 2022 congressional election, the FBI warned the platforms of “hack and dump” operations from “state-sponsored actors” that would spread misinformation through their sites. In doing so, the FBI officials leaned into their inherent authority. So, the platforms reacted as expected—by taking down content, including posts and accounts that originated from the United States, in direct compliance with the request.
But… that is not how anyone has described those discussions. I’ve seen multiple transcripts and interviews of people at the platforms who were in the meetings where “hack and dump” were discussed, and the tenor was more “be aware of this, as it may come from a foreign effort to spread disinfo about the election,” coming with no threat or coercion — just simply “be on the lookout” for this. It’s classic information sharing.
And the platforms had reason to be on the lookout for such things anyway. If the FBI came to Twitter and said “we’ve learned of a zero day hack that can allow hackers into your back end,” and Twitter responded by properly locking down their systems… would that be Twitter “perceiving the messages as threats,” or Twitter taking useful information from the FBI and acting accordingly? Everything I’ve seen suggests the latter.
Even stranger is the claim that the CDC was coercive. The CDC has literally zero power over the platforms. It has no regulatory power over them and no law enforcement power. So I can’t see how it was coercive at all. Here, the 5th Circuit just kinda wings it. After admitting that the CDC lacked any sort of power over the sites, it basically says “but the sites relied on info from the CDC, so it must have been coercive.”
Specifically, CDC officials directly impacted the platforms’ moderation policies. For example, in meetings with the CDC, the platforms actively sought to “get into [] policy stuff” and run their moderation policies by the CDC to determine whether the platforms’ standards were “in the right place.” Ultimately, the platforms came to heavily rely on the CDC. They adopted rule changes meant to implement the CDC’s guidance. As one platform said, they “were able to make [changes to the ‘misinfo policies’] based on the conversation [they] had last week with the CDC,” and they “immediately updated [their] policies globally” following another meeting. And, those adoptions led the platforms to make moderation decisions based entirely on the CDC’s say-so—“[t]here are several claims that we will be able to remove as soon as the CDC debunks them; until then, we are unable to remove them.” That dependence, at times, was total. For example, one platform asked the CDC how it should approach certain content and even asked the CDC to double check and proofread its proposed labels.
So… one interpretation of that is that the CDC was controlling site moderation practices. But another, more charitable (and frankly, from conversations I’ve had, way more accurate) interpretation was that we were in the middle of a fucking pandemic where there was no good info, and many websites decided (correctly) that they didn’t have epidemiologists on staff, and therefore it made sense to ask the experts what information was legit and what was not, based on what they knew at the time.
Note that in the paragraph above, the one that the 5th Circuit uses to claim that the platforms’ policies were controlled by the CDC, it admits that the sites were reaching out to the CDC themselves, asking for info. That… doesn’t sound coercive. That sounds like trust & safety teams recognizing that they’re not the experts in a very serious and rapidly changing crisis… and asking the experts.
Now, there were perhaps reasons that websites should have been less willing to just go with the CDC’s recommendations, but would you rather ask expert epidemiologists, or the team that was most recently trying to stop spam on your platform? It seems kinda logical to ask the CDC, and to wait until it confirmed that something was false before taking action. But alas.
Still, even with those three parts of the administration being deemed as crossing the line, most of the rest of the opinion is good. Despite all of the nonsense conspiracy theories about CISA, which were at the center of the case according to many, the 5th Circuit finds no evidence of any coercion there, and releases them from any of the restrictions.
Finally, although CISA flagged content for social-media platforms as part of its switchboarding operations, based on this record, its conduct falls on the “attempts to convince,” not “attempts to coerce,” side of the line. See Okwedy, 333 F.3d at 344; O’Handley, 62 F.4th at 1158. There is not sufficient evidence that CISA made threats of adverse consequences— explicit or implicit—to the platforms for refusing to act on the content it flagged. See Warren, 66 F.4th at 1208–11 (finding that senator’s communication was a “request rather than a command” where it did not “suggest[] that compliance was the only realistic option” or reference potential “adverse consequences”). Nor is there any indication CISA had power over the platforms in any capacity, or that their requests were threatening in tone or manner. Similarly, on this record, their requests— although certainly amounting to a non-trivial level of involvement—do not equate to meaningful control. There is no plain evidence that content was actually moderated per CISA’s requests or that any such moderation was done subject to non-independent standards.
Ditto for Fauci’s NIAID and the State Department (both of which were part of nonsense conspiracy theories). The Court says they didn’t cross the line either.
So I think the test the 5th Circuit used is correct (and matches other circuits). I find its application of the test to the White House kinda questionable, but it actually doesn’t bother me that much. With the FBI, the justification seems really weak, but frankly, the FBI should not be involved in any content moderation issues anyway, so… not a huge deal. The CDC part is the only part that seems super ridiculous as opposed to just borderline.
But saying CISA, NIAID and the State Department didn’t cross the line is good to see.
And then, even for the parts the court said did cross the line, the 5th Circuit waters the injunction down so dramatically from the massive, overbroad list of 10 “prohibited activities” that… I don’t mind it. The court immediately kicks out 9 of the 10 prohibited activities:
The preliminary injunction here is both vague and broader than necessary to remedy the Plaintiffs’ injuries, as shown at this preliminary juncture. As an initial matter, it is axiomatic that an injunction is overbroad if it enjoins a defendant from engaging in legal conduct. Nine of the preliminary injunction’s ten prohibitions risk doing just that. Moreover, many of the provisions are duplicative of each other and thus unnecessary.
Prohibitions one, two, three, four, five, and seven prohibit the officials from engaging in, essentially, any action “for the purpose of urging, encouraging, pressuring, or inducing” content moderation. But “urging, encouraging, pressuring” or even “inducing” action does not violate the Constitution unless and until such conduct crosses the line into coercion or significant encouragement. Compare Walker, 576 U.S. at 208 (“[A]s a general matter, when the government speaks it is entitled to promote a program, to espouse a policy, or to take a position.”), Finley, 524 U.S. at 598 (Scalia, J., concurring in judgment) (“It is the very business of government to favor and disfavor points of view . . . .”), and Vullo, 49 F.4th at 717 (holding statements “encouraging” companies to evaluate risk of doing business with the plaintiff did not violate the Constitution where the statements did not “intimate that some form of punishment or adverse regulatory action would follow the failure to accede to the request”), with Blum, 457 U.S. at 1004, and O’Handley, 62 F.4th at 1158 (“In deciding whether the government may urge a private party to remove (or refrain from engaging in) protected speech, we have drawn a sharp distinction between attempts to convince and attempts to coerce.”). These provisions also tend to overlap with each other, barring various actions that may cross the line into coercion. There is no need to try to spell out every activity that the government could possibly engage in that may run afoul of the Plaintiffs’ First Amendment rights as long as the unlawful conduct is prohibited.
The eighth, ninth, and tenth provisions likewise may be unnecessary to ensure Plaintiffs’ relief. A government actor generally does not violate the First Amendment by simply “following up with social-media companies” about content-moderation, “requesting content reports from social-media companies” concerning their content-moderation, or asking social media companies to “Be on The Lookout” for certain posts.23 Plaintiffs have not carried their burden to show that these activities must be enjoined to afford Plaintiffs full relief.
The 5th Circuit, thankfully, delivers an extra special smackdown to Judge Doughty’s ridiculous prohibition on any officials collaborating with the researchers at Stanford and the University of Washington who study disinformation, noting that this prohibition itself likely violates the 1st Amendment:
Finally, the fifth prohibition—which bars the officials from “collaborating, coordinating, partnering, switchboarding, and/or jointly working with the Election Integrity Partnership, the Virality Project, the Stanford Internet Observatory, or any like project or group” to engage in the same activities the officials are proscribed from doing on their own— may implicate private, third-party actors that are not parties in this case and that may be entitled to their own First Amendment protections. Because the provision fails to identify the specific parties that are subject to the prohibitions, see Scott, 826 F.3d at 209, 213, and “exceeds the scope of the parties’ presentation,” OCA-Greater Houston v. Texas, 867 F.3d 604, 616 (5th Cir. 2017), Plaintiffs have not shown that the inclusion of these third parties is necessary to remedy their injury. So, this provision cannot stand at this juncture
That leaves just a single prohibition: prohibition six, which barred “threatening, pressuring, or coercing social-media companies in any manner to remove, delete, suppress, or reduce posted content of postings containing protected free speech.” But the court rightly notes that even that one remaining prohibition clearly goes too far and would suppress protected speech, and thus cuts it back even further:
That leaves provision six, which bars the officials from “threatening, pressuring, or coercing social-media companies in any manner to remove, delete, suppress, or reduce posted content of postings containing protected free speech.” But, those terms could also capture otherwise legal speech. So, the injunction’s language must be further tailored to exclusively target illegal conduct and provide the officials with additional guidance or instruction on what behavior is prohibited.
So, the 5th Circuit changes that one prohibition to be significantly limited. The new version reads:
Defendants, and their employees and agents, shall take no actions, formal or informal, directly or indirectly, to coerce or significantly encourage social-media companies to remove, delete, suppress, or reduce, including through altering their algorithms, posted social-media content containing protected free speech. That includes, but is not limited to, compelling the platforms to act, such as by intimating that some form of punishment will follow a failure to comply with any request, or supervising, directing, or otherwise meaningfully controlling the social-media companies’ decision-making processes.
And that’s… good? I mean, it’s really good. It’s basically restating exactly what all the courts have been saying all along: the government can’t coerce companies regarding their content moderation practices.
The court also makes it clear that CISA, NIAID, and the State Department are excluded from this injunction, though I’d argue that the 1st Amendment already precludes the behavior in that injunction anyway, so they already can’t do those things (and there remains no evidence that they did).
So to summarize all of this, I’d argue that the 5th Circuit got this mostly right, and corrected most of the long list of terrible things that Judge Doughty put in his original opinion and injunction. The only aspect that’s a little wonky is that it feels like the 5th Circuit applied the test for coercion in a weird way with regards to the White House, the FBI, and the CDC, often by taking things dramatically out of context.
But the “harm” of that somewhat wonky application of the test is basically non-existent, because the court also wiped out all of the problematic prohibitions in the original injunction, leaving only one, which it then modified to basically restate the crux of the 1st Amendment: the government should not coerce companies in their moderation practices. Which is something that I agree with, and which hopefully will teach the Biden administration to stop inching up towards the line of threats and coercion.
That said, this also seems to wholly contradict the very same 5th Circuit’s decision in the NetChoice v. Paxton case, but that’s the subject of my next post. As for this case, I guess it’s possible that either side could seek Supreme Court review. It would be stupid for the DOJ to do so, as this ruling gives them almost everything they really wanted, and the probability that the current Supreme Court could fuck this all up seems… decently high. That said, the plaintiffs might want to ask the Supreme Court to review for just this reason (though, of course, that only reinforces the idea that the headlines that claimed this ruling was a “loss” for the Biden admin are incredibly misleading).
Earlier this year the White House put out a document articulating a National Cybersecurity Strategy. It articulates five “pillars,” or high-level focus areas where the government should concentrate its efforts to strengthen the nation’s resilience and defense against cyberattacks: (1) Defend Critical Infrastructure, (2) Disrupt and Dismantle Threat Actors, (3) Shape Market Forces to Drive Security and Resilience, (4) Invest in a Resilient Future, and (5) Forge International Partnerships to Pursue Shared Goals. Each pillar also includes several sub-priorities and objectives.
It is a seminal document, and one that has spawned, and will continue to spawn, much discussion. For the most part what it calls for is too high level to be particularly controversial. It may even be too high level to be all that useful, although there can be value in distilling any sort of policy priorities into words. After all, even if what the government calls for may seem obvious (like “defending critical infrastructure,” which of course we’d all expect it to do), going to the trouble to actually articulate it as a policy priority provides a roadmap for more constructive efforts to follow and may help to marshal resources. It can also help ensure that any more tangible policy efforts the government is inclined to directly engage in are not at cross-purposes with what the government wants to accomplish overall.
Which is important because what the rest of this post discusses is how the strategy document itself reveals that there may already be some incoherence among the government’s policy priorities. In particular, it lists as one of the sub-priorities an objective with troubling implications: imposing liability on software developers. This priority is described in a few paragraphs in the section entitled, “Strategic Objective 3.3: Shift Liability for Insecure Software Products and Services,” but the essence is mostly captured in this one:
The Administration will work with Congress and the private sector to develop legislation establishing liability for software products and services. Any such legislation should prevent manufacturers and software publishers with market power from fully disclaiming liability by contract, and establish higher standards of care for software in specific high-risk scenarios. To begin to shape standards of care for secure software development, the Administration will drive the development of an adaptable safe harbor framework to shield from liability companies that securely develop and maintain their software products and services. This safe harbor will draw from current best practices for secure software development, such as the NIST Secure Software Development Framework. It also must evolve over time, incorporating new tools for secure software development, software transparency, and vulnerability discovery.
Despite some equivocating language, at its essence it is no small thing that the White House proposes: legislation instructing people on how to code their software and requiring adherence to those instructions. Such a proposal raises a number of concerns, both in the method the government would use to prescribe how software is coded and in the dubious constitutionality of its being able to make such demands at all. While with this strategy document itself the government is not yet prescribing a specific way to code software, it contemplates that the government someday could. And it does so apparently without recognizing how consequential it would be for the government to have the power to make such demands – and not necessarily for the better.
In terms of method, while the government isn’t necessarily suggesting that a regulator enforce requirements for software code, what it does propose is far from a light touch: allowing enforcement of coding requirements via liability – or, in other words, the ability of people to sue if software turns out to be vulnerable. But regulation via liability is still profoundly heavy-handed, perhaps even more so than regulator oversight would be. For instance, instead of a single regulator working from discrete criteria there will be myriad plaintiffs and courts interpreting the language however they understand it. Furthermore, litigation is notoriously expensive, even for a single case, let alone with potentially all those same myriad plaintiffs. We have seen all too many innovative companies be obliterated by litigation, as well as seen how the mere threat of litigation can chill the investment needed to bring new good ideas into reality. This proposal seems to reflect a naïve expectation that litigation will only follow where truly deserved, but we know from history that such restraint is rarely the rule.
True, the government does contemplate some tuning to dull the edge of the regulatory knife, particularly through the use of safe harbors, such that there are defenses that could protect software developers from being drained dry by unmeritorious litigation threats. But while the concept of a safe harbor may be a nice idea, safe harbors are hardly a panacea: if you have to litigate whether one applies, much of its protective value is lost even when it ultimately does. In addition, even if it were possible to craft an adequately durable safe harbor, given the current appetite among policymakers to tear down the immunities and safe harbors we currently have, like Section 230 or the already porous DMCA, the assumption that policymakers will actually produce a sustainable liability regime with sufficiently strong defenses, one not prone to innovation-killing abuse, is yet another unfortunately naïve expectation.
The way liability would attach under this proposal is also a big deal: through the creation of a duty of care for the software developer. (The cited paragraph refers to it as “standards of care,” but that phrasing implies a duty to adhere to them, and liability for when those standards are deviated from.) But concocting such a duty is problematic both practically and constitutionally, because at its core, what the government is threatening here is alarming: mandating how software is written. Not suggesting how software should ideally be written, nor enabling, encouraging, nor facilitating it to be written well, but instead using the force of law to demand how software be written.
It is so alarming because software is written, and it raises a significant First Amendment problem for the government to dictate how anything should be expressed, regardless of how correct or well-intentioned the government may be. Like a book or newspaper, software is something expressed through language and expressive choices; there is not just one correct way to write a program that does something, but rather an infinite number of big and little structural and language decisions made along the way. But this proposal basically ignores the creative aspect of software development (indeed, software is even treated as eligible for copyright protection as an original work of authorship). Instead it treats software more like a defectively-made toaster than a book or newspaper, replacing the independent expressive judgment of the software developer with the government’s. Courts have also recognized the expressive quality of software, so it would be quite a sea change if the Constitution somehow did not apply to this particular form of expression. And such a change would have huge implications, because cybersecurity is not the only reason that the government keeps proposing to regulate software design. The White House proposal would seem to bless all these attempts, no matter how ill-advised or facially censorial, by not even contemplating the constitutional hurdles any legal regime regulating software design would need to clear.
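The point about expressive latitude is easy to see in even a trivial example. Both functions below (the names and the task are invented purely for illustration) compute the same result through entirely different structural choices; neither is the one "correct" way to write that program:

```python
# Two behaviorally identical programs, authored with very different
# structural and stylistic choices. Neither is uniquely "correct."
def sum_even_squares_imperative(numbers):
    """Explicit loop, mutation, and branching."""
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n * n
    return total

def sum_even_squares_functional(numbers):
    """A single declarative expression."""
    return sum(n * n for n in numbers if n % 2 == 0)

values = [1, 2, 3, 4]
assert sum_even_squares_imperative(values) == 20  # 2*2 + 4*4
assert sum_even_squares_functional(values) == 20
```

A liability regime keyed to how software is written would have to grapple with the fact that both versions above are equally valid expressions of the same idea.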
It would still need to clear them even if the government truly knew best, which is a big if, even here, and not just because the government may lack adequate or sufficiently current expertise. The proposal does contemplate a multi-stakeholder process to develop best practices, and there is nothing wrong in general with the government taking on some sort of facilitating role to help illuminate what these practices are and to make sure software developers are aware of them – it may even be a good idea. The issue is not that there is no such thing as best practices for software development – obviously there are. But they are not necessarily one-size-fits-all or static; a best practice may depend on context, and may constantly need to evolve to address new vectors of attack. A distant regulator, inherently in a reactive posture, may not understand the particular needs of a particular software program’s userbase, nor the evolving challenges facing the developer. Which is a big reason why requiring adherence to any particular practice through the force of law is problematic: it can effectively require software developers to make their code the government’s way rather than the way that is ultimately best for them and their users. Or at least put them in the position of having to defend choices that, up until now, the Constitution had let them make freely – a huge, unprecedented burden that threatens to chill software development altogether.
Such chilling is not an outcome the government should want to invite, and indeed, according to the strategy document itself, does not want. The irony with the software liability proposal is that it is inherently out-of-step with the overall thrust of the rest of the document, and even with the third pillar it appears in, which proposes to foster better cybersecurity through the operation of more efficient markets. But imposing design liability would have the exact opposite effect on those markets. Even if well-resourced private entities (e.g., large companies) might be able to find a way to persevere and navigate the regulatory requirements, small ones (including those potentially excluded from the stakeholder process establishing the requirements) may not be able to meet them, and individual people coding software are even less likely to. The strategy document refers to liability only on developers with market power, but not every software developer has market power, including those individuals who voluntarily contribute to open source software projects, which provide software users with more choices. But those continued contributions will be deterred if those who make them can be liable for them. Ultimately software liability will result in fewer people writing code and consequently less software for the public to use. So far from making the software market work more efficiently through competitive pressure, imposing liability for software development will only remove options for consumers, and with it the competitive pressure the White House acknowledges is needed to prompt those who still produce software to do better. Meanwhile, those developers who remain will still be inhibited from innovating if that innovation can potentially put them out of compliance with whatever the law has so far managed to imagine.
Which raises another concern with the software liability proposal and how it undermines the rest of the otherwise reasonable strategy document. The fifth pillar the White House proposes is to “Forge International Partnerships to Pursue Shared Goals”:
The United States seeks a world where responsible state behavior in cyberspace is expected and rewarded and where irresponsible behavior is isolating and costly. To achieve this goal, we will continue to engage with countries working in opposition to our larger agenda on common problems while we build a broad coalition of nations working to maintain an open, free, global, interoperable, reliable, and secure Internet.
On its face, there is nothing wrong with this goal either, and it, too, may be a necessary one to effectively deal with what are generally global cybersecurity threats. But the EU is already moving ahead to empower bureaucratic agencies to decide how software should be written, yet without a First Amendment or equivalent understanding of the expressive interests such regulation might impact. Nor does there seem to be any meaningful understanding about how any such regulation will affect the entire software ecosystem, including open source, where authorship emerges from a community, rather than a private entity theoretically capable of accountability and compliance.
In fact, while the United States hasn’t yet actually specified design practices a software developer must comply with, the EU is already barreling down the path of prescriptive regulation over software, proposing a law that would task an agency with dictating the criteria software must meet. (See this post by Bert Hubert for a helpful summary of its draft terms.) Like the White House, the EU confuses its stated goal of helping the software market work more efficiently with an attempt to control what can be in the market. For all the reasons that such an attempt by the US stands to be counterproductive, so do the EU’s efforts. It may well turn out to be European bureaucrats who attempt to dictate the rules of the road for how software can be coded, but that makes it America’s job to try to prevent that damage, not double down on it.
It is of course true that not everything software developers currently do is a good idea or even defensible. Some practices are dreadful and damaging. It isn’t wrong to be concerned about the collateral effects of ill-considered or sloppy coding practices, or for the government to want to do something about them. But how regulators respond to these poor practices is just as important as, if not more important than, the fact that they respond, if they are going to make our digital environment better and more secure and not worse and less so. There are a lot of good ideas in the strategy document for how to achieve this end, but imposing software design liability is not one of them.
We’ve noted repeatedly how the massive freak out over TikTok is kind of dumb and myopic, with folks singularly fixated on TikTok, but not on the lax global adtech and data broker ecosystem we built that helped create it in the first place.
We’ve also noted that most of the U.S. policy solutions for the supposed threat posed by TikTok (that it will be used by the Chinese government to spy on and brainwash your children) have been equally stupid. Like that time Trump pretended to care about privacy then tried to offload the entire company to his buddies at Walmart and Oracle for the safety of America’s toddlers or what have you.
Enter the Biden administration, which is purportedly working on a deal with TikTok and ByteDance that would let the company keep operating in the United States, but would implement some guard rails in terms of the company’s data security and governance. But it sounds like the deal isn’t going that well:
The two sides are still wrangling over the potential agreement. The Justice Department is leading the negotiations with TikTok, and its No. 2 official, Lisa Monaco, has concerns that the terms are not tough enough on China, two people with knowledge of the matter said. The Treasury Department, which plays a key role in approving deals involving national security risks, is also skeptical that the potential agreement with TikTok can sufficiently resolve national security issues, two people with knowledge of the matter said. That could force changes to the terms and drag out a final resolution for months.
The White House appears to be avoiding a TikTok ban for now. But they do seem to be continuing a key “solution” for TikTok begun in the Trump administration. And that is, basically tethering much of the app to Oracle, a U.S. company with a long history of privacy violations, cozying up to China, super dodgy legal and lobbying practices, and a CEO who may or may not believe in this whole democracy thing:
First, TikTok would store its American data solely on servers in the United States, probably run by Oracle, instead of on its own servers in Singapore and Virginia, two of the people said. Second, Oracle is expected to monitor TikTok’s powerful algorithms that determine the content that the app recommends, in response to concerns that the Chinese government could use its feed as a way to influence the American public, they said.
Tethering TikTok to a dodgy company like Oracle isn’t actually much of a solution, but it allows folks to feel like they’re doing something. Still, actual policy solutions to TikTok are likely going to prove hard to come by. In large part because the TikTok policy conversation is predominantly being driven by bad faith operators who don’t actually care about the real underlying issue: consumer privacy.
FCC Commissioner Brendan Carr, an avid leader in the “ban TikTok” movement, has never shown the slightest interest in consumer privacy at his day job at the FCC, but has found the subject a great way to gain political brownie points among the China-phobic. Then there’s Facebook, which has already been caught several times spreading moral panic stories about TikTok in the press.
There’s no shortage of Silicon Valley executives who can’t engineer a better alternative and simply don’t want to compete with a popular Chinese app. Then there’s no shortage of DC politicians who are simply racist, but like to hide that racism under the veneer of national security.
That’s all to say that while there are very valid concerns about TikTok and the data it collects, most of the folks most vocally heading to the fainting couch don’t actually care about the supposed underlying issue: consumer privacy. Countless folks just hear the word “China” and their brains simply go into a bizarre autopilot mode. That’s not a great place to start from when crafting policy.
Again, we created a massive adtech and data broker ecosystem in which consumer privacy has long taken a backseat to making money. So even if you ban TikTok tomorrow, China (or any other government or company) can still access much of the same U.S. consumer location, browsing, facial recognition, and behavior data from an absolute ocean of dodgy middlemen who see very little in the way of meaningful accountability. Many of the same folks complaining about TikTok proudly built that environment.
In that sense, the fixation on TikTok is a giant distraction from our real failure: consumer privacy and consumer protection. But guys like Mark Zuckerberg or Mathias Döpfner don’t want to actually have that conversation, as the end result might be new US privacy rules and laws that would trim a few zeroes off of their total net worth.
Guys like Brendan Carr don’t really want to have that conversation either, as it would advertise that the policies they support (like say stripping away broadband privacy rules at the FCC, or fighting every effort at a national privacy law if it upsets AT&T) routinely created the environment that allowed companies to abuse consumer trust and privacy with relative impunity for decades.
With bad faith actors leading the charge I’m not sure any of this ends well.
It’s pretty clear most of the loudest TikTok critics are perfectly ok with abusing platforms to spread propaganda, or over-collecting, abusing, and failing to secure consumer data, but only if we’re the ones doing it. But you can’t separate the two things; creating a zero accountability privacy-hoovering data broker hellscape created the problems with TikTok.
Just banning a single app doesn’t fix the actual problem. But we don’t want to fix the actual problem (our lax consumer protection and privacy standards) because some wealthy men in the U.S. might lose money. So instead we’re getting a series of face-fanning performances that will, ultimately, probably accomplish nothing.
So, once again, as we said with the previous disinformation board, if the goal is really to better understand the flow of information online, and how to counter harassment and abuse without running afoul of the 1st Amendment, that could be interesting. Harassment and abuse are a real issue on the internet. And there are many lessons to be learned, including some really unique and creative approaches to dealing with the challenges related to such speech. Unfortunately, there are already many reasons to be concerned about this new task force — mainly that many of the participants come from the world that believes in questionable approaches to dealing with it — such as removing Section 230 and making companies somehow “liable” for speech, even when it’s legal.
That’s not an approach that is (1) constitutional or (2) workable. We’ve seen such systems get regularly abused to silence perfectly legitimate speech. And, of course, part of this is because while abuse and harassment are very real, there is no clear definition of what constitutes abusive speech. Hell, one of the “experts” at last week’s panel once openly harassed a supporter of Section 230 for merely reporting, neutrally, on a Supreme Court decision, suggesting that people should set up fake profiles on sites and send people to rape the supporter.
It is difficult to take the White House seriously in trying to “stop” online harassment when it would platform a harasser like that.
There are also other concerns about the task force. An unnamed White House official brushed off free speech concerns that were raised by a Washington Post reporter:
“We are very mindful of the First Amendment issues,” said the official, who spoke on the condition of anonymity to candidly discuss the White House’s plans. “But banning threatening speech is not protected by the First Amendment. So while we are going to carefully navigate those issues, we are also going to remain laser-focused on the non-speech aspects.”
There’s some awkward wording here. Even though this official says that “banning threatening speech is not protected” it sounds like they mean “threatening speech is not protected by the 1st Amendment, and therefore okay to ban.” But… that’s just fundamentally wrong in nearly all cases. There is a very, very, very narrow sliver of threatening speech — that focused on inciting imminent lawless action — that is not protected, but almost none of the actual abuse and harassment that occurs online goes anywhere near that level.
There are some good things a task force like this could obviously do — some of which appears to be part of its mission. Things like the following seem great:
increasing access to survivor-centered services, information, and support for victims, and increasing training and technical assistance for Federal, State, local, Tribal, and territorial governments as well as for global organizations and entities in the fields of criminal justice, health and mental health services, education, and victim services;
That seems like a useful thing. However, where it gets scary is when it starts dipping into “examining existing Federal laws, regulations, and policies.”
And, look, at this very moment, there’s a half decent chance that in 30 months we’ll have a President DeSantis in office. And let’s remember that, in Florida, DeSantis has put in place programs to effectively block teachers from teaching about race or gender issues out of fear that they could get sued. He’s also directly punished companies like Walt Disney, falsely claiming that it’s a “woke” corporation. How do you think a President DeSantis will make use of a task force that suggests new laws and regulations to stop “harassment” and “abuse?”
This is not difficult to play out, but for whatever reason, supporters of these kinds of things seem to think that their friends will always be in power. That’s not how it works.
Again, there could be something useful in bringing together experts in harassment, along with various organizations that have experimented with ways of countering harassment, not through legal enforcement, but with design choices and tools that minimize such things. Invite company CEOs like Blockparty’s Tracy Chou who has thought deeply about how to use technology to fight harassment.
Instead, we always end up with the same people, who seem to think that the law is the only way to fight harassment, even as it’s protected by the 1st Amendment.
And, of course, just as with the Disinformation Governance Board, Republicans are already going after this effort, once again claiming that this is just a “ministry of truth” designed to target conservative speech. Even if that’s not true, just the fact that they believe it is gives them even more justification the next time they’re in power to use the very same tools and setup to actually stifle speech they dislike.
You would think that after four years of a Trump administration abusing the levers of power that once the Democrats regained power they would, maybe, put in place more safeguards, rather than putting more weapons in place for Republicans to use next time they’re in power. But, apparently, they can’t think even that far ahead, and that should disqualify them from being taken seriously.
The Biden-Harris Administration has warned repeatedly about the potential for Russia to engage in malicious cyber activity against the United States in response to the unprecedented economic sanctions we have imposed. There is now evolving intelligence that Russia may be exploring options for potential cyberattacks.
The announcement lists a variety of ways in which companies should defend themselves against such cyberattacks including things like making use of multi-factor authentication and backing up your data. But then there’s this very wise suggestion:
Encrypt your data so it cannot be used if it is stolen;
And, this is a good idea, and it’s great that the White House is urging others to follow it. However, it does seem worth noting that this is happening at the exact same time that Congress is still considering the EARN IT Act, which is a clear attack on encryption. And while supporters of the bill like to pretend that the EARN IT Act is not attacking encryption, the bill’s main sponsor, Senator Richard Blumenthal directly admitted to a Washington Post reporter that of course the point of the bill was to attack encryption and to make sure companies couldn’t “hide” behind it.
All this does is highlight one of the many ways in which the EARN IT Act is so dangerous and so problematic. At a time when encrypting our data is more important than ever, as even the White House acknowledges, the idea that Congress is moving forward with plans that will deliberately weaken the ability of companies to offer encrypted services seems not just preposterously short-sighted, but downright dangerous.
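For what it's worth, the property the White House is recommending is straightforward to illustrate. The sketch below is a deliberately toy construction (a keystream built from SHA-256 and XORed with the data) meant only to show the concept that stolen ciphertext is useless without the key; it is emphatically not production cryptography, and real systems should use vetted libraries and standard authenticated ciphers instead:

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudorandom keystream by hashing key || nonce || counter.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Prepend a random nonce so identical plaintexts yield different ciphertexts.
    nonce = secrets.token_bytes(16)
    ks = keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, ks))

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ciphertext = blob[:16], blob[16:]
    ks = keystream(key, nonce, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, ks))

key = secrets.token_bytes(32)
message = b"customer records backup"
blob = encrypt(key, message)
assert blob[16:] != message           # stolen data is unreadable...
assert decrypt(key, blob) == message  # ...but the key holder recovers it
```

The policy irony is that this is exactly the capability the EARN IT Act would pressure companies to weaken.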
Back in July, the Biden administration signed an executive order creating a new “competition council” tasked with taking a closer look at competition and monopoly issues in various business sectors. One of those sectors was telecom, which remains dominated by a handful of politically powerful regional monopolies, resulting in decades of spotty broadband service, high prices, and terrible customer service.
At the time, the council offered several bits of advice as to how this could be fixed, including forcing ISPs to provide clearer pricing data to the government (allowing policymakers to clearly illustrate the harms of regional monopolies), forcing ISPs to be more transparent with consumers about sneaky fees and pricing, and restoring the FCC’s consumer protection authority stripped away during the Trump-era net neutrality repeal:
“(i) adopting through appropriate rulemaking ‘Net Neutrality’ rules similar to those previously adopted under title II of the Communications Act of 1934 (Public Law 73-416, 48 Stat. 1064, 47 U.S.C. 151 et seq.), as amended by the Telecommunications Act of 1996, in “Protecting and Promoting the Open Internet,” 80 Fed. Reg. 19738 (Apr. 13, 2015);”
Last week the council held its inaugural meeting, including eight cabinet members and the leaders of seven independent agencies, including the FCC and acting chair Jessica Rosenworcel. As it was designed to do, the meeting focused on ways the administration can lower prices, shore up competition, and break down monopolistic logjams across business sectors:
“In the Council’s inaugural meeting, NEC Director Brian Deese (Council Chair) emphasized that the President’s competition agenda is core to the Administration’s plan to Build Back Better and critical to keeping prices low for American consumers, spurring innovation, and allowing small businesses to compete on a level playing field.”
Without a permanent boss and 3-2 voting majority, the FCC can’t really do much of anything controversial to shore up telecom competition issues, much to the relief of sector giants like AT&T, Comcast, and Verizon. Mired in partisan gridlock (quite intentionally by the Trump administration and the speedy appointment of Nathan Simington at the end of his term), it can’t do much else of any controversy either, whether that involves media consolidation or disaster preparedness. Worse, Rosenworcel’s tenure ends at the end of the year, so if this apathy continues there’s a chance the agency could see a 2-1 GOP majority in the new year, leaving it even further incapable of any real reform.
I’ve spent months talking to folks around DC asking why team Biden hasn’t staffed its telecom regulators eight months into his first term, and nobody has a reasonable explanation. While there’s clearly a lot going on, the administration wasn’t too busy to give top Comcast lobbyist David Cohen a cushy job as U.S. Ambassador to Canada. At this rate, by the time a permanent FCC boss is seated, a full year of policy time will have been wasted, which doesn’t exactly scream “urgency” when it comes to telecom monopoly, media consolidation, or other reform.
The apathy on telecom and FCC staffing is an odd clash with the selection of antitrust-buster Lina Khan at the FTC. But it kind of fits the current DC obsession with fixating exclusively on “big tech,” while “big telecom” engages in much of the same (or sometimes worse) behavior. At some point you have to wonder if the apathy on telecom and media reform isn’t a screw up but an active policy choice.
This was rumored a week and a half ago, and at the time I stated that there was no way in hell it was happening, and that it was all just performative nonsense… but yesterday Axios reported that the White House is still pushing Congress to insert a total repeal of Section 230 into the “must pass” National Defense Authorization Act (NDAA). At the time, the story was that Trump would make a trade: he wouldn’t veto the bill over a provision that removed Confederate army names from US military bases if there was a full repeal of Section 230 in it.
This is silly for all sorts of reasons, including the idea that you’re horse trading the law that helped create the open internet for racist military base names in a bill that has fuck all to do with internet/telecom policy. Of course, then Thanksgiving happened, and the President threw a total shitfit because #DiaperDon started trending on Twitter, making him declare that we had to repeal Section 230 for “national security.” Seems more like it would be for dealing with the insecurity of the President of the United States.
And so it appears that the White House has decided to appease the whims of the mad child emperor, and is still pushing Congress to slip the repeal into the NDAA, hoping that the confused, misplaced, and somewhat contradictory bipartisan hatred for Section 230 will cause them to go along with it. Incredibly, Axios notes that it’s the Republicans in the Senate trying to talk the White House out of this plan — though they’re pushing a bunch of nonsense 230 reform bills as an “alternative.” The article’s only comment on Democrats is that they “are sure to object.” And I think that will still doom this entire effort. But, the real goal seems to be to try to sneak through some terrible bills that are short of a full repeal.
But Senate Republicans are instead trying to negotiate an alternative that would combine multiple bills aimed at reforming the law, including the bipartisan Platform Accountability and Consumer Transparency Act and Wicker’s Online Freedom and Viewpoint Diversity Act, a Hill source familiar with the matter told Axios.
We’ve gone through the details of why all of those bills are garbage and/or unconstitutional, and even if there were legitimate movement on getting those bills through Congress, lighting up the NDAA with them is the exact wrong thing to do. Bills like these, that would fundamentally change the very nature of the internet, are not something you just hang on an appropriations bill at the last minute.
I’m still mostly confident that none of this is actually going to happen and that it’s still all just insane posturing and performative nonsense. But it’s still 2020, and crazy, unprecedented shit still keeps happening, so I’ll back down slightly from my “no way in hell” statement, and note that we’re in hell right now, and so there’s still a small chance that something horrific could happen here. It’s still very, very unlikely. But it’s just not going away.