5th Circuit Rewrites A Century Of 1st Amendment Law To Argue Internet Companies Have No Right To Moderate
from the batshit-crazy dept
As far as I can tell, in the area where the 5th Circuit appeals court has jurisdiction, websites no longer have any 1st Amendment editorial rights. That’s the result of what appears to me to be the single dumbest court ruling I’ve seen in a long, long time, and I know we’ve seen some crazy rulings of late. Thanks to Judge Andy Oldham, internet companies no longer have 1st Amendment rights regarding their editorial decision making.
Let’s take a step back. As you’ll recall, last summer, in a fit of censorial rage, the Texas legislature passed HB 20, a dangerously unconstitutional bill that would bar social media websites from moderating as they see fit. As we noted, the bill opens up large websites to a lawsuit over basically every content moderation decision they make (and that’s just one of the problems). Pretty quickly, a district court judge tossed out the entire law as unconstitutional in a careful, thorough ruling that explained why every bit of the law violated websites’ own 1st Amendment rights to put in place their own editorial policies.
On appeal to the 5th Circuit, the court did something bizarre: without giving any reason or explanation at all, it reinstated the law and promised a ruling at some future date. This was procedurally problematic, leading the social media companies (represented by two of their trade groups, NetChoice and CCIA) to ask the Supreme Court to slow things down a bit, which is exactly what the Supreme Court did.
Parallel to all of this, Florida had passed a similar law, and again a district court had found it obviously unconstitutional. That, too, was appealed, but the 11th Circuit rightly agreed with the lower court that the law was (mostly) unconstitutional. That teed things up for Florida to ask the Supreme Court to review the issue.
However, remember: back in May, when the 5th Circuit initially reinstated the law, it said it would come out with its full ruling later. Over the last few months I’ve occasionally pondered (sometimes on Twitter) whether the 5th Circuit would ever get around to actually releasing an opinion. Now it finally has. And, as 1st Amendment lawyer Ken White notes, it’s “the most angrily incoherent First Amendment decision I think I’ve ever read.”
It is difficult to state how completely disconnected from reality this ruling is, and how dangerously incoherent it is. It effectively says that companies no longer have a 1st Amendment right to their own editorial policies. Under this ruling, any state in the 5th Circuit could, in theory, mandate that news organizations must cover certain politicians or certain other content. It could, in theory, allow a state to mandate that any news organization must publish opinion pieces by politicians. It completely flies in the face of the 1st Amendment’s association rights and the right to editorial discretion.
There’s going to be plenty to say about this ruling, which will go down in the annals of history as a complete embarrassment to the judiciary, but let’s hit the lowest points. The crux of the ruling, written by Judge Andy Oldham, is as follows:
Today we reject the idea that corporations have a freewheeling First Amendment right to censor what people say. Because the district court held otherwise, we reverse its injunction and remand for further proceedings.
Considering just how long Republicans (and Oldham was a Republican political operative before being appointed to the bench) have spent insisting that corporations have 1st Amendment rights, this is a major turnaround, and (as noted) an incomprehensible one. Frankly, Oldham’s arguments sound much more like the arguments made by ignorant trolls in our comments than like those of anyone with any knowledge of, or experience with, 1st Amendment law.
I mean, it’s as if Judge Oldham has never heard of the 1st Amendment’s prohibition on compelled speech.
First, the primary concern of overbreadth doctrine is to avoid chilling speech. But Section 7 does not chill speech; instead, it chills censorship. So there can be no concern that declining to facially invalidate HB 20 will inhibit the marketplace of ideas or discourage commentary on matters of public concern. Perhaps as-applied challenges to speculative, now-hypothetical enforcement actions will delineate boundaries to the law. But in the meantime, HB 20’s prohibitions on censorship will cultivate rather than stifle the marketplace of ideas that justifies the overbreadth doctrine in the first place.
Judge Oldham insists that concerns about forcing websites to post speech from Nazis, terrorist propaganda, and Holocaust denial are purely hypothetical. Really.
The Platforms do not directly engage with any of these concerns. Instead, their primary contention—beginning on page 1 of their brief and repeated throughout and at oral argument—is that we should declare HB 20 facially invalid because it prohibits the Platforms from censoring “pro-Nazi speech, terrorist propaganda, [and] Holocaust denial[s].” Red Br. at 1.
Far from justifying pre-enforcement facial invalidation, the Platforms’ obsession with terrorists and Nazis proves the opposite. The Supreme Court has instructed that “[i]n determining whether a law is facially invalid,” we should avoid “speculat[ing] about ‘hypothetical’ or ‘imaginary’ cases.” Wash. State Grange, 552 U.S. at 449–50. Overbreadth doctrine has a “tendency . . . to summon forth an endless stream of fanciful hypotheticals,” and this case is no exception. United States v. Williams, 553 U.S. 285, 301 (2008). But it’s improper to exercise the Article III judicial power based on “hypothetical cases thus imagined.” Raines, 362 U.S. at 22; cf. Sineneng-Smith, 140 S. Ct. at 1585–86 (Thomas, J., concurring) (explaining the tension between overbreadth adjudication and the constitutional limits on judicial power).
These are not hypotheticals. This is literally what these websites have to deal with on a daily basis, and it is exactly what, under Texas’s law, they could no longer do.
Oldham continually focuses (incorrectly and incoherently) on the idea that editorial discretion is censorship. There’s a reason that we’ve spent the last few years explaining how the two are wholly different — and part of it was to avoid people like Oldham getting confused. Apparently it didn’t work.
We reject the Platforms’ efforts to reframe their censorship as speech. It is undisputed that the Platforms want to eliminate speech—not promote or protect it. And no amount of doctrinal gymnastics can turn the First Amendment’s protections for free speech into protections for free censoring.
That paragraph alone is scary. It basically argues that the state can now compel any speech it wants on private property, as it reinterprets the 1st Amendment to mean that the only thing it limits is the power of the state to remove speech, while leaving open the power of the state to foist speech upon private entities. That’s ridiculous.
Oldham then tries to square this by… pulling in wholly unrelated issues around the few rare, limited, fact-specific cases where the courts have allowed compelled speech.
Supreme Court precedent instructs that the freedom of speech includes “the right to refrain from speaking at all.” Wooley v. Maynard, 430 U.S. 705, 714 (1977); see also W. Va. State Bd. of Educ. v. Barnette, 319 U.S. 624, 642 (1943). So the State may not force a private speaker to speak someone else’s message. See Wooley, 430 U.S. at 714.
But the State can regulate conduct in a way that requires private entities to host, transmit, or otherwise facilitate speech. Were it otherwise, no government could impose nondiscrimination requirements on, say, telephone companies or shipping services. But see 47 U.S.C. § 202(a) (prohibiting telecommunications common carriers from “mak[ing] any unjust or unreasonable discrimination in charges, practices, classifications, regulations, facilities, or services”). Nor could a State create a right to distribute leaflets at local shopping malls. But see PruneYard Shopping Ctr. v. Robins, 447 U.S. 74, 88 (1980) (upholding a California law protecting the right to pamphleteer in privately owned shopping centers). So First Amendment doctrine permits regulating the conduct of an entity that hosts speech, but it generally forbids forcing the host itself to speak or interfering with the host’s own message.
From there, he argues that forcing websites to host speech they disagree with is not compelled speech.
The Platforms are nothing like the newspaper in Miami Herald. Unlike newspapers, the Platforms exercise virtually no editorial control or judgment. The Platforms use algorithms to screen out certain obscene and spam-related content. And then virtually everything else is just posted to the Platform with zero editorial control or judgment.
From there, Oldham literally argues there is no editorial discretion under the 1st Amendment. Really.
Premise one is faulty because the Supreme Court’s cases do not carve out “editorial discretion” as a special category of First-Amendment-protected expression. Instead, the Court considers editorial discretion as one relevant consideration when deciding whether a challenged regulation impermissibly compels or restricts protected speech.
To back this up, the court cites Turner v. FCC, which has recently become a misleading favorite among those who are attacking Section 230. But the Turner case really turned on some pretty specific facts about cable TV versus broadcast TV which are not at all in play here.
Oldham also states that content moderation isn’t editorial discretion, even though it literally is.
Even assuming “editorial discretion” is a freestanding category of First-Amendment-protected expression, the Platforms’ censorship doesn’t qualify. Curiously, the Platforms never define what they mean by “editorial discretion.” (Perhaps this casts further doubt on the wisdom of recognizing editorial discretion as a separate category of First-Amendment-protected expression.) Instead, they simply assert that they exercise protected editorial discretion because they censor some of the content posted to their Platforms and use sophisticated algorithms to arrange and present the rest of it. But whatever the outer bounds of any protected editorial discretion might be, the Platforms’ censorship falls outside it. That’s for two independent reasons.
And here it gets really stupid. The ruling argues that because of Section 230, internet websites can’t claim editorial discretion. This is a ridiculously confused misreading of 230.
First, an entity that exercises “editorial discretion” accepts reputational and legal responsibility for the content it edits. In the newspaper context, for instance, the Court has explained that the role of “editors and editorial employees” generally includes “determin[ing] the news value of items received” and taking responsibility for the accuracy of the items transmitted. Associated Press v. NLRB, 301 U.S. 103, 127 (1937). And editorial discretion generally comes with concomitant legal responsibility. For example, because of “a newspaper’s editorial judgments in connection with an advertisement,” it may be held liable “when with actual malice it publishes a falsely defamatory” statement in an ad. Pittsburgh Press Co. v. Pittsburgh Comm’n on Human Rels., 413 U.S. 376, 386 (1973). But the Platforms strenuously disclaim any reputational or legal responsibility for the content they host. See supra Part III.C.2.a (quoting the Platforms’ adamant protestations that they have no responsibility for the speech they host); infra Part III.D (discussing the Platforms’ representations pertaining to 47 U.S.C. § 230).
Then, he argues that there’s some sort of fundamental difference between exercising editorial discretion before or after the content is posted:
Second, editorial discretion involves “selection and presentation” of content before that content is hosted, published, or disseminated. See Ark. Educ. Television Comm’n v. Forbes, 523 U.S. 666, 674 (1998); see also Miami Herald, 418 U.S. at 258 (a newspaper exercises editorial discretion when selecting the “choice of material” to print). The Platforms do not choose or select material before transmitting it: They engage in viewpoint-based censorship with respect to a tiny fraction of the expression they have already disseminated. The Platforms offer no Supreme Court case even remotely suggesting that ex post censorship constitutes editorial discretion akin to ex ante selection. They instead baldly assert that “it is constitutionally irrelevant at what point in time platforms exercise editorial discretion.” Red Br. at 25. Not only is this assertion unsupported by any authority, but it also illogically equates the Platforms’ ex post censorship with the substantive, discretionary, ex ante review that typifies “editorial discretion” in every other context.
So, if I read that correctly, websites can now continue to moderate only if they pre-vet all content they post. Which is also nonsense.
From there, Oldham goes back to Section 230, where he again gets the analysis exactly backwards. He argues that Section 230 alone makes HB 20’s provisions constitutional, because it says that you can’t treat user speech as the platform’s speech:
We have no doubts that Section 7 is constitutional. But even if some were to remain, 47 U.S.C. § 230 would extinguish them. Section 230 provides that the Platforms “shall [not] be treated as the publisher or speaker” of content developed by other users. Id. § 230(c)(1). Section 230 reflects Congress’s judgment that the Platforms do not operate like traditional publishers and are not “speak[ing]” when they host user-submitted content. Congress’s judgment reinforces our conclusion that the Platforms’ censorship is not speech under the First Amendment.
Section 230 undercuts both of the Platforms’ arguments for holding that their censorship of users is protected speech. Recall that they rely on two key arguments: first, they suggest the user-submitted content they host is their speech; and second, they argue they are publishers akin to a newspaper. Section 230, however, instructs courts not to treat the Platforms as “the publisher or speaker” of the user-submitted content they host. Id. § 230(c)(1). And those are the exact two categories the Platforms invoke to support their First Amendment argument. So if § 230(c)(1) is constitutional, how can a court recognize the Platforms as First-Amendment-protected speakers or publishers of the content they host?
Oldham misrepresents the arguments of websites that support Section 230, asserting that by using 230 to defend their moderation choices they have claimed in court to be “neutral tools” and “simple conduits of speech.” But that completely misrepresents what has actually been said and how this plays out.
It’s an upside down and backwards misrepresentation of how Section 230 actually works.
Oldham also rewrites part of Section 230 to make it work the way he wants it to. Again, this reads like some of our trolls, rather than how a jurist is supposed to act:
The Platforms’ only response is that in passing § 230, Congress sought to give them an unqualified right to control the content they host— including through viewpoint-based censorship. They base this argument on § 230(c)(2), which clarifies that the Platforms are immune from defamation liability even if they remove certain categories of “objectionable” content. But the Platforms’ argument finds no support in § 230(c)(2)’s text or context. First, § 230(c)(2) only considers the removal of limited categories of content, like obscene, excessively violent, and similarly objectionable expression. It says nothing about viewpoint-based or geography-based censorship. Second, read in context, § 230(c)(2) neither confers nor contemplates a freestanding right to censor. Instead, it clarifies that censoring limited categories of content does not remove the immunity conferred by § 230(c)(1). So rather than helping the Platforms’ case, § 230(c)(2) further undermines the Platforms’ claim that they are akin to newspapers for First Amendment purposes. That’s because it articulates Congress’s judgment that the Platforms are not like publishers even when they engage in censorship.
Except that Section 230 does not say “similarly objectionable.” It says “otherwise objectionable.” By switching “otherwise objectionable” to “similarly objectionable,” Oldham is insisting that courts like his own get to determine what counts as “similarly objectionable,” and that alone is a clear 1st Amendment problem. The courts cannot decide what content a website finds objectionable. That is, yet again, the state intruding on the editorial discretion of a website.
Also, completely ridiculously, Oldham leaves out that (c)(2) does not just include that list of objectionable categories, but it states: “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” In other words, the law explicitly states that whether or not something falls into that list is up to the provider or user and not the state. To leave that out of his description of (c)(2) is beyond misleading.
Also notable: Oldham completely ignores the fact that Section 230 pre-empts state laws like Texas’s, saying that “no liability may be imposed under any State or local law that is inconsistent with this section.” I guess Oldham is arguing that Texas’s law somehow is not inconsistent with 230, but it certainly is inconsistent with two and a half decades of 230 jurisprudence.
There’s then a long and, again, nonsensical discussion of common carriers, basically saying that the state can magically declare social media websites common carriers. I’m not even going to give that argument the satisfaction of covering it, it is so disconnected from reality. Social media literally meets none of the classifications of traditional common carriers. The fact that Oldham claims that “the Platforms are no different than Verizon or AT&T” makes me question how anyone could take anything in this ruling seriously.
I’m also going to skip over the arguments for why the “transparency” bits are constitutional according to the 5th Circuit, other than to note that California must be happy, because under this ruling its new social media transparency laws would also be deemed constitutional even if they now conflict with Texas’s (that’ll be fun).
There are a few notable omissions from the ruling. It never mentions Reno v. ACLU, which seems incredibly relevant given its discussion of how the internet and the 1st Amendment work together, and is glaring in its absence. Second, it completely breezes past Justice Kavanaugh’s majority opinion in the Halleck case, which clearly established that under the First Amendment a “private entity may thus exercise editorial discretion over the speech and speakers in the forum.” The only mention of that ruling is in a single footnote, claiming it only applies to “public forums” and saying it’s distinct from the issue raised here. But, uh, the quote (and much of the ruling) literally says the opposite. It’s talking about private forums. This is ridiculous. Third, as noted, the ruling ignores the pre-emption aspects of Section 230. Fourth, while it discusses the 11th Circuit’s ruling regarding Florida’s law, it tries to distinguish the two (while also highlighting where the two Circuits disagree, setting up the inevitable Supreme Court battle). Finally, it never addresses the fact that the Supreme Court put its original “turn the law back on” ruling on hold. Apparently Oldham doesn’t much care.
The other two judges on the panel also provided their own, much shorter opinions, with Judge Edith Jones concurring and just doubling down on Oldham’s nonsense. There is an opinion from Judge Leslie Southwick that is a partial concurrence and partial dissent. It concurs on the transparency stuff, but dissents regarding the 1st Amendment.
The majority frames the case as one dealing with conduct and unfair censorship. The majority’s rejection of First Amendment protections for conduct follows unremarkably. I conclude, though, that the majority is forcing the picture of what the Platforms do into a frame that is too small. The frame must be large enough to fit the wide-ranging, free-wheeling, unlimited variety of expression — ranging from the perfectly fair and reasonable to the impossibly biased and outrageous — that is the picture of the First Amendment as envisioned by those who designed the initial amendments to the Constitution. I do not celebrate the excesses, but the Constitution wisely allows for them.
The majority no doubt could create an image for the First Amendment better than what I just verbalized, but the description would have to be similar. We simply disagree about whether speech is involved in this case. Yes, almost none of what others place on the Platforms is subject to any action by the companies that own them. The First Amendment, though, is what protects the curating, moderating, or whatever else we call the Platforms’ interaction with what others are trying to say. We are in a new arena, a very extensive one, for speakers and for those who would moderate their speech. None of the precedents fit seamlessly. The majority appears assured of their approach; I am hesitant. The closest match I see is caselaw establishing the right of newspapers to control what they do and do not print, and that is the law that guides me until the Supreme Court gives us more.
Judge Southwick then dismantles, bit by bit, each of Oldham’s arguments regarding the 1st Amendment and basically highlights how his much younger colleague is clearly misreading a few outlier Supreme Court rulings.
It’s a good read, but this post is long enough already. I’ll just note this point from Southwick’s dissent:
In no manner am I denying the reasonableness of the governmental interest. When these Platforms, that for the moment have gained such dominance, impose their policy choices, the effects are far more powerful and widespread than most other speakers’ choices. The First Amendment, though, is not withdrawn from speech just because speakers are using their available platforms unfairly or when the speech is offensive. The asserted governmental interest supporting this statute is undeniably related to the suppression of free expression. The First Amendment bars the restraints.
This resonated with me quite a bit, and drove home the problem with Oldham’s argument. It is the equivalent of one of Ken White’s famed free speech tropes. Oldham pointed to the outlier cases where some compelled speech was found constitutional, and turned that automatically into “if some compelled speech is constitutional, then it’s okay for this compelled speech to be constitutional.”
But that’s not how any of this works.
Southwick also undermines Oldham’s common carrier arguments and his Section 230 arguments, noting:
Section 230 also does not affect the First Amendment right of the Platforms to exercise their own editorial discretion through content moderation. My colleague suggests that “Congress’s judgment” as expressed in 47 U.S.C. § 230 “reinforces our conclusion that the Platforms’ censorship is not speech under the First Amendment.” Maj. Op. at 39. That opinion refers to this language: “No provider or user of an interactive computer service” — interactive computer service being a defined term encompassing a wide variety of information services, systems, and access software providers — “shall be treated as the publisher or speaker of any information provided by another information content provider.” 47 U.S.C. § 230(c)(1). Though I agree that Congressional fact-findings underlying enactments may be considered by courts, the question here is whether the Platforms’ barred activity is an exercise of their First Amendment rights. If it is, Section 230’s characterizations do not transform it into unprotected speech.
The Platforms also are criticized for what my colleague sees as an inconsistent argument: the Platforms analogize their conduct to the exercise of editorial discretion by traditional media outlets, though Section 230 by its terms exempts them from traditional publisher liability. This may be exactly how Section 230 is supposed to work, though. Contrary to the contention about inconsistency, Congress in adopting Section 230 never factually determined that “the Platforms are not ‘publishers.’” Maj. Op. at 41. As one of Section 230’s co-sponsors — former California Congressman Christopher Cox, one of the amici here — stated, Section 230 merely established that the platforms are not to be treated as the publishers of pieces of content when they take up the mantle of content moderation, which was precisely the problem that Section 230 set out to solve: “content moderation . . . is not only consistent with Section 230; its protection is the very raison d’etre of Section 230.” In short, we should not force a false dichotomy on the Platforms. There is no reason “that a platform must be classified for all purposes as either a publisher or a mere conduit.” In any case, as Congressman Cox put it, “because content moderation is a form of editorial speech, the First Amendment more fully protects it beyond the specific safeguards enumerated in § 230(c)(2).” I agree.
Anyway, that’s the quick analysis of this mess. There will be more to come, and I imagine this will be an issue for the Supreme Court to sort out. I wish I had confidence that they would not contradict themselves, but I’m not sure I do.
The future of how the internet works is very much at stake with this one.