11th Circuit Disagrees With The 5th Circuit (But Actually Explains Its Work): Florida’s Social Media Bill Still (Mostly) Unconstitutional

from the boom dept

Well, well. As we still wait to see what the Supreme Court will do about the 5th Circuit’s somewhat bizarre, reasonless reinstatement of Texas’ ridiculously bad social media content moderation law, the 11th Circuit has come out with what might be a somewhat rushed decision going mostly in the other direction, saying that most of Florida’s content moderation law is, as the lower court found, unconstitutional. It’s worth reading the entire decision (which may take a bit longer than the 5th Circuit’s one-sentence reinstatement of the Texas law), as it makes a lot of good points. I still think the court is missing some important points about the parts of the law it has reinstated (around transparency), but we’ll have another post on that shortly (and I hope those mistakes may be fixed with more briefing).

As for what the court got right: it tossed the key parts of the law concerning moderation, saying those were an easy call as unconstitutional, just as the lower court found. The government cannot mandate how a website handles content moderation. The ruling opens strong:

Not in their wildest dreams could anyone in the Founding generation have imagined Facebook, Twitter, YouTube, or TikTok. But “whatever the challenges of applying the Constitution to ever-advancing technology, the basic principles of freedom of speech and the press, like the First Amendment’s command, do not vary when a new and different medium for communication appears.” Brown v. Ent. Merchs. Ass’n, 564 U.S. 786, 790 (2011) (quotation marks omitted). One of those “basic principles”—indeed, the most basic of the basic—is that “[t]he Free Speech Clause of the First Amendment constrains governmental actors and protects private actors.” Manhattan Cmty. Access Corp. v. Halleck, 139 S. Ct. 1921, 1926 (2019). Put simply, with minor exceptions, the government can’t tell a private person or entity what to say or how to say it.

The court effectively laughs off Florida’s argument that social media platforms should no longer be considered “private actors,” and mocks the state’s claims that “the ‘big tech’ oligarchs in Silicon Valley” are trying to “silence conservative speech in favor of a ‘radical leftist’ agenda.” The 1st Amendment protects companies’ right to moderate as they see fit:

We hold that it is substantially likely that social-media companies—even the biggest ones—are “private actors” whose rights the First Amendment protects, Manhattan Cmty., 139 S. Ct. at 1926, that their so-called “content-moderation” decisions constitute protected exercises of editorial judgment, and that the provisions of the new Florida law that restrict large platforms’ ability to engage in content moderation unconstitutionally burden that prerogative. We further conclude that it is substantially likely that one of the law’s particularly onerous disclosure provisions—which would require covered platforms to provide a “thorough rationale” for each and every content-moderation decision they make—violates the First Amendment. Accordingly, we hold that the companies are entitled to a preliminary injunction prohibiting enforcement of those provisions.

As noted above, the court also says that a few disclosure/transparency provisions it finds “far less burdensome” are “unlikely” to violate the 1st Amendment, and it vacates that part of the lower court’s injunction. I still think this is incorrect, but, as noted, we’ll explain that part in another post.

For the most part, this is a fantastic ruling, explaining clearly why content moderation is protected by the 1st Amendment. And, because I know some supporters of Florida in our comments kept insisting that the lower court ruled the way it did only because of a “liberal activist” judge, I’ll note that this ruling was written by Judge Kevin Newsom, who was appointed to the court by Donald Trump (and the other two judges on the panel were also nominated by Republican presidents).

The ruling kicks off by noting, correctly, that social media is mostly made up of speech by third parties, and also (thankfully!) recognizing that it’s not just the giant sites, but smaller sites as well:

At their core, social-media platforms collect speech created by third parties—typically in the form of written text, photos, and videos, which we’ll collectively call “posts”—and then make that speech available to others, who might be either individuals who have chosen to “follow” the “post”-er or members of the general public. Social-media platforms include both massive websites with billions of users—like Facebook, Twitter, YouTube, and TikTok—and niche sites that cater to smaller audiences based on specific interests or affiliations—like Roblox (a child-oriented gaming network), ProAmericaOnly (a network for conservatives), and Vegan Forum (self-explanatory).

It’s good that they recognize that these kinds of laws impact smaller companies as well.

From there the court makes “three important points”: private websites are not the government, social media is different from a newspaper, and social media platforms are not “dumb pipes” like traditional telecom services:

Three important points about social-media platforms: First—and this would be too obvious to mention if it weren’t so often lost or obscured in political rhetoric—platforms are private enterprises, not governmental (or even quasi-governmental) entities. No one has an obligation to contribute to or consume the content that the platforms make available. And correlatively, while the Constitution protects citizens from governmental efforts to restrict their access to social media, see Packingham v. North Carolina, 137 S. Ct. 1730, 1737 (2017), no one has a vested right to force a platform to allow her to contribute to or consume social-media content.

Second, a social-media platform is different from traditional media outlets in that it doesn’t create most of the original content on its site; the vast majority of “tweets” on Twitter and videos on YouTube, for instance, are created by individual users, not the companies that own and operate Twitter and YouTube. Even so, platforms do engage in some speech of their own: A platform, for example, might publish terms of service or community standards specifying the type of content that it will (and won’t) allow on its site, add addenda or disclaimers to certain posts (say, warning of misinformation or mature content), or publish its own posts.

Third, and relatedly, social-media platforms aren’t “dumb pipes”: They’re not just servers and hard drives storing information or hosting blogs that anyone can access, and they’re not internet service providers reflexively transmitting data from point A to point B. Rather, when a user visits Facebook or Twitter, for instance, she sees a curated and edited compilation of content from the people and organizations that she follows. If she follows 1,000 people and 100 organizations on a particular platform, for instance, her “feed”—for better or worse—won’t just consist of every single post created by every single one of those people and organizations arranged in reverse-chronological order. Rather, the platform will have exercised editorial judgment in two key ways: First, the platform will have removed posts that violate its terms of service or community standards—for instance, those containing hate speech, pornography, or violent content. See, e.g., Doc. 26-1 at 3–6; Facebook Community Standards, Meta, https://transparency.fb.com/policies/community-standards (last accessed May 15, 2022). Second, it will have arranged available content by choosing how to prioritize and display posts—effectively selecting which users’ speech the viewer will see, and in what order, during any given visit to the site.

Each of these points is important and effectively dispenses with much of the nonsense we’ve seen people claim in the past. First, it tosses aside the incorrect and misleading argument some have read into Packingham, whose opinion notes that the internet is a “public square.” Here, the judges correctly note that Packingham stands only for the rule that the government cannot restrict people’s access to social media, not that it can force private companies to host them.

Also, I love the fact that the court makes the “not a dumb pipe” argument, and even uses the line “reflexively transmitting data from point A to point B.” That’s nearly identical to the language that I’ve used in explaining why it makes no sense to call social media a common carrier.
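To make the “not a dumb pipe” point concrete, here’s a minimal, purely illustrative sketch (in Python) of the two-step editorial judgment the court describes. Every name in it is hypothetical, and no platform’s real code looks like this, but the structure is the point: a feed is a curated output, not a raw relay of whatever comes in.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    timestamp: float
    violates_standards: bool  # outcome of some (hypothetical) moderation review

def relevance_score(post: Post) -> float:
    # Stand-in for a real ranking model; this arbitrary heuristic just
    # prefers newer posts and boosts a hard-coded set of followed accounts.
    bonus = 10_000.0 if post.author in {"friend_a", "news_org"} else 0.0
    return post.timestamp + bonus

def build_feed(posts: list[Post]) -> list[Post]:
    # Step 1 (removal): drop posts that violate the site's community standards.
    allowed = [p for p in posts if not p.violates_standards]
    # Step 2 (prioritization): order what's left by the platform's own notion
    # of relevance, not by reverse-chronological arrival.
    return sorted(allowed, key=relevance_score, reverse=True)
```

Both steps are choices about whose speech to show, and in what order, which is exactly what the court means by editorial judgment.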

Next, the court points out, again accurately, that the purpose of a social media website is to act as an “intermediary” between users, but also (and this is important) to craft different types of online communities, including ones focused on niches:

Accordingly, a social-media platform serves as an intermediary between users who have chosen to partake of the service the platform provides and thereby participate in the community it has created. In that way, the platform creates a virtual space in which every user—private individuals, politicians, news organizations, corporations, and advocacy groups—can be both speaker and listener. In playing this role, the platforms invest significant time and resources into editing and organizing—the best word, we think, is curating—users’ posts into collections of content that they then disseminate to others. By engaging in this content moderation, the platforms develop particular market niches, foster different sorts of online communities, and promote various values and viewpoints.

This is also an important point that is regularly ignored or overlooked. It’s the point that the authors of Section 230 have tried to drive home in explaining why they wrote the law in the first place. When they talk about “diversity of political discourse” in the law, they never meant “all on the same site,” but rather giving websites the freedom to cater to different audiences. It’s fantastic that this panel recognizes that fact.

When we get to the meat of the opinion, explaining the decision, the court again makes a number of very strong, very correct points about the impact of a law like Florida’s.

Social-media platforms like Facebook, Twitter, YouTube, and TikTok are private companies with First Amendment rights, see First Nat’l Bank of Bos. v. Bellotti, 435 U.S. 765, 781–84 (1978), and when they (like other entities) “disclos[e],” “publish[],” or “disseminat[e]” information, they engage in “speech within the meaning of the First Amendment.” Sorrell v. IMS Health Inc., 564 U.S. 552, 570 (2011) (quotation marks omitted). More particularly, when a platform removes or deprioritizes a user or post, it makes a judgment about whether and to what extent it will publish information to its users—a judgment rooted in the platform’s own views about the sorts of content and viewpoints that are valuable and appropriate for dissemination on its site. As the officials who sponsored and signed S.B. 7072 recognized when alleging that “Big Tech” companies harbor a “leftist” bias against “conservative” perspectives, the companies that operate social-media platforms express themselves (for better or worse) through their content-moderation decisions. When a platform selectively removes what it perceives to be incendiary political rhetoric, pornographic content, or public-health misinformation, it conveys a message and thereby engages in “speech” within the meaning of the First Amendment.

Laws that restrict platforms’ ability to speak through content moderation therefore trigger First Amendment scrutiny. Two lines of precedent independently confirm this commonsense conclusion: first, and most obviously, decisions protecting exercises of “editorial judgment”; and second, and separately, those protecting inherently expressive conduct.

The key point here: the court recognizes that content moderation is about “editorial judgment” and, as such, easily gets 1st Amendment protection. It cites case after case holding this, focusing heavily on the ruling in Turner v. FCC. This is actually important, as some people seeking to tear down Section 230’s protections (notably FCC commissioner Brendan Carr) have ridiculously tried to argue that the ruling in Turner supports their views. But those people are wrong, as the court clearly notes:

So too, in Turner Broadcasting Systems, Inc. v. FCC, the Court held that cable operators—companies that own cable lines and choose which stations to offer their customers—“engage in and transmit speech.” 512 U.S. at 636. “[B]y exercising editorial discretion over which stations or programs to include in [their] repertoire,” the Court said, they “seek to communicate messages on a wide variety of topics and in a wide variety of formats.” Id. (quotation marks omitted); see also Ark. Educ. TV Comm’n v. Forbes, 523 U.S. 666, 674 (1998) (“Although programming decisions often involve the compilation of the speech of third parties, the decisions nonetheless constitute communicative acts.”). Because cable operators’ decisions about which channels to transmit were protected speech, the challenged regulation requiring operators to carry broadcast-TV channels triggered First Amendment scrutiny.

(Just as an aside, this also applies to all the nonsense we’ve heard from people trying to argue that OAN can force DirecTV to continue to carry it.)

Either way, the court drives home the point: content moderation is editorial judgment.

Social-media platforms’ content-moderation decisions are, we think, closely analogous to the editorial judgments that the Supreme Court recognized in Miami Herald, Pacific Gas, Turner, and Hurley. Like parade organizers and cable operators, social-media companies are in the business of delivering curated compilations of speech created, in the first instance, by others. Just as the parade organizer exercises editorial judgment when it refuses to include in its lineup groups with whose messages it disagrees, and just as a cable operator might refuse to carry a channel that produces content it prefers not to disseminate, social-media platforms regularly make choices “not to propound a particular point of view.” Hurley, 515 U.S. at 575. Platforms employ editorial judgment to convey some messages but not others and thereby cultivate different types of communities that appeal to different groups. A few examples:

  • YouTube seeks to create a “welcoming community for viewers” and, to that end, prohibits a wide range of content, including spam, pornography, terrorist incitement, election and public-health misinformation, and hate speech.
  • Facebook engages in content moderation to foster “authenticity,” “safety,” “privacy,” and “dignity,” and accordingly, removes or adds warnings to a wide range of content—for example, posts that include what it considers to be hate speech, fraud or deception, nudity or sexual activity, and public-health misinformation.
  • Twitter aims “to ensure all people can participate in the public conversation freely and safely” by removing content, among other categories, that it views as embodying hate, glorifying violence, promoting suicide, or containing election misinformation.
  • Roblox, a gaming social network primarily for children, prohibits “[s]ingling out a user or group for ridicule or abuse,” any sort of sexual content, depictions of and support for war or violence, and any discussion of political parties or candidates.
  • Vegan Forum allows non-vegans but “will not tolerate members who promote contrary agendas.”

It also notes that this 1st Amendment right enables forums focused on specific political agendas as well:

And to be clear, some platforms exercise editorial judgment to promote explicitly political agendas. On the right, ProAmericaOnly promises “No Censorship | No Shadow Bans | No BS | NO LIBERALS.” And on the left, The Democratic Hub says that its “online community is for liberals, progressives, moderates, independent[s] and anyone who has a favorable opinion of Democrats and/or liberal political views or is critical of Republican ideology.”

All such decisions about what speech to permit, disseminate, prohibit, and deprioritize—decisions based on platforms’ own particular values and views—fit comfortably within the Supreme Court’s editorial-judgment precedents.

As for Florida’s argument that there is no editorial judgment in content moderation because most content on social media is never vetted first, the court says that’s obviously incorrect:

With respect, the State’s argument misses the point. The “conduct” that the challenged provisions regulate—what this entire appeal is about—is the platforms’ “censorship” of users’ posts—i.e., the posts that platforms do review and remove or deprioritize. The question, then, is whether that conduct is expressive. For reasons we’ve explained, we think it unquestionably is.

There’s also a good footnote debunking the claim that content moderation isn’t expressive because the rules aren’t intended to “convey a particularized message.” As the court notes, that’s just silly:

To the extent that the states argue that social-media platforms lack the requisite “intent” to convey a message, we find it implausible that platforms would engage in the laborious process of defining detailed community standards, identifying offending content, and removing or deprioritizing that content if they didn’t intend to convey “some sort of message.” Unsurprisingly, the record in this case confirms platforms’ intent to communicate messages through their content-moderation decisions—including that certain material is harmful or unwelcome on their sites. See, e.g., Doc. 25-1 at 2 (declaration of YouTube executive explaining that its approach to content moderation “is to remove content that violates [its] policies (developed with outside experts to prevent real-world harms), reduce the spread of harmful misinformation . . . and raise authoritative and trusted content”); Facebook Community Standards, supra (noting that Facebook moderates content “in service of” its “values” of “authenticity,” “safety,” “privacy,” and “dignity”).

From there, the court digs into whether the two favorite cases regularly cited by both Florida and Texas in defense of these laws carry any weight here. The two cases are Rumsfeld v. FAIR (regarding military recruitment on college campuses) and PruneYard v. Robins (regarding a shopping mall where people wanted to hand out petitions). We’ve explained in detail in the past why neither case works here, but we’ll let the 11th Circuit panel handle the details:

We begin with the “hosting” cases. The first decision to which the State points, PruneYard, is readily distinguishable. There, the Supreme Court affirmed a state court’s decision requiring a privately owned shopping mall to allow members of the public to circulate petitions on its property. 447 U.S. at 76–77, 88. In that case, though, the only First Amendment interest that the mall owner asserted was the right “not to be forced by the State to use [its] property as a forum for the speech of others.” Id. at 85. The Supreme Court’s subsequent decisions in Pacific Gas and Hurley distinguished and cabined PruneYard. The Pacific Gas plurality explained that “[n]otably absent from PruneYard was any concern that access to this area might affect the shopping center owner’s exercise of his own right to speak: the owner did not even allege that he objected to the content of the pamphlets.” 475 U.S. at 12 (plurality op.); see also id. at 24 (Marshall, J., concurring in the judgment) (“While the shopping center owner in PruneYard wished to be free of unwanted expression, he nowhere alleged that his own expression was hindered in the slightest.”); Hurley, 515 U.S. at 580 (noting that the “principle of speaker’s autonomy was simply not threatened in” PruneYard). Because NetChoice asserts that S.B. 7072 interferes with the platforms’ own speech rights by forcing them to carry messages that contradict their community standards and terms of service, PruneYard is inapposite.

Nice, simple, and straightforward. As for Rumsfeld v. FAIR, that one is also easily distinguished:

FAIR may be a bit closer, but it, too, is distinguishable. In that case, the Supreme Court upheld a federal statute—the Solomon Amendment—that required law schools, as a condition to receiving federal funding, to allow military recruiters the same access to campuses and students as any other employer. 547 U.S. at 56. The schools, which had restricted recruiters’ access because they opposed the military’s “Don’t Ask, Don’t Tell” policy regarding gay servicemembers, protested that requiring them to host recruiters and post notices on their behalf violated the First Amendment. Id. at 51. But the Court held that the law didn’t implicate the First Amendment because it “neither limit[ed] what law schools may say nor require[d] them to say anything.” Id. at 60. In so holding, the Court rejected two arguments for why the First Amendment should apply—(1) that the Solomon Amendment unconstitutionally required law schools to host the military’s speech, and (2) that it restricted the law schools’ expressive conduct. Id. at 60–61.

[….]

FAIR isn’t controlling here because social-media platforms warrant First Amendment protection on both of the grounds that the Court held that law-school recruiting services didn’t.

First, S.B. 7072 interferes with social-media platforms’ own “speech” within the meaning of the First Amendment. Social-media platforms, unlike law-school recruiting services, are in the business of disseminating curated collections of speech. A social-media platform that “exercises editorial discretion in the selection and presentation of” the content that it disseminates to its users “engages in speech activity.” Ark. Educ. TV Comm’n, 523 U.S. at 674; see Sorrell, 564 U.S. at 570 (explaining that the “dissemination of information” is “speech within the meaning of the First Amendment”); Bartnicki v. Vopper, 532 U.S. 514, 527 (2001) (“If the acts of ‘disclosing’ and ‘publishing’ information do not constitute speech, it is hard to imagine what does fall within that category.” (cleaned up)). Just as the must-carry provisions in Turner “reduce[d] the number of channels over which cable operators exercise[d] unfettered control” and therefore triggered First Amendment scrutiny, 512 U.S. at 637, S.B. 7072’s content-moderation restrictions reduce the number of posts over which platforms can exercise their editorial judgment. Because a social-media platform itself “spe[aks]” by curating and delivering compilations of others’ speech—speech that may include messages ranging from Facebook’s promotion of authenticity, safety, privacy, and dignity to ProAmericaOnly’s “No BS | No LIBERALS”—a law that requires the platform to disseminate speech with which it disagrees interferes with its own message and thereby implicates its First Amendment rights.

Second, social-media platforms are engaged in inherently expressive conduct of the sort that the Court found lacking in FAIR. As we were careful to explain in FLFNB I, FAIR “does not mean that conduct loses its expressive nature just because it is also accompanied by other speech.” 901 F.3d at 1243–44. Rather, “[t]he critical question is whether the explanatory speech is necessary for the reasonable observer to perceive a message from the conduct.” Id. at 1244. And we held that an advocacy organization’s food-sharing events constituted expressive conduct from which, “due to the context surrounding them, the reasonable observer would infer some sort of message”—even without reference to the words “Food Not Bombs” on the organization’s banners. Id. at 1245. Context, we held, is what differentiates “activity that is sufficiently expressive [from] similar activity that is not”—e.g., “the act of sitting down” from “the sit-in by African Americans at a Louisiana library” protesting segregation. Id. at 1241 (citing Brown v. Louisiana, 383 U.S. 131, 141–42 (1966)).

Unlike the law schools in FAIR, social-media platforms’ content-moderation decisions communicate messages when they remove or “shadow-ban” users or content. Explanatory speech isn’t “necessary for the reasonable observer to perceive a message from,” for instance, a platform’s decision to ban a politician or remove what it perceives to be misinformation. Id. at 1244. Such conduct—the targeted removal of users’ speech from websites whose primary function is to serve as speech platforms—conveys a message to the reasonable observer “due to the context surrounding” it. Id. at 1245; see also Coral Ridge, 6 F.4th at 1254. Given the context, a reasonable observer witnessing a platform remove a user or item of content would infer, at a minimum, a message of disapproval. Thus, social-media platforms engage in content moderation that is inherently expressive notwithstanding FAIR.

The court then takes a further hatchet to both FAIR and PruneYard:

The State asserts that Pruneyard and FAIR—and, for that matter, the Supreme Court’s editorial-judgment decisions—establish three “guiding principles” that should lead us to conclude that S.B. 7072 doesn’t implicate the First Amendment. We disagree.

The first principle—that a regulation must interfere with the host’s ability to speak in order to implicate the First Amendment— does find support in FAIR. See 547 U.S. at 64. Even so, the State’s argument—that S.B. 7072 doesn’t interfere with platforms’ ability to speak because they can still affirmatively dissociate themselves from the content that they disseminate—encounters two difficulties. As an initial matter, in at least one key provision, the Act defines the term “censor” to include “posting an addendum,” i.e., a disclaimer—and thereby explicitly prohibits the very speech by which a platform might dissociate itself from users’ messages. Fla. Stat. § 501.2041(1)(b). Moreover, and more fundamentally, if the exercise of editorial judgment—the decision about whether, to what extent, and in what manner to disseminate third-party content—is itself speech or inherently expressive conduct, which we have said it is, then the Act does interfere with platforms’ ability to speak. See Pacific Gas, 475 U.S. at 10–12, 16 (plurality op.) (noting that if the government could compel speakers to “propound . . . messages with which they disagree,” the First Amendment’s protection “would be empty, for the government could require speakers to affirm in one breath that which they deny in the next”).

The State’s second principle—that in order to trigger First Amendment scrutiny a regulation must create a risk that viewers or listeners might confuse a user’s and the platform’s speech—finds little support in our precedent. Consumer confusion simply isn’t a prerequisite to First Amendment protection. In Miami Herald, for instance, even though no reasonable observer would have mistaken a political candidate’s statutorily mandated right-to-reply column for the newspaper reversing its earlier criticism, the Supreme Court deemed the paper’s editorial judgment to be protected. See 418 U.S. at 244, 258. Nor was there a risk of consumer confusion in Turner: No reasonable person would have thought that the cable operator there endorsed every message conveyed by every speaker on every one of the channels it carried, and yet the Court stated categorically that the operator’s editorial discretion was protected. See 512 U.S. at 636–37. Moreover, it seems to us that the State’s confusion argument boomerangs back around on itself: If a platform announces a community standard prohibiting, say, hate speech, but is then barred from removing or even disclaiming posts containing what it perceives to be hate speech, there’s a real risk that a viewer might erroneously conclude that the platform doesn’t consider those posts to constitute hate speech.

The State’s final principle—that in order to receive First Amendment protection a platform must curate and present speech in such a way that a “common theme” emerges—is similarly flawed. Hurley held that “a private speaker does not forfeit constitutional protection simply by combining multifarious voices, or by failing to edit their themes to isolate an exact message as the exclusive subject matter of the speech.” 515 U.S. at 569–70; see FLFNB I, 901 F.3d at 1240 (citing Hurley for the proposition that a “particularized message” isn’t required for conduct to qualify for First Amendment protection). Moreover, even if one could theoretically attribute a common theme to a parade, Turner makes clear that no such theme is required: It seems to us inconceivable that one could ascribe a common theme to the cable operator’s choice there to carry hundreds of disparate channels, and yet the Court held that the First Amendment protected the operator’s editorial discretion….

In short, the State’s reliance on PruneYard and FAIR and its attempts to distinguish the editorial-judgment line of cases are unavailing.

How about the “common carrier” argument? Nope. Not at all.

The first version of the argument fails because, in point of fact, social-media platforms are not—in the nature of things, so to speak—common carriers. That is so for at least three reasons.

First, social-media platforms have never acted like common carriers. “[I]n the communications context,” common carriers are entities that “make a public offering to provide communications facilities whereby all members of the public who choose to employ such facilities may communicate or transmit intelligence of their own design and choosing”—they don’t “make individualized decisions, in particular cases, whether and on what terms to deal.” FCC v. Midwest Video Corp., 440 U.S. 689, 701 (1979) (cleaned up). While it’s true that social-media platforms generally hold themselves open to all members of the public, they require users, as preconditions of access, to accept their terms of service and abide by their community standards. In other words, Facebook is open to every individual if, but only if, she agrees not to transmit content that violates the company’s rules. Social-media users, accordingly, are not freely able to transmit messages “of their own design and choosing” because platforms make—and have always made—“individualized” content- and viewpoint-based decisions about whether to publish particular messages or users.

Second, Supreme Court precedent strongly suggests that internet companies like social-media platforms aren’t common carriers. While the Court has applied less stringent First Amendment scrutiny to television and radio broadcasters, the Turner Court cabined that approach to “broadcast” media because of its “unique physical limitations”—chiefly, the scarcity of broadcast frequencies. 512 U.S. at 637–39. Instead of “comparing cable operators to electricity providers, trucking companies, and railroads—all entities subject to traditional economic regulation”—the Turner Court “analogized the cable operators [in that case] to the publishers, pamphleteers, and bookstore owners traditionally protected by the First Amendment.” U.S. Telecom Ass’n v. FCC, 855 F.3d 381, 428 (D.C. Cir. 2017) (Kavanaugh, J., dissental); see Turner, 512 U.S. at 639. And indeed, the Court explicitly distinguished online from broadcast media in Reno v. American Civil Liberties Union, emphasizing that the “vast democratic forums of the Internet” have never been “subject to the type of government supervision and regulation that has attended the broadcast industry.” 521 U.S. 844, 868–69 (1997). These precedents demonstrate that social-media platforms should be treated more like cable operators, which retain their First Amendment right to exercise editorial discretion, than traditional common carriers.

Finally, Congress has distinguished internet companies from common carriers. The Telecommunications Act of 1996 explicitly differentiates “interactive computer services”—like social-media platforms—from “common carriers or telecommunications services.” See, e.g., 47 U.S.C. § 223(e)(6) (“Nothing in this section shall be construed to treat interactive computer services as common carriers or telecommunications carriers.”). And the Act goes on to provide protections for internet companies that are inconsistent with the traditional common-carrier obligation of indiscriminate service. In particular, it explicitly protects internet companies’ ability to restrict access to a plethora of material that they might consider “objectionable.” Id. § 230(c)(2)(A). Federal law’s recognition and protection of social-media platforms’ ability to discriminate among messages—disseminating some but not others—is strong evidence that they are not common carriers with diminished First Amendment rights.

Okay, but what if Florida just declares them to be common carriers? No, no, that’s not how any of this works either:

If social-media platforms are not common carriers either in fact or by law, the State is left to argue that it can force them to become common carriers, abrogating or diminishing the First Amendment rights that they currently possess and exercise. Neither law nor logic recognizes government authority to strip an entity of its First Amendment rights merely by labeling it a common carrier. Quite the contrary, if social-media platforms currently possess the First Amendment right to exercise editorial judgment, as we hold it is substantially likely they do, then any law infringing that right—even one bearing the terminology of “common carri[age]”—should be assessed under the same standards that apply to other laws burdening First-Amendment-protected activity. See Denver Area Educ. Telecomm. Consortium, Inc. v. FCC, 518 U.S. 727, 825 (1996) (Thomas, J., concurring in the judgment in part and dissenting in part) (“Labeling leased access a common carrier scheme has no real First Amendment consequences.”); Cablevision Sys. Corp. v. FCC, 597 F.3d 1306, 1321–22 (D.C. Cir. 2010) (Kavanaugh, J., dissenting) (explaining that because video programmers have a constitutional right to exercise editorial discretion, “the Government cannot compel [them] to operate like ‘dumb pipes’ or ‘common carriers’ that exercise no editorial control”); U.S. Telecom Ass’n, 855 F.3d at 434 (Kavanaugh, J., dissental) (“Can the Government really force Facebook and Google . . . to operate as common carriers?”).

Okay, then, how about the argument that these websites are somehow so important that it magically means the state can regulate speech on them? Lol, nope, says the court:

The State seems to argue that even if platforms aren’t currently common carriers, their market power and public importance might justify their “legislative designation . . . as common carriers.” Br. of Appellants at 36; see Knight, 141 S. Ct. at 1223 (Thomas, J., concurring) (noting that the Court has suggested that common-carrier regulations “may be justified, even for industries not historically recognized as common carriers, when a business . . . rises from private to be a public concern” (quotation marks omitted)). That might be true for an insurance or telegraph company, whose only concern is whether its “property” becomes “the means of rendering the service which has become of public interest.” Knight, 141 S. Ct. at 1223 (Thomas, J., concurring) (quoting German All. Ins. Co. v. Lewis, 233 U.S. 389, 408 (1914)). But the Supreme Court has squarely rejected the suggestion that a private company engaging in speech within the meaning of the First Amendment loses its constitutional rights just because it succeeds in the marketplace and hits it big. See Miami Herald, 418 U.S. at 251, 258.

In short, because social-media platforms exercise—and have historically exercised—inherently expressive editorial judgment, they aren’t common carriers, and a state law can’t force them to act as such unless it survives First Amendment scrutiny.

So many great quotes in all of this.

Anyway, once the court has made it clear that content moderation is protected by the 1st Amendment, that’s not the end of the analysis, because there are some cases in which the state can still regulate, so long as the law survives the appropriate level of First Amendment scrutiny. And here, the court says, Florida’s law isn’t even close:

We’ll start with S.B. 7072’s content-moderation restrictions. While some of these provisions are likely subject to strict scrutiny, it is substantially likely that none survive even intermediate scrutiny. When a law is subject to intermediate scrutiny, the government must show that it “is narrowly drawn to further a substantial governmental interest . . . unrelated to the suppression of free speech.” FLFNB II, 11 F.4th at 1291. Narrow tailoring in this context means that the regulation must be “no greater than is essential to the furtherance of [the government’s] interest.” O’Brien, 391 U.S. at 377.

We think it substantially likely that S.B. 7072’s content-moderation restrictions do not further any substantial governmental interest—much less any compelling one. Indeed, the State’s briefing doesn’t even argue that these provisions can survive heightened scrutiny. (The State seems to have wagered pretty much everything on the argument that S.B. 7072’s provisions don’t trigger First Amendment scrutiny at all.) Nor can we discern any substantial or compelling interest that would justify the Act’s significant restrictions on platforms’ editorial judgment. We’ll briefly explain and reject two possibilities that the State might offer.

As for the argument that the state has to protect those poor, poor conservatives against “unfair” censorship, the court points out that’s not how this works:

The State might theoretically assert some interest in counteracting “unfair” private “censorship” that privileges some viewpoints over others on social-media platforms. See S.B. 7072 § 1(9). But a state “may not burden the speech of others in order to tilt public debate in a preferred direction,” Sorrell, 564 U.S. at 578–79, or “advance some points of view,” Pacific Gas, 475 U.S. at 20 (plurality op.). Put simply, there’s no legitimate—let alone substantial—governmental interest in leveling the expressive playing field. Nor is there a substantial governmental interest in enabling users—who, remember, have no vested right to a social-media account—to say whatever they want on privately owned platforms that would prefer to remove their posts: By preventing platforms from conducting content moderation—which, we’ve explained, is itself expressive First-Amendment-protected activity—S.B. 7072 “restrict[s] the speech of some elements of our society in order to enhance the relative voice of others”—a concept “wholly foreign to the First Amendment.” Buckley v. Valeo, 424 U.S. 1, 48–49 (1976). At the end of the day, preventing “unfair[ness]” to certain users or points of view isn’t a substantial governmental interest; rather, private actors have a First Amendment right to be “unfair”—which is to say, a right to have and express their own points of view. Miami Herald, 418 U.S. at 258.

How about enabling more speech? That’s not the government’s job either:

The State might also assert an interest in “promoting the widespread dissemination of information from a multiplicity of sources.” Turner, 512 U.S. at 662. Just as the Turner Court held that the must-carry provisions served the government’s substantial interest in ensuring that American citizens were able to access their “local broadcasting outlets,” id. at 663–64, the State could argue that S.B. 7072 ensures that political candidates and journalistic enterprises are able to communicate with the public, see Fla. Stat. §§ 106.072(2); 501.2041(2)(f), (j). But it’s hard to imagine how the State could have a “substantial” interest in forcing large platforms—and only large platforms—to carry these parties’ speech: Unlike the situation in Turner, where cable operators had “bottleneck, or gatekeeper control over most programming delivered into subscribers’ homes,” 512 U.S. at 623, political candidates and large journalistic enterprises have numerous ways to communicate with the public besides any particular social-media platform that might prefer not to disseminate their speech—e.g., other more-permissive platforms, their own websites, email, TV, radio, etc. See Reno, 521 U.S. at 870 (noting that unlike the broadcast spectrum, “the internet can hardly be considered a ‘scarce’ expressive commodity” and that “[t]hrough the use of Web pages, mail exploders, and newsgroups, [any] individual can become a pamphleteer”). Even if other channels aren’t as effective as, say, Facebook, the State has no substantial (or even legitimate) interest in restricting platforms’ speech—the messages that platforms express when they remove content they find objectionable—to “enhance the relative voice” of certain candidates and journalistic enterprises. Buckley, 424 U.S. at 48–49.

Another nice bit of language: the court says that the government can’t force websites to forgo using an algorithm to rank content (which is a big deal, as many states are trying to mandate exactly that):

Finally, there is likely no governmental interest sufficient to justify forcing platforms to show content to users in a “sequential or chronological” order, see § 501.2041(2)(f), (g)—a requirement that would prevent platforms from expressing messages through post-prioritization and shadow banning.
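For contrast, here’s a similarly hypothetical sketch (not anyone’s real code) of what that “sequential or chronological” requirement would mandate, next to the prioritized ordering platforms use today:

```python
from operator import itemgetter

def mandated_chronological_feed(posts: list[dict]) -> list[dict]:
    # What Fla. Stat. § 501.2041(2)(f)-(g) would require on a user's request:
    # strictly newest-first, with no editorial reordering and no
    # deprioritization ("shadow banning") of any post.
    return sorted(posts, key=itemgetter("timestamp"), reverse=True)

def platform_curated_feed(posts: list[dict], score) -> list[dict]:
    # What platforms actually do: order posts by their own ranking signals,
    # which is part of the expressive judgment the court protects.
    return sorted(posts, key=score, reverse=True)
```

The difference between those two functions is precisely the expressive choice the law would take away.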

Finally, there’s a great footnote that recognizes the problems we pointed out with regard to Texas’ law and the livestream of the mass murderer in Buffalo. The court recognizes how the same issue could apply in Florida:

Even worse, S.B. 7072 would seemingly prohibit Facebook or Twitter from removing a video of a mass shooter’s killing spree if it happened to be reposted by an entity that qualifies for “journalistic enterprise” status.

And, that’s basically it. As noted up top, there are a few fairly minor provisions that the court says should not be subject to the injunction, and we’ll have another post on that shortly. But for now, this is a pretty big win for the 1st Amendment, actual free speech, and the rights of private companies to moderate as they see fit.

Hilariously, Florida is pretending it won the ruling, because of the few smaller provisions that are no longer subject to the injunction. But this is a near-complete loss for the state, and a huge win for free speech.

Companies: ccia, netchoice


Comments on “11th Circuit Disagrees With The 5th Circuit (But Actually Explains Its Work): Florida’s Social Media Bill Still (Mostly) Unconstitutional”

24 Comments
Bruce C. says:

20/20 hindsight...

even though it’s 2022: Texas and Florida would have been smarter to go after “Big Tech” on anti-trust grounds. But at the time these laws were fabricated, Republicans were still pretending to be the “business friendly” party.

(Insert obligatory comment about “Republicans” and “smart” appearing in the same sentence.)

James Burkhardt (profile) says:

Re:

Not sure I agree. Democrat attempts to use anti-trust in this way keep running into issues with defining the ‘market’. Big tech, for all the talk of it being a monolith, occupies several radically different markets and if this bill is any indication, Anti-trust inquiries wouldn’t see a narrow market definition that holds up to scrutiny, even if Anti-trust was warranted based on that market definition.

Naughty Autie says:

Re:

In the present time, some Republicans have decided that it is better to censor viewpoints with which they disagree than to debate them like smart people. That do you?

Toom1275 (profile) says:

Move over “9th-circuit-shuts-down-PragerUwu’s-attempt-to-censor-Youtube’s-free-speech,” here’s a new good and correct ruling to bookmark and quote at illiterate anti-free-speech morons like Hyman and Chozen.

Anonymous Coward says:

Re: Re: Re:2

Maybe just add an animated “Under Construction” gif to your copypasta until you’re done completing it, even though it will never be complete…

That One Guy (profile) says:

Some legal arguments and laws have theme songs I guess

It’s the strangest thing: for some reason Weird Al’s ‘Everything you know is wrong’ kept running through my head as I read the court’s ruling ripping the law to pieces by showing how it was fatally flawed on everything.

Can’t say I’m surprised that the florida government is still trying to claim this as a win, after a spanking that thorough it’s not like they could do anything else since admitting that they got dragged over the coals would by necessity involve admitting to why the court basically laughed them out of the room.

Michael says:

1st Amendment, but not the 4th

This line is interesting …

“the basic principles of freedom of speech and the press, like the First Amendment’s command, do not vary when a new and different medium for communication appears”

… given that it appears to only apply to the 1st Amendment. Why is the 4th Amendment so variable, then? Technology means that many of my “personal effects” are stored in email, the cloud, or elsewhere, yet they don’t receive the same protections as the papers in my home safe.

Mike Masnick (profile) says:

Where is everyone?

So… what happened to our crew of regular commenters who insisted the 11th Circuit full of Republican appointees was clearly going to side with Florida?

It’s mighty quiet in here today.

Anonymous Coward says:

Re:

It’s mighty quiet in here today.

Noticed that myself. Been wondering when the usual trolls would show up and try to convince everybody that they are correct and the court is somehow wrong.

But crickets…

I am guessing that the points made by the court are somewhat non-arguable as they make it plain as day that social media is not some common carrier public service.

Toom1275 (profile) says:

tl;dr version:

State has a right to control how platforms moderate? Lol nope
Platforms are too big to be private actors? Try again.
Moderation is a radical leftist agenda? Lol First Amendment bitches!
Platforms are the government? Dafuq you smoking?
Platforms are like newspapers? Not even.
Platforms are dumb conduits like phone companies? Not in this reality.
Free speech requires a site to be open to every “opinion?” That’s not how this works.
Packingham? Completely irrelevant
Turner v FCC? Nope.
Political neutrality required? First Amendment calls bullshit.
Moderation isn’t free speech? Debunked.
Rumsfeld? Pruneyard? No relevance
Platforms are common carriers? Not to anyone literate.
The state can declare them common carriers anyway? Lies.
“Too big to speak freely?” Not even.
Fascists have a “compelling interest” to regulate platforms’ speech? Try harder.
Government-censorship-enforced “political fairness?” Illegal.
“More” “Speech?” None of your business.
“But Algorithms!” Hands off.
“Terrorism is sacrosanct!” This is America.

Lostinlodos (profile) says:

They kept the one good thing?

As long as community actions of down ranking remain unaffected…

The one thing I really support here is laying out the rules clearly. What actions a user makes that will trigger a site based action.

Yes STS, the troll argument—I simply don’t care. I prefer transparency.
It’s my personal opinion. And the route I choose. Violate the rules, go to the litterbox. Skirt around them and I adjust them. It’s worth the extra work for transparency.
