The Problem With The Otherwise Very Good And Very Important Eleventh Circuit Decision On The Florida Social Media Law
from the the-hole-in-the-donut dept
There are many good things to say about the Eleventh Circuit’s decision on the Florida SB 7072 social media law, including that it’s a very well-reasoned, coherent, logical, sustainable, precedent-consistent, and precedent-supporting First Amendment analysis explaining why platforms moderating user-generated speech still implicates their own protected rights. And not a moment too soon, while we wait for the Supreme Court to hopefully grant relief from the unconstitutional Texas HB20 social media bill.
But there’s also a significant problem with it: the court found only most of the provisions of SB 7072 presumptively unconstitutional, so some of the law’s less obvious, yet still pernicious, provisions have been allowed to go into effect.
These provisions include requirements that platforms disclose their moderation standards (§ 501.2041(2)(a)) (the court only took issue with the requirement to post an explanation for every moderation decision), disclose when the moderation rules change (§ 501.2041(2)(c)), disclose to users the view counts on their posts (§ 501.2041(2)(e)), disclose any free advertising given to candidates (§ 106.1072(4)), and give deplatformed users access to their data (§ 501.2041(2)(i)). The analysis gave short shrift to these provisions it allowed to go into effect, despite their burdens on the same editorial discretion the court overall recognized was First Amendment-protected, despite the extent to which they violate the First Amendment as a form of compelled speech, and despite how they should be preempted by Section 230.
Of course, the court did acknowledge that these provisions might yet be shown to violate the First Amendment. For instance, in the context of the data-access provision the court wrote:
It is theoretically possible that this provision could impose such an inordinate burden on the platforms’ First Amendment rights that some scrutiny would apply. But at this stage of the proceedings, the plaintiffs haven’t shown a substantial likelihood of success on the merits of their claim that it implicates the First Amendment. [FN 18]
And it made a somewhat similar acknowledgment for the campaign advertising provision:
While there is some uncertainty in the interest this provision serves and the meaning of “free advertising,” we conclude that at this stage of the proceedings, NetChoice hasn’t shown that it is substantially likely to be unconstitutional. [FN 24]
And for the other disclosure provisions as well:
Of course, NetChoice still might establish during the course of litigation that these provisions are unduly burdensome and therefore unconstitutional. [FN 25]
Yet because the court did not already recognize how these rules chill editorial discretion, they will now get the chance to do so. For example, it is unclear how a platform could even comply with them, especially a platform like Techdirt (or Reddit, or Wikimedia) that uses community-based moderation, whose moderating whims are impossible to know, let alone disclose, in advance of implementing them. Such a provision would seem to chill editorial discretion by making it impossible to choose such a moderation system, even when doing so aligns with the expressive values of the platform. (True, SB 7072 may not yet reach the aforementioned platforms, but that is little consolation if it means that the platforms it does reach could still be chilled from making such editorial choices.)
The analysis was also scant with respect to the First Amendment’s prohibition against compelled speech, which these provisions implicate by forcing platforms to say certain things. Although this prohibition supported the court’s willingness to enjoin the other provisions, the analysis glossed over how this constitutional rule should have applied to the disclosure provisions:
These are content-neutral regulations requiring social-media platforms to disclose “purely factual and uncontroversial information” about their conduct toward their users and the “terms under which [their] services will be available,” which are assessed under the standard announced in Zauderer. 471 U.S. at 651. While “restrictions on non-misleading commercial speech regarding lawful activity must withstand intermediate scrutiny,” when “the challenged provisions impose a disclosure requirement rather than an affirmative limitation on speech . . . the less exacting scrutiny described in Zauderer governs our review.” Milavetz, Gallop & Milavetz, P.A. v. United States, 559 U.S. 229, 249 (2010). Although this standard is typically applied in the context of advertising and to the government’s interest in preventing consumer deception, we think it is broad enough to cover S.B. 7072’s disclosure requirements—which, as the State contends, provide users with helpful information that prevents them from being misled about platforms’ policies. [p. 57-8]
And by not enjoining these provisions, the court has left the law free to compel platforms to publish information they weren’t already publishing, or even to significantly re-engineer their systems (such as to give users view count data).
In addition, the decision gave short shrift to how Section 230 preempts such requirements. This oversight may partly be due to the court finding it unnecessary to reach Section 230 in concluding that most of the law’s provisions should be enjoined (“Because we conclude that the Act’s content-moderation restrictions are substantially likely to violate the First Amendment, and because that conclusion fully disposes of the appeal, we needn’t reach the merits of the plaintiffs’ preemption challenge.” [p.18]).
But for the provisions where it couldn’t find the First Amendment to be enough of a reason to enjoin them, the court ideally should have moved on to this alternative basis before allowing them to go into effect. Unfortunately, it’s also possible that the court really didn’t recognize that Section 230 was a bar to them:
Nor are these provisions substantially likely to be preempted by 47 U.S.C. § 230. Neither NetChoice nor the district court asserted that § 230 would preempt the disclosure, candidate-advertising, or user-data-access provisions. It is not substantially likely that any of these provisions treat social-media platforms “as the publisher or speaker of any information provided by” their users, 47 U.S.C. § 230(c)(1), or hold platforms “liable on account of” an “action voluntarily taken in good faith to restrict access to or availability of material that the provider considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable,” id. § 230(c)(2)(A). [FN 26]
Fortunately, however, there will likely be opportunities to brief that issue more clearly in the future, as the case has now been remanded to the district court for further proceedings – this appeal concerned only whether the law was likely to be so legally dubious as to warrant being enjoined while it was challenged, but the challenge itself can continue. And it will happen in the shadow of this otherwise full-throated defense of the First Amendment in the context of platform content moderation.