Busting Still More Myths About Section 230, For You And The FCC
from the human-readable-advocacy dept
The biggest challenge we face in advocating for Section 230 is how misunderstood it is. Instead of getting to argue about its merits, we usually have to spend our time disabusing people of their mistaken impressions about what the statute does and how. If people don’t get that part right then we’ll never be able to have a meaningful conversation about the appropriate role it should have in tech policy.
It’s particularly a problem when it’s a federal agency getting these things wrong. In our last comment to the FCC we therefore took issue with some of the worst falsehoods the NTIA had asserted in its petition demanding the FCC somehow seize imaginary authority it doesn’t actually have to change Section 230. But in reading a number of public comments in support of its petition, it became clear that there was more to say to address these misapprehensions about the law.
The record developed in the opening round of comments in the [FCC’s] rulemaking reflects many opinions about Section 230. But opinions are not facts, and many of these opinions reflect a fundamental misunderstanding of how Section 230 works, why we have it, and what is at risk if it is changed.
These misapprehensions should not become the basis of policy because they cannot possibly be the basis of *good* policy. To help ensure they will not be the predicate for any changes to Section 230, the Copia Institute submits this reply comment to address some of the recurrent myths surrounding Section 230, which should not drive policy, and reaffirm some fundamental truths, which should.
Our exact reply comment is attached below. But because we want to make sure it isn't just these agencies that understand how this important law works, rather than merely summarizing it here, we're including a version of it in full below.
As we told the FCC, there are several recurring complaints that frequently appear in the criticism leveled at Section 230. Unfortunately, most of these complaints are predicated on fundamental misunderstandings of why we have Section 230, or how it works. What follows is an attempt to dispel many of these myths and to explain what is at risk by making changes to Section 230, especially any changes born out of these misunderstandings.
To begin with, one type of flawed argument against Section 230 tends to be premised on the incorrect notion that Section 230 was intended to be some sort of Congressional handout designed to subsidize a nascent Internet. The thrust of the argument is that now that the Internet has become more established, Section 230 is no longer necessary and thus should be repealed. But there are several problems with this view.
For one thing, it is technically incorrect. Prodigy, the platform jeopardized by the Stratton Oakmont decision, which prompted the passage of Section 230, was already more than ten years old by that point and handling large amounts of user-generated content. It was also owned by large corporate entities (Sears and IBM). It is true that Congress was worried that if Prodigy could be held liable for its users' content it would jeopardize the ability of new service providers to come into being. But the reason Congress had that concern was because of how that liability threatened the service providers that already existed. In other words, it is incorrect to frame Section 230 as a law designed only to foster small enterprises; from the very beginning it was intended to protect entrenched corporate incumbents, as well as everything that would follow.
Indeed, the historical evidence bears out this concern. For instance, in the United States, where, at least until now, there has been much more robust platform protection than in Europe, investment in new technologies and services has vastly outpaced that in Europe. (See the Copia Institute’s whitepaper Don’t Shoot the Message Board for more information along these lines.) Even in the United States there is a correlation between the success of new technologies and services and the strength of the available platform protection, where those that rely upon the much more robust Section 230 immunity do much better than those that depend on the much weaker Digital Millennium Copyright Act safe harbors.
Next, it is also incorrect to say that Section 230 was intended to be a subsidy for any particular enterprise, or even any particular platform. Nothing in the language of Section 230 causes it to apply only to corporate interests. Per Section 230(f)(2) the statute applies to anyone meeting the definition of a service provider, as well as any user of a service provider. Many service providers are small or non-profit, and, as we've discussed before, can even be individuals. Section 230 applies to them all, and all will be harmed if its language is changed.
Indeed, the point of Section 230 was not to protect platforms for their own sake but to protect the overall health of the Internet itself. Protecting platforms was simply the step Congress needed to take to achieve that end. It is clear from the preamble language of Section 230(a) and (b), as well as the legislative history, that what Congress really wanted to do with Section 230 was simultaneously encourage the most good online expression, and the least bad. It accomplished this by creating a two-part immunity that shielded platforms both from liability arising from carrying speech and from liability arising from removing it.
By pursuing a regulatory approach that was essentially carrot-based, rather than stick-based, Congress left platforms free to do the best they could to vindicate both goals: intermediating the most beneficial speech and allocating their resources most efficiently to minimize the least desirable. As we and others have many times pointed out, including in our earlier FCC comment, even being exonerated of liability for user content can be cripplingly expensive. Congress did not want platforms to be obliterated by the costs of having to defend themselves against liability for their users' content, or to have their resources co-opted by the need to minimize their own liability instead of being able to direct them to running a better service. If platforms had to fear liability for either their hosting or moderation efforts it would force them to do whatever they needed to protect themselves, but at the expense of being effective partners in achieving Congress's twin aims.
This basic policy math remains just as true in 2020 as it was in the 1990s, which is why it is so important to resist these efforts to change the statute. Undermining Section 230's strong platform protections will only undermine the overall health of the Internet and do nothing to help there be more good content and less bad online, which even the statute's harshest critics often at least ostensibly claim to want.
While some have argued that platforms that fail to be optimal partners in meeting Congress's desired goals should lose the benefit of Section 230's protection, there are a number of misapprehensions baked into this view. One misapprehension is that Section 230 contains any sort of requirement for how platforms moderate their user content; it does not. Relatedly, it is a common misconception that Section 230 hinges on some sort of "platform v. publisher" distinction, immunizing only "neutral platforms" and not anyone who would qualify as a "publisher." People often mistakenly believe that a "publisher" is the developer of the content, and thus not protected by Section 230. In reality, however, as far as Section 230 is concerned, platforms and publishers are one and the same, and therefore all are protected by the statute. The term "publisher" that appears in certain court decisions merely reflects the sense of the word "publisher" as "one that makes public," which is of course the essential function a platform performs in distributing others' speech. But content distribution is not the same thing as content creation. Section 230 would not apply to the latter, but it absolutely applies to the former, even if the platform has made editorial decisions with respect to that distribution. Those choices still do not amount to content creation.
In addition, the idea that a platform's moderation choices can jeopardize its Section 230 protection misses the fact that it is not Section 230 that gives platforms the right to moderate however they see fit. As we explained in our previous comment and on many other occasions, the editorial discretion behind content moderation decisions is protected by the First Amendment, not Section 230. Eliminating Section 230 will not take away platforms' right to exercise that discretion. What it will do, however, is make it practically impossible for platforms to avail themselves of this right, because it will force them to expend their resources defending themselves. They might eventually win, but, as we earlier explained, even exoneration can be an extinction-level event for a platform.
Furthermore, it would effectively eviscerate the benefit of the statute if its protection were conditional. The point of Section 230 is to protect platforms from the crippling costs of litigation; if they had to litigate to find out whether they were protected or not, there would be no benefit and it would be as if there were no Section 230 at all. Given the harms to the online ecosystem Section 230 was designed to forestall, this outcome should be avoided.
All of this boils down to one essential truth: the NTIA petition should be rejected, and so should any other effort to change Section 230, especially one that embraces these misunderstandings.