from the 230-or-not-230-that-is-the-question dept
The biggest mistake people make about Section 230 is thinking that it is somehow a complicated law. In reality, its operation is not all that complex. Accordingly, determining whether it applies to any particular situation should not be all that difficult, even when we think about hard or edge cases. Ultimately what we care about is who imbued the objectionable quality into the content at issue. This question, put like this, gets right to the heart of what Section 230’s operation pivots on and keeps us from getting sidetracked by other considerations that might weaken the statute’s critical protection by suddenly making it seem a lot less applicable than it is.
When people incorrectly accuse Section 230 of being some sort of special development in law, one thing they frequently overlook is how, at its core, Section 230 leaves in place something the law has long recognized: direct liability. If someone has done something wrong, then the law can hold them responsible for it. Section 230 does nothing to overturn that general rule. What has always been more exceptional, however, is the notion of secondary liability, or whether someone can be held responsible for something that someone else has done wrong. The law has at times recognized such liability, although until recently it was largely an exception applied sparingly, and there are many good reasons for such restraint, including that it generally offends our sense of fairness to hold someone liable for something someone else has done.
Furthermore, and perhaps more importantly, holding them secondarily liable also threatens to chill whatever they have done that put them in association with the actual wrongdoer, which is a problem when it is something we would by and large ordinarily like them to do. In the case of Internet platforms, that means being available to help facilitate all the non-problematic content they can (as well as to minimize all the problematic content they can). Section 230 acts as a statutory barrier to finding platforms secondarily liable for how others have used their services, because we want to make sure platforms can be in a position to supply those services.
When it comes to applying Section 230, however, the complication arises from figuring out “who did it,” or, more specifically, who created the content at the center of a dispute. If it were the platform itself, then Section 230 would not apply, because of course the platform should have to answer for its own actions. But if some other third party were responsible, then Section 230 should apply to insulate the platform from any secondary liability arising from that party’s behavior.
Sometimes the answer to the “who did it” question is obvious. But where the relationship between platforms and the content they facilitate is more nuanced or sophisticated, there can be a temptation to overcomplicate the answer to “who created the content” by treating any assistance provided to those who did create it as somehow sharing in responsibility for that creation. The problem with too easily finding that platforms have played an authorial role in the development of others’ content is that it can swallow the whole rule and make it so that Section 230 could almost never apply. Simply asking whether others’ content would exist “but for” the platform is not enough, because the answer would of course almost always be no. Indeed, that’s why we bother to protect platforms with Section 230 in the first place: people need them in order to be able to create their online content. But if the important help platforms provide others to create their own content could be found to be the authorial cause of the content those others created, then Section 230 could never apply to platforms to enable them to provide that help.
When we ask “who did it,” or who created the content at issue and so should be directly responsible for it, we must actually ask a more careful question if we want a meaningful answer that leaves Section 230 the reliably protective law it was intended to be. Framing the inquiry into “who created the content” as “who imbued the content at issue with the objectionable quality” applies that care. It zeroes in on what we need to know to figure out whether Section 230 applies by keeping us focused on the specific objection at issue, the act of making the content objectionable, and the full range of objections content could excite, valid or otherwise, any of which could prompt a Section 230 defense.
Specificity is important because, as discussed above, if we think about Section 230 in terms of content creation generally, it can too easily seem like any platform has had a hand in creating the content it facilitates just by virtue of having facilitated it, which would make Section 230 useless. For Section 230 to be meaningful, the inquiry into authorship needs to be tied to the specific content at issue. More than that, it should also be focused on the specific objectionable quality of that content, because if there is to be any liability arising from that content it will be over that objectionable quality and not any of the content’s non-objectionable aspects. It would therefore make little sense to condition platforms’ Section 230 protection, and potentially risk burying them in litigation, on the platforms’ alleged role in creating the content’s non-objectionable aspects when those aspects wouldn’t end up mattering for liability purposes anyway.
Remaining focused on the act of making objected-to content so objectionable also helps us not get distracted by the things platforms do to intermediate others’ content in a useful way. Some platforms, for instance, tend to attract certain types of content that may be particularly contentious and thus prone to objection, but if merely attracting contentious content could be deemed the same as creating it, then platforms would be deterred from being available to facilitate any of it, no matter how lawful or beneficial that content may be. Furthermore, as a practical matter, it is often desirable for platforms to moderate and curate content created by others to better serve their users, including by prioritizing the display of what they might want to see and removing what they don’t.
Section 230 also protects and even encourages platforms to perform these tasks, but engaging in them sometimes necessarily requires platforms to interact heavily with the content they are facilitating. If that interaction could foreclose a Section 230 defense, it would deter platforms from intermediating others’ content as effectively as we would want them to, since how they intermediated it could jeopardize the liability protection they depend on to do it. Logically it would also make no sense for such interaction to amount to content creation, because the content they interact with obviously already exists. If we instead focus the content creation question on who imbued the content with its objectionable quality, the inquiry becomes more useful in helping us see who should be responsible for it. After all, it can’t have been the platform interacting with the content if that quality was already there, and so responsibility for it should still lie with whoever caused it to be there in the first place.
Meanwhile, one of the keys to this “imbuing” test is phrasing it as an inquiry into the particular “objectionable quality” of the content at issue, and not, say, the content’s “wrongfulness” or “illegality,” because people can easily object to expression that is perfectly legal. Section 230 works to spare platforms from having to defend themselves against any attempt to hold them liable for content another has created, regardless of how valid the complaint. It is not just about relieving platforms of ultimate liability but also about sparing them the cost of expending resources to defend themselves against any challenge over content created by others. Evaluating whether Section 230 should apply therefore should not depend on the specific complaint raised, just that a complaint was raised.
This “imbued” test leaves us with a measure that can still hold platforms responsible when truly warranted, but not so casually that they can no longer perform the needed function of intermediating others’ content. The test is also extremely flexible. As we think about new platforms, their services, and the new technologies that enable them, the test, like Section 230 itself, offers a framework that can readily scale to help determine where liability should lie: directly on the party responsible, and not on the platforms whose services Section 230 exists to protect.