Appeals Court Doubles Down On Dangerous Ruling: Says Website Can Be Blamed For Failing To Warn Of Rapists
from the bad-cases-make-bad-law dept
Back in late 2014, we wrote about a case where the somewhat horrifying details were likely leading to a bad result that would undermine Section 230 of the CDA (the most important law on the internet). Again, the details here are appalling. Two men used other people's accounts on a website called "Model Mayhem" to reach out to aspiring models, lure them to their location in South Florida, drug them, and film themselves having sex with the drugged women, then offered the recordings as online porn. Yes, absolutely everything about this is horrifying and disgusting. But here's where the case went weird. A victim of this awful crime decided to sue Internet Brands, the large company that had purchased Model Mayhem, arguing that it knew about these creeps and had failed to warn users of the service. Internet Brands argued that under Section 230 it was not liable, but the appeals court said no. The case was then reheard en banc (before a larger slate of 9th Circuit judges), and the court has now, once again, said that Section 230 does not apply.
This case has been a favorite of those looking to undermine Section 230, so those folks will be thrilled by the result, but everyone who supports an open internet should be worried. The rule here is basically that sites are protected from being held liable for the actions of their users… unless those users do something really horrible. Then things change. It's also important to note that the two sick creeps who pulled off this scam, Lavont Flanders and Emerson Callum, weren't actually members of the Model Mayhem site. They would just use the accounts of others to reach out to people, so the site had even less control.
To get around the plain language and caselaw history of Section 230, the court has to parse its words quite carefully. It starts out by noting that Internet Brands clearly qualifies for the safe harbors as an internet platform. However, it bends over backwards to reinterpret a key part of CDA 230, which says you cannot treat such a platform "as a publisher or speaker" of information posted by users. Here, the court decides that a law requiring services to warn of potential danger does no such thing:
Jane Doe's claim is different, however. She does not seek to hold Internet Brands liable as a "publisher or speaker" of content someone posted on the Model Mayhem website, or for Internet Brands' failure to remove content posted on the website. Jane Doe herself posted her profile, but she does not seek to hold Internet Brands liable for its content. Nor does she allege that Flanders and Callum posted anything to the website. The Complaint alleges only that "JANE DOE was contacted by Lavont Flanders through MODELMAYHEM.COM using a fake identity." Jane Doe does not claim to have been lured by any posting that Internet Brands failed to remove. Internet Brands is also not alleged to have learned of the predators' activity from any monitoring of postings on the website, nor is its failure to monitor postings at issue.
Instead, Jane Doe attempts to hold Internet Brands liable for failing to warn her about information it obtained from an outside source about how third parties targeted and lured victims through Model Mayhem. The duty to warn allegedly imposed by California law would not require Internet Brands to remove any user content or otherwise affect how it publishes or monitors such content.
In other words, the court reasons that because the law merely compels a form of speech (a duty to warn people about creeps on your service) rather than a duty to suppress speech, Section 230 doesn't apply here. Bizarrely, the court points to the so-called "Good Samaritan" clause in CDA 230 (CDA 230(c)(2)), which says that actions a site takes to moderate content cannot be used to create liability around other content on the site, as further proof for its position:
Jane Doe's failure to warn claim has nothing to do with Internet Brands' efforts, or lack thereof, to edit, monitor, or remove user generated content. Plaintiff's theory is that Internet Brands should be held liable, based on its knowledge of the rape scheme and its "special relationship" with users like Jane Doe, for failing to generate its own warning. Thus, liability would not discourage the core policy of section 230(c), "Good Samaritan" filtering of third party content.
The court also rejects the idea that this ruling might chill free speech by leading to greater monitoring and censorship, basically tossing the concern aside as unlikely to be a big deal:
It may be true that imposing any tort liability on Internet Brands for its role as an interactive computer service could be said to have a "chilling effect" on the internet, if only because such liability would make operating an internet business marginally more expensive. But such a broad policy argument does not persuade us that the CDA should bar the failure to warn claim. We have already held that the CDA does not declare "a general immunity from liability deriving from third-party content." Barnes, 570 F.3d at 1100. "[T]he Communications Decency Act was not meant to create a lawless no-man's-land on the Internet." Roommates.Com, 521 F.3d at 1164. Congress has not provided an all purpose get-out-of-jail-free card for businesses that publish user content on the internet, though any claims might have a marginal chilling effect on internet publishing businesses. Moreover, the argument that our holding will have a chilling effect presupposes that Jane Doe has alleged a viable failure to warn claim under California law. That question is not before us and remains to be answered.
Some will, undoubtedly, argue that this limiting of Section 230 is a good thing, either because they already dislike 230, or because they believe that the behavior described above was so beyond the pale that it's fine to punish the platform for it. That is the wrong lesson to draw. No one denies that the two individuals who committed these acts deserve to be in jail (for a long time). But blaming the platform they used for not posting a warning is extreme, and it confuses how Section 230 is supposed to work. The key is to accurately place liability on the parties who caused the harm. That wasn't the website, and it shouldn't be blamed.
You can now expect plenty of plaintiffs to cite this ruling as they look for any way around Section 230's protections.