DOJ's Latest Ideas For Section 230 Reform Dumber Than Even I Expected
from the what-the-actual-fuck,-guys? dept
Ever since the DOJ started attacking Section 230 of the Communications Decency Act, it was obvious that it was simply a ploy to attack big tech companies that are (unfairly) seen as being “anti-conservative.” Remember, Section 230 explicitly exempts federal crimes — the kind of law enforcement the DOJ is engaged in. That is, there is literally nothing in Section 230 stopping the DOJ from doing its job. But as Barr and the DOJ continued to attack 230, it also became clear that this was going to be a wedge issue he could use to undermine encryption — encryption that keeps us all safe.
Last month, the DOJ hosted a “workshop” regarding Section 230, and again Barr and the DOJ’s agenda became quite clear. For all the talk of “dangers” to children online, the DOJ ignored the fact that it has failed to abide by Congressional mandates regarding fighting child sexual exploitation, and Congress itself has failed to fund programs it has put forth to deal with the issue.
At the DOJ’s 230 hearing, plenty of speakers highlighted why messing with 230 would create all sorts of problems. Unsurprisingly, the DOJ has ignored all of that, and sent out Deputy Attorney General Jeffrey Rosen to pitch four changes to Section 230, each one dumber and more counterproductive than even I had expected. The speech starts out with a misleading history of antitrust law — the other big tool in the DOJ’s toolbox to whack at internet companies — and then gets to 230. If I have a chance I may get back to Rosen’s antitrust discussion, but to keep this post from getting too long, we’ll just focus on the 230 argument (ignoring the fact that evidence shows that 230 increases competition, rather than diminishes it).
Shifting gears just a little, as we focus on market-leading online platforms, there is another major issue that has come to the forefront: Section 230 of the Communications Decency Act of 1996. Section 230 has played a role in both the good and the bad of the online world. It has no doubt contributed to new offerings and growth online. At the same time, as DOJ is the agency tasked with protecting the American public through enforcing the law, we are concerned that Section 230 is also enabling some harm.
“Some harm” carries a lot of weight here. Lots of things “enable some harm.” Not all of them require massive structural changes. After giving some background on 230 and why it’s been useful, he then turns to “the dark side.”
But there is a dark side, too. To say the least, the online world has changed a lot in the 25 years since the original drafting of Section 230. The Internet is no longer made up of the rudimentary bulletin boards and chat rooms of AOL, CompuServe, and Prodigy, but instead, it is now a vital and ever-present aspect of both personal and professional modern lives. Indeed, it is now quaint to think of the likes of CompuServe hosting online comments. Instead, major online platforms now actively match us with news stories and friends and, whether by algorithm or otherwise, effectively choose much of what our children see, and enable a seemingly limitless number of people to connect with us or with our children. And platforms are often themselves speakers and publishers of their own content, and not mere forums for others to communicate.
It’s not that quaint. Many sites — such as this one — still rely on 230 daily. And when platforms are speakers and publishers of their own content, that content is not covered by 230. So it’s not clear why that matters.
Now, a quarter century after its enactment, there also is recognition that Section 230 immunity has not always been a force for good, particularly in light of some of the extraordinarily broad interpretation given to it by some courts. For example, platforms have been used to connect predators with vulnerable children, to facilitate terrorist activity, and as a tool for extreme online harassment.
And, again, the DOJ has failed to prosecute many of the situations in which that conduct violated federal law. That’s not on 230, that’s on the DOJ. It also leaves out that 230 is what allows platforms to put in place programs to protect children, remove terrorist activity, and block “extreme online harassment.” Rather than point that out, Rosen falsely suggests that 230 is to blame for that content, rather than the reason it can be blocked.
The drafters of Section 230 might be surprised by this development. Remember, Section 230 was but one provision in the much larger Communications Decency Act of 1996. As its name suggests, the primary aim of the Communications Decency Act was to promote decency on the Internet and to create a safe environment for children online.
The drafters — Chris Cox and Ron Wyden — are both still with us. And both of them have made it clear in recent days that, no, they are not “surprised by this development.” As for why 230 was a part of the Communications Decency Act, well, Rosen should read Jeff Kosseff’s book. The CDA was a separate act, and 230 was attached to it by Cox and Wyden precisely to enable platforms to moderate, since without it, following the ruling in the Stratton Oakmont v. Prodigy case, moderation would have been effectively impossible.
As it turned out, the Supreme Court rejected most of the Communications Decency Act on First Amendment grounds. The most significant piece that survived was Section 230. But rather than furthering the purposes of its underlying bill, some scholars have argued that Section 230 has instead immunized platforms “where they (1) knew about users’ illegal activity, deliberately refused to remove it, and ensured that those responsible could not be identified; (2) solicited users to engage in tortious and illegal activity; and (3) designed their sites to enhance the visibility of illegal activity and to ensure that the perpetrators could not be identified and caught.”
Those scholars are wrong. Nearly every major platform works hard to remove such content — and, again, if it violates federal law, nothing has ever stopped the DOJ from enforcing the law.
To address these concerns and the others that have been raised, we see at least four areas that are potentially ripe for engagement. First, as a threshold matter, it would seem relatively uncontroversial that there should be no special statutory immunity for websites that purposefully enable illegality and harm to children. Nor does someone appear to be a “Good Samaritan” if they set up their services in a way that makes it impossible for law enforcement to enforce criminal laws. In these particular situations, why should not the website or platform have to defend and justify the reasonableness of their conduct on the merits just like businesses operating outside the virtual world?
This is the most disingenuous bit in the whole discussion. Rosen and the DOJ are obnoxiously linking two separate issues into a misleading “but think of the children” argument. Separately, the word “purposefully” is carrying a lot of water in this paragraph as well — especially since the EARN IT act does not say “purposefully” but rather has a “reckless” standard (which is already probably unconstitutional). The second half of the paragraph, about the “Good Samaritan” part is a direct attack on encryption — basically saying that if you offer end-to-end encrypted services, you should lose 230 protections. This is bizarre and stupid at the same time.
First, messaging services rarely need 230 protections anyway, since those communications are private, between users. There are no moderation issues in the first place. Second, framing encryption that keeps everyone safe as setting up a service to make “it impossible for law enforcement to enforce criminal laws” is super misleading. Encryption protects us all, and law enforcement has a ton of other tools to do its job. The problem is that the DOJ hasn’t used those tools and now just wants to whack tech companies.
Finally, the last line is utter bullshit. It’s framed in a manner to suggest that companies offering end-to-end encryption are no different than those (never actually identified) platforms who “purposefully enable illegality and harm to children” and then saying they should have to “justify the reasonableness of their conduct.” You shouldn’t have to justify protecting your users from getting hacked. It’s self-evident to everyone but this DOJ.
Next, Rosen tries to get around the fact that the DOJ has failed to actually do its job by still blaming 230… because it’s blocking civil cases.
Second, the Department of Justice is also concerned about Section 230’s impacts on our law enforcement function, and the law enforcement efforts of our partners throughout the executive branch. In our discussions with scholars and members of the public on this topic, many are surprised to learn that Section 230 is increasingly being claimed as a defense against the federal government in civil actions. To be clear, Section 230 has a carve-out for certain federal criminal enforcement. But not all problems can be solved by federal criminal law. Federal civil actions play a very important role in their own right. The increasing invocation of Section 230 in federal civil enforcement actions often goes beyond the purpose of the Communications Decency Act, and can undermine the goals of 230 itself.
Can he point to an actual example of a useful, reasonable, non-crazy civil action blocked by 230? The very fact that he doesn’t give any examples should give you that answer. 230 protects against frivolous, ambulance-chasing civil suits — the kind we’re already seeing from the last time we amended 230 — not legitimate lawsuits.
Third, we are concerned about expansions of Section 230 into areas that have little connection to the statute’s original purpose. As I mentioned earlier, the core of Section 230 concerns defamation and other speech-related torts. And for good reason: The restaurant review platform, for example, has no idea whether the user is right when he says the soup was cold or when she says the service was poor. Nor does the social media site have any idea whether the nosy neighbor’s online comments about the new person down the block are true or not. Civil immunity from tort litigation for online platforms in these instances makes some sense. The alternative would be a quasi-heckler’s veto, where the restaurant or neighbor could complain about the comments, and the platform would be forced to take them down for fear of civil liability.
But some websites have tried to transform Section 230 into an all-purpose immunity for claims that are far removed from speech. For example, some platforms have argued that Section 230 permits them to circumvent or ignore city ordinances on licensing of rental properties. While these types of arguments have not always succeeded, they demonstrate the potentially overbroad scope that some advocates have given the immunity.
This is a misleading summary of a complex issue. What’s at issue is whether or not a service like Airbnb can be held liable for user listings on the site. But user listings are speech. Cities have every right to go after those actually posting the listings — since they’re the ones breaking the law. But many cities are trying to go after Airbnb directly, since it’s easier to go after just one player.
And here’s the big thing that Rosen absolutely leaves out: so far Airbnb has been losing in court. So for all his concerns that 230 is being used “beyond” what it should cover, the courts have already shut that aspect down (in my view, the courts have gone too far, in fact).
Fourth, we are concerned about the extent to which platforms have expanded the use of Section 230 to immunize taking down content beyond the types listed in the statute. Under the Good Samaritan provision, platforms have the ability to remove content that they have a “good faith” belief is “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” We are told that some platforms treat this provision as a blank check, ignoring the “good faith” requirement and relying on the broad term “otherwise objectionable” as carte blanche to selectively remove anything from websites for other reasons and still claim immunity.
Perhaps there needs to be a more clear definition of “good faith,” and the vague term “otherwise objectionable” should be reconsidered. Of course, platforms can choose whether or not to remove any content on their websites. But should they automatically be granted full statutory immunity for removing lawful speech and given carte blanche as a censor if the content is not “obscene, lewd, lascivious, filthy, excessively violent, or harassing” under the statute?
If the attack on encryption is the most ridiculous part, this part is the most frivolous. First of all, having the government determine whether moderation choices are made in “good faith” or fall outside of “obscene, lewd, lascivious, filthy, excessively violent, or harassing” under the statute would almost certainly violate the 1st Amendment. Second, the removal of “otherwise objectionable” would create a massive problem for online moderation, and would, in fact, bring back the “extreme online harassment” he was complaining about just a few paragraphs earlier.
Of course, we all know what is being dog-whistled in this prong: he wants to penalize platforms for “anti-conservative bias” and push for some nonsensical, impossible “neutral” standard for platforms that want 230 protections. But the fact that this would wipe out the ability to block harassment, at the same time he’s complaining about harassment, should show you how brain dead and backwards this proposal is.
This is not a serious proposal, but in this time and place, with the DOJ’s backing, it has a serious chance of being put into law. It would devastate large parts of the internet, but that seems like acceptable collateral damage for a DOJ that is focused not on what’s actually best for society and the economy, but rather on what’s best for itself.