Republicans Blame CDA 230 For Letting Platforms Censor Too Much; Democrats Blame CDA 230 For Platforms Not Censoring Enough
from the which-is-it? dept
It certainly appears that politicians on both sides of the political aisle have decided that if they can agree on one thing, it’s that social media companies are bad, and that they’re bad because of Section 230, and that needs to change. The problem, of course, is that beyond that point of agreement, they disagree entirely on the reasons why. On the Republican side, you have people like Rep. Louie Gohmert and Senator Ted Cruz who are upset about platforms using Section 230’s protections to moderate content that those platforms find objectionable. Cruz and Gohmert want to amend CDA 230 to say that’s not allowed.
Meanwhile, on the Democratic side, we’ve seen Nancy Pelosi attack CDA 230, incorrectly saying that it’s somehow a “gift” to the tech industry because it allows them not to moderate content. Pelosi’s big complaint is that the platforms aren’t censoring enough, and she blames 230 for that, while the Republicans are saying the platforms are censoring too much — and incredibly, both are saying this is the fault of CDA 230.
Now another powerful Democrat, Rep. Frank Pallone, the chair of the House Energy and Commerce Committee (which has some level of “oversight” over the internet) has sided with Pelosi in attacking CDA 230 and arguing that companies are using it “as a shield” to not remove things like the doctored video of Pelosi:
.@Facebook’s failure to appropriately address intentional political disinformation harms its users, the public discourse, and our democracy. Sec 230 is meant to enable platforms to take down harmful content. It should not be a shield for inaction. https://t.co/HMJ9ARhKo9
— Rep. Frank Pallone (@FrankPallone) May 30, 2019
But, of course, the contrasting (and contradictory) positions of these grandstanding politicians on both sides of the aisle should — by itself — demonstrate why mucking with Section 230 is so dangerous. The whole point and value of Section 230 was in how it crafted the incentive structure. Again, it’s important to read both parts of part (c) of Section 230, because the two elements work together to deal with both of the issues described above.
(c) Protection for “Good Samaritan” blocking and screening of offensive material
(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
(2) Civil liability
No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).
It’s these two elements together that make Section 230 so powerful. The first says that we don’t blame the platform for any of the actions/content posted by users. This should be fairly straightforward. It’s about the proper application of liability to the party who actually violated the law, and not the tools and services they used to violate the law. Some people want to change this, but much of that push is coming from lawyers who just want bigger pockets to sue. It involves what I’ve referred to as “Steve Dallas lawsuits,” after the character in the classic comic strip Bloom County, who explains why you should always focus on suing whoever has the deepest pockets, no matter how tangential they are to the actual violation of the law.
But part (2) of the law is also important. It’s the part that actually allows platforms to moderate. Section 230 was an explicit response to the ruling in Stratton Oakmont v. Prodigy, in which a NY state judge held that because Prodigy wanted to provide a “family friendly” service, and therefore moderated out content it found objectionable (in order to support that “family friendly” goal), it automatically became liable for any of the content that was left up. But, of course, that’s crazy. The end result of such a rule would be one of two extremes: platforms wouldn’t do anything to moderate content, meaning everything would be a total free-for-all (you couldn’t have a “family friendly” forum at all, and every site would quickly fill up with spam, porn, harassment, and abuse), or platforms would restrict almost everything, creating a totally anodyne and boring internet.
The genius of Section 230 is that it enabled a balance that allowed for experimentation, including the ability to experiment with different forms of moderation. Everyone focuses on Facebook, YouTube and Twitter — which all take moderately different approaches — but Section 230 is also what allowed for the radically different approaches taken by other sites, like Wikipedia and Reddit (and even us at Techdirt). These sites use very different approaches, some of which work better than others, and much of what works is community-dependent. It’s that experimentation that is good.
But the very fact that both sides of the political aisle are attacking CDA 230, for completely opposite reasons, really should highlight why messing with CDA 230 would be such a disaster. If Congress moves the law in the direction that Gohmert/Cruz want, then you’d likely get many fewer platforms, and some would just be overrun by messes, while others would be locked down and barely usable. If Congress moves the law in the direction that Pelosi/Pallone seem to want, then you would end up with effectively the same result: much greater censorship as companies try to avoid liability.
Neither solution is a good one, and neither would truly satisfy the critics in the first place. That’s part of the reason why this debate is so silly. Everyone’s mad at these platforms for how they moderate, but what they’re really mad at is humanity. Sometimes people say mean and awful things. Or they spread disinformation. Or defamation. Those are real concerns. But there need to be better ways of dealing with them than Congress stepping in (against the restrictions the 1st Amendment places on it) and saying that internet platforms must either police humanity… or stop policing humanity altogether. Neither is a solution to the problems of humanity.