Nancy Pelosi Joins Ted Cruz And Louie Gohmert In Attacking CDA 230

from the not-great dept

Well, it appears that the attacks on Section 230 of the CDA are now officially bipartisan. Following the path of Republican Rep. Louie Gohmert and Senator Ted Cruz, we now have Democratic Speaker of the House Nancy Pelosi deciding it's time to attack Section 230 of the CDA by completely misrepresenting what it does, why it does it, and what it means to the internet. In a podcast with Kara Swisher, Pelosi said the following:

“230 is a gift to them, and I don’t think they are treating it with the respect that they should,” she said. “And so I think that that could be a question mark and in jeopardy. ... For the privilege of 230, there has to be a bigger sense of responsibility on it, and it is not out of the question that that could be removed.”

This is wrong on so many levels. Section 230 is not a "gift" to the tech companies. It's a gift to the public and to our ability to speak freely on the internet. Section 230 is what enables all the websites that let us speak out without first having to get what we want to say approved.

And to argue that companies don't "respect" Section 230 is weird, given that internet companies have spent basically the past 20 years fighting for Section 230 and explaining why it was so important, while almost everyone else downplayed it, didn't care about it, or didn't understand it. The only internet company right now that doesn't seem to "respect" Section 230 would be Facebook, which caved in and supported chipping away at Section 230's important protections.

Look, it is completely fair to argue that the big internet companies have lots of very real problems -- including questions about how they treat their users and how they handle privacy. But the focus on Section 230 is bizarre and misguided. And attacking it in this way will do exactly the opposite of what Pelosi seems to think it will. Removing Section 230 won't bring about more competition. It won't make the companies "act better." Rather, stripping 230 protections means you won't get smaller companies building services to compete with Facebook and Google, because the liability risk will be far too great. Facebook and Google can afford the fight. Others cannot.

Stripping 230 protections won't encourage companies to act better. It will encourage them to either not accept any user-generated content (removing the key communications function of the internet) or to stop moderating entirely, meaning that you end up with just the worst parts of the internet -- spam-filled, troll-filled garbage. Anyone who knows the first thing about Section 230, and why it was put in place, understands this. Unfortunately, the idea persists that Section 230 was a "gift" to the internet companies. It was not. It was, and is, a gift to the internet itself -- meaning to all of us who use it.

But, given that it's now a bipartisan thing to misrepresent and attack CDA 230, perhaps we're reaching the end of the open internet experiment.

Filed Under: cda 230, free speech, intermediary liability, internet, nancy pelosi, responsibility, section 230


Reader Comments

The First Word

Stripping 230 protections won't encourage companies to act better. It will encourage them to either not accept any user-generated content (removing the key communications function of the internet) or to stop moderating entirely, meaning that you end up with just the worst parts of the internet -- spam-filled, troll-filled garbage. Anyone who knows the first thing about Section 230, and why it was put in place, understands this.

This is absolutely correct, and it’s sad to see politicians either display ignorance of the origins and purpose of 47 U.S.C. § 230 or just outright lie about it in the belief that it will gain them some minor advantage (which it won’t).

To briefly recap: prior to the enactment of the safe harbor, there were three applicable legal precedents. The first was the old rule that the publisher of defamatory content was responsible for it just as the author was, because the publisher had the opportunity to review it and verify it. The second was Cubby, Inc. v. CompuServe Inc., 776 F. Supp. 135 (S.D.N.Y. 1991), which held that an online service hosting defamatory content was not responsible for it if it was uploaded by users without the knowledge or approval of the service. Basically, this protected sites so long as they didn’t moderate. The third was Stratton Oakmont, Inc. v. Prodigy Services Co., 1995 WL 323710 (N.Y. Sup. Ct. 1995), which held that if an online service moderated anything at all, it was liable as a publisher even for material it had approved, overlooked, or judged in error.

The result was predictable: the only two safe options were to 1) not moderate anything, letting ads, spam, defamation, hate speech, and the like proliferate, or 2) not allow posting at all, denying even benign users a voice.

At about the same time, Congress decided it wanted online services to take voluntary steps to remove porn from the internet. But none of the services was stupid enough to try: since no service could moderate everything perfectly, Stratton Oakmont meant the only safe choices were still to moderate nothing or to not allow posting.

Exasperated, Congress gave the services protection: imperfect moderation would not be held against them, and since Congress could not compel moderation, each site would decide for itself how much or how little to do. Thus, a site could remove porn and spam and malware but still allow users to talk with one another without carefully policing every single post.

Cutting into this protection for any reason will just put us back in the earlier position of allowing everything or allowing nothing, with no in between.

Complaints about free speech, political bias, etc., are totally ridiculous and should be ignored. That’s not what the issue is.

—cpt kangarooski



    Stephen T. Stone (profile), 12 Apr 2019 @ 11:22am

    …fine, I’ll bite.

    asserts that corporations get to CONTROL all access

    Yes, that is true. The company that owns Twitter can revoke your access to Twitter if it so desires. You are not owed a platform for your speech by anyone else, least of all Twitter.

    Now, if you want to discuss the idea of ISPs, domain registrars, and hosting companies “control[ling] all access”, that would be a discussion worth having.

    NO ONE is mistaking a user's opinion for the "platform". There is NO direct or implied "association" that justifies the unilateral control of American's speech by mega-corporations

    Three things.

    1. Of course no one is mistaking the message for the platform.

    2. That said, if the platform allows certain kinds of messages to flourish without intervention, one could reasonably assume that the platform neither cares about nor minds being associated with those messages. If Twitter decided not to ban White supremacists, and White supremacist messaging became more widespread, the idea that Twitter at least tolerates White supremacists would not be an unbelievable proposition.

    3. Control of a privately-owned platform, even one open to the public, rests in the hands of its owners and operators. If you can come up with a good reason why a Mastodon instance such as octodon.social should be controlled by anyone other than the people who currently own and operate it, feel free to give it.

    If you accept the Section 230 protections and have immunity, then you must NOT exercise editorial control.

    By this logic, no privately-owned platform that is open to the public could moderate for anything other than nakedly illegal content, since moderating otherwise-legal speech that the owners/operators do not want on the platform would amount to “editorial control”. A forum for Black Lives Matter supporters, for example, would have no power to erase White supremacist propaganda from the forum out of the fear that doing so would be “editorial control”.

    sites are REQUIRED to remove comments that violate common law / statute terms, that part is NOT to be optional, but that does not empower absolute and arbitrary control to remove comments that are within common law.

    Please show me the law, statute, or court ruling (i.e., common law) that says a platform cannot remove legally protected speech for arbitrary reasons. Be sure to offer the necessary citations required for the verification of facts. (Note: An answer that contains only your opinion on the matter is playground horseshit that has no place in this discussion.)

    Similarly, any site can add text to a page to disclaim.

    Most platforms and social media services already have such disclaimers, even if they are not front-and-center on the main page. That fact does not revoke a platform’s 230 immunity and prevent it from policing speech on the platform.

    Section 230 is the practical way for "natural persons" to have a site, but the hosts are explicitly immunized, that's THE DEAL.

    No. Just…no. Section 230 is the practical way for platforms such as Twitter, Imgur, DeviantArt, and other UGC-heavy sites to stay open for “business” without facing legal liability for what users (and only users) do on those platforms. If you have a website that does not allow for UGC, you would have no reason to invoke Section 230, for you would be the sole publisher and user of your site. (Of course, that would also mean you carry all the legal liability for what shows up on your site.)

    sites assert total arbitrary control against those they regard as political opponents which is simply corporate censorship

    If’n you hate the moderation of a specific platform, find another one or make your own. 8chan, for example, became the new home of the worst parts of 4chan after that wretched hellsite started banning Gamergaters. Nobody asserted that they had an unalienable right to use 4chan, nobody sued to have their bans undone, and nobody said the government should shut down 4chan over so-called “viewpoint discrimination”. If 4channers/8channers can understand this concept, I have to wonder how you cannot.

    [Mike] simply DELETES the "in good faith" requirement! -- And then blows it off as not important

    To quote Mike:

    So far the courts have made it clear that "good faith" means "whatever the platform wants." And, as you know, "common law" actually means what the courts say about the law, and so -- you'll love this -- under the common law you insist is so important, "good faith" means as long as the platform has a reason to moderate your bullshit content, that's clearly allowed.

    If you can cite any law, statute, or court ruling that says a platform cannot moderate speech on that platform based on what the platform’s owners/operators do and do not want showing up on that platform…well, you would be the first.

