Platform Liability Doesn't -- And Shouldn't -- Depend On Content Moderation Practices

from the stifling-free-speech-the-other-way dept

In April 2018, House Republicans held a hearing on the “Filtering Practices of Social Media Platforms” that focused on misguided claims that Internet platforms like Google, Twitter, and Facebook actively discriminate against conservative political viewpoints. Now, a year later, Senator Ted Cruz is taking the Senate down the same path: he led a hearing earlier this week on “Stifling Free Speech: Technological Censorship and the Public Discourse.”

While we certainly agree that online platforms have created content moderation systems that remove speech, we don’t see evidence of systemic political bias against conservatives. In fact, the voices that are silenced more often belong to already marginalized or less-powerful people.  

Given the lack of evidence of intentional partisan bias, it seems likely that this hearing is intended to serve a different purpose: to build a case for making existing platform liability exemptions dependent on "politically neutral" content moderation practices. Indeed, Senator Cruz seems to think that’s already the law. Questioning Facebook CEO Mark Zuckerberg last year, Cruz asserted that in order to enjoy important legal protections for free speech, online platforms must adhere to a standard of political neutrality in their moderation decisions. Fortunately for Internet users of all political persuasions, he’s wrong.

Section 230—the law that protects online forums from many types of liability for their users’ speech—does not go away when a platform decides to remove a piece of content, whether or not that choice is “politically neutral.” In fact, Congress specifically intended to protect platforms’ right to moderate content without fear of taking on undue liability for their users’ posts. Under the First Amendment, platforms have the right to moderate their online platforms however they like, and under Section 230, they’re additionally shielded from some types of liability for their users’ activity. It’s not one or the other. It’s both.

In recent months, Sen. Cruz and a few of his colleagues have suggested that the rules should change, and that platforms should lose Section 230 protections if those platforms aren’t politically neutral. While such proposals might seem well-intentioned, it’s easy to see how they would backfire. Faced with the impossible task of proving perfect neutrality, many platforms—especially those without the resources of Facebook or Google to defend themselves against litigation—would simply choose to curb potentially controversial discussion altogether and even refuse to host online communities devoted to minority views. We have already seen the impact FOSTA has had in eliminating online platforms where vulnerable people could connect with each other.

To be clear, Internet platforms do have a problem with over-censoring certain voices online. These choices can have a big impact on already marginalized communities in the U.S., as well as in countries that don’t enjoy First Amendment protections, such as Myanmar and China, where the ability to speak out against the government is often quashed. EFF and others have called for Internet companies to provide the public with real transparency about whose posts they’re taking down and why. For example, platforms should provide users with real information about what they are taking down and a meaningful opportunity to appeal those decisions. Users need to know why some language is allowed while the same language in a different post isn’t. These and other suggestions are contained in the Santa Clara Principles, a proposal endorsed by more than 75 public interest groups around the world. Adopting these Principles would make a real difference in protecting people’s right to speak online, and we hope at least some of the witnesses at the hearing will point that out.

Reposted from the EFF Deeplinks blog

Filed Under: cda 230, content moderation, intermediary liability, section 230, ted cruz

Reader Comments



    Dino Palmer -- now there's a palindrome for ya, 12 Apr 2019 @ 1:45pm

    "we don't see evidence" because willfully blind.

    From bottom of page 41:

    [footnote] 7 Willful blindness can also satisfy the requirement of actual knowledge. Global-Tech Appliances, Inc. v. SEB S.A., 563 U.S. 754, 766 (2011) ("[P]ersons who know enough to blind themselves to direct proof of critical facts in effect have actual knowledge of those facts."); see also In re Aimster Copyright Litig., 334 F.3d 643, 650 (7th Cir. 2003) ("Willful blindness is knowledge, in copyright law . . . as it is in the law generally.")

    It several times references "common law" too, in a way which makes clear it is separate from court decisions. (By the way, I upper-case the words only to make them stand out here, but when lawyers write it's taken as ordinary and well-known so doesn't need even that distinction, like "hot water".)

    Congress specifically intended to protect platforms’ right to moderate content without fear of taking on undue liability for their users’ posts.

    But to "moderate" does not mean enforce a political viewpoint. Period. You're just using a word trick there to elide the practical fact of viewpoint discrimination.

    Corporations have unlimited tricks that they can pull behind the scenes (as Techdirt does, with the myth of the "community" doing the hiding when Techdirt provides the code and is the administrator who okayed it), mainly internal definitions such as "hate speech" that they don't have to enforce uniformly.

    Let's err in a way that benefits The Public rather than ultra-rich corporations. Anyone here against that? -- No, you can't dodge, because the only other choice is to support the unlimited arbitrary censorship that masnicks want. You are acting against your own interest and promoting corporations that don't care beans about you -- see last week for the porn ban which you all objected to. These here corporatists say that such a ban is all right.
