Platform Liability Doesn't -- And Shouldn't -- Depend On Content Moderation Practices

from the stifling-free-speech-the-other-way dept

In April 2018, House Republicans held a hearing on the “Filtering Practices of Social Media Platforms” that focused on misguided claims that Internet platforms like Google, Twitter, and Facebook actively discriminate against conservative political viewpoints. Now, a year later, Senator Ted Cruz is taking the Senate down the same path: he led a hearing earlier this week on “Stifling Free Speech: Technological Censorship and the Public Discourse.”

While we certainly agree that online platforms have created content moderation systems that remove speech, we don’t see evidence of systemic political bias against conservatives. In fact, the voices that are silenced more often belong to already marginalized or less-powerful people.  

Given the lack of evidence of intentional partisan bias, it seems likely that this hearing is intended to serve a different purpose: to build a case for making existing platform liability exemptions dependent on "politically neutral" content moderation practices. Indeed, Senator Cruz seems to think that’s already the law. Questioning Facebook CEO Mark Zuckerberg last year, Cruz asserted that in order to enjoy important legal protections for free speech, online platforms must adhere to a standard of political neutrality in their moderation decisions. Fortunately for Internet users of all political persuasions, he’s wrong.

Section 230—the law that protects online forums from many types of liability for their users’ speech—does not go away when a platform decides to remove a piece of content, whether or not that choice is “politically neutral.” In fact, Congress specifically intended to protect platforms’ right to moderate content without fear of taking on undue liability for their users’ posts. Under the First Amendment, platforms have the right to moderate the content they host however they like, and under Section 230, they’re additionally shielded from some types of liability for their users’ activity. It’s not one or the other. It’s both.

In recent months, Sen. Cruz and a few of his colleagues have suggested that the rules should change, and that platforms should lose Section 230 protections if those platforms aren’t politically neutral. While such proposals might seem well-intentioned, it’s easy to see how they would backfire. Faced with the impossible task of proving perfect neutrality, many platforms—especially those without the resources of Facebook or Google to defend themselves against litigation—would simply choose to curb potentially controversial discussion altogether and even refuse to host online communities devoted to minority views. We have already seen the impact FOSTA has had in eliminating online platforms where vulnerable people could connect with each other.

To be clear, Internet platforms do have a problem with over-censoring certain voices online. These choices can have a big impact on already marginalized communities in the U.S., as well as in countries that don’t enjoy First Amendment protections, such as Myanmar and China, where the ability to speak out against the government is often quashed. EFF and others have called for Internet companies to provide the public with real transparency about whose posts they’re taking down and why. For example, platforms should give users clear information about what is being taken down and a meaningful opportunity to appeal those decisions. Users need to know why some language is allowed in one post while the same language in another is not. These and other suggestions are contained in the Santa Clara Principles, a proposal endorsed by more than 75 public interest groups around the world. Adopting these Principles would make a real difference in protecting people’s right to speak online, and we hope at least some of the witnesses at this week’s hearing pointed that out.

Reposted from the EFF Deeplinks blog

Filed Under: cda 230, content moderation, intermediary liability, section 230, ted cruz


Reader Comments



Anonymous Coward, 14 Apr 2019 @ 4:35pm

    Re: Re: Re: Re:

    Since the previous poster (the "crybaby" who whines about others) made no logical points, there is nothing to argue against logically. It seems the poster cannot refute the notion that Section 230 has harmed innocent people. Supporters of Section 230 are effectively calling female victims of revenge porn "collateral damage" or "acceptable loss."

