Platform Liability Doesn't -- And Shouldn't -- Depend On Content Moderation Practices

from the stifling-free-speech-the-other-way dept

In April 2018, House Republicans held a hearing on the “Filtering Practices of Social Media Platforms” that focused on misguided claims that Internet platforms like Google, Twitter, and Facebook actively discriminate against conservative political viewpoints. Now, a year later, Senator Ted Cruz is taking the Senate down the same path: he led a hearing earlier this week on “Stifling Free Speech: Technological Censorship and the Public Discourse.”

While we certainly agree that online platforms have created content moderation systems that remove speech, we don’t see evidence of systemic political bias against conservatives. In fact, the voices that are silenced more often belong to already marginalized or less-powerful people.  

Given the lack of evidence of intentional partisan bias, it seems likely that this hearing is intended to serve a different purpose: to build a case for making existing platform liability exemptions dependent on "politically neutral" content moderation practices. Indeed, Senator Cruz seems to think that’s already the law. Questioning Facebook CEO Mark Zuckerberg last year, Cruz asserted that in order to enjoy important legal protections for free speech, online platforms must adhere to a standard of political neutrality in their moderation decisions. Fortunately for Internet users of all political persuasions, he’s wrong.

Section 230—the law that protects online forums from many types of liability for their users’ speech—does not go away when a platform decides to remove a piece of content, whether or not that choice is “politically neutral.” In fact, Congress specifically intended to protect platforms’ right to moderate content without fear of taking on undue liability for their users’ posts. Under the First Amendment, platforms have the right to moderate their online platforms however they like, and under Section 230, they’re additionally shielded from some types of liability for their users’ activity. It’s not one or the other. It’s both.

In recent months, Sen. Cruz and a few of his colleagues have suggested that the rules should change, and that platforms should lose Section 230 protections if those platforms aren’t politically neutral. While such proposals might seem well-intentioned, it’s easy to see how they would backfire. Faced with the impossible task of proving perfect neutrality, many platforms—especially those without the resources of Facebook or Google to defend themselves against litigation—would simply choose to curb potentially controversial discussion altogether and even refuse to host online communities devoted to minority views. We have already seen the impact FOSTA has had in eliminating online platforms where vulnerable people could connect with each other.

To be clear, Internet platforms do have a problem with over-censoring certain voices online. These choices can have a big impact on already marginalized communities in the U.S., as well as in countries that don’t enjoy First Amendment protections, such as Myanmar and China, where the ability to speak out against the government is often quashed. EFF and others have called for Internet companies to provide the public with real transparency about whose posts they’re taking down and why. For example, platforms should give users real information about what they are taking down and a meaningful opportunity to appeal those decisions. Users need to know why some language is allowed while the same language in a different post isn’t. These and other suggestions are contained in the Santa Clara Principles, a proposal endorsed by more than 75 public interest groups around the world. Adopting these Principles would make a real difference in protecting people’s right to speak online, and we hope at least some of the witnesses at the hearing will point that out.

Reposted from the EFF Deeplinks blog

Filed Under: cda 230, content moderation, intermediary liability, section 230, ted cruz

Reader Comments


    Stephen T. Stone (profile), 12 Apr 2019 @ 2:43pm

    Facebook is a global communication network that can block my account, preventing me from communicating with anyone else on the network.

    So can Twitter. And YouTube. And Soundcloud. And basically any other website that facilitates communication between two or more third parties. What is your point?

    Non-bigoted people unanimously agree that should cover political beliefs.

    If a White supremacist joins a forum for Black Lives Matter supporters and starts espousing White supremacist ideology, should the law punish the forum for booting the asshole over his “political beliefs”?

    The problem with theoretical protections against “viewpoint discrimination” is that they would tie the hands of platforms such as Facebook and Twitter in re: moderation. An anti-gay message could be called a “political belief” by the person who expresses it and nothing could be done about their expressing it on a pro-LGBT Facebook page. If gay people knew they would have to put up with such bullshit because Facebook could do nothing about it because “viewpoint discrimination” was made illegal, gay people would be less likely to use Facebook. I may dislike Facebook, but the idea of people being unable to use it only because of assholes spewing speech that Facebook could not legally delete? I despise that even more.
