India McKinney's Techdirt Profile

Posted on Techdirt - 2 April 2025 @ 12:11pm

230 Protects Users, Not Big Tech

Once again, several Senators appear poised to gut one of the most important laws protecting internet users – Section 230 (47 U.S.C. § 230).

Don’t be fooled – many of Section 230’s detractors claim that this critical law only protects big tech. The reality is that Section 230 provides limited protection for all platforms, though the biggest beneficiaries are small platforms and users. Why else would some of the biggest platforms be willing to endorse a bill that guts the law? In fact, repealing Section 230 would only cement the status of Big Tech monopolies.

As EFF has said for years, Section 230 is essential to protecting individuals’ ability to speak, organize, and create online. 

Congress knew exactly what Section 230 would do – that it would lay the groundwork for speech of all kinds across the internet, on websites both small and large. And that’s exactly what has happened.  

Section 230 isn’t in conflict with American values. It upholds them in the digital world. People are able to find and create their own communities, and moderate them as they see fit. People and companies are responsible for their own speech, but (with narrow exceptions) not the speech of others. 

The law is not a shield for Big Tech. Critically, the law benefits the millions of users who don’t have the resources to build and host their own blogs, email services, or social media sites, and instead rely on services to host that speech. Section 230 also benefits thousands of small online services that host speech. Those people are being shut out as the bill sponsors pursue a dangerously misguided policy.  

If Big Tech is at the table in any future discussion of what rules should govern internet speech, EFF has no confidence that the result will protect and benefit internet users, as Section 230 does currently. If Congress is serious about rewriting the internet’s speech rules, it must spend time listening to the small services and everyday users who would be harmed if Section 230 were repealed.

Section 230 Protects Everyday Internet Users 

There’s another glaring omission in the arguments to end Section 230: how central the law is to ensuring that every person can speak online, and that Congress or the Administration does not get to define what speech is “good” and “bad.”

Let’s start with the text of Section 230. Importantly, the law protects both online services and users. It says that “no provider or user” of an online service shall be treated as the publisher of content created by another. That’s in clear agreement with most Americans’ belief that people should be held responsible for their own speech—not that of others.

Section 230 protects individual bloggers, anyone who forwards an email, and social media users who have ever reshared or retweeted another person’s content online. Section 230 also protects individual moderators who might delete or otherwise curate others’ online content, along with anyone who provides web hosting services.

As EFF has explained, online speech is frequently targeted with meritless lawsuits. Big Tech can afford to fight these lawsuits without Section 230. Everyday internet users, community forums, and small businesses cannot. Engine has estimated that without Section 230, many startups and small services would be inundated with costly litigation that could drive them offline. Even entirely meritless lawsuits cost thousands of dollars to fight, and often tens or hundreds of thousands of dollars.

Deleting Section 230 Will Create A Field Day For The Internet’s Worst Users  

Section 230’s detractors say that too many websites and apps have “refused” to go after “predators, drug dealers, sex traffickers, extortioners and cyberbullies,” and imagine that removing Section 230 will somehow force these services to better moderate user-generated content on their sites.  

These arguments fundamentally misunderstand Section 230. The law lets platforms decide, largely for themselves, what kind of speech they want to host, and to remove speech that doesn’t fit their own standards without penalty. 

If lawmakers are legitimately motivated to help online services root out unlawful activity and terrible content appearing online, the last thing they should do is eliminate Section 230. The current law strongly incentivizes websites and apps, both large and small, to kick off their worst-behaving users, to remove offensive content, and, in cases of illegal behavior, to work with law enforcement to hold those users responsible.

If Congress deletes Section 230, the pre-digital legal rules around distributing content would kick in. Those rules strongly discourage services from moderating, or even knowing about, user-generated content, because the more a service moderates user content, the more likely it is to be held liable for that content. Under that legal regime, online services would have a huge incentive not to moderate and not to look for bad behavior. That is the exact opposite of lawmakers’ stated goal of protecting children and adults from harmful content online.

Republished from the EFF’s Deeplinks blog.

Posted on Techdirt - 12 April 2019 @ 01:31pm

Platform Liability Doesn't — And Shouldn't — Depend On Content Moderation Practices

In April 2018, House Republicans held a hearing on the “Filtering Practices of Social Media Platforms” that focused on misguided claims that Internet platforms like Google, Twitter, and Facebook actively discriminate against conservative political viewpoints. Now, a year later, Senator Ted Cruz is taking the Senate down the same path: he led a hearing earlier this week on “Stifling Free Speech: Technological Censorship and the Public Discourse.”

While we certainly agree that online platforms have created content moderation systems that remove speech, we don’t see evidence of systemic political bias against conservatives. In fact, the voices that are silenced more often belong to already marginalized or less-powerful people.

Given the lack of evidence of intentional partisan bias, it seems likely that this hearing is intended to serve a different purpose: to build a case for making existing platform liability exemptions dependent on “politically neutral” content moderation practices. Indeed, Senator Cruz seems to think that’s already the law. Questioning Facebook CEO Mark Zuckerberg last year, Cruz asserted that in order to enjoy important legal protections for free speech, online platforms must adhere to a standard of political neutrality in their moderation decisions. Fortunately for Internet users of all political persuasions, he’s wrong.

Section 230, the law that protects online forums from many types of liability for their users’ speech, does not go away when a platform decides to remove a piece of content, whether or not that choice is “politically neutral.” In fact, Congress specifically intended to protect platforms’ right to moderate content without fear of taking on undue liability for their users’ posts. Under the First Amendment, platforms have the right to moderate their online platforms however they like, and under Section 230, they’re additionally shielded from some types of liability for their users’ activity. It’s not one or the other. It’s both.

In recent months, Sen. Cruz and a few of his colleagues have suggested that the rules should change, and that platforms should lose Section 230 protections if those platforms aren’t politically neutral. While such proposals might seem well-intentioned, it’s easy to see how they would backfire. Faced with the impossible task of proving perfect neutrality, many platforms, especially those without the resources of Facebook or Google to defend themselves against litigation, would simply choose to curb potentially controversial discussion altogether and even refuse to host online communities devoted to minority views. We have already seen the impact FOSTA has had in eliminating online platforms where vulnerable people could connect with each other.

To be clear, Internet platforms do have a problem with over-censoring certain voices online. These choices can have a big impact on already marginalized communities in the U.S., as well as in countries that don’t enjoy First Amendment protections, such as Myanmar and China, where the ability to speak out against the government is often quashed. EFF and others have called for Internet companies to provide the public with real transparency about whose posts they’re taking down and why. For example, platforms should provide users with real information about what they are taking down and a meaningful opportunity to appeal those decisions. Users need to know why some language is allowed and the same language in a different post isn’t. These and other suggestions are contained in the Santa Clara Principles, a proposal endorsed by more than 75 public interest groups around the world. Adopting these Principles would make a real difference in protecting people’s right to speak online, and we hope at least some of the witnesses at this week’s hearing pointed that out.

Reposted from the EFF Deeplinks blog.