Everything You Know About Section 230 Is Wrong (But Why?)

from the take-the-quiz dept

There are a few useful phrases that allow one instantly to classify a statement. For example, if any piece of popular health advice contains the word “toxins,” you can probably disregard it. Other than, “avoid ingesting them.” Another such heuristic is that if someone tells you “I just read something about §230…” the smart bet is to respond, “you were probably misinformed.” That heuristic can be wrong, of course. Yet in the case of §230 of the Communications Decency Act, which has been much in the news recently, the proportion of error to truth is so remarkable that it demands that we ask, “Why?” Why do reputable newspapers, columnists, smart op-ed writers, legally trained politicians, even law professors, spout such drivel about this short, simple law?

§230 governs important aspects of the liability of online platforms for the speech made by those who post on them. We have had multiple reasons recently to think hard about online platforms, about their role in our politics, our speech, and our privacy. §230 has figured prominently in this debate. It has been denounced, blamed for the internet’s dysfunction, and credited with its vibrancy. Proposals to repeal it or drastically reform it have been darlings of both left and right. Indeed, both former President Trump and President Biden have called for its repeal. But do we know what it actually does? Here’s your quick quiz: Can you tell truth from falsity in the statements below? I am interested in two things. Which of these claims do you believe to be true, or at least plausible? How many of them have you heard or seen?

The §230 Quiz: Which of These Statements is True? Pick all that apply.

A.) §230 is the reason there is still hate speech on the internet. The New York Times told its readers that the reason “why hate speech on the internet is a never-ending problem” is “because this law protects it,” quoting the salient text of §230.

B.) §230 forbids, or at least disincentivizes, companies from moderating content online, because any such moderation would make them potentially liable. For example, a Wired cover story claimed that Facebook had failed to police harmful content on its platform, partly because it faced “the ever-present issue of Section 230 of the 1996 Communications Decency Act. If the company started taking responsibility for fake news, it might have to take responsibility for a lot more. Facebook had plenty of reasons to keep its head in the sand.”

C.) The protections of §230 are only available to companies that engage in “neutral” content moderation. Senator Cruz, for example, in cross examining Mark Zuckerberg said, “The predicate for Section 230 immunity under the CDA is that you’re a neutral public forum. Do you consider yourself a neutral public forum?”

D.) §230 is responsible for cyberbullying, online criminal threats and internet trolls. It also protects against liability when platforms are used to spread obscenity, child pornography or for other criminal purposes. A lengthy 60 Minutes program in January of this year argued that the reason that hurtful, harmful and outright illegal content stays online is the existence of §230 and the immunity it grants to platforms. Other commentators have blamed §230 for the spread of everything from child porn to sexual trafficking.

E.) The repeal of §230 would lead online platforms to police themselves to remove hate speech and libel from their platforms because of the threat of liability. For example, as Joe Nocera argues in Bloomberg, if §230 were repealed companies would “quickly change their algorithms to block anything remotely problematic. People would still be able to discuss politics, but they wouldn’t be able to hurl anti-Semitic slurs.”

F.) §230 is unconstitutional, or at least constitutionally problematic, as a speech regulation in possible violation of the First Amendment. Professor Philip Hamburger made this claim in the pages of the Wall Street Journal, arguing that the statute is a speech regulation that was passed pursuant to the Commerce Clause and that “[this] expansion of the commerce power endangers Americans’ liberty to speak and publish.” Professor Jed Rubenfeld, also in the Wall Street Journal, argues that the statute is an unconstitutional attempt by the state to allow private parties to do what it could not do itself — because §230 “not only permits tech companies to censor constitutionally protected speech but immunizes them from liability if they do so.”

What were your responses to the quiz? My guess is that you’ve seen some of these claims and find plausible at least one or two. Which is a shame, because they are all false, or at least wildly implausible. Some of them are actually the opposite of the truth. For example, take B.) §230 was created to encourage online content moderation. The law before §230 made companies liable when they acted more like publishers than mere distributors, encouraging a strictly hands-off approach. Others are simply incorrect. §230 does not require neutral content moderation — whatever that would mean. In fact, it gives platforms the leeway to impose their own standards: allowing only scholarly commentary, or opening the doors to a free-for-all; forbidding or allowing bawdy content; requiring identification of posters, or allowing anonymity; filtering by preferred ideology or religious position; removing posts by liberals, or conservatives, or both.

What about hate speech? You may be happy or sad about this but, in most cases, saying bad things about groups of people, whether identified by gender, race, religion, sexual orientation or political affiliation, is legally protected in the United States. Not by §230, but by the First Amendment to the US Constitution. Criminal behavior? §230 has an explicit exception saying it does not apply to liability for obscenity, the sexual exploitation of children or violation of other Federal criminal statutes. As for the claim that “repeal would encourage more moderation by platforms,” in many cases it has things backwards, as we will see.

Finally, unconstitutional censorship? Private parties have always been able to “censor” speech by not printing it in their newspapers, removing it from their community bulletin boards, choosing which canvassers or political mobilizers to talk to, or just shutting their doors. They are private actors to whom the First Amendment does not apply. (Looking at you, Senator Hawley.) All §230 does is say that the moderator of a community bulletin board isn’t liable when the crazy person puts up a libelous note about a neighbor, but also isn’t liable for being “non-neutral” when she takes down that note, and leaves up the one advertising free eggs. If the law says explicitly that she is neither responsible for what’s posted on the board by others, nor for her actions in moderating the board, is the government enlisting her in pernicious, pro-egg state censorship in violation of the First Amendment?! “Big Ovum is Watching You!”? To ask the question is to answer it. Now admittedly, these are really huge bulletin boards! Does that make a difference? Perhaps we should decide that it does and change the law. But we will probably do so better and with a clearer purpose if we know what the law actually says now.

It is time to go back to basics. §230 does two simple things. Platforms are not responsible for what their posters put up, but they are also not liable when they moderate those postings, removing the ones that break their guidelines or that they find objectionable for any reason whatsoever. Let us take them in turn.

1.) It says platforms, big and small, are not liable for what their posters put up. That means that social media, as you know it — in all its glory (Whistleblowers! Dissent! Speaking truth to power!) and vileness (See the internet generally) — gets to exist as a conduit for speech. (§230 does not protect platforms or users if they are spreading child porn, obscenity or breaking other Federal criminal statutes.) It also protects you as a user when you repost something from somewhere else. This is worth repeating. §230 protects individuals. Think of the person who innocently retweets, or reposts, a video or message containing false claims; for example, a #MeToo, #BLM or #Stopthesteal accusation that turns out to be false or even defamatory. Under traditional defamation law, a person republishing defamatory content is liable to the same extent as the original speaker. §230 changes that rule. Perhaps that is good or perhaps that is bad — but think about what the world of online protest would be like without it. #MeToo would become… #Me? #MeMaybe? #MeAllegedly? Even assuming that the original poster could find a platform to post that first explosive accusation on. Without §230, would they? As a society we might end up thinking that the price of ending that safe harbor was worth it, though I don’t think so. At the very least, we should know how big the bill is before choosing to pay it.

2.) It says platforms are not liable for attempting to moderate postings, including moderating in non-neutral ways. The law was created because, before its passage, platforms faced a Catch-22. They could leave their spaces unmoderated and face a flood of rude, defamatory, libelous, hateful or merely poorly reasoned postings. Alternatively, they could moderate them and see the law (sometimes) treat them as “publishers” rather than mere conduits or distributors. The New York Times is responsible for libelous comments made in its pages, even if penned by others. The truck firm that hauled the actual papers around the country (how quaint) is not.

So what happens if we merely repeal §230? A lot of platforms that now moderate content extensively for violence, nudity, hate speech, intolerance, and apparently libelous statements would simply stop doing so. You think the internet is a cesspit now? What about Mr. Nocera’s claim that they would immediately have to tweak their algorithms or face liability for anti-Semitic postings? First, platforms might well be protected if they were totally hands-off. What incentive would they have to moderate? Second, saying hateful things, including anti-Semitic ones, does not automatically subject one to liability; indeed, such statements are often protected from legal regulation by the First Amendment. Mr. Nocera is flatly wrong. Neither the platform nor the original poster would face liability for slurs, and in the absence of §230, many platforms would stop moderating them. Marjorie Taylor Greene’s “Jewish space-laser” comments manage to be both horrifyingly anti-Semitic and stupidly absurd at the same time. But they are not illegal. As for libel, the hands-off platform could claim to be a mere conduit. Perhaps the courts would buy that claim and perhaps not. One thing is certain, the removal of §230 would give platforms plausible reasons not to moderate content.

Sadly, this pattern of errors has been pointed out before. In fact, I am drawing heavily and gratefully on examples of misstatements analyzed by tech commentators and public intellectuals, particularly Mike Masnick, whose page on the subject has rightly achieved internet-law fame. I am also indebted to legal scholars such as Daphne Keller, Jeff Kosseff and many more, who play an apparently endless game of Whack-a-Mole with each new misrepresentation. For example, they and people like them eventually got the New York Times to retract the ludicrous claim featured above. That story got modified. But ten others take its place. I say an “endless game of Whack-a-Mole” without hyperbole. I could easily have cited five more examples of each error. But all of this raises the question: Why? Rather than fight this one falsehood at a time, ask instead, “why is ‘respectable’ public discourse on this vital piece of legislation so wrong?”

I am a law professor, which means I am no stranger to mystifying error. It appears to be an endlessly renewable resource. But at first, this one had me stumped. Of course, some of the reasons are obvious.

  • “I am angry at Big Tech because (reasons). Big Tech likes §230. Therefore, I am against it.”
  • “I hate the vitriol, stupidity and lies that dominate our current politics. I hate the fact that a large portion of the country appears to be in the grips of a cult.” (Preach, brother, preach!) “I want to fix that. Maybe this §230 lever here will work? Because it looks ‘internet-ty’ and the internet seems to be involved in the bad stuff?”
  • “I know what I am saying is nonsense but it serves my political ends to say it.”

I share the deep distrust of the mega-platforms. I think that they probably need significantly more regulation, though I’d start with antitrust remedies, myself. But beyond that distrust, what explains the specific, endlessly replicated, fractal patterns of error about a simple law?[1] I think there is an answer. We are using §230 as a Rorschach blot, an abstraction onto which we project our preconceptions and fears, and in doing so we are expressing some fascinating tendencies in our political consciousness. We can learn from this legal ink-blot.

The Internet has messed up the public/private distinction in our heads. Analog politics had a set of rules for public actors — states or their constituent parts — that were large, enormously powerful and that we saw as the biggest threats in terms of endless disinformation (Big Brother in 1984) and repressive censorship (ditto). It also had a set of rules for private actors — citizens and companies and unions. True, the companies sometimes wielded incredible power themselves (Citizen Kane) and lots of us worried about the extent to which corporate wealth could coopt the public sphere. (Citizens United.) But the digital world introduced us to network effects. Network effects undercut the traditional liberal solutions: competition or exit. Why don’t you leave Facebook or Instagram or Twitter? Because everyone else is on there too. Why don’t you start a competitor to each of them? Same reason. Platforms are private. But they feel public. Twitter arguably exercised considerably more power over President Trump’s political influence than impeachment. What are we to make of that? We channel that confusion, which contains an important insight, into nonsensical readings of §230. Save the feeling of disquiet. But focus it better.

The malign feedback loops of the attention economy reward speed, shallowness, and outrage. (Also, curiosity and TikTok videos.) The algorithms only intensify that. They focus on what makes us click, not what makes us think. We rightly perceive this as a huge problem. The algorithms shape our mental nourishment the same way that Big Fast Food shapes our physical nourishment. Our health is not part of the equation. The people who are screaming “This is big! We need to focus on it right now!” are correct….

…but it’s not all bad. We need to recognize that the same networks that enabled QAnon, also enabled #Metoo and Black Lives Matter. Without the cell phone video of a police stop, or the tweet recounting sexual harassment, both connected to a global network, both demanding our attention, we would not have a vital megaphone for those who have been silenced too long. §230 makes possible a curated platform. It cannot guarantee one. (No law could. Read Daphne Keller on the problems of content-moderation on a large-scale basis.) It lets users post videos or experiences without the platform fearing libel suits from those pictured, or even suits from those whose postings are removed. 30 years ago that was impossible. The “good old days” were not so good in providing a voice to the silenced. Still, much of what we have today is awful and toxic. The temptation is to blame the dysfunction on an easy target: §230. Fixing the bad stuff and keeping the good stuff is hard. Mischaracterizing this law will not aid us in accomplishing that task. But knowing what it does say, and understanding why we mischaracterize it so frequently, well, that might help.

James Boyle © 2021. This article is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike license (CC BY-NC-SA 3.0). I am indebted to the work of Mike Masnick, Daphne Keller and Jeff Kosseff, together with many others. Thanks for your service.

[1] The pertinent parts of §230 are these: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider… No provider… shall be held liable on account of… any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” [Emphasis added] Not so hard, really? Yet all the errors I describe here persist.



Comments on “Everything You Know About Section 230 Is Wrong (But Why?)”

73 Comments
sumgai (profile) says:

Re: Re:

Stephan, the problem with your request is that asking for even just one example of something that is "common knowledge" is considered to be counter-productive. IOW, you are the one disrupting the conversation, as far as they are concerned. IOOW, you are attempting to have a battle of wits with someone who is unarmed…. does that seem like you’re being fair to them? 😉

My opinion on the matter can be summed up nicely: As noted in the days of old, an armed society is a polite society. Here in the Internet Age, that’s been modified to read:

An Internet-dependent society is a lowest-common-denominator society.

And that’s a fookin’ shame, cause it sure didn’t start out to be that way.

Scary Devil Monastery (profile) says:

Re: Re: Re:

At some point I just wish that one, just one of the alt-right trolls squeaking about being "censored" would have the bravery they usually display when anonymously posting in the far-right echo chambers.

"They dun bannt meh fer yusin the N-word!! Whut kinda nashun is we livin in where a red-blooded ‘murican caint say a <N-word>’s a <N-word>?!"

Christenson says:

Re: Disagree here

Sec 230 really enables all kinds of forums of all kinds of sizes, including badly moderated, horrible ones — whether you think that’s facebook, parler, 4chan, onlyfans, techdirt, or whatever.

Somebody was going to discover the moderation rules that maximized clicks, and sold the most to the advertisers…I think Robert Heinlein in the 1950s or so made the rather obvious suggestion to just score for emotionally loaded words. There’s no good legal way to distinguish between your horrible site (on which people can reasonably disagree) and a great site, like Techdirt here, and there’s nothing legal to prevent Techdirt from deciding to use such an algorithm and become horrible (hopefully Masnick’s opinion too) tomorrow.

For Mr Boyle, moderation requires context, and I think you and Corey Doctorow have it right… the monopoly is the problem, and revoking the monopolies is what we need. Heedless of the law of unintended consequences, abolishing copyright seems like the answer, because it prevents Toom and his friends from copying the best of Facebook and creating an absolutely great (his opinion; I want to be on the one Mike Masnick’s friends create, and Toom doesn’t care for NC but I do) experience. I just know he’ll give me the wrong number of cat videos, too because he chose the number and any number he chooses will be wrong.

Not that I don’t applaud the GPL folks for forcing Trump and others to disclose the code; that’s an important thing to preserve somehow.

Scary Devil Monastery (profile) says:

Re: Re: Disagree here

"Heedless of the law of unintended consequences, abolishing copyright seems like the answer…"

You realize that’ll spell the end for a vast industry of grifters, con men, and gatekeepers who fulfill no real function in the market but moving money and value away from artist and consumer alike?
Prosperous megastars won’t earn a steady paycheck for the job they did 30 years ago.
Hell, thousands of artists won’t be collecting their paychecks from beyond the grave!

Those would be the major consequences. Same as the buggy whip manufacturers having to trim sails and adapt once the automobiles no longer had to move with an extra machinist and a guy toting a red flag walking in front of them.

This comment has been deemed insightful by the community.
Anonymous Coward says:

"Why hate speech is a problem on the Internet" is the same reason it’s a problem anywhere else. Because there are horrible people in the world.

Section 230 is a politically convenient tool to use to mislead people into campaign contributions and votes. Republicans will blame it for "excessive censorship by Big Tech" and Democrats will blame it for "Big Tech not doing enough to remove harmful material" when all it really is, is just a shortcut to avoid litigation costs over actions that would be protected under the First Amendment anyway.

I also find it odd that the actions protected under 230 wouldn’t be given a second thought in real world equivalents. Consider:

* I’m in a Walmart, and I walk by someone saying something I don’t like. Do I get to sue Walmart because they didn’t throw that person out of the store?

* I’m in a Walmart, and I’m saying things that Walmart doesn’t like, so they throw me out. Do I get to sue Walmart because I’m being "censored?"

The answer in both cases is "no, that’s dumb," and I think most people would agree. Walmart isn’t responsible for what their customers say, and Walmart is completely within their rights to throw people out for objectionable behavior. So if Walmart is safe from these actions, why isn’t Twitter? Or conversely, why is it Twitter’s job to police the speech of its users, but it’s not Walmart’s?

Rocky says:

Re: Re: Re:

I see people mentioning Marsh vs Alabama and in almost every case they don’t understand the decision (and many haven’t actually read it) because they don’t understand the context and that makes them think it applies to other situations which it doesn’t.

So I have to ask, why are you asking how to "square" Marsh vs Alabama with what the OP said? It’s almost like you haven’t read the decision and are just making noise about something you don’t actually understand.

That One Guy (profile) says:

Re: Re: Re:2 Re:

Marsh V Alabama said it’s not reasonable to label someone as trespassing if you haven’t reasonably marked your property as private.

As applied to an online platform it certainly seems like a clearly available TOS would more than cover that, making it even more of a waste to bring it up since it wouldn’t be applicable anyway.

This comment has been deemed insightful by the community.
Toom1275 (profile) says:

Re: Re: Re:5 Re:

Okay, once more, from the top, let’s see if I can get them all straight this time:

Marsh is about private corporations performing public functions in the form of company towns.

If DNY knew anything about the subject, they’d know that the Supreme Court precedent in Manhattan v Halleck rejected the idea that creating a forum for communication is such a traditional and exclusive government function, and therefore Marsh has no relevance to private speech platforms.

Pruneyard was about a mall banning the handing out of leaflets in certain parts of its mall. The court basically ruled that the students couldn’t have reasonably known they were on private property where that was disallowed due to inadequate signage. It had nothing to do with speech, as the mall did not object to the contents of the pamphlets. It is extremely narrow in scope, covering only that one mall and having no precedent anywhere else. With increased signage, Pruneyard doesn’t even apply to Pruneyard any more.

Packingham ruled it was unconstitutional for the government to ban someone from all social media. Completely irrelevant to moderation, as nothing can make a privately-owned platform a state actor.

Scary Devil Monastery (profile) says:

Re: Re:

"The answer in both cases is "no, that’s dumb," and I think most people would agree."

Yup. And while we’re at that subject, let me introduce you to US tort law.

Section 230 needs to exist because it really doesn’t matter whether you have a case or not when you can swamp entity X with lawsuits against which entity X then needs to defend itself.

I think both cases you reference actually have happened. I mean I’d be surprised if they hadn’t. It’s just that Wal-mart has the legal muscle to defend itself.

Online, so do Facebook and… a few other social media networks. One guess as to why Facebook is in favor of changing 230 these days. If you’re the only one with shark repellant it’s just considered good business to pour blood in the water.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Fixing the bad stuff and keeping the good stuff is hard.

It’s especially hard when tech-focused groups keep spouting "Crypto crypto decentralization crypto, NFT NFT Web3!!" in the saddest of ways as potential solutions. It’s starting to clog up too many spaces, leaving solutions that don’t involve trading one form of hypercapitalist hellscape for another without a lot of breathing room to get a word in.

ECA (profile) says:

I wonder

If you really let 230, Do and BE, what it says.
The corps wouldn’t Touch anything posted by a 3rd party on their sites.
But the RIAA and MPAA and a few others DO stomp around and bitch at anything they see that will TAKE money out of their pockets.
Youtube would love it.

OPEN and FREE speech, would show you MORE about people than what’s happening now. Instead of all this BELOW ground upset people, you could read what people really think.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: I wonder

Open and free speech with no moderation would have the result that the trolls take over the public internet while reasonable people are invited to join private clubs on the Internet. The right would still complain that they are being censored, and agitate for private clubs to be forced to let them in, or be banned. Those arguing that they are being censored are doing so in bad faith, as what they desire is the ability to derail all conversations that are not about the politics they agree with.

This comment has been flagged by the community.

DNY (profile) says:

Re: I wonder

What makes you even think that might happen? I refer you to the censorship, yes, that’s the right word in this context, of any mention of the New York Post story about Hunter Biden’s laptop by Facebook and Twitter in the run up to the 2020 presidential election. (See Glenn Greenwald’s Substack for details.) No RIAA or MPAA pressure, just tech execs enforcing their own political preferences on the content they host via their "content moderation" powers under Section 230. Nor was there any pressure from copyright maximalists in the destruction of a platform that was committed to open and free speech, Parler, by a conspiracy of Google, Apple and Amazon.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Re:

The First Amendment protects your rights to speak freely and associate with whomever you want. It doesn’t give you the right to make others listen. It doesn’t give you the right to make others give you access to an audience. And it doesn’t give you the right to make a personal soapbox out of private property you don’t own. Nobody is entitled to a platform or an audience at the expense of someone else.

Now that some relevant copypasta has been served, here are some facts:

  1. Facebook and Twitter have no legal obligation to host any link to any third-party website.
  2. Neither Facebook nor Twitter censored the New York Post by blocking the sharing of the link to the “Hunter Biden’s laptop” story⁠—after all, the original story is still available to read on the Post website.
  3. Nobody, including you, has proven that any conspiracy between two or more tech companies to “censor” the Post took place.
  4. Parler is still alive, even if nobody outside of its devoted userbase gives a shit about it.
  5. Questions about infrastructure-level moderation aside: Amazon had no legal obligation to keep hosting Parler after Parler refused to play by Amazon’s rules, and no app store had a legal obligation to distribute the Parler app.
  6. Nobody, including you, has proven that a conspiracy between two or more tech companies to “destroy” Parler took place.

And to make sure you fully understand my point, I have one more copypasta to finish this off:

Moderation is a platform/service owner or operator saying “we don’t do that here”. Personal discretion is an individual telling themselves “I won’t do that here”. Editorial discretion is an editor saying “we won’t print that here”, either to themselves or to a writer. Censorship is someone saying “you won’t do that anywhere” alongside threats or actions meant to suppress speech.

This comment has been deemed insightful by the community.
That One Guy (profile) says:

Re: 'Sure you have that right, but can you afford to use it?'

A right you cannot afford to exercise is a right you effectively do not have.

The entire point of 230 is to allow platforms and those running them to exercise their first amendment rights without being sued into the ground for it, so attacking 230 is very much attacking the first amendment, which is all sorts of funny/hypocritical given how often the justification given is to protect ‘free speech.’

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re: Re: 'Sure you have that right, but can you afford to use it?

Pre Internet people had the same rights to freedom of speech, but few were able to reach a large audience because the publishers, labels and studios were and are highly selective about what they published, and self publishing was expensive and frequently ineffective. That’s why ‘the silent majority’ existed as a phrase.

Nowadays, anybody can find somewhere on the Internet to publish and discuss their ideas, and gain a larger audience, even if it is only a few people, than they ever could pre Internet. ‘I am being silenced’ is a lie used by those who want to force their views onto an unwilling audience, which is abusing the rights of others, and not a case of not being able to exercise their own rights.

This comment has been deemed insightful by the community.
Toom1275 (profile) says:

Re: Re: Re: 'Sure you have that right, but can you afford to use it?

Authors Of CDA 230 Do Some Serious 230 Mythbusting In Response To Comments Submitted To The FCC
Several commenters, including AT&T, assert that Section 230 was conceived as a way to protect an infant industry, and that it was written with the antiquated internet of the 1990s in mind – not the robust, ubiquitous internet we know today. **As authors of the statute, we particularly wish to put this urban legend to rest.**

Section 230, originally named the Internet Freedom and Family Empowerment Act, H.R. 1978, was designed to address the obviously growing problem of individual web portals being overwhelmed with user-created content. This is not a problem the internet will ever grow out of; as internet usage and content creation continue to grow, the problem grows ever bigger. Far from wishing to offer protection to an infant industry, our legislative aim was to recognize the sheer implausibility of requiring each website to monitor all of the user-created content that crossed its portal each day.

Critics of Section 230 point out the significant differences between the internet of 1996 and today. Those differences, however, are not unanticipated. When we wrote the law, we believed the internet of the future was going to be a very vibrant and extraordinary opportunity for people to become educated about innumerable subjects, from health care to technological innovation to their own fields of employment. So we began with these two propositions: let’s make sure that every internet user has the opportunity to exercise their First Amendment rights; and let’s deal with the slime and horrible material on the internet by giving both websites and their users the tools and the legal protection necessary to take it down.

The march of technology and the profusion of e-commerce business models over the last two decades represent precisely the kind of progress that Congress in 1996 hoped would follow from Section 230’s protections for speech on the internet and for the websites that host it. The increase in user-created content in the years since then is both a desired result of the certainty the law provides, and further reason that the law is needed more than ever in today’s environment.

AC says:

Some people lie because they are desperate

I have been here long enough to know these are all false, but the reason people misrepresent what 230 is and isn’t, whether willfully or ignorantly, is that they see a big problem and want to point us in a direction to fix it.

The emergence of social networks in the last decade opened a Pandora’s box: a world that no longer runs mostly on truth and objective evidence. Yes, misinformation and conspiracy always existed, but they are now amplified because people encounter them much more often. I call it a Pandora’s box because it is so disruptive to our established societal order that it is too late even for an outright ban on social networks to fix it.

This problem meddles with our democratic process. It leads to distrust of facts and scientific methods, so we make less informed decisions as a society. It prevents us from correcting climate change, even as that becomes a life-and-death situation.

This is a problem that a world with the 1st Amendment and s230 intact would find almost impossible to solve. Even if Facebook and friends could perfectly moderate misinformation away, Truth Social and 8chan can keep it alive, and we do not lack people buying into those platforms. When people say they want to repeal s230, they actually mean they want the 1st Amendment to be different, but constitutional amendment is taboo, so they skip that part.

First of all, freedom of speech began as a few words and was subsequently elaborated over hundreds of years by an appointed court, not a democratic process. Interpretations made a hundred years ago may not apply to today’s world as well as they once did. It is also different from how most of the rest of the world sees freedom of speech.

One important difference is how much the US stresses the distinction between public and private entities. Given that antitrust enforcement has all but ceased to exist, outsized private companies can chill speech as effectively as the government. The existence of large platforms also means that the right to speech and the right to moderate can clash, and they cannot both be absolute rights.

In a world where misinformation tends to outgrow facts thanks to its market economics, naively allowing all speech (both actual speech and moderation-as-speech) to happen freely may not lead to the most favorable outcome for humanity. There is, of course, a consequence to every restriction of speech, and we all know the current US Congress is definitely not capable of handling this matter, but that doesn’t mean the direction should never be discussed.

Don’t ask me how I would amend the 1st Amendment. Don’t ask me what speech should not be allowed. I am not smart enough for those questions. I just want to say that treating these questions as already answered is becoming a life-threatening problem. It’s a problem that people who see it don’t have solutions for, and people who have solutions (i.e. you) don’t acknowledge.

Frankly, if I had to choose, I would tend to accept repealing s230, if it helps reveal the truth that the 1st Amendment is the underlying culprit and germinates that discussion. I am happy to acknowledge that the collateral damage is taking down the internet as we know it, but that is still far better than the alternative.

This comment has been flagged by the community.

DNY (profile) says:

Why Section 230 is hard to understand.

The problem with understanding Section 230 comes from people thinking in intuitive moral terms, rather than as lawyers:

The moral justification for shielding platforms from liability is that they are not responsible for what people post on them, in the same way a telephone service provider is not responsible for what people say on the phone. But the law also allows content moderation, which means that if something stays up on a platform it has some sort of approval by the platform operator, which to an ordinary person negates their claimed lack of responsibility for what is on their platform.

Understanding this might actually be a guide to any reforms that could gain general acceptance: laws which comport with the public’s moral sense are generally accepted, while laws which offend it are often objected to or even resisted.

This comment has been deemed insightful by the community.
That One Guy (profile) says:

Re: Why Section 230 is hard to understand.

But, it allows content moderation, which means that if something stays up on a platform it has some sort of approval by the platform operator, which to an ordinary person removes their lack of responsibility for what is on their platform.

A very confused ‘ordinary person’, perhaps, as just because you haven’t been shown the door doesn’t mean you’ve gotten the property owner’s seal of approval; they might simply not be aware of you, or might not consider what you’ve said/done enough to kick you out, and in neither case does that remove the original person’s personal responsibility.

This comment has been deemed insightful by the community.
Rocky says:

Re: Why Section 230 is hard to understand.

The problem with understanding Section 230 comes from people thinking in intuitive moral terms, rather than as lawyers

No, the problem with understanding Section 230 is that a lot of shady people have put out so much FUD about it in an effort to confuse people.

Ask yourself this: almost every detractor of Section 230 misconstrues, lies about, or otherwise misrepresents what it does. Why is that?

Even you don’t understand Section 230, as evidenced by your post above. Why is that?

This comment has been deemed insightful by the community.
Toom1275 (profile) says:

Re: Re: Why Section 230 is hard to understand.

shady people have put out so much FUD about it in an effort to confuse people.

Disinformation such as, for example:

  • "if something stays up on a platform it has some sort of approval by the platform operator"
  • "censorship, yes, that’s the right word in this context"
  • "destruction of a platform that was committed to open and free speech, Parler"
  • "How do you square your view with the Supreme Court precedent set by Marsh v. Alabama"
Anonymous Coward says:

You're right about the law, but are you about the politics?

The news I’ve seen (which may well be wrong) is that the censorship-crazed bureaucrats looking to repeal 230 have no intention of allowing companies to go back to "not moderating anything". In that sense the first "falsehood" is sort of right, because they intend to go from a system where a company can choose not to censor so-called "hate speech" to one where it is not allowed to leave such speech up under any circumstances. When there is so much basic hostility to rights and freedoms and democracy of all sorts, from left and right, we probably don’t want to see any legal change succeed, regardless of what it is.

At the same time, though, repealing 230 would destroy the internet as a forum, and perhaps eventually that would lead to a deterioration in the rest of its functions. We should take a moment to pause and consider whether Ted Kaczynski was the only rational philosopher of our era, and whether "the medium is the message" means owners of twinkling boxes can’t help but become vicious tyrants until one owns all the boxes, no matter what we intended.

nasch (profile) says:

Much of

Still, much of what we have today is awful and toxic.

I’m skeptical of this "common knowledge." I think the awful and toxic is just front and center, because it’s not only very noticeable but causes a strong emotional reaction when encountered. This leads to a cognitive bias giving inordinate weight to such content. I would expect the huge majority of content on for example YouTube to be pretty innocuous. For every piece of hate speech online consider how many vacation photos or recipes or lifestyle blogs or city council meeting minutes there are. That stuff just doesn’t get any attention like neo-Nazis (appropriately) do.

Lostinlodos (profile) says:

Replace?

About the only way 230 could be replaced with something better would be a regulation that protected a company from legal ramifications if it didn’t moderate at all, as long as it took down illegal material when lawfully notified.

230 has a tendency to support and further moderation.
Which was the intent.

It protects a platform from responsibility for missed items when it actively takes down other things.
The only way to truly make that better is to shelter a platform from repercussions for ALL content regardless of its choices of moderation, including none.

This comment has been deemed insightful by the community.
nasch (profile) says:

Re: Re: Re: Replace?

We need an absolute immunity to platforms.

(c)(1) No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

What more is needed? A provider cannot, under any circumstances*, whether through any act of moderation or abstention from moderation, be held liable for something someone else said. Unless you’re suggesting they not be liable for their own speech, I don’t see how the protection can be any more absolute.

  * other than the exceptions for criminal law, etc.
Lostinlodos (profile) says:

Re: Re: Re:2 Replace?

Forgot to mention

In 230 no, but there is in law in general.

As a generic example:
YouTube was informed by user fukcyoutube that the video at andj72jsg5js was in violation of [name law] for displaying [name content]. As such YouTube knew or reasonably should have known that the content at andj72jsg5js was in violation of [name law] and is therefore liable for hosting said criminal material.

Lostinlodos (profile) says:

Re: Re: Re:4 Replace?

but that isn’t what most people are focusing on

Exactly.
The republicans run around screaming they’re being censored. Doesn’t matter. Private property, private service.
1st amendment doesn’t apply.

Dems run around demanding more censorship. Fuck you. Fuck deletionist scum.

Right now only MAFIAA shites are exploiting it but it’s still a hole in the law. Eventually some smart politician (I know, an oxymoron) will come along and exploit that.

The one thing this site did was force me to actually research 230. Changed my mind. That’s partly you, Stephen, Mr Devil (my?).
The takeaway, after reviewing many, many dozens of legal filings: everyone in DC is so full of each other’s shite they speak only crap.

But here’s a real concern.
The question now is who figures this out and acts on it first.

Because everyone in congress thinks they’re some sort of grand prophet to a better way.

Scary Devil Monastery (profile) says:

Re: Re: Re:5 Replace?

"The republicans run around screaming they’re being censored."

They’d run around screaming they’d been <insert violation here> no matter what, because the currency they buy their base with these days is grievance.

"Dems run around demanding more censorship. Fuck you. Fuck deletionist scum."

I’ve seen very few examples of this. Biden did that one horrifying blooper in this regard…but by and large democrats have settled for the equivalent of a PSA. And government issuing an ask for an industry to observe rationality and reason isn’t really a thing to get upset about.

"Right now only MAFIAA shites are exploiting it but it’s still a hole in the law."

What the copyright cult is exploiting has very little to do with the law everyone else has to abide by; you’ll find that the DMCA – their Red Flag Act – provides a gross number of exceptions to the normal judicial process, from the way third-party liability is assumed, to the part where, in order to obtain and keep safe harbor, platforms are forced to treat every complaint as "proven" or face substantial risk.

So it’s a bit wrong to blame 230 for not working right when it comes to anything concerning a DMCA takedown on YouTube, or copyright cult intervention online, because they have that special law written just for them which turns many basics of law upside down.

nasch (profile) says:

Re: Re: Re:6 Replace?

I’ve seen very few examples of this.

https://www.technologyreview.com/2021/02/08/1017625/safe-tech-section-230-democrat-reform/

https://eshoo.house.gov/media/press-releases/reps-eshoo-and-malinowski-introduce-bill-hold-tech-platforms-liable-algorithmic

https://www.reuters.com/article/us-usa-tech-liability/democrats-prefer-scalpel-over-jackhammer-to-reform-key-u-s-internet-law-idUSKBN27E1IA

https://www.vox.com/recode/22221135/capitol-riot-section-230-twitter-hawley-democrats

Democrats have been frequently calling for section 230 "reform", generally as Lostinlodos says with the aim to get companies to moderate more heavily.

Lostinlodos (profile) says:

Re: Re: Re:7 Replace?

Both sides have bad arguments.
The Dems want censorship.
The Reps want forced speech.
And party wise there’s little outspoken in between.

But it’s even worse when you get outside of the big two parties.
Like the ACP demanding state-level moderation.
Or the NCR trying to ban all porn.
Or the or the or the…

It’s like the whole of politics forgot just how free speech works!

Scary Devil Monastery (profile) says:

Re: Re: Re:3 Replace?

"As such YouTube knew or reasonably should have known that the content at andj72jsg5js was in violation of [name law] and is therefore liable for hosting said criminal material."

For that particular example you’re either looking at;

1) the DMCA (if the content is deemed to infringe on someone’s copyright). I.e. civil suit under "special" protectionist circumstance. or

2) Outright crimes under the penal code. Snuff movies, CP, incitement to violence, etc.

230 very specifically protects the platform from litigation if a user-generated comment could invite civil liability. And does so irrespective of whether the platform moderates or not.

So we’re back to nasch’s assertion again, where 230 already does everything you ask for.

The issue here is that due to a number of factors – US tort law, some glaring gaps in US basic telecommunications acts, etc – America is one of the few countries in the world which actually requires a law like Section 230 in the first place. And such a law needs to be very specifically designed to accomplish the exact outcome of sparing a platform from third-party liability without taking shit too far one way or the other.

230 is one very rare example of a law which, as written, is unambiguous, proportional, and founded in good jurisprudence. There really is no way to change it which won’t make it either a lot worse or too complex for anyone but a specialist legal eagle to observe and utilize.

Lostinlodos (profile) says:

Re: Re: Re:4 Replace?

2. And there’s more than just those.

And this isn’t a 230 question as much as we would need to add another layer of protection.

A site should not be required, or implied to be required, to remove anything user-generated, including copyrighted material or your suggestions, unless and until told to do so by a state or federal law enforcement officer or a court.
The point at issue being that right now citizen opinion, not legal or judicial opinion, is all it takes to be held accountable.
This should further amend the DMCA, as copyright holders have proven incapable of accurate reporting.

Lostinlodos (profile) says:

Re: Re: Re:6 Re:

Minor correction:
Further amend the DMCA as well.

What all the bickering has created is a baby and bath water situation.
A forest and trees situation.

When a private site takes down protected speech they disagree with, it’s private. That’s the site’s right.
When they take something down because someone somewhere may come after them, be it government or civil, that’s wrong.

I’m gonna crap out an example here. Stay on topic, don’t discuss the subject.
When a site removes anti vax info because it’s bull and they don’t want to spread it, that’s their right.
When a site removes anti vax info because they’re worried someone will sue them, that’s wrong.

When a site takes down What Is Love sketches because they’re against nudity, that’s their right.
When they take down those sketches because they could be held liable under some ultra-conservative or ultra-liberal judge that equates non-sexual nudity with CP, that’s wrong.

When people take down a game because the fighters are topless and they don’t like nudity, that’s their right.
When they take down the game because some parents’ group threatens them with a lawsuit, that’s wrong.

Within the boundaries of current law, a site’s content choices must rest with the site: its choice of moderation, or lack of moderation.
And any decision on law should be made within, and by members of, the judicial-legal system.
And the public should not have the ability to sit as judge of the content in question, be it an angry prude parent, a politician or PAC, or the MAFIAA.

We need to respect the underlying fundamental right to the choices even when we disagree with them.

Because in the widest, and my, view, yes, Facebook censored a few Republicans. And yes, Twitter censored Trump.
You know what? They’re private companies. It’s their right.
Censorship is in itself a form of speech.
I think the heavily redacted children’s Bible has just as much right to exist as Poole’s Bible, a film that was nothing but the sex out of the Bible.

How many internet-for-all programs have been funded? How many feed-the-poor programs? How many house-the-homeless programs?
How many bridges to nowhere?
How’s that working out?

We have more important things to do than argue about what should and should not be regulated by private companies in public speech.
Hey Congress, knock knock: there’s a nasty bug running around the country.
Maybe we can spend some time debating further funding for antiviral research rather than crying about who took down your post and why.

Christenson says:

Re: Re: Replace?

So, we replace 230 with a re-write. As tech people, we all disfavor this kind of performative replacement of a perfectly good law, but at some point it may be better to respond to these calls of "do something! do something!".

Also, sorry, but 230 is running into the problem that it is very hard to convince people whose jobs depend on not understanding it to understand it, and the wording could be significantly clearer and more accessible to non-lawyers.

Finally, dreaming: the new law acknowledges the reality of the internet, and the strongest copyright protection available for anything broadcast free of charge (assuming the original isn’t itself infringing) is either the GPL or a corresponding CC-BY-SA license (meaning link to the source and tell us who you got it from!), and it’s only available in the presence of an explicit copyright statement. Cue screams and the law of unintended consequences, so this is a dream only; Cathy Gellis and Tim Cushing take note.

Still dreaming, there’s a copyright anti-SLAPP in there too, with compliance with the most restrictive available license being a presumptive defence.

That One Guy (profile) says:

Re: Re: Re: Congrats, they played you

Oh, the politicians would love you. 230 is a political punching bag that they’re not going to just give up because you made the wording ‘clearer’; all scrapping and replacing it would do is give them a chance to gut it and replace it with a worthless and toothless copy that might sound good but would be utterly useless at the goal of making clear that platforms do indeed have a right to moderate as they wish.

Scary Devil Monastery (profile) says:

Re: Re: Re: Replace?

"So, we replace 230 with a re-write. As tech people, we all disfavor this kind of performative replacement of a perfectly good law, but at some point it may be better to respond to these calls of "do something! do something!"."

Well, as a "techie" I usually strongly object to replacing a formula like ‘2+2=4’ with an exercise in math longer than the Boolean Pythagorean Triples Problem. Which is what would happen to 230. And to continue my metaphor, you just know the unsigned legislation will mysteriously end up with a + reverted into a – where it will do the most damning harm.
