Hello! You've Been Referred Here Because You're Wrong About Section 230 Of The Communications Decency Act

from the duty-calls dept

Hello! Someone has referred you to this post because you’ve said something quite wrong about Section 230 of the Communications Decency Act.

I apologize if it feels a bit cold and rude to respond in such an impersonal way, but I’ve been wasting a ton of time lately responding individually to different people saying the same wrong things over and over again, and I was starting to feel like this guy:

Duty Calls

And… I could probably use more sleep, and my blood pressure could probably use a little less time spent responding to random wrong people. And, so, for my own good you get this. Also for your own good. Because you don’t want to be wrong on the internet, do you?

Also I’ve totally copied the idea for this from Ken “Popehat” White, who wrote Hello! You’ve Been Referred Here Because You’re Wrong About The First Amendment a few years ago, and it’s great. You should read it too. Yes, you. Because if you’re wrong about 230, there’s a damn good chance you’re wrong about the 1st Amendment too.

While this may all feel kind of mean, it’s not meant to be. Unless you’re one of the people who is purposefully saying wrong things about Section 230, like Senator Ted Cruz or Rep. Nancy Pelosi (being wrong about 230 is bipartisan). For them, it’s meant to be mean. For you, let’s just assume you made an honest mistake — perhaps because deliberately wrong people like Ted Cruz and Nancy Pelosi steered you wrong. So let’s correct that.

Before we get into the specifics, I will suggest that you just read the law, because many of the people making these mistakes seem never to have read it. It’s short, I promise you. If you’re in a rush, just jump to part (c), entitled Protection for “Good Samaritan” blocking and screening of offensive material, because that’s the only part of the law that actually matters. And if you’re in a real rush, just read Section (c)(1), which is only 26 words, and is the part that basically every single court decision (and there have been many) has relied on.

With that done, we can discuss the various ways you might have been wrong about Section 230.

If you said “Once a company like that starts moderating content, it’s no longer a platform, but a publisher”

I regret to inform you that you are wrong. I know that you’ve likely heard this from someone else — perhaps even someone respected — but it’s just not true. The law says no such thing. Again, I encourage you to read it. The law does distinguish between “interactive computer services” and “information content providers,” but that is not, as some imply, a fancy legalistic way of saying “platform” or “publisher.” There is no “certification” or “decision” that a website needs to make to get 230 protections. It protects all websites and all users of websites when there is content posted on the sites by someone else.

To be a bit more explicit: at no point in any court case regarding Section 230 is there a need to determine whether or not a particular website is a “platform” or a “publisher.” What matters is solely the content in question. If that content is created by someone else, the website hosting it cannot be sued over it.

Really, this is the simplest, most basic understanding of Section 230: it is about placing the liability for content online on whoever created that content, and not on whoever is hosting it. If you understand that one thing, you’ll understand most of the most important things about Section 230.

To reinforce this point: there is nothing any website can do to “lose” Section 230 protections. That’s not how it works. There may be situations in which a court decides that those protections do not apply to a given piece of content, but it is very much fact-specific to the content in question. For example, in the lawsuit against Roommates.com for violating the Fair Housing Act, the court ruled against Roommates, but it did not hold that the site “lost” its Section 230 protections, or that it was now a “publisher.” Rather, the court explicitly found that some content on Roommates.com was created by third-party users and thus protected by Section 230, and some content (namely pulldown menus designating racial preferences) was created by the site itself, and thus not eligible for Section 230 protections.

If you said “Because of Section 230, websites have no incentive to moderate!”

You are wrong. If you reformulated that statement to say that “Section 230 itself provides no incentives to moderate” then you’d be less wrong, but still wrong. First, though, let’s dispense with the idea that thanks to Section 230, sites have no incentive to moderate. Find me a website that doesn’t moderate. Go on. I’ll wait. Lots of people say things like one of the “chans” or Gab or some other site like that, but all of those actually do moderate. There’s a reason that all such websites do moderate, even those that strike a “free speech” pose: (1) because other laws require at least some level of moderation (e.g., copyright laws and laws against child porn), and (2) more importantly, with no moderation, a platform fills up with spam, abuse, harassment, and just all sorts of garbage that make it a very unenjoyable place to spend your internet time.

So there are many, many incentives for nearly all websites to moderate: namely to keep users happy, and (in many cases) to keep advertisers or other supporters happy. When sites are garbage, it’s tough to attract a large user base, and even more difficult to attract significant advertising. So, to say that 230 means there’s no incentive to moderate is wrong — as proven by the fact that every site does some level of moderation (even the ones that claim they don’t).

Now, to tackle the related argument — that 230 by itself provides no incentive to moderate — that is also wrong. Because courts have ruled that Section (c)(1) immunizes moderation choices, and Section (c)(2) explicitly says that sites are not liable for their moderation choices, sites actually have a very strong incentive provided by 230 to moderate. Indeed, this is one key reason why Section 230 was written in the first place. It was done in response to a ruling in the Stratton Oakmont v. Prodigy lawsuit, in which Prodigy, in an effort to provide a “family friendly” environment, did some moderation of its message boards. The judge in that case ruled that since Prodigy did moderate the boards, it would be liable for anything it left up.

If that ruling had stood and been adopted by others, it would, by itself, be a massive disincentive to moderation. Because the court was saying that moderation itself creates liability. And smart lawyers will say that the best way to avoid that kind of liability is not to moderate at all. So Section 230 explicitly overruled that judicial decision, and eliminated liability for moderation choices.

If you said “Section 230 is a massive gift to big tech!”

Once again, I must inform you that you are very, very wrong. There is nothing in Section 230 that applies solely to big tech. Indeed, it applies to every website on the internet and every user of those websites. That means it applies to you, as well, and helps to protect your speech. It’s what allows you to repeat something someone else said on Facebook and not be liable for it. It’s what protects every website that has comments, or any other third-party content. It applies across the entire internet to every website and every user, and not just to big tech.

The “user” protections get less attention, but they’re right there in the important 26 words. “No provider or user of an interactive computer service shall be treated as the publisher or speaker….” That’s why there are cases like Barrett v. Rosenthal where someone who forwarded an email to a mailing list was held to be protected by Section 230, as a user of an interactive computer service who did not write the underlying material that was forwarded.

And it’s not just big tech companies that rely on Section 230 every day. Every news organization (even those that write negative articles about Section 230) that has comments on its website is protected thanks to Section 230. This very site was sued, in part, over comments, and Section 230 helped protect us as well. Section 230 fundamentally protects free speech across the internet, and thus it is more properly called out as a gift to internet users and free speech, not to big tech.

If you said “A site that has political bias is not neutral, and thus loses its Section 230 protections”

I’m sorry, but you are very, very, very wrong. Perhaps more wrong than anyone saying any of the other things above. First off, there is no “neutrality” requirement at all in Section 230. Seriously. Read it. If anything, it says the opposite. It says that sites can moderate as they see fit and face no liability. This myth is out there and persists because some politicians keep repeating it, but it’s wrong and the opposite of truth. Indeed, any requirement of neutrality would likely raise significant 1st Amendment questions, as it would be involving the law in editorial decision making.

Second, as described earlier, you can’t “lose” your Section 230 protections, especially not over your moderation choices (again, the law explicitly says that you cannot face liability for moderation choices, so stop trying to make it happen). If content is produced by someone else, the site is protected from lawsuit, thanks to Section 230. If the content is produced by the site, it is not. Moderating the content is not producing content, and so the mere act of moderation, whether neutral or not, does not make you lose 230 protections. That’s just not how it works.

If you said “Section 230 requires all moderation to be in ‘good faith’ and this moderation is ‘biased’ so you don’t get 230 protections”

You are, yet again, wrong. At least this time you’re using a phrase that actually is in the law. The problem is that it’s in the wrong section. Section (c)(2)(A) does say that:

No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected

However, that’s just one part of the law, and as explained earlier, nearly every Section 230 case about moderation hasn’t even used that part of the law, instead relying on Section (c)(1)’s separation of an interactive computer service from the content created by users. Second, the good faith clause is only in half of Section (c)(2). There’s also a separate section, which has no good faith limitation, that says:

No provider or user of an interactive computer service shall be held liable on account of… any action taken to enable or make available to information content providers or others the technical means to restrict access to material….

So, again, even if (c)(2) applied, most content moderation could avoid the “good faith” question by relying on that part, (c)(2)(B), which has no good faith requirement.

However, even if you could somehow come up with a case where the specific moderation choices were somehow crafted such that (c)(1) and (c)(2)(B) did not apply, and only (c)(2)(A) were at stake, even then, the “good faith” modifier is unlikely to matter, because a court trying to determine what constitutes “good faith” in a moderation decision is making a very subjective decision regarding expression choices, which would create massive 1st Amendment issues. So, no, the “good faith” provision is of no use to you in whatever argument you’re making.

If you said “Section 230 is why there’s hate speech online…”

Ooof. You’re either The NY Times or very confused. Maybe both. The 1st Amendment protects hate speech in the US. Elsewhere, not so much. Either way, it has little to do with Section 230.

If you said “Section 230 means these companies can never be sued!”

I regret to inform you that you are wrong. Internet companies are sued all the time. Section 230 merely protects them from a narrow set of frivolous lawsuits, in which the websites are sued either for the content created by others (in which case the actual content creators remain liable) or in cases where they’re being sued for the moderation choices they make, which are mostly protected by the 1st Amendment anyway (but Section 230 helps get those frivolous lawsuits kicked out faster). The websites can and do still face lawsuits for many, many other reasons.

If you said “Section 230 is a get out of jail card for websites!”

You’re wrong. Again, websites are still 100% liable for any content that they themselves create. Separately, Section 230 explicitly exempts federal criminal law — meaning that stories that blame things like sex trafficking and opioid sales on 230 are very much missing the point as well. The Justice Department is not barred by Section 230. It says so quite clearly:

Nothing in this section shall be construed to impair the enforcement of… any other Federal criminal statute

So many of the complaints about criminal activity are not about Section 230, but about a lack of enforcement.

If you said “Section 230 is why there’s piracy online”

You again may be the NY Times or someone who has not read Section 230. Section 230 explicitly exempts intellectual property law:

Nothing in this section shall be construed to limit or expand any law pertaining to intellectual property.

If you said “Section 230 gives websites blanket immunity!”

The courts have made it clear this is not the case at all. In fact, many courts have highlighted situations in which Section 230 does not apply. From the Roommates case, to the Accusearch case, to the Doe v. Internet Brands case, to the Oberdorf v. Amazon case, there are plenty of instances in which judges have made it clear that there are limits to Section 230 protections, and that the immunity conveyed by Section 230 is not as broad as people claim. At the very least, the courts seem to have little difficulty targeting what they consider to be “bad actors” under the law.

If you said “Section 230 is why big internet companies are so big!”

You are, again, incorrect. As stated earlier, Section 230 is not unique to big internet companies, and indeed, it applies to the entire internet. Research shows that Section 230 actually helps incentivize competition, in part because without Section 230, the costs of running a website would be massive. Without Section 230, large websites like Google and Facebook could handle the liability, but smaller firms would likely be forced out of business, and many new competitors might never get started.

If you said “Section 230 was designed to encourage websites to be neutral common carriers”

You are exactly 100% wrong. We’ve already covered why it does not require neutrality above, but it was also intended as the opposite of requiring websites to be “common carriers.” Specifically, as mentioned above, part of the impetus for Section 230 was to enable services to create “family friendly” spaces, in which plenty of legal speech would be blocked. A common carrier is a very specific thing that has nothing to do with websites and less than nothing to do with Section 230.

If you said “If all this stuff is actually protected by the 1st Amendment, then we can just get rid of Section 230”

You’re still wrong, though perhaps not as wrong as everyone else making these bad takes. Without Section 230, and relying solely on the 1st Amendment, you still open up basically the entire internet to nuisance suits. Section 230 helps get cases dismissed early, whereas using the 1st Amendment would require lengthy and costly litigation. 230 does rely strongly on the 1st Amendment, but it provides a procedural advantage in getting vexatious, frivolous nuisance lawsuits shut down much faster than they would be otherwise.

There seems to be more and more wrong stuff being said about Section 230 nearly every day, but hopefully this covers most of the big ones. If you see someone saying something wrong about Section 230, and you don’t feel like going over all of their mistakes, just point them here, and they can be educated.



Comments on “Hello! You've Been Referred Here Because You're Wrong About Section 230 Of The Communications Decency Act”

160 Comments
This comment has been deemed funny by the community.
TechLawFreedom (profile) says:

Re: 3 Cheers!

I really liked your post as well!

In terms of "tech, law and freedom," I created my account to reply to this article.

I don’t see why it would be too difficult to pass legislation creating a standard of informed responsibility, so that any public platform would be asked to label each post as "publish verified" or "not publish verified," and therefore opinion. Even if someone pulls a link from a news source and posts it, their post should be labeled opinion. That could be part of the educational process and legislation aimed to not hamper tech growth but respect our constitution and responsibility to ethically inform the people.

I just don’t see why that’s too big of a deal. We have to react somehow; as stated, there exist types of legislation due to things like copyright laws, child abuse and moderation to stop abuse. It’s sad, but it is a problem that our people are ill informed, but tech is evolving so we need to evolve with the times in such a manner that protects and educates our people and our freedom. And that’s ok. We know that Russia influenced our social media on websites, and we know we monitor because otherwise, as you mentioned, it turns to "spam, harassment, and abuse". We know our open forum has been used against United States interests, so it seems like it wouldn’t be too difficult to come up with different ways to discourage, separate and inform.

I am stymied about what to do about algorithms such as google that can be influenced by politics. I don’t particularly want to stymie development of such platforms, however we need some innovation here between tech, business, and law. They are verging on monopoly status, so as you state perhaps google does not hold full responsibility here. Perhaps, we do. As we have always done as we define and protect our constitutional rights and freedoms.

This comment has been deemed insightful by the community.
nasch (profile) says:

Re: Re: 3 Cheers!

I don’t see why it would be too difficult to pass legislation creating a standard of informed responsibility, so that any public platform would be asked to label each post as "publish verified" or "not publish verified," and therefore opinion.

It’s a big deal because (if I understand your proposal correctly) it would be the government compelling speech, which has an extremely high bar to clear. It happens (safety labels, food labeling, etc.) but there needs to be a specific compelling need to do so, and I don’t personally see that flying.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

In re: the “family friendly” stuff, I’m posting the on-the-Congressional-record words of Republican lawmaker Chris Cox, who helped craft 47 U.S.C. § 230, so we can all see his exact intent in that regard:

We want to encourage people like Prodigy, like CompuServe, like America Online, like the new Microsoft network, to do everything possible for us, the customer, to help us control, at the portals of our computer, at the front door of our house, what comes in and what our children see.

[O]ur amendment will do two basic things: First, it will protect computer Good Samaritans, online service providers, anyone who provides a front end to the Internet, let us say, who takes steps to screen indecency and offensive material for their customers. It will protect them from taking on liability such as occurred in the Prodigy case in New York that they should not face for helping us and for helping us solve this problem. Second, it will establish as the policy of the United States that we do not wish to have content regulation by the Federal Government of what is on the Internet, that we do not wish to have a Federal Computer Commission with an army of bureaucrats regulating the Internet because frankly the Internet has grown up to be what it is without that kind of help from the Government. In this fashion we can encourage what is right now the most energetic technological revolution that any of us has ever witnessed. We can make it better. We can make sure that it operates more quickly to solve our problem of keeping pornography away from our kids, keeping offensive material away from our kids, and I am very excited about it.

This comment has been deemed insightful by the community.
allengarvin (profile) says:

Very nice summary!

It might be worth adding a section for when someone says "tech gets immunity that no other industry gets", which I know you’ve covered before. Bookstores, newspapers publishing wire service stories, radio and television carrying content created elsewhere: even if not a blanket grant of freedom from liability, decreased liability is quite similar to the goals of 230.

That One Guy (profile) says:

Re: Very nice summary!

Other than those arguing dishonestly, that is probably the most annoying argument, because if 230 is a ‘gift’ then it’s one that every other industry has.

Making clear that online platforms have the same shield against being held liable for content that others create/post using their platform/product is not a gift by any stretch of the term, it’s codifying equal treatment under the law because apparently some people needed to be told that yes, the rule of liability applies online as well as offline.

This comment has been flagged by the community.

Anonymous Coward says:

This is a good post. However, I noticed one mistake:

"there is nothing any website can do to "lose" Section 230 protections"

Actually, there is. It’s what many sites are actively doing. Google, Twitter, YouTube, and Facebook are the biggest culprits.

By silencing non-leftist voices, Big Tech is going to lose Section 230 … because by narrowly defining what their terms of service consider to be "acceptable" speech, they’re ensuring someone will step in and repeal Section 230 altogether.

If only Big Tech decided not to deplatform and moderate away non-leftist opinions, they could’ve saved themselves from this fate. But as usual, give left wingers an inch, they’ll take 1,000 miles.

Leftists are why we can’t have nice things.

This comment has been deemed insightful by the community.
Toom1275 (profile) says:

Re: Re: Re: Re:

Eroding the Constitution is what many of these politicians are actively doing. Hawley, Trump, Barr are the biggest culprits.

By trying to silence non-extremist-Right voices, these corrupt politicians are going to lose their fight as soon as it reaches a court. By directly attacking the First Amendment like this, they’re ensuring anyone with >2 brain cells won’t take any of their statements on law seriously, and hopefully repeal them altogether.

If only the fascist/Republican party decided not to be butthurt about anti-terrorist opinions, they could save themselves from this fate. But as history shows, give a Nazi an inch, and they’ll take 10,000 miles.

This comment has been flagged by the community.

Anonymous Coward says:

Re: Re: Re:

Toom, I know you’re subliterate, and that you enjoy copy/pasting that phrase that you once saw someone much more intelligent than you write. But Techdirt has numerous stories (evidence) that what I said is about to happen is … about to happen (facts). In fact, Masnick has spent a good deal of time with smelling salts in one hand and a keyboard in the other swooning about it.

Big Tech censors reality (i.e. non-leftist worldviews), and the federal government is discussing removing Section 230 protections from Big Tech. It’s what keeps Masnick and his fellow propagandists up at night (not, for instance, something as minor as, oh, I don’t know, violent savages burning down cities.)

This comment has been deemed funny by the community.
Stephen T. Stone (profile) says:

Re:

This is a good post. However, I noticed one mistake:

"there is nothing any website can do to "lose" Section 230 protections"

Actually, there is. It’s what many sites are actively doing. Google, Twitter, YouTube, and Facebook are the biggest culprits.

By silencing non-conservative voices, Big Tech is going to lose Section 230 … because by narrowly defining what their terms of service consider to be "acceptable" speech, they’re ensuring someone will step in and repeal Section 230 altogether.

If only Big Tech decided not to deplatform and moderate away non-conservative opinions, they could’ve saved themselves from this fate. But as usual, give conservatives an inch, they’ll take 1,000 miles.

Conservatives are why we can’t have nice things.

(Do you see how fucking ignorant your comment sounds now?)

This comment has been deemed funny by the community.
Anonymous Coward says:

Re: Re:

This is a good post. However, I noticed one mistake:

"there is nothing any website can do to "lose" Section 230 protections"

Actually, there is. It’s what many sites are actively doing. Google, Twitter, YouTube, and Facebook are the biggest culprits.

As it happens, I just came across a useful article to refer you to:

Hello! You’ve Been Referred Here Because You’re Wrong About Section 230 Of The Communications Decency Act

This comment has been deemed funny by the community.
Derek Kerton (profile) says:

Re: Re:

Sir/Ma’am,

I have read the comment you posted, and you are quite wrong.

In lieu of a thorough rebuttal here in the comment section, I refer you instead to this comprehensive explanation of why you are wrong, backed with links to the actual case law and the text of Section 230 itself.

Good reading!

https://www.techdirt.com/articles/20200531/23325444617/hello-youve-been-referred-here-because-youre-wrong-about-section-230-communications-decency-act.shtml

This comment has been flagged by the community.

Koby (profile) says:

Reform

An excellent dissertation on how section 230 currently works. However, many of us who want to see section 230 reforms already know what it says, and can see how it operates. That’s why we want reforms.

Once a company starts moderating content, it OUGHT to choose between being a platform, or a publisher.

A platform that has political bias is not neutral, and thus OUGHT to lose its Section 230 protections.

Section 230 requires all moderation to be in "good faith" and if moderation is "biased" then you SHOULDN’T get 230 protections.

This comment has been deemed insightful by the community.
Samuel Abram (profile) says:

Re: Reform

Once a company starts moderating content, it OUGHT to choose between being a platform, or a publisher.

A platform that has political bias is not neutral, and thus OUGHT to lose its Section 230 protections.

Here’s the problem with that: Who gets to choose that? Do you really want the government saying you’re a) a "platform" or "publisher" in one area and whether or not you’re b) "biased" or "unbiased" in another? Seems ripe for abuse and would arm would-be censors in the US Government. That’s why we have a 1st amendment to protect against that shit.

This comment has been flagged by the community.

Koby (profile) says:

Re: Re: Reform

Do you really want the government saying you’re a) a "platform" or "publisher" in one area and whether or not you’re b) "biased" or "unbiased" in another?

No, the internet service itself would choose. They could declare things such as "We are a free speech platform. We will not censor messages based upon a political bias", or perhaps they could say "We’re a bunch of hardcore Democrats. We will gladly publish those with a left wing viewpoint, and we will ban anyone who sounds like a Republican".

Also, the government wouldn’t determine whether there is bias or not. Instead, private court action would allow aggrieved parties to present their case if they believe that the internet service violated their own terms of service.

This comment has been flagged by the community.

This comment has been flagged by the community.

Koby (profile) says:

Re: Re: Re:4 Re:

If I say in public, "Apples are my favorite fruit", I would not want a prosecutor to file charges against me. But if I had previously signed a contract as a promoter for an orange company, then I would expect the company to file a court complaint against me if I violated my contract.

Government prosecutors and regulators are not the same thing as a court adjudicating a private contract violation.

This comment has been deemed insightful by the community.
Toom1275 (profile) says:

Re: Re: Re:5 Re:

And it’s also an egregious mischaracterization to claim that a "conservative" being banned means that a platform is not open to conservatives – after all, they were perfectly able to make full use of the service until they made the choice to behave abusively.

Choices that the gay wedding cake couple, or black patrons in the Jim Crow South – people actually facing bias against them – didn’t even get the chance to.

This comment has been flagged by the community.

This comment has been deemed insightful by the community.
Ian says:

Re: Re: Re:7 Re:

Go read Heart of Atlanta Motel v. USA, which answered your asinine question decades ago. It is not incumbent on the consumer to choose a business that will decide to respect civil rights and anti-discrimination statutes (which is what the racist owner of HoAM essentially argued, among other things), it is the business’s responsibility to be compliant with the law – and such requirements are permitted under the Commerce Clause. By your logic, this would also mean that a business could choose not to provide accessible parking and wheelchair ramps under the ADA while brandishing a large banner on their front that says "PISS OFF TO ALL WHEELCHAIR USERS", and it’d be the disabled individual’s fault for not choosing an ADA-compliant business. People patronize businesses for a multitude of reasons that may or may not be underpinned by an actual freedom of choice, and the courts should not be determining whether a customer actually had alternatives. The only question is whether the business is in compliance – and in this case, the baker was in violation of Title VII of the Civil Rights Act.

OBloodyhell says:

Re: Re: Re:8 Re:

Funny, I have very little doubt that, if I attempt to get a Jewish Bakery to make a swastika cake, they’d turn me down.

I am also pretty sure a black baker whom I asked to make a confederacy cake would also turn me down.

"Oh, but that’s not ‘civil rights’" ?

Really? OK, how about this:
Let’s see that same gay couple go to an Islamic bakery and make their demands.

Right. Because Islam is VASTLY more virulently opposed to homosexuality than Christianity is. But, strangely, no one does this? I wonder why.

What you ARE doing is telling Christians they need to be violent and militant, not patient and understanding, towards people who deliberately violate THEIR civil rights to practice their RELIGION by precepts they see fit.

Because geniuses like you never fucking acknowledge this: this is not a choice of "no civil rights get violated" vs. "civil rights get violated".

It’s merely a question of WHOSE civil rights get violated.

And frankly, saying "find another baker" is the lesser violation BY FAR.

Also, and this is IMPORTANT:

The bakers in these cases were NOT REFUSING TO BAKE THE CAKE.

They were refusing the MESSAGING the customers wanted put on the cake.

So this is not being "refused service" as in your case, it is being refused SPECIFIC service having nothing to do with their sexuality. They would have refused ANYONE the exact same service which they refused (a specific message they considered inappropriate). It had ZERO to do with their being gay.

The CORRECT analogy is far more akin to a motel refusing to provide a gay couple with room service, not because they are gay, but because THE MOTEL DOESN’T GIVE any customer room service.

This comment has been deemed insightful by the community.
Uriel-238 (profile) says:

Re: Re: Re:9 Swastika Cake

One more time: Yes, if you’re a Fourteen-Worder and you go to a bakery for a swastika cake, they don’t have to provide you with one. But according to most public accommodation laws, they do have to provide you with a basic blank cake and leave you to decorate it. If you want a designer to design you a cake shaped like a giant swastika, you may have to shop around. These days you’d have to shop around much less than you would have, say, twenty years ago.

A Muslim bakery that makes wedding cakes will have to provide one for a gay couple, just like the ones it provides to straight couples. It doesn’t have to put the two-grooms (or two-brides) mannequins on top. It doesn’t have to make any statements it doesn’t like. But if it provides a generic cake to some people, it has to provide one to the rest of the public.

Cake-makers are allowed to choose what kind of decorations they provide, and what symbols and phrases they’re willing to write on a cake, but not whether to provide the cake itself. And your framing it as one thing at the top of your post and another at the bottom is confusing.


This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Re: Re:5

Government prosecutors and regulators are not the same thing as a court adjudicating a private contract violation.

Your words:

the government wouldn’t determine whether there is bias or not. Instead, private court action would allow aggrieved parties to present their case if they believe that the internet service violated their own terms of service

If “showing bias against a political ideology” is the argument for whether a service violated its own TOS, should the courts (i.e., the government) determine whether that bias exists — and, by extension, whether that service should be denied 230 protections on the basis of such bias?

This comment has been deemed insightful by the community.
Celyxise (profile) says:

Re: Re: Re:5 Re:

Even if we were to assume this silly hypothetical "contract" existed, it brings up yet another problem with this whole political-bias-on-social-media thing: evidence.

There has yet to be any actual evidence of bias against any political view. There appears to be good evidence of an anti-asshole and anti-racist bias, but so far nothing showing an anti-conservative one.

That One Guy (profile) says:

Re: Re: Re:6 'They banned me for my views!' 'Which are?' 'Oh, you know...'

It would be nice if those objecting to the ‘anti-conservative’ bias would just come out and admit that either their idea of ‘conservative’ includes ‘asshole and/or racist’, or they’re just using the conservative label to avoid admitting that that is the content they care about ‘protecting’, but as doing that would leave them looking all sorts of bad I rather doubt most of them will have the honesty to admit it.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re: Re: Reform

So do Right wing nut job sites not silence, remove, and outright ban leftist ideas that are posted to their sites?

So you are saying ‘conservative only’ websites that currently exist should lose all section 230 protections because they are not ‘neutral’ by your definition?

Sounds fair: if Google/Twitter are going to lose protections, then Infowars and all the other right-wing nut job sites should lose their protection as well, so they can be sued by any leftist who doesn’t like what they have done. Sounds fair…

nasch (profile) says:

Re: Re: Re: Reform

No, the internet service itself would choose. They could declare things such as "We are a free speech platform. We will not censor messages based upon a political bias", or perhaps they could say "We’re a bunch of hardcore Democrats. We will gladly publish those with a left wing viewpoint, and we will ban anyone who sounds like a Republican".

Why would they ever choose the former if it opened them up to increased liability?

Instead, private court action would allow aggrieved parties to present their case if they believe that the internet service violated their own terms of service.

Then the terms of service will say, if they do not already, "we will remove, hide, shadowban, or otherwise moderate any content at our sole discretion for any or no reason, with or without any notice." There will never be any violations of their terms of service.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re:

Once a company starts moderating content, it OUGHT to choose between being a platform, or a publisher.

And if you had paid attention to the article, you’d know 230 doesn’t make that distinction. It has no meaning here.

A platform that has political bias is not neutral, and thus OUGHT to lose its Section 230 protections.

…says the guy who refuses to directly say, one way or the other, whether the law should force any interactive computer service of any size to host objectionable-yet-legal speech against the wishes of that service’s owner(s).

Section 230 requires all moderation to be in "good faith" and if moderation is "biased" then you SHOULDN’T get 230 protections.

Oh look, another chance for you to answer One Simple Question.

Yes or no: If the admins of a service like Twitter delete White supremacist propaganda from said service, should that service lose its 230 protections? And I’ll remind you that White supremacy is a sociopolitical ideology, so any moderation of such content would be “biased” against that ideology.

(I can’t wait to see how you avoid answering that one.)

This comment has been deemed insightful by the community.
That One Guy (profile) says:

Re: Re: Re:

…says the guy who refuses to directly say, one way or the other, whether the law should force any interactive computer service of any size to host objectionable-yet-legal speech against the wishes of that service’s owner(s).

Which, ironically enough, has answered that question, and that answer is very clearly ‘yes’, even if they’d never be honest enough to admit it.

This comment has been deemed insightful by the community.
Mike Masnick (profile) says:

Re: Reform

It’s funny that you say you know what it says, and then repeat a bunch of the myths, Koby.

Once a company starts moderating content, it OUGHT to choose between being a platform, or a publisher.

That literally makes no sense.

A platform that has political bias is not neutral, and thus OUGHT to lose its Section 230 protections.

Why "ought" it lose those protections when the whole point is that it’s supposed to allow sites to moderate as they see fit? And while there remains no evidence of political bias, what is illegal about having a political bias? Do you think Fox News should not be allowed?

And just the fact that people like you insist there is political bias, despite the total lack of evidence for it, should demonstrate the problem: it’s based on conjecture by ignorant people like yourself. If you can’t prove it, then how do you show it in court?

Section 230 requires all moderation to be in "good faith" and if moderation is "biased" then you SHOULDN’T get 230 protections.

This is part of what is covered above, which you insist you understood, and yet this statement proves you do not. Only one part requires good faith, and the crafters of the law understood and encouraged "bias" in moderation.


Uriel-238 (profile) says:

Re: Re: Re:2 Market Supremacy / Market Monopoly

There is a question raised once a specific company’s service becomes the accepted norm for certain kinds of interaction. I point to Wikipedia when referring to a specific historic event or scientific phenomenon or proper thing. I refer to YouTube when I want to point towards a song or film clip or other media. There are no other good substitutes for these things.

When it comes to non-media resources like this (say electricity or water or internet access), we shift them into a category that provides for accommodations to make sure everyone can get what they need. (More or less.) I think that was the whole point of Title II classification of telecommunication services.

The question is, at what point is a given data source such a significant part of society that it becomes treated like a Title II commodity? At what point is access to a forum (both to read and to speak) relevant enough to participation in a community that it should be regulated like a Title II commodity?

I don’t have the answer to this, nor do I really have an opinion, except to say we have a few services for which a competitor is plausible but nonexistent, and we’re waiting for that competitor and a standard that makes them cross-compatible (so that Twitter members can engage with BirdChirp members seamlessly, and vice versa). We have monopolies that we treat as non-monopolies, as if they’re in a temporarily embarrassed robust, competitive market… for years at a time.

So there may be a point when we have to accommodate everyone in the dialog, even the assholes and racists, who may or may not be automatically directed to disclaimer pages like this that explain why they’re being wrong and stupid in a society of hundreds of millions.

This comment has been deemed insightful by the community.
That One Guy (profile) says:

Re: Re: Re:3 You can build a new online platform, ISP not so much

If you don’t like your ISP, you can’t just make another one.

If you don’t like the company selling electricity, you can’t really just build one of those either.

When online platforms reach the point that you can’t create and/or help fund an alternative, then it might be time to consider treating them the same as physical monopoly services. But the fact that alternatives keep popping up (even if only a small number of people actually want to use them, for some strange reason) shows that time has not yet come.


This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Re: Re:3

disallow corporations from building an open speech platform while engaging in political bias

You’re implying that the law should force Twitter to host all political speech, no matter how vile. You’re implying that Twitter’s punishment for showing a bias against racist, homophobic, transphobic, and xenophobic language (among other forms of politically charged speech) is to shut down. For what reason are you so afraid to say that out loud and be done with it?

This comment has been deemed insightful by the community.
Uriel-238 (profile) says:

Re: Re: Re:4 open speech platform

The chan sites like 4chan work largely this way, in that it’s possible to post anonymously. Though there is support to link posts to the same source, there is no effort to make them easily identifiable (say, by letting them have a name and avatar). The client software was meant for anonymous media sharing (pictures and video clips), and the boards are only moderated to stop child porn and people asking for child porn.

They’re a messy mixed bag, with hate speech, racism, and incitement to / declarations of intent to commit violence (lots of suicide) openly on display, often accompanied by encouragement, criticism, and trolling.

These sites often serve as a good example of what the rest of the internet doesn’t want, as they’re really rather spicy, and while videos of puppy killings or cat killings are generally frowned upon, they surface often enough. Almost everyone wants limits to free speech. We just cannot agree on where, specifically, we want them. I certainly don’t want to look at cat massacres even if I know they happen.

This is why we have the whole moderation paradox.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re: Re:3 Reform

What if one party engaged in spam more than the other? The filter would then be politically biased against the party which engaged in spam. Or, at least, that’s what the party with a tendency towards spam would claim.

Now, change "spam" to "abusive behaviour," and we have the current state of social media.

This comment has been deemed insightful by the community.
Wyrm (profile) says:

Re: Re: Reform

I understand his post and there is a part that you ignored. I still disagree with him, but you seem to have missed the part where he said "That’s why we want reforms."

He does misunderstand the current version, but he’s clear that he understands it enough to know that it doesn’t do what he wants. Namely, allowing censorship against speech he doesn’t like.

Like many other conservatives, trolls and right-wing extremists, he likes the First Amendment as long as it covers him and his like-minded comrades. And he outright ignores it when it comes to speech that ranges from center to left-wing. Or even moderate right-wing. Thus he wants section 230 to match his view of the First Amendment. He wants a law that he can defend or ignore at his leisure depending on the content, not one that gets thrown in his face every time his pride or sensitivity is hurt.

For him, it doesn’t matter what the law states (even as clearly as section 230), nor what the initial intention was (despite – for some of them – pretending to be "originalists" or something of the like). He has his idea of what the law should be and what it actually is doesn’t matter.

Problem is, some politicians have the same idea, and they have the power to change the law to match it: an empty shell full of so-called "good intentions" but without any power to actually be enforced.

Anonymous Anonymous Coward (profile) says:

Re: Re: Reform

Especially parties that include political animals (insert lame furry joke here) that aren’t of a particular stripe. This is why donkeys don’t like zebras and elephants don’t like hippos. Neither wants their nemeses to be allowed. That’s not allowed on websites or on land or on sea or in the air or in voting booths.


This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Re: Re:

The same as how a political convention hall is not open to any speech, and would probably kick out protesters.

If those places can “censor” speech based on not wanting that speech on their property, for what reason should Twitter not be extended the same courtesy?


Koby (profile) says:

Re: Re: Re:2 Re:

If those places can “censor” speech based on not wanting that speech on their property, for what reason should Twitter not be extended the same courtesy?

Because then there are consequences for choosing to be a publisher. For example, if they declare "Our service is for Democrats only", it would perhaps be very honest of them. But then they would lose a large portion of their user base, and break the ubiquity and monopoly that they might currently enjoy.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Re: Re:3

there are consequences for choosing to be a publisher

Section 230 does not refer to either “publishers” or “platforms”. That dichotomy has no bearing on this discussion.

For what reason should the owners of Twitter not be allowed to decide what speech is and isn’t acceptable on their property (i.e., their service)?


Koby (profile) says:

Re: Re: Re:4 Re:

Section 230 does not refer to either “publishers” or “platforms”. That dichotomy has no bearing on this discussion.

It certainly has bearing, in that section 230 reformers see this as a deficiency in current law, and desire a change. We understand that the current law does not appear to make this distinction, but we want that distinction to be made.

Stephen T. Stone (profile) says:

Re: Re: Re:5

We understand that the current law does not appear to make this distinction, but we want that distinction to be made.

Y’all only want that distinction made so y’all can hold services like Twitter liable for any speech that its employees neither published nor created in the first place. For what reason should that ever happen?

This comment has been deemed insightful by the community.
Leigh Beadon (profile) says:

Re: Re: Re:5 Re:

We understand that the current law does not appear to make this distinction, but we want that distinction to be made.

lol, maybe you understand that now – after months or years of everyone who knows the law explaining to people that they are wrong when they insist that the publisher/platform distinction already exists. It finally got through most people’s heads that the distinction they dreamed into existence is not real, and so you pivoted to calling yourself a "reformer" who "wants" that distinction added.

Notably absent, of course, was the intermediary phase where you "reformers" took a breath, acknowledged that you had critically misunderstood the law and been loudly repeating an utterly false claim, and spent some time learning and listening before demanding that the law be changed to render your false claims into reality.

This comment has been deemed insightful by the community.
Celyxise (profile) says:

Re: Re: Re:5 Re:

As has been pointed out numerous times before, you do realize that the ability of people and their businesses to control their own speech, and to choose with whom to associate, does not actually come from Section 230, right?

Please tell me, what kind of reform would not violate the 1st Amendment?

This comment has been deemed insightful by the community.
That One Guy (profile) says:

Re: Re: Re:4 Re:

Oh, I believe they’ve created a few platforms. The problem is that only a comparative handful of people wanted to actually switch to those platforms, and since having the platform is useless if there’s no one actually on it, they switched to demanding that the big platforms (and their equally large audiences) give them special treatment instead.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Reform

Did you read the article?
There was a specific section about good faith… There was also a section about how moderation is why 230 was created in the first place, and another about platform vs. publisher.
It’s not hard to grasp; unfortunately, it just does not do what you want it to.

BTW how long would Fox last if it could not moderate comments?
https://help.foxnews.com/hc/en-us/articles/233194608-Do-you-moderate-comments
If you don’t believe Mike, maybe you will believe this, as it is on Fox…

No one is moderating right-wing speech; unfortunately, a small number of the far right feel that they deserve a platform to post (and I quote from Fox) "vulgar, racist, threatening, or otherwise offensive language". If they behaved more like civil human beings, maybe they would not get moderated??? That isn’t bias, it is removing bigoted hate speech, and even Fox does it! (apparently)

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re:

The problem with that (that you have apparently missed) is that you would then not be able to have any sites dedicated to one political party or ideology at all.

Want to spin up a campaign site to organize supporters for Republicans? You can do that, but if a bunch of Democrats start posting on it, you can’t kick them off, because then you aren’t "neutral". See the problem there?

This comment has been deemed insightful by the community.
allengarvin (profile) says:

Re: Reform

‘Section 230 requires all moderation to be in "good faith"’

And "in good faith" is a famously vague term, but there is a vast amount of common and case law, going back centuries, on what it means in the context of contract law, where the term most frequently appears. At its root, it means not twisting the contract wording to give one party an unfair advantage. Here, the terms of service act as a contract, and the amount of "bad faith" re-interpretation possible for rules that were written by teams of well-paid lawyers, and that typically give the platform the complete right to remove any content at will, is asymptotic to 0.

"In good faith" occurs in a LOT of statutory law. A search restricted to the US code at law.cornell.edu/uscode shows just shy of 1000 hits. You’re not going to be able to create some off-the-wall bullshit interpretation of the phrase and get any court to agree with you.

This comment has been deemed insightful by the community.
Anonymous Coward says:

The problem is that no matter how wrong anyone is about 230, no matter how many times they are told they are wrong or where they are wrong, they never listen, because they already know anyway! Even worse is not just knowing they’re wrong and ignoring it, but continuing to push forward with their mistaken views because they’ve been encouraged, usually financially; the result, if it comes to pass, suits that person but fucks everyone else up for decades!
These sorts of selfish, self-serving exercises are a disgrace! More so are the members of Congress who are usually involved without having a clue about the outcome. They then kick off when they don’t get reelected, thinking that crapping on the people for a few pieces of silver, giving a few what they want, is the best option.

Anonymous Coward says:

Re: The problem

You know, I don’t think you’re right about the problem. I agree with your depiction of the (mostly rightist) behaviour, but do you want to open their minds to a more realistic interpretation of reality, or just sound off about your (admittedly superior) understanding, feeling full of righteousness and truth?

When you encounter a person whose mind is set on a particular viewpoint, simply haranguing him or her with a list of facts is a very poor way to change their opinions, and very unlikely to change them to even the least degree. When was the last time that someone said something provably wrong and your stating the facts over and over changed anything? There are much more effective tactics. One particularly powerful one is to ask for an explanation of the other’s misunderstanding. This is even more powerful when their understanding involves demonstrable contradictions.

Now all that being said, can you explain to me how your final paragraph added anything useful to the current discourse?

This comment has been deemed insightful by the community.
Rocky says:

Re: Re: The problem

When it has been explained on multiple occasions to some people, in polite terms, how it works and what the facts are, and they still refuse to understand (and I use the word refuse intentionally here), even though they can’t give a coherent answer as to why they think as they do beyond a lot of "I think", "I feel", "I believe", or "It ought to be", you get kind of tired of explaining the issue, because the discussion just ends up going in circles.

How do you reach a person who has emotionally decided that his or her viewpoint is right regardless of the mountain of evidence saying otherwise? For them there is no misunderstanding; it’s the other party that has "an agenda", or "it’s a big tech conspiracy", or whatever explanation they can rationalize. How do you ask someone about their misunderstanding if they think they are right and refuse to process anything you say that contradicts their beliefs? For them there are no contradictions; it’s the other party that is wrong.

How do you think an atheist fares if he asks a person who believes in god about his or her misunderstanding, since there is no god? Because we are at that level of belief for some, even though they can’t produce one shred of evidence to support their beliefs.

Anonymous Coward says:

Re: Re: The problem

Because actively, insistently ignorant (aka lying) people do not want their minds changed. They’ve already Krugered their Dunning, and Petered their principle. They are invested in their belief. They do not care.

So to which specific person do you propose to pose this mental exercise?

Also, what is the approved methodology for addressing a tone troll? Maybe you have that info as well.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Better reasoned than a reason.com article...

Mind if I refer this article to Stewart Baker? He seems to be trolling the internet from Volokh Conspiracy with articles like

https://reason.com/2020/06/19/reforming-section-230-of-the-communications-decency-act/

Much like Koby, above, Mr Baker uses the word ‘ought’ a fair amount. Somehow, the comments section did not have much favorable to say about the article. I wonder why?

ECA (profile) says:

It’s not..

The Censorship.
HERE, it’s kinda nice that we CAN open up another’s opinion that has been bounced, to see if it’s a valid opinion or comment or what.
I would love to see more of this on the net…
If for nothing more than a way to SHOW/explain the reasons.

I’ve explained to many that it’s HOW they express a comment, how you create a foundation for a comment, as you would a news article or a paper written in school for your composition teacher. State a comment and support it. Or give support that leads to a conclusion.
BUT keep it clean. Being irrational shows your OWN bias, not reasoning. Showing how you got from one point to an end, without considering ANY other choices along the way, shows bias.

Then there is the thought process, and proof of concept, that being TOLD and always hearing the SAME BS all your life gives you the excuse you are looking for, without trying.
We have made this MAN’s world, and God ain’t here, because we made it OURS, not his. Until we do otherwise, it’s our BS that we have to live/lie (falsehood) with.

Rico R. (profile) says:

Good post, but you missed a spot...

What I want to know is how you would respond to someone who thinks Section 230 protections need to be stripped away from platforms (like Facebook) if they allow political ads to be run that contain misinformation. I don’t think that’s the case. In fact, I think that logic is very, very, VERY wrong. But my mom (who’s a full-fledged Biden supporter) sides with Biden in his statements about Section 230 because she doesn’t want misinformation to be run in ads placed by the Trump campaign. I’ve tried explaining what Section 230 actually does, and what it says and doesn’t say, but she doesn’t fully understand the big picture. Any ideas of what to say to target this logic directly?

This comment has been deemed insightful by the community.
That One Guy (profile) says:

Re: Be careful with that sword, it swings both ways

Beyond the blatant first amendment violation of saying ‘you’re not allowed to lie/allow others to lie’, have you tried pointing out that it would be the government, namely the government headed by Trump at the moment, someone who declares anything that he doesn’t like ‘fake news’, that would get to decide what counts as ‘misinformation’?

‘Would you trust Trump to decide what counts as ‘misinformation’, and be able to threaten platforms if they host content that Trump declares as such?’

Alternatively (though I’m not sure how much weight this would carry with her), you could point out that if carrying political ads with ‘misinformation’ carries such a huge risk, then odds are good platforms simply won’t carry them at all. And while I’m sure that would be a great relief to many people, it would mean that both Trump and Biden ads would be stricken from the platform, and likely every other platform.

This comment has been deemed insightful by the community.
Thad (profile) says:

Re: Re: Be careful with that sword, it swings both ways

I think you’ve hit the nail on the head.

Laws are enforced by human beings, not perfect frictionless spheres. The people currently responsible for enforcing laws are Donald Trump and Bill Barr.

The appropriate response to "Facebook shouldn’t be allowed to lie" is to point out that if the government really had that power, that would mean that Donald Trump — or one of his surrogates, likely appointed by him and confirmed by the Republican Senate — would have the power to determine whether a statement is true or false and decide what ads Facebook is and isn’t allowed to run.

That’s the line for people who don’t like Trump.

For people who do like Trump, point out that, if 54,000 people in three states had voted differently, Hillary Clinton would be the president right now, and you’d be giving that same power to her. Conspiracy theories notwithstanding, Trump isn’t going to be president forever; sooner or later someone you really don’t like is going to be in office, and you should think carefully about the powers you want to give that person.

This comment has been deemed insightful by the community.
Rocky says:

Re: Good post, but you missed a spot...

If Biden’s campaign interviewed her and she expressed her factually wrong views about 230, and the campaign made an ad with that interview and placed it on Facebook, it would count, by her own logic, as misinformation and thus should be removed.

Heck, any campaign ad from any party containing misinformation expressed as opinion by honest people not knowing better would have to be removed.

This comment has been deemed funny by the community.
Derek Kerton (profile) says:

Mike, will you be adding / appending to this

It seems you are going to find some other dumb arguments (or others will), oft repeated, that will need to be here.

Will you append them?

I’m only asking to be polite, because legally, you must write them up, because somebody (me) has asked you to do so here in the comments. Once you take on the publisher role, you are required to write content that people demand of you. That’s just the Internet’s rules.

But seriously, append or not?

Derek Kerton (profile) says:

Re: Mike, will you be adding / appending to this

As an example, I’ve seen a few references from dumb-dumbs on these very Internets that say that 230 requires platforms like Facebook, Google, and Twitter to promote

"a true diversity of political discourse"

and you know it’s true, because those very words are in the law, Section (a)(3).

Of course, it’s in the "Findings" section of the law, which carries no legal heft at all.

Check this out, from your friends at Prager U
https://twitter.com/prageru/status/1266527494296961030

j.s. (profile) says:

Double check the following statements with respect to the passage of FOSTA/SESTA and the omnibus bill.
FOSTA/SESTA, under the guise of fighting human trafficking, gave them prosecutorial power to hold hosters responsible for the content they host. (Having not found a way to do what they wanted, they created this scary little wormhole to give them a way to put pressure on hosters, à la Silk Road and escort ads.)

"To be a bit more explicit: at no point in any court case regarding Section 230 is there a need to determine whether or not a particular website is a "platform" or a "publisher." What matters is solely the content in question. If that content is created by someone else, the website hosting it cannot be sued over it.

Really, this is the simplest, most basic understanding of Section 230: it is about placing the liability for content online on whoever created that content, and not on whoever is hosting it. If you understand that one thing, you’ll understand most of the most important things about Section 230. "

While I know the context in which you are saying this, they do actually now have that power to hold them responsible if they choose to. For discriminating against conservatives, etc., they wouldn’t give a crap, but the law has changed. March 2017. The day after FOSTA/SESTA passed, and BURIED deep within the omnibus spending bill, was the followup to FOSTA/SESTA, which gave them the ability to enact this power retroactively (basically for things posted in the past). Just FYI. And maybe I’m wrong — I definitely looked into it with regard to a specific scope when it was getting pushed through Congress — but it’s scary stuff IMO, and I think it makes your point, that they can’t hold hosters accountable or responsible for content they didn’t create, outdated as an answer. Thanks! Interesting read tho, and I see lots of stuff to check out on this site.

ryuugami says:

Mike — well done, and I expect to see the link to this article posted very often down here in the comment section 🙂

I do have a few suggestions:

1) Versioning. You said in an earlier comment that you plan to update the list as needed, and that you have already done so. I suggest adding a small "updated on …" or "added on …" text under the "title" of each section.

2) Anchors. Make each section title a link to itself, with "#" in-page anchors (like the "link to this" link that comments have). It would allow linking directly to the relevant section.

3) Quote the full text of the law, or at least the part (c), directly in the text. It would fit nicely in the middle of the "Before we get into the specifics …" paragraph. I know you linked it there, but I think it would be even better if the full context was on the same page.
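
To make suggestion 2 concrete: each section title just needs a stable id that a "#" self-link can point at. Here's a minimal sketch of a slug function for generating those ids (the scheme is purely illustrative — not whatever Techdirt's CMS actually does):

```python
import re

def slugify(title: str) -> str:
    """Turn a section title into a URL-safe in-page anchor id,
    usable as <h3 id="..."> with a matching '#...' self-link."""
    slug = title.lower()
    # collapse every run of non-alphanumeric characters into a single hyphen
    slug = re.sub(r"[^a-z0-9]+", "-", slug)
    return slug.strip("-")

print(slugify("Protection for 'Good Samaritan' blocking"))
# -> protection-for-good-samaritan-blocking
```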

This comment has been deemed insightful by the community.
Mike Masnick (profile) says:

Re: Re:

Regarding anchors: that was the original plan, but a few people pointed out that if we did it that way then people would (a) miss the title of the post, which is important and (b) miss some of the context, which is also important as some of the responses cite back to earlier answers.

Eric Wenger says:

100% I’m in complete agreement with your analysis. Could you also explain how the idea of "losing 230 protections" supposedly relates to the EARN-IT act? I’m a little lost when it is claimed that the law could result in the development of a best practice requiring the ability to break encryption in a private communication, which is then somehow tied to the ability to claim immunity against being held liable for the contents of communications that are published by a third party.

Anonymous Coward says:

230(c)(2)(B)

Your parsing of (c)(2)(B) seems…incredibly off.

make available to information content providers or others the technical means to restrict access to material….

This sounds like it’s describing, in terms that the 1990s non-internet-users could conceive, tagging something as 18+, or PG-13, or Y7, or something, so that third parties like Net Nanny can filter content out as they so choose. I can’t see how you can possibly stretch (c)(2)(B) to ever cover deleting content outright, which is the moderation decision people get the most mad about, not simply tagging.

How could deleting a post or banning an account (and in so doing, making all their posts inaccessible to read) ever possibly be construed as making "means to restrict access" available to OTHERS to use? It’s solely the act of using your own means to restrict access.

Only "good faith" (an all too subjective term) moderation is free of liability. Which might de facto mean all moderation, yeah.

(2) Civil liability No provider or user of an interactive computer service shall be held liable on account of—
>(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
>(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

Anonymous Coward says:

Re: 230(c)(2)(B)

It actually makes perfect sense. Especially if you include the full beginning of that paragraph which you conveniently left out: "any action taken to enable or…"

Also, since the scope of "restrict access to" is not explicitly defined, that means it basically has no limit to its scope. So yes it absolutely means "restrict access to anyone we want, up to, and including everyone in the entire world".

This comment has been deemed insightful by the community.
This comment has been deemed funny by the community.
nasch (profile) says:

Re: 230(c)(2)(B)

Only "good faith" (an all too subjective term) moderation is free of liability.

"You are, yet again, wrong. At least this time you’re using a phrase that actually is in the law. The problem is that it’s in the wrong section. Section (c)(2)(a) does say that:
No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected
However, that’s just one part of the law, and as explained earlier, nearly every Section 230 case about moderation hasn’t even used that part of the law, instead relying on Section (c)(1)’s separation of an interactive computer service from the content created by users. Second, the good faith clause is only in half of Section (c)(2). There’s also a separate section, which has no good faith limitation, that says:
No provider or user of an interactive computer service shall be held liable on account of... any action taken to enable or make available to information content providers or others the technical means to restrict access to material....
So, again, even if (c)(2) applied, most content moderation could avoid the "good faith" question by relying on that part, (c)(2)(B), which has no good faith requirement.

However, even if you could somehow come up with a case where the specific moderation choices were somehow crafted such that (c)(1) and (c)(2)(B) did not apply, and only (c)(2)(A) were at stake, even then, the "good faith" modifier is unlikely to matter, because a court trying to determine what constitutes "good faith" in a moderation decision is making a very subjective decision regarding expression choices, which would create massive 1st Amendment issues. So, no, the "good faith" provision is of no use to you in whatever argument you’re making."

Source:

https://www.techdirt.com/articles/20200531/23325444617/hello-youve-been-referred-here-because-youre-wrong-about-section-230-communications-decency-act.shtml

This comment has been flagged by the community. Click here to show it.

Real2020 (profile) says:

S 230 needs to be repealed to protect the ordinary american

The problem with the Techdirt responses is that they are wholly misleading. They have skin in the game: if S230 were changed, they would have to moderate their own website and comply with court orders requiring content be removed. They don’t want to give up the immunity given to them by S230.

In any civilized society, there is a legal system that holds people accountable for their wrongs and provides effective remedies. The law upholds social order and is a foundation for democracy.

This basic tenet of an effective legal system is destroyed by S230, because it provides huge inescapable loopholes to big tech, aka GOOGLE, and also to website operators: GOOGLE is not responsible for removing defamatory search responses from its search results, nor are websites responsible for removing defamatory speech. The result is that people’s reputations and lives can be destroyed for eternity, as people place defamatory remarks on websites like ripoffreport.com, which then tries to profit from the defamation and destruction of others with impunity. So far, the online defamation extortion racket has been completely successful because of the loophole that CDA 230 immunity provides; courts and people are powerless to have defamatory or private information about them removed from the web.

We know that Techdirt is not bad, nor is it GOOGLE, and chances are it might moderate its website. It is not the bad guy here. But because of S230, the bad guys get away with it, and the so-called Good Samaritan protection of S.230 is illusory, because every single lawsuit has been lost when people try to have a website remove defamatory speech against them placed there by third parties — even when the website encourages it and defames others to profit from it — so long as the website reproduces the original speech from the third party. Even when the website then tries to profit from or refuses to remove the offending content, or even after a court has found the speech breaches the law or the rights of others, S230 gives complete immunity, so the website or GOOGLE does not have to do anything. Courts now have no power to make an order against websites, because they are precluded from doing so by S230, which is why S230 is loved so much by them.

This was bought and paid for legislation from big tech, and they h

Look people, this has been going on for almost 10 years, the court cases are in, and this is the undeniable effect.

Uriel-238 (profile) says:

Re: "wholly misleading"

When you say an opinion is wholly misleading, then for that claim to be valid you have to clarify:

  1. What the opinion says (the gist)
  2. What the truth is (and how it’s different from the opinion)
  3. The misleading part.

You did nothing of this sort. Instead, you suggested: Techdirt is biased because it benefits from Section 230.

That may mean we have to scrutinize its articles more, but that doesn’t mean they’re automatically making statements that are false and misleading.

Do the first bit above, and then you can suggest the second bit as to why TechDirt might have intentionally misled. But until you do the first bit, the second bit is meaningless.

Mike says:

One case that could potentially prevail

The Babylon Bee being demonetized by Facebook for "inciting violence" with a post that riffs on Monty Python would very likely count for the following reasons:

  1. It was clearly a work of satire.
  2. It was objectively not a violation of Facebook’s own content guidelines.
  3. Facebook stated, after two levels of human review, that they felt it was a violation of their content policies for "incitement to violence."

I don’t see this having any constitutional issues for the following reasons:

  1. It’s clearly not a good faith human decision when on appeal, they punish Babylon Bee by making an obviously absurd statement about the nature of their content.
  2. The case would rest on particular actions that particular people at Facebook chose, not the provision of moderation tools.
  3. The demonetization was the aftermath of the functionally defamatory, bad faith declarations against the Bee.
  4. The courts have strongly tended to hold that S230 and the 1A are not short-circuits against contract law.

Removing the Bee at any time is a legal and constitutional option, but how it’s removed is not. What’s clear is that FB wants to remove their content, but not go through the legal avenues that contract law, S230 and the 1A provide because those would require formally terminating a contract, closing out accounts lawfully (including paying out monies due) and taking a formal position that pisses off a large number of FB users.

And we should welcome such a lawsuit winning if it were to come because human absurdity should not be protected by the 1A and S230 in these cases. For example, if TechDirt were to be banned from Twitter or FB for "promoting white supremacy" for arguing for freedom here (which is not an unrealistic thing, sadly in 2020), that interpretation of TechDirt’s content should be protected by S230.

Also note: 99% of users would have no case under such a victory anyway. The very reason the Bee would have a real case is that they had a substantial business relationship where both sides benefited immensely financially. Random shit-posters on Twitter and Facebook would be entitled to no damages because they lost nothing of financial value.

This comment has been deemed insightful by the community.
nasch (profile) says:

Re: One case that could potentially prevail

What contract are you referring to? Unless the Bee had some special one off contract with Facebook it must be the standard terms of service, which though I haven’t read them I’m pretty sure don’t guarantee any money paid to the user.

Removing the Bee at any time is a legal and constitutional option, but how it’s removed is not.

Maybe you can provide the provision of the constitution or other law that you feel may have been violated. Be specific.

What’s clear is that FB wants to remove their content, but not go through the legal avenues that contract law, S230 and the 1A provide because those would require formally terminating a contract, closing out accounts lawfully (including paying out monies due) and taking a formal position that p***es off a large number of FB users.

1) the legal avenue is they moderate however they see fit, because section 230 says they can do that whenever they want

2) does the contract (terms of service) have a specific term, or does it state it can be terminated at any time?

3) was the Bee denied money that they had already earned prior to the decision to demonetize?

4) they don’t have to terminate the account in order to demonetize it

For example, if TechDirt were to be banned from Twitter or FB for "promoting white supremacy" for arguing for freedom here (which is not an unrealistic thing, sadly in 2020), that interpretation of TechDirt’s content should be protected by S230.

Well I agree, but I’m wondering if you meant "should not be protected".

The very reason the Bee would have a real case is that they had a substantial business relationship where both sides benefited immensely financially.

There is no exception for that case in section 230. Or the first amendment.

Mike says:

Re: Re: One case that could potentially prevail

What contract are you referring to? Unless the Bee had some special one off contract with Facebook it must be the standard terms of service, which though I haven’t read them I’m pretty sure don’t guarantee any money paid to the user.

That is technically correct, but I think somewhat beside the point. The Facebook Terms of Service say they can pull out at any time. They all do. Where the problem comes in is when Facebook states that it is taking an action for a particular reason that is completely divorced from reality with regard to its Terms of Service.

So yes, Facebook can legally send Babylon Bee a letter saying "after consideration, we are terminating your services pursuant to clause X and we will pay out all monies due."

What Facebook cannot do is say "aha! Your monies are frozen because your post saying the sky is blue is white supremacy!" They cannot do that, even constitutionally, for a simple reason: the 1A is not a shortcut around contract law. It does not give you free clearance to shit all over another party in a matter related to speech and claim that white is black and black is white and other insane reasons for terminating the contract.

That is why I said, specifically, the Terms of Service are a form of contract. If they weren’t, Facebook would have few avenues in civil court to pursue users for violating them. The contract goes both ways. They can lawfully terminate it for reasons they allowed in advance. They cannot cry "declaring that the sky is blue is white supremacist logic and therefore we terminate your contract [and thus cost you $X]."

As a conservative-leaning individual, if I knowingly enter into a relationship with a Communist to publish their site, contract law doesn’t allow me to just jump in and say "fuck you, I hate Communism and down with your content." They can rightfully sue the hell out of me. Federal courts have already started to move in the direction of noting that S230 cannot shortcircuit basic contract law without essentially undermining the very purpose of Congress in creating a viable free Internet with a robust marketplace of ideas.

This comment has been deemed insightful by the community.
nasch (profile) says:

Re: Re: Re: One case that could potentially prevail

What Facebook cannot do is say "aha! Your monies are frozen because your post saying the sky is blue is white supremacy!"

Is that what they did? Had the Bee already earned some revenue prior to the decision to demonetize, which was then denied them? If so, what does that have to do with section 230? If not, why are you bringing it up?

As a conservative-leaning individual, if I knowingly enter into a relationship with a Communist to publish their site, contract law doesn’t allow me to just jump in and say "f*** you, I hate Communism and down with your content."

Only if the contract states that you won’t do that. If the contract is silent on the matter, other law (such as section 230) controls. In which case you absolutely can do exactly that.

Absolutely true, but the point is that the decisions contra the Bee would likely not be covered under S230 because they are that rare case involving only human actors exercising genuinely bad faith AND potentially substantial costs.

There is no exception in section 230 for "only human actors" or "substantial costs". As for "bad faith", scroll up to the part of the article starting with ‘If you said "Section 230 requires all moderation to be in "good faith" and this moderation is "biased" so you don’t get 230 protections"’. Read that section, and then don’t bother complaining about bad faith anymore.

Mike says:

Re: Re: One case that could potentially prevail

2) does the contract (terms of service) have a specific term, or does it state it can be terminated at any time?

This is why the concept of good faith is important. "Because it’s to my advantage to now screw you over" is generally not recognized as covered under that.

4) they don’t have to terminate the account in order to demonitize it

Absolutely true, but the point is that the decisions contra the Bee would likely not be covered under S230 because they are that rare case involving only human actors exercising genuinely bad faith AND potentially substantial costs.

Rocky says:

Re: Re: Re: One case that could potentially prevail

Absolutely true, but the point is that the decisions contra the Bee would likely not be covered under S230 because they are that rare case involving only human actors exercising genuinely bad faith AND potentially substantial costs.

You assert "genuinely bad faith" as the reason why the human review upheld the algorithmic decision to demonetize Babylon Bee. I do hope you have some citations backing that assertion up; otherwise you are just jumping to conclusions.

I hope you are aware that there are actually people in the world who have never seen Monty Python or who miss the satire/sarcasm/irony. If someone who works as a moderator for Facebook misses that context, the Babylon Bee post may well be misconstrued — especially in light of the current divisive rhetoric going on.

I doubt Facebook were out to get Babylon Bee, because we would then have seen a pattern of behavior where this is the norm. I chalk this whole incident up to the impossibility of (perfect) moderation at scale.

Graham The Cat says:

Disqus platform Doesn't Seem to Moderate at All

Gab loves white supremacy, so I guess you could say they have moderation of some sort there? You know, pro-white-supremacy "moderation"?

The Disqus web discussion platform doesn’t appear to moderate anything at all.

The mere fact that platforms have the incentive to moderate doesn’t mean they all actually perform that function in a socially responsible way.

This comment has been flagged by the community. Click here to show it.

Cory Tate (profile) says:

Private vs Public

I don’t really know anything about this 230. But I do know that if you are in your house and anyone records you talking about a copyrighted intellectual idea, or even the sound of you Fukn, and then they post it on the internet, it is illegal. Now if you’re in a public park telling everyone about your million dollar idea, or again Fukn, they have every right to record and take credit. Even make a porno from 2 strangers, and there is nothing anyone can do about it. So I guess we just need to treat the internet like a public park. Don’t act like a drunk baby raper, show some kinda morals or class. But if you have platforms or people hacking your phones, PCs, mics and webcams in your private home, then posting that without your consent, they are breaking the law!!!! Don’t believe me? Ask the Girls Gone Wild guy. He has been sued so many times and always won each case.

This comment has been deemed funny by the community.
Derpy says:

Too long

Great article, but absolutely useless for the purpose in your title. Nobody is going to link this to anyone who doesn’t understand Section 230; this is FAR too wordy for anyone like that, and we all know it. You may as well have linked an entire book.

Compress this down to 4 sentences, or at least open by shutting down the entire argument against 230 in that span or less, or this is utterly useless. You can still go on and on about all the nuances below it, but the mostly-conservatives who need this information are no longer capable of digesting it in such long form.

This comment has been deemed insightful by the community.
Uriel-238 (profile) says:

Re: He don't know me very well, do he.

I think the audience of TechDirt is like the audience of the EFF, who recognize the value of the esoteric nitty-gritty, since the devil is in the details. It’s easy to make a meme or soundbite that seems sensible and is technically true but fails to consider important context.

To borrow an example from Beau of the Fifth Column, the giant madman who axed through your door and dragged you out of your house and into a white van is a firefighter and your house is burning.

Victor (user link) says:

What would you then say about Facebook using Section 230 in court arguments themselves?

https://www.theguardian.com/technology/2018/jul/02/facebook-mark-zuckerberg-platform-publisher-lawsuit

"In the Six4Three case, Facebook has also cited Section 230 of the Communications Decency Act, US legislation that paved the way for the modern internet by asserting that platforms cannot be liable for content users post on their sites. In court filings, Facebook quoted the law saying providers of a “computer service” should not be “treated as the publisher” of information from others."

charlie_jones (profile) says:

I was referred here, and I’m not wrong.

If you own a website, like Facebook or Reddit, and someone plans and executes a terror plot (or any other murder for that matter) you can make money off that content without liability.

That’s wrong. That needs to change. Firstly, no one should be hosting that, and secondly they certainly shouldn’t be profiting from that kind of content without liability.

I understand all the arguments for section 230, but it’s a bad law. And bad arguments against it doesn’t make it a good one.

This comment has been deemed insightful by the community.
nasch (profile) says:

Re: Re:

I understand all the arguments for section 230, but it’s a bad law. And bad arguments against it doesn’t make it a good one.

What’s your proposal? Keep in mind to be relevant it has to 1) not make things worse and 2) be consistent with the First Amendment and private property law. Holding Party A liable for what Party B says sounds tricky.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re:

Have you come up with a good argument by now?

A social media platform cannot monitor every post and private message between potential criminals. It’s not fair to hold them accountable for things they reasonably did not know about. If the government comes to them with warrants and credible information about criminal activity the platforms almost always cooperate. Section 230 would certainly not protect Facebook knowingly shielding criminal groups from the government’s lawful investigations which is why they don’t do that.

For a comparison, should AT&T be held accountable because criminals use their service to conduct illicit behavior? Of course not. You don’t have to be a corporate shill to understand why that would be completely unfair and not productive at all. All it would do is make them stop providing phone service to people from "high crime areas" so now poor people in urban areas can’t have phones since their neighbors might sell drugs.

This comment has been deemed funny by the community.
bobbi says:

"That's NOT the offending statute you're looking for" (h/t Obi)

["If all this stuff is actually protected by the 1st Amendment, then we can just get rid of Section 230"] ===> Wrong. False. Incorrect. Untrue. BZZZT~!

If this is protected by the First Amendment, we MUST ABOLISH the first amendment.

Checkmate, america haters~!

This comment has been flagged by the community. Click here to show it.

Barry LaFleur says:

You were sent here

I was not actually sent here; I stumbled onto it. I am not a lawyer, nor do I study the law — I figure it out by bumping into it and seeing how far I can go.

I did assume that moderated comment sections acted like publishers once they started deleting comments they found offensive for any reason, and I used that as a sounding board to see how I could say what I wanted to say. I have no intentions of sharing porn or promoting violence as a means to an end. However, I am very frustrated with the state of the world as it is today, and I use a lot of sarcasm to make that point. I have probably put eight to ten million words on the internet since 2000 in comments alone. In all that time, only one other commenter asked me why I said the things I said. Here was my answer. There is an old Arabic saying: think twice, once drunk and once sober. I do not drink, so for me, being drunk is thinking emotionally as opposed to rationally, and there is not a person on this internet entitled to my sober thoughts. I am not looking to teach people; I am looking for people who can correct me with logic and reason.

Even my drunken thoughts have a truth in them. If I am wrong about that, I would appreciate an argument that corrects me. This article clearly does that.

It will not change the way I comment, instead it lets me know exactly where a line is, in case I actually do get drunk with emotions as opposed to generating them myself.

I do not claim sarcasm lightly or as an umbrella defense, most of my writings are not sarcastic at all. When they are sarcastic, I know the backdrop, the motivation as well as my intent. If I ever have to go in front of a judge to explain myself, I would appreciate the opportunity to explain myself, on the record.

Thank you for taking the time to explain the law, I received it well and it provided comfort. That is one less wall I will have to walk into to understand.

Knowledge is power

Anonymous Coward says:

I’d like to note that the link to “Don’t Shoot the Message Board” is no longer operational. Here is a functional link:
https://netchoice.org/wp-content/uploads/2020/04/Dont-Shoot-the-Message-Board-Clean-Copia.pdf

I’ve been thinking: maybe if Section 230 got repealed, many websites might only have to add “THE FOLLOWING INFORMATION MAY OR MAY NOT BE TRUE AND MAY OR MAY NOT BE FICTIONAL IN NATURE” at the top of every page before a user-generated discussion or post. I think this might protect a website from defamation lawsuits dealing with user-produced statements and links to fake news stories. At first, I thought the websites would need to add this to every post a user makes; then I realized it might only be necessary to add such a statement to the start of a page, similarly to how films append “THIS IS A WORK OF FICTION” to the end credits. I think this type of disclaimer is something websites should use anyway.

John Vail says:

People whom you say are wrong may turn out to be right. Here’s Ian Millhiser’s description of a pending Supreme Court case: https://www.vox.com/policy-and-politics/2022/10/6/23389028/supreme-court-section-230-google-gonzalez-youtube-twitter-facebook-harry-styles.

Ian is a well-respected legal journalist. I have represented victims of terrorism and have consulted on this case, so I offer Ian’s writing instead of my own, which could legitimately be viewed as biased.
