In The Last Six Months Techdirt’s Antispam Algorithm Has Stopped Over A Million Spam Comments; Should We Lose 230 Protections For That?

from the giving-a-sense-of-the-scale dept

The Supreme Court is currently deliberating whether or not algorithms deserve protections under Section 230. And I hear from lots of people that maybe Section 230 wasn’t meant to cover algorithmic policing and recommendations of content. But that’s utter nonsense.

The whole area of content moderation first came about as a response to the earliest versions of spam. And one thing that people learned quite quickly is that you can’t manually police for spam if your site has even the slightest level of popularity. It will get flooded.

This is why any site needs to have some sort of automation to deal with spam. Indeed, for years, Techdirt has been using a combination of different tools and setups to fight spam comments, but I’d never really looked into the numbers until just recently (mostly on a whim, because I found the setting where those stats are!), and it’s kinda stunning. First off, we get way more spam attempts than I had even realized.

In the last six months alone, Techdirt received over 1.3 million attempts to spam our comments. That’s compared to the slightly over 40,000 legitimate comments we received in the same period. Here’s a chart of the spam comments per month:

I have no idea why spam grew so rapidly in January and February before falling in March, but even March’s 125,000+ spam messages completely dwarf the number of legitimate comments we got. The only possible way to keep up is to use automated systems. And, to be clear, while some small percentage of spam does get through, and we have a few legit comments caught in the spam filter, I’d argue we do a pretty good job of catching most spam, and allowing through most comments.

But, the larger point: without the multiple algorithmic systems we use to catch spam, we’d never be able to manage that amount of spam. Hell, we couldn’t manually handle even 1% of the spam attempts we get right now. It would overwhelm us.
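To give a rough sense of what “algorithmic” means here at its most basic, here is a minimal sketch of the kind of heuristic scoring a comment spam filter might apply. This is purely illustrative: the phrases, weights, and threshold are invented for the example, and our actual setup combines multiple different tools rather than anything this simple.

import re

# Toy heuristic scorer, for illustration only. Real filters combine many more
# signals (IP reputation, honeypot fields, Bayesian models, shared block lists).
SPAM_PHRASES = ("buy now", "cheap pills", "casino bonus", "crypto giveaway")

def spam_score(comment: str, link_limit: int = 2) -> int:
    """Return a rough spam score for a comment; higher means more suspicious."""
    score = 0
    links = re.findall(r"https?://", comment, flags=re.IGNORECASE)
    if len(links) > link_limit:
        score += 2 * (len(links) - link_limit)    # walls of links are a classic tell
    lowered = comment.lower()
    score += sum(3 for phrase in SPAM_PHRASES if phrase in lowered)
    if comment and sum(c.isupper() for c in comment) / len(comment) > 0.5:
        score += 2                                # ALL-CAPS shouting is a weak signal
    return score

def is_probably_spam(comment: str, threshold: int = 4) -> bool:
    return spam_score(comment) >= threshold

Even a toy scorer like this makes the point: the decision has to happen in software, thousands of times a day, because no human team is reading 1.3 million submissions by hand.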

So if the courts (or, horror of horrors, Congress) were to decide that “algorithms” are no longer protected under Section 230, it would destroy Techdirt. While the 1st Amendment would eventually protect us, the lack of 230 protections would turn using a spam filter into a liability, opening up the risk of having to fight a full legal battle just to prove our right to block spam comments.

As such, our choices would be to turn off the algorithms and let spam flow, shut down our comments entirely, or risk ruinous lawsuits for the “harm” of trying to stop spam with an automated filter.

Technological filters (i.e., algorithms) should obviously be protected by Section 230, because without them, we lose the ability to fight spam, and the amount of such content is truly overwhelming. And that’s just for us, a pretty small site. Imagine how larger sites are dealing with this stuff.



Comments on “In The Last Six Months Techdirt’s Antispam Algorithm Has Stopped Over A Million Spam Comments; Should We Lose 230 Protections For That?”

375 Comments
Anonymous Coward says:

I’d argue we do a pretty good job of catching most spam, and allowing through most comments.

I would mostly agree (before the site migration I don’t think I ever had a comment caught). After the migration, it seems that some small percentage, maybe 5-10%, fail to go through. Sometimes there’s a weird 429 (more on that below), which basically only occurs when I stop browsing to read an article and then post a comment. If I go through and refresh several pages at once to look for new comments… it works fine. It also does seem like the comment system has been less ornery in the last month or so (but maybe that’s because I’ve been posting a bit less often).

Anyhow: Techdirt’s comment system is probably the best one I’ve seen. SCOTUS making its legality unclear would be a huge blow… probably to all of the US. And it would also be pretty damaging for anyone self-hosting user-generated content (and I guess most Mastodon instances?).
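For anyone wondering, a 429 is HTTP’s “Too Many Requests” status: the server’s rate limiter pushing back. A client that wanted to cope with it gracefully could wait and retry, roughly as in the sketch below; the endpoint URL and form field are hypothetical, not Techdirt’s actual comment API.

import time
import requests  # third-party: pip install requests

def post_with_backoff(url: str, data: dict, max_retries: int = 3) -> requests.Response:
    """POST form data, waiting and retrying whenever the server answers 429."""
    delay = 2.0
    for _ in range(max_retries + 1):
        resp = requests.post(url, data=data, timeout=10)
        if resp.status_code != 429:
            return resp
        # Honor a numeric Retry-After hint if the server sends one, else back off exponentially.
        retry_after = resp.headers.get("Retry-After", "")
        time.sleep(float(retry_after) if retry_after.isdigit() else delay)
        delay *= 2
    return resp

# Hypothetical endpoint and field name, for illustration only:
# post_with_backoff("https://example.com/comments", {"comment": "Nice article!"})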

Anonymous Coward says:

Re:

It also does seem like the comment system has been less ornery in the last month or so

It seems about the same to me, and the “Preview” button still does absolutely nothing (as has been the case since the “upgrade”). And the comment box still doesn’t show enough context, which results in errors such as accidental top-level posting.

Anonymous Coward says:

Re: Re: Re:

Some things don’t work.

For instance, when using the greater-than > sign to quote text, it doesn’t show up with the gray box.

Links using square brackets then parens []() don’t show up as links.

There are a couple of other ones that I have run into as well, but these were the first two I could think of off the top of my head.
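For what it’s worth, the bracket-then-parens form described above is standard Markdown link syntax; a renderer that supported it would do a conversion roughly like this simplified regex sketch (not whatever Techdirt’s comment renderer actually does):

import html
import re

def render_markdown_links(text: str) -> str:
    """Escape the text, then turn [label](http...) into an HTML anchor."""
    escaped = html.escape(text)
    return re.sub(
        r"\[([^\]]+)\]\((https?://[^)\s]+)\)",
        r'<a href="\2">\1</a>',
        escaped,
    )

print(render_markdown_links("See [Techdirt](https://www.techdirt.com) for details."))
# -> See <a href="https://www.techdirt.com">Techdirt</a> for details.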

Tanner Andrews (profile) says:

Re: Re: Re: preview still borken

I suspect that it has to do with having javascript turned off. While that is, generally speaking, the right setting, some web sites think they are smarter than they really are, and that javascript lets them show off their cleverness.

Preview does not work. Flag does not work. Marking ``insightful” or ``funny” does not work. They all worked on the old platform.

I am guessing that the new platform was rolled out with little testing because these things really ought to have been caught.

Anonymous Coward says:

As I’ve said elsewhere on TD, you cannot have an open comment section without robust, and ideally pre-emptive, ways to stop bad actors. Otherwise you have essentially nothing but spam.

Granted, it was in the context of saying that TD could be more aggressive in removing trolls and harassers and still retain anonymous commentary, but either way, yeah, if you allow UGC you have to be a fortress.

This comment has been deemed insightful by the community.
Who Cares (profile) says:

It isn't just Techdirt

Block lists would be sued into the ground, regardless of whether they are located in the US, if algorithmic filtering is no longer protected under 230.

They don’t need to go after Techdirt; just preventing access to, for example, Spamhaus (and similar efforts) would increase the spam that gets through.
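For readers unfamiliar with how those lists plug in: DNS-based block lists such as Spamhaus are queried over ordinary DNS, by reversing the IP’s octets and looking them up under the list’s zone; getting an answer back means “listed.” A minimal sketch, assuming an IPv4 address (zen.spamhaus.org is Spamhaus’s real combined zone, but everything else here is illustrative and subject to their usage limits):

import socket

def is_listed(ip: str, zone: str = "zen.spamhaus.org") -> bool:
    """Check an IPv4 address against a DNS-based block list (DNSBL).

    The query name is the IP's octets reversed, under the list's zone:
    203.0.113.7 -> 7.113.0.203.zen.spamhaus.org. Any A record back
    means "listed"; NXDOMAIN means "not listed".
    """
    query = ".".join(reversed(ip.split("."))) + "." + zone
    try:
        socket.gethostbyname(query)
        return True
    except socket.gaierror:
        return False

# A mail server or comment form would typically run this check on the
# submitting IP before accepting the message, e.g.:
# if is_listed(remote_ip): reject_or_flag()   # hypothetical helper names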

Ninja (profile) says:

Re:

I always get astonished by the sheer amount of spam big companies report. It’s several hundred billion messages if you consider email alone. This has a cost to send and filter both in terms of money and environmentally speaking as well. But crooks profit from it, and these crooks are usually in countries that are not very interested in actually doing something to curb the problem. A coordinated international effort and some heavy prison time would help a lot, including putting pressure on those countries. Filtering is obvious and welcome, but it’s like drying ice: it doesn’t go to the root of the problem.

sumgai (profile) says:

Re: Re:

This has a cost to send and filter both in terms of money and environmentally speaking as well.

No, it doesn’t. In fact, while filtering may be a cost on the receiving end, there is no such commensurate cost on the sending end. That’s the problem.

Institute an across-the-board fee to send messages, and all spam will stop in about 20 seconds, perhaps sooner. The details have to be worked out, such as volume per time segment, but I’ve no doubt that if Grandma were required to spend a penny to send an email to her grandson, she’d do it in a heartbeat.

Small clubs with a monthly newsletter, say 500 to 1,000 members, would incur a couple of dollars in cost, and they’d gladly pay it. Contrast that with spamming outfits that send more than a million pieces an hour: they’d not see enough ROI to make it worthwhile… even at a penny a piece. But of course, if I were in charge, I’d raise the fee per piece on a tier basis – instead of a penny a piece, those million spams would cost a quarter a piece. $250K versus $10K starts to eat into the profit picture pretty quickly.
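Running the commenter’s own hypothetical numbers (nothing here beyond the rates and volumes proposed above):

def sending_cost(messages: int, fee_per_message: float) -> float:
    # Fees and volumes are the commenter's hypotheticals, not real pricing.
    return messages * fee_per_message

club_newsletter = sending_cost(500, 0.01)        # small club, a penny per message
spam_run_flat   = sending_cost(1_000_000, 0.01)  # million-piece spam run, flat penny rate
spam_run_tiered = sending_cost(1_000_000, 0.25)  # same run at the tiered quarter rate

print(f"Club newsletter:   ${club_newsletter:,.2f}")   # $5.00
print(f"Spam run (flat):   ${spam_run_flat:,.2f}")     # $10,000.00
print(f"Spam run (tiered): ${spam_run_tiered:,.2f}")   # $250,000.00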

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

To those who would answer “yes” to the headline, I have One Simple Question for you.

Yes or no: Do you believe the government should have the legal right to compel any privately owned interactive web service into hosting legally protected speech that the owners/operators of said service don’t want to host?

This comment has been flagged by the community.

This comment has been deemed insightful by the community.
Rocky says:

Re: Re: Re:

I mean, if you’re demanding ‘yes’ or ‘no’ to a question you need, by my count, thirty-four words just to state, you ain’t operating in any better faith than the trolls you’re trying to call out.

I mean, if you aren’t specific enough you’ll get a dishonest answer. They are labelled trolls for a reason and being very specific shows how dishonest they really are when they avoid answering.

If you don’t get this you may be a troll yourself.

Stephen T. Stone (profile) says:

Re: Re: Re:

I phrase the question that way to prevent someone from finding and exploiting loopholes in a shorter version of the question. My question also applies equally to all kinds of speech that I find objectionable but is otherwise legal. I don’t get into specific examples unless I feel a follow-up question is warranted.

If you can think of a simpler way to phrase that question without turning it into a partisan game or creating an exploitable loophole to say “yes” in some obscure-ass situation, by all means, hit me with your best shot. But I’ve spent more time than you might think on the phrasing of that question. You’re probably not going to do much better than I did.

This comment has been flagged by the community.

Anonymous Coward says:

Re: Re: Re:2

I mean, if you’re designing the question so’s to force only one possible answer, that kind of shouts pretty clearly that it isn’t a question.

So I wouldn’t bother. Someone saying ‘yes’ to ‘do you think a government should compel speech’ isn’t ‘exploiting a loophole’; it invites a conversation about which speech, and why.

Stephen T. Stone (profile) says:

Re: Re: Re:3

Someone saying ‘yes’ to ‘do you think a government should compel speech’ isn’t ‘exploiting a loophole’

“Should the government make Twitter host some kinds of speech?” That is the essential question at the heart of my much longer one. But if someone says “yes” to that, they can then specify that speech which would obviously offend a shitload of people (e.g., racial slurs) should be exempt from compelled hosting. It allows them to escape the radical nature of what they’re proposing⁠—and the “free speech absolutism” bullshit that often backs their views⁠—by abandoning a defense of The Worst Speech™.

I phrase the question the way I do because that offensive speech is legal. Racial slurs are “legally protected speech” under the First Amendment, but many social media services don’t want to host that kind of speech. The point of my question is to remind those who may like the idea of compelled hosting that “legally protected speech” includes bigoted speech.

I will defend, to my dying breath, the right of a bigot to say any bigoted thing they want. Let them speak their obscenities however they wish; they must have that right and the government must protect it. But nobody has to host their bullshit⁠—and the government has no right to make anyone host it. If someone is going to argue that Twitter not hosting bigoted speech is “censorship”, I’m going to ask them One Simple Question and find out which concept they’re really in favor of: free speech or free reach.

Anonymous Coward says:

Re: Re: Re:4

I would point out, for a start, that your question, despite the time you say you spent on it and its specificity in other regards, makes no distinction between, for example, hosting comments by users, things said by the media company itself, or things the company is already legally compelled to say that you might not even think of as ‘speech’ as such because it’s such an everyday matter.

Stephen T. Stone (profile) says:

Re: Re: Re:5

You raise a fair point. That said: Other than first-party speech such companies are already legally required to host (e.g., Terms of Service), all speech is fair game under my question. But I’ll make an appropriate edit to the question based on your point⁠—and as it turns out, that edit lets me cut down on a bit of the verbiage:

Yes or no: Do you believe the government should have the right to compel any interactive web service into hosting any third-party speech that it would otherwise refuse to host?

Tanner Andrews (profile) says:

Re: Re: Re:6 still a hard ``no''

Yes or no: Do you believe the government should have the right to compel any interactive web service into hosting any third-party speech that it would otherwise refuse to host?

I am still at a hard ``no” on this. I am in the same place as to my living room. Come in here, start spouting the sort of stuff that we have in mind as objectionable on the web, and you are likely to be shown the door.

My living room, my web site, my rules.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:7

I am still at a hard “no” on this. I am in the same place as to my living room. Come in here, start spouting the sort of stuff that we have in mind as objectionable on the web, and you are likely to be shown the door.

My living room, my web site, my rules.

Your living room is not a speech distribution center or a printing press. Just as your living room is not a grocery store. If you decide to sell propane out of your living room, you can be sure your government will have something to say about that. Because that activity is regulated and comes with certain responsibilities.

This is the same for speech distribution. When you take on that activity, you take on the responsibilities that come with it. This includes modest restrictions on your First Amendment rights. Restrictions that were long ago held Constitutional by the Supreme Court. Restrictions that have substantial positive effects on society. You don’t have to be a speech distributor if you don’t like these restrictions. Just as you don’t have to sell propane or open a grocery store.

Property arguments are bad arguments. It’s not about property, it’s about behavior. There are many restrictions on your “property rights” even in your own home. Are you allowed to play loud music at 2am that wakes your neighbors? No – virtually every city in the country has rules against that. Are you allowed to have noxious odors coming from your property? Again, virtually every city has rules against that. We allow people to enjoy their property in this country with few limits. But you hit those limits when your use of your property starts affecting other people. And what do you know? When you use your “property” to distribute the speech of other people, you are in fact affecting other people. This brings regulation. Certain restrictions. This is a basic concept in the law.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:9

Neither is Twitter.

Ask Masnick or any other Section 230 hero if this is true. This claim is so dumb that it’s a waste of my time to even respond. Speech distribution is literally their business. That’s it. They accept your speech and deliver it to the world. I’m not sure what you think their business is if it’s not speech distribution. These kind of comments are yet more demonstration that you are totally clueless on the legal issues here. It’s honestly not a good reflection on Masnick that regular users of his forum are so ill-informed.

Stephen T. Stone (profile) says:

Re: Re: Re:10

They accept your speech and deliver it to the world.

No, they don’t. They accept your speech and store it on their servers; only when it’s accessed by other end users is it “delivered”, and not in the sense that one might send/receive an email. Even if you come up with the most narrow and hyper-qualified definition of “distributor” to make your argument work, under the law, Twitter isn’t the distributor or publisher of any third-party speech on its service even if the speech is defamatory. Under your asinine logic, every Twitter employee⁠—past and present⁠—should serve jail time because they’re the distributors and publishers of every bit of CSAM on Twitter. I’d ask if you realize how ridiculous that sounds, but you clearly don’t give a fuck about that.

This comment has been flagged by the community.

Anonymous Coward says:

Re: Re: Re:3

I mean, if you’re designing the question so’s to force only one possible answer, that kind of shouts pretty clearly that it isn’t a question.

Arguably so, a lot of questions phrased that way are entirely intended to be rhetorical. But then, so the fuck what? Do you think trolls extend any kind of courtesy when they’re shilling their stupid shit like male-female relationships and abstaining from drug use?

It’s not our fucking responsibility to be better than the trolls, that’s what gives them opportunities to do more dangerous shit like get Roe v. Wade destroyed. “Be the better person” has always been a terrible argument and it always will be. If trolls don’t like being embarrassed they can fuck off back to 4chan like all the other cis straight scum.

This comment has been flagged by the community.

Stephen T. Stone (profile) says:

Re: Re: Re:5

The only answers I’m looking for are “yes” and “no”. If the answer is “yes” (or “no, but…”), I’m going to drill down further. If the answer is “no”, I don’t have anything else to ask.

And yes, I openly admit that the question is designed to elicit a “no”. Someone who answers “yes” (or “no, but…”) is essentially supporting the idea of forcing even the smallest social media service to host their speech. The question should be enough to make anyone think that position is unacceptable. Besides, I’ve yet to see anyone answer “yes” and justify that answer with anything approaching a decent argument.

(And I’ll again note for the record that the question is content agnostic. No service, regardless of its sociopolitical leanings, should be required to host speech it would otherwise ban. Twitter and Truth Social should each have the right to ban speech that would be unacceptable on one and acceptable on the other.)

Anonymous Coward says:

Re: Re: Re:6

I remind you you said this:

“If you can think of a simpler way to phrase that question without turning it into a partisan game or creating an exploitable loophole to say “yes” in some obscure-ass situation.”

That very much does not sound like you’re looking for ‘yes’ answers. That sounds like if someone could make a ‘yes’ answer and defend it, you’d promptly revise your query so’s to eliminate that possibility.

Stephen T. Stone (profile) says:

Re: Re: Re:7

That very much does not sound like you’re looking for ‘yes’ answers. That sounds like if someone could make a ‘yes’ answer and defend it, you’d promptly revise your query so’s to eliminate that possibility.

Partly, yes, but that’s also part of the point: If someone is willing to defend the compelled hosting of third-party speech, I’m going to find a way to test the limits of that defense until it breaks.

I own my position on the forced hosting of speech (it shouldn’t be a thing). My question, and any follow-ups thereof, are meant to make anyone who thinks otherwise own their position until they’d prefer to disown it. Don’t like it? Don’t answer it. Ain’t nobody here gonna compel you to say “yes” or “no”⁠—including me.

This comment has been flagged by the community.

Anonymous Coward says:

Re: Re: Re:8

Bluntly, I think there are circumstances where forcing a company to allow speech they’d prefer to remove is, in fact, going to be the correct choice — just as I think forcing a company to remove speech they would prefer to host can be the correct choice.

Both of those are evaluations that depend very much on the facts of a given decision (or, more likely, hypothetical, since stuff like this is far more common in the imagination than in practice), of course.

The idea that all speech must be treated equally in all circumstances by a government is not one I share and not one that’s really ever been (or should be) practised anywhere anyway.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:9

The idea that all speech must be treated equally in all circumstances by a government is not one I share and not one that’s really ever been (or should be) practised anywhere anyway.

This is literally the foundation of the 1st Amendment. The government must treat all speech equally unless they can justify treating it differently, which often requires the strictest of scrutiny.

sumgai (profile) says:

Re: Re: Re:11 Unless, unless, unless, etc.

it is straight up invented because the text of the Amendment is unworkable.

‘Sfunny, it’s been working just fine for a few years short of two and a half centuries. People keep on testing (pushing) the limits, and so far, they keep running into a very hard wall, one that bounces them right back into reality.

Now, where you’re coming from, there are constructs in various laws that would seem to contradict that statement, but if you read the whole bleeping Constitution and not just your favorite cherry-picked parts of it, then you’ll understand that 1A does NOT protect criminal acts. It only protects speech, period. The only speech that a reasonable person would consider to be criminal in nature is that which purports to incite a criminalized action. And criminalizing an act or action is something that Congress is both permitted and required to do, once again thanks to the Constitution. (In fact it is 10A that says States can do it too. Prior to that amendment, States were not sure just how far they could go in making their own laws.)

Do recall, and never stop recalling, that 1A protects any and all non-criminal speech from government interference or proscription – it does NOT prohibit consequences from either the government (criminal) or individual persons (civil). Further, 1A allows individual persons or groups of persons to associate as they see fit, period. This means that no government proscription may be laid on anyone who wishes to meet with another person or group, either in-person or by means other than face-to-face. By that, the logical extension is that the government cannot force one or more persons to meet in any manner whatsoever. That is not the ‘free speech’ clause, that is the ‘right of association’ clause, both being found in 1A. This paragraph explains, in excruciating detail (at the third grade level), just why 1A applies to the whole mess of social media. Those who wish to twist it around to their personal desires are doomed to run into that very hard wall, ad infinitum.

For the purposes of this reminder, “a reasonable person” is one who subscribes and adheres to the societal norm. That norm is shaped by the laws, regulations and rules promulgated by appropriate legislative and administrative bodies.

Stephen T. Stone (profile) says:

Re: Re: Re:12

1A allows individual persons or groups of persons to associate as they see fit, period. This means that no government proscription may be laid on anyone who wishes to meet with another person or group, either in-person or by means other than face-to-face. By that, the logical extension is that the government cannot force one or more persons to meet in any manner whatsoever.

This also extends to cyberspace: The government can’t force Twitter to associate with people who break Twitter’s rules⁠—or to host speech that Twitter’s higher-ups don’t want associated with the service (whether implicitly or explicitly). That applies to services other than Twitter, too. And anyone who thinks otherwise is one of those “free reach” dipshits.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:13

This also extends to cyberspace: The government can’t force Twitter to associate with people who break Twitter’s rules⁠—or to host speech that Twitter’s higher-ups don’t want associated with the service (whether implicitly or explicitly). That applies to services other than Twitter, too. And anyone who thinks otherwise is one of those “free reach” dipshits.

Except the government does require people to associate. As already stated and repeatedly ignored, all common carriers are required to associate with anyone who wants their services. Common carrier airlines are prohibited from refusing service to someone except for very narrow, practical reasons defined by statute (security/safety/etc).

This is a direct limitation imposed on common carriers. Similarly, speech distributors have always been limited in their right of association. Not all speech distributors – rather, just those who control the means of publication. Speech distributors who control a printing press have a special obligation. This special obligation requires that they either serve everyone unless legally prohibited or they accept the greater responsibility of publishers. This is an indirect limitation on their right of free association. It is less stringent than that imposed on common carriers, but a limitation nonetheless.

The Supreme Court has held that both these limitations are compatible with the 1st Amendment. They do not violate it. So you can complain all you want about “right of free association”, but you are just wrong. This was decided decades ago and has been well settled law. If you think that law is wrong, make that argument and argue for the total abolition of intermediary liability online AND offline. You will lose that argument, but you are free to make it. Section 230 “experts” know they will lose this argument, so they instead refrain from discussing all this law and fill people with these ignorant talking points about “free speech” and “right of free association”. And people like yourself think you know what you are talking about because you trust 230 supporters are being honest with you. This is your fatal mistake. They have not been honest with the public since 230 was written. The 1A right of free association has never protected publishers, distributors, or common carriers from intermediary liability law.

Anonymous Coward says:

Re: Re: Re:14

Common carriers implement a one-to-one carriage service of some kind, and the interaction between two users does not have any visibility or impact on other users. That is to say, your phone conversations with your wife are distinct from your conversation with a hooker, and neither sees your conversation with the other, and those conversations are not visible to anybody else unless they tap your phone.

Any public conversation you hold on Twitter with the same people are visible to not only both of those people, but also anybody else that uses Twitter.

Therefore what you say on Twitter can affect how others use Twitter, and therefore Twitter should be allowed to moderate those conversations as it sees fit, because they go beyond you and whoever you are talking to.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:15

Common carriers implement a one-to-one carriage service of some kind, and the interaction between two users does not have any visibility or impact on other users.

This is not relevant to the legal reasoning. It was never a consideration. Nor is it true in all cases. In fact, as I mentioned, airlines are common carriers and you flying on an airplane certainly involves more than just yourself and one other person. Everyone on that plane is impacted by you being there. If people on an airplane don’t want to fly with Ivanka Trump because they don’t like her, that’s tough cookies. The airline has to fly her. There might be 200 people on that plane who are bothered by that, but the law doesn’t care.

Common carriers are common carriers because they provide essential services to the public. Many of these services are private services between two people, but not all of them. The idea that things like telephone service only involve two people is also illusory. While one conversation only involves two people (actually, it can include many), being denied phone service impacts tons of people. If I am denied phone service, no one can reach me by phone, including friends, family, marketers, employers, the government. There are hundreds or thousands of people impacted by that single decision to deny me service. The law doesn’t care how many are involved. It’s irrelevant to the principle that actually drives the rules.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:17

Again with the misrepresenting. I never said it had anything to do with online platforms. In fact, I specifically said online platforms are not common carriers. Where it has relevance is to demonstrate that limitations on “free association” are not prohibited by the 1st Amendment in these cases and in fact common carriers have a complete inability to exercise “free association”. Similarly, speech distributors, which is what online platforms are – have a lesser limitation. This limitation is that if they choose to exercise their right of free association, they must accept publisher liability. This has always been the case offline. It’s how intermediary liability law works.

Stephen T. Stone (profile) says:

Re: Re: Re:18 Congrats, you got me to reply again.

if they choose to exercise their right of free association, they must accept publisher liability

Yes or no: Can you cite a single law or binding legal precedent that says any interactive web service is responsible/legally liable for all the third-party speech it doesn’t moderate if said service chooses to moderate some third-party speech?

If “yes”: Cite it.

If “no”: Shove it.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:19

Yes or no: Can you cite a single law or binding legal precedent that says any interactive web service is responsible/legally liable for all the third-party speech it doesn’t moderate if said service chooses to moderate some third-party speech?

This is bizarre. It’s as if you don’t know intermediary liability law exists. It’s as if you don’t know that Section 230 was passed because this liability you claim does not exist was held against a company.

I am not going to look up and cite every single state in the country, which is where these laws occur. These laws are a mix of state laws and common law. They are referenced heavily in the Restatement of Torts (2d), although the Restatement is out of date & a bit out of touch.

For a citation, you can look at Stratton Oakmont v. Prodigy, where Prodigy was held to be a publisher due to their policies – due to their choice to “moderate” user speech (assuming by “moderate”, you mean viewpoint censorship).

See here: https://h2o.law.harvard.edu/cases/4540

For the prior case, where CompuServe was held to be a distributor because they did not “moderate” user speech, see Cubby v. CompuServe.

https://law.justia.com/cases/federal/district-courts/FSupp/776/135/2340509/

Or you could just read the paper I wrote, which has long sections explaining these cases, as well as an entire history of this law, including dozens of references to relevant caselaw. For more references, I will have to refer you to my paper and suggest you read it:

https://twitter.com/therealrthorat/status/1643270588864032768

Stephen T. Stone (profile) says:

Re: Re: Re:20

For a citation, you can look at Stratton Oakmont v. Prodigy

For the prior case … see Cubby v. CompuServe

Neither of those are binding legal precedent thanks to 47 U.S.C. § 230. Again: Please cite a single law or binding legal precedent that says any interactive web service is responsible/legally liable for all the third-party speech it doesn’t moderate if said service chooses to moderate some third-party speech.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:21

Neither of those are binding legal precedent thanks to 47 U.S.C. § 230

The entire point of this discussion is that 230 should not exist, so I’m not sure what your point is here. This whole discussion is based on the fact that 230 was a mistake that overrode the law as it applies offline. 230 wiped out intermediary liability online. Therefore, it goes without saying that no one can cite a law that assigns intermediary liability online because 230 literally prohibits that. None of this has anything to do with the discussion so far, other than being the motivating factor for the whole discussion.

Stephen T. Stone (profile) says:

Re: Re: Re:22

The entire point of this discussion is that 230 should not exist, so I’m not sure what your point is here.

My point is that it does exist. My point is that Twitter, Facebook, and the like have the right to decide what speech they will and will not host without facing legal liability for those decisions. My point is that you seem awfully confused⁠—possibly on purpose⁠—about how services like Twitter work, how the courts have ruled over and over again that they’re not common carriers or book publishers or whatever other analogous business/service you can think of, and how no government official at any level of the U.S. government can make Twitter host any kind of speech that would otherwise violate Twitter’s Terms of Service.

This whole discussion is based on the fact that 230 was a mistake that overrode the law as it applies offline.

And yet, you’re the only one here who’s willing to back that notion. Even the trolls who frequent this place⁠—the assholes who love to poke and prod and provoke the regular commentariat⁠—aren’t signing on to your bullshit.

Also: Online speech on platforms like Twitter should be treated differently than offline speech because Twitter isn’t being a publisher by holding back tweets for approval or editing other people’s content and claiming it as Twitter’s own content. I see no reason to hold Twitter liable for defamation if Twitter employees played no active role in writing/publishing defamatory speech on Twitter. You have yet to provide that reason⁠—and I doubt you ever will.

no one can cite a law that assigns intermediary liability online because 230 literally prohibits that

230 prohibits someone from doing the online speech equivalent of suing the toolmaker because someone used the tool to do something heinous. If you have a problem with that, tough shit.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:23

My point is that it does exist.

This discussion is about why it should not exist. And your argument seems to be, “It exists, therefore it is right.” That’s…something.

My point is that Twitter, Facebook, and the like have the right to decide what speech they will and will not host without facing legal liability for those decisions.

Fine. That’s your opinion. Now base that opinion in the law. And I don’t mean Section 230. Your goal here is to justify Section 230. If you cannot base it in the law, you cannot justify it. I have painstakingly demonstrated how 230 “experts” have misrepresented the prior law to make their case. Your response is basically to say, “This is the way.”

they’re not common carriers

I have repeatedly said they are not common carriers. They are distributors of speech who are being allowed to act like publishers without the legal responsibility of publishers. Why do you and the other Toom guy keep misrepresenting what I am saying? Can you not hold a discussion in good faith?

book publishers

Not “book publishers”, but publishers. Which has a defined meaning in the law. A definition I gave and you just ignored. I gave an explanation of what makes a publisher and linked to a fuller discussion in my paper, and your response to my detailed legal arguments is basically, “Uh, uh.” If you assert they are not publishers, explain your legal reasoning. Not your conclusory reasoning that they cannot be publishers because you don’t want them to have liability. Your reasoning grounded in the law of intermediary liabilities. If you cannot back your conclusion with legitimate legal arguments, your conclusion is useless.

And yet, you’re the only one here who’s willing to back that notion.

Bandwagon fallacy.

Online speech on platforms like Twitter should be treated differently than offline speech because Twitter isn’t being a publisher by holding back tweets for approval or editing other people’s content and claiming it as Twitter’s own content. I see no reason to hold Twitter liable for defamation if Twitter employees played no active role in writing/publishing defamatory speech on Twitter. You have yet to provide that reason⁠—and I doubt you ever will.

Again demonstrating you don’t understand this law AND you aren’t even paying attention to what I am saying. Another example: Scholastic Press publishes Harry Potter. Do you think they write Harry Potter in any way? Of course not. The publisher is even different in different countries, yet the book is exactly the same. Most, if not all, of these publishers have zero input into the contents of the book. Yet they have publisher liability. Seems like your idea of what makes a publisher is slightly off.

You keep getting this wrong because you assert that being a publisher means you were involved in writing the speech. That’s not the test. The test is whether you exercise editorial control. Scholastic Press does that with Harry Potter. Twitter does that with user tweets. If the publisher was involved in writing the text, they would be a co-author.

230 prohibits someone from doing the online speech equivalent of suing the toolmaker because someone used the tool to do something heinous. If you have a problem with that, tough shit.

Again with the bad analogies. This is wrong, wrong, wrong. 230 prohibits you from suing the toolmaker who made defective tools. A speech distributor who continues to sell defamatory speech after receiving notice it is defamatory is distributing speech in a defective manner. It’s not allowed. A publisher who distributes any defamatory speech is publishing in a defective manner. It’s not allowed. They are negligent. Those activities have responsibilities. If I sell you a car and you crash the car, I am not responsible for the injuries caused if the car is not defective. But if the car crashed because the brakes failed, I am responsible. I cannot behave negligently. Section 230 gives online sites the right to be negligent and they are immune to responsibility for that negligence. This is unique in the law and absolutely atrocious from a civil society standpoint.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:25

Reminder: Prodigy wasn’t sued over defamatory content.

They were sued for publishing allegedly defamatory content. Which is what any suit about defamation is about. It turns out the content was not defamatory, but the legal question was settled before that could be determined. The case presented a threshold question: could Prodigy be liable at all, if it was defamatory. The answer was “yes”. The two sides then settled without a trial on the merits. IIRC the settlement was simply that Prodigy apologized and removed the content, but there was little (or no) money exchanged.

Stephen T. Stone (profile) says:

Re: Re: Re:24

your argument seems to be, “It exists, therefore it is right.” That’s…something.

230 is right because it gives online services like Twitter “the ability to exercise what they deem to be appropriate editorial discretion within that open forum” (to quote Manhattan Community Access Corp. v. Halleck) without facing legal liability for choosing what speech it will or will not host post-publication.

Now base that opinion in the law.

I can cite Manhattan Community Access Corp. v. Halleck, Prager University v. Google LLC, and NetChoice v. Attorney General, State of Florida. And speaking of binding legal precedent…

They are distributors of speech

…can you make a single citation of such that backs up this claim? Because I don’t think you can.

Why do you and the other Toom guy keep misrepresenting what I am saying?

Because you keep making bullshit arguments that fundamentally misunderstand how social media services work. Fix your shit and we won’t need to point out that your shit needs fixing.

If you assert they are not publishers, explain your legal reasoning.

Twitter doesn’t exercise editorial control on third-party posts prior to their being posted on Twitter. The only speech for which it could be considered a “publisher” is its own speech⁠—and I hope you’re aware that 230 doesn’t cover first-party speech. Someone posting potentially defamatory content on Twitter doesn’t automatically turn Twitter into the publisher of that speech. The only way Twitter would take on that liability is if a Twitter employee had a direct role in writing or publishing that speech.

Craftsman doesn’t get sued if someone uses one of its hammers to commit a murder. Twitter shouldn’t get sued if someone uses its service to commit defamation. We sue the people who used the tools, not the companies behind those tools.

Bandwagon fallacy.

Even if it is: The fact that no one, even the trolls, is willing to back your play should be a sign that maybe your play isn’t all that good. I mean, I’m a fucking dumbass with more problems than Jay-Z and I’m still able to shut your shit down.

Seems like your idea of what makes a publisher is slightly off.

No, that’s your problem. You’re the one who wants to insist that Twitter is the publisher of all third-party speech on the platform if Twitter decides to moderate even one post on the service. But…

The test is whether you exercise editorial control. Scholastic Press does that with Harry Potter. Twitter does that with user tweets.

…Twitter doesn’t pre-vet third-party posts before they go live. They’re not Scholastic or the New York Times or any other traditional content publisher you can think of. Everyone else here seems to understand that. Your inability to do that⁠—whether purposeful or not⁠—is your problem.

230 prohibits you from suing the toolmaker who made defective tools.

No, it doesn’t. 230 prohibits you from suing the company behind an online service if it can be proven the company played no role in writing/publishing speech that it merely hosts on that service. You can’t sue Twitter for defamatory speech that was posted on Twitter unless Twitter itself (i.e., a Twitter employee) did that shit. You would have to instead sue the person who actually wrote the post⁠—which is how the law should work. You haven’t even come close to explaining why the law should work any other way…or, at least, in a way that manages to convince anyone that you’re anything but ignorant.

A publisher who distributes any defamatory speech is publishing in a defective manner.

And if Twitter were a publisher, you might have a point. But it’s not, so you don’t. And you can’t cite any-fuckin’-thing that says it is.

Section 230 gives online sites the right to be negligent and they are immune to responsibility for that negligence.

By your logic, Twitter should be held responsible as the publisher for all CSAM on the service even if Twitter employees didn’t post it, create it, or know it was on the service until after being informed of that fact. I don’t know how you fail to fathom how absolutely fucked up that sounds to anyone who isn’t you, but that’s your problem to solve. I can’t fix your shit for you.

Toom1275 (profile) says:

Re: Re: Re:25

The only way Twitter would take on that liability is if a Twitter employee had a direct role in writing or publishing that speech.

to wit:

https://www.techdirt.com/2023/03/23/how-to-know-whether-section-230-applies/

https://www.techdirt.com/2020/07/21/case-where-courts-got-section-230-right-because-it-turns-out-section-230-is-not-really-all-that-hard/

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:26

to wit:

https://www.techdirt.com/2023/03/23/how-to-know-whether-section-230-applies/

https://www.techdirt.com/2020/07/21/case-where-courts-got-section-230-right-because-it-turns-out-section-230-is-not-really-all-that-hard/

These are cases in the context of Section 230. They require that Twitter has a direct role in writing the speech because Section 230 exempts Twitter from the normal rules. You cannot cite cases that rely on Section 230 as justification for Section 230.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:28

And that’s because Twitter doesn’t operate like a traditional publisher. Prove it does and maybe you’ll have something that looks like a point.

It’s in my paper you haven’t bothered reading. Although that will predictably be unconvincing to you as well. You already have all the answers. Any contrary information just cannot be true.

Stephen T. Stone (profile) says:

Re: Re: Re:29

It’s in my paper you haven’t bothered reading.

Does your paper prove that Twitter exercises the kind of pre-publication editorial discretion that a book publisher or a newspaper practices?

If “no”, I don’t need to read your paper⁠—I can say you’re full of shit without having to waste my time like that.

If “yes”, you’re gonna need to show your work here, because I know how Twitter works for end users (based on having been a user of it for around a decade before leaving it behind) and it sure as shit ain’t like that.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:30

Does your paper prove that Twitter exercises the kind of pre-publication editorial discretion that a book publisher or a newspaper practices?

My paper explains why this is not the test for a publisher. This would be a stupid test for a publisher because it would incentivize publishers & newspapers to not screen materials to escape publisher liability. But they don’t do this because doing this is not part of the test and would not get rid of publisher liability.

As I already explained, publishers in some cases do not really screen materials. Definitely not for things like defamation. Whether you screen materials or not is irrelevant to whether you are a publisher. Instead, if you are incapable of doing this, it should inform your decision about whether it makes sense to operate as a publisher (hint: if you are incapable, you should not operate as a publisher).

Stephen T. Stone (profile) says:

Re: Re: Re:31

As I already explained, publishers in some cases do not really screen materials. Definitely not for things like defamation.

And in those cases, publishers can be held liable for defamation because they decided that the truth is less important than whatever motive they had to avoid even basic-ass fact-checking.

Whether you screen materials or not is irrelevant to whether you are a publisher.

It is entirely relevant. And you can’t cite a single law or binding legal precedent that says Twitter is the legal publisher of all third-party speech on the platform even if it exerts no pre-publication editorial discretion over any third-party speech.

To phrase what you’ve been doing here in an analogy you might understand, you’re trying to sell a Millennium Falcon collectible to a die-hard Star Trek fan: You’re getting nowhere fast and everyone thinks you’re an idiot for trying.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:32

And in those cases, publishers can be held liable for defamation because they decided that the truth is less important than whatever motive they had to avoid even basic-ass fact-checking.

People who don’t know what they are talking about frequently contradict themselves. Hence your statements that Twitter is not a publisher because they don’t pre-screen materials (they actually do, automated). Yet when I mention some offline publishers don’t pre-screen materials, you assert they are still publishers and should be held liable for defamation. Which is it? I thought pre-screening was what made them a publisher? If they decline to pre-screen, how can you still say they are a publisher if that is your rule? Or is your rule more akin to “some animals are more equal than others”?

And you can’t cite a single law or binding legal precedent that says Twitter is the legal publisher of all third-party speech on the platform even if it exerts no pre-publication editorial discretion over any third-party speech.

I already did cite this. It was discussed in Stratton Oakmont. The court said that the fact their rules were enforced after publication was irrelevant to the determination of a publisher. This was correct. Of course, 230 exempted sites from this rule, but the rule is as stated in Stratton Oakmont. Because the rule cares about the control exerted, not when it is exerted. Speech that is censored after it’s published is still controlled by the censor. Their assertion of control just happened at a later point.

Stephen T. Stone (profile) says:

Re: Re: Re:33

Yeah, I’m gonna round out this comment, then find something better to do because rolling around in the muck with you for this long is affecting my mental health.

Twitter is not a publisher because they don’t pre-screen materials (they actually do, automated)

Twitter doesn’t screen every post before it’s published⁠—even with automated tools, that would be nigh impossible. Again, that’s one of the reasons Twitter has a near-unsolvable CSAM problem.

when I mention some offline publishers don’t pre-screen materials, you assert they are still publishers and should be held liable for defamation

Yes, because publishers operate under different standards. They’re generally supposed to vet all the speech that appears in their publication prior to it being published. A newspaper that doesn’t fact-check its own stories should ostensibly be held liable for defamation because it refused to give a shit if what it printed was true.

But Twitter doesn’t operate under those standards because Twitter doesn’t hold back content for pre-post screenings in the same way. If I were to go onto Twitter right now and post something as benign as “Disintegration is the best album ever”, Twitter wouldn’t hold my post for some arcane pre-approval process⁠—the post would go through. (Assuming Twitter was working without any bugs in that moment, anyway. Fuck you, Elon.)

Twitter doesn’t play a role in shaping a third party’s speech before it’s published onto Twitter. It doesn’t (and can’t) pre-vet the millions and millions of posts that go through the service every day. To hold it liable as a publisher despite the service largely (if not exclusively) exercising editorial control after the publication of speech it didn’t know was even going to be published is a ridiculous notion that makes me wonder if you have some slight brain damage going on.

I already did cite this. It was discussed in Stratton Oakmont.

Stratton Oakmont vs. Prodigy is not binding legal precedent. 47 U.S.C. § 230 was passed precisely to make sure that case didn’t have a chance to become precedent and kill the Internet before it could become the Internet.

You can’t cite a single binding legal precedent⁠—i.e., a legal precedent that hasn’t been superseded by a law or another precedent⁠—to support your bullshit. 230 puts liability for online speech where it actually belongs: on the person who wrote it. Twitter should only ever be responsible for its own speech⁠—because if it were held responsible for the speech of every Twitter user, the site would have to become a publisher by holding back millions of tweets every day to ensure none of them violate any laws. Your assertions and ideas, if turned into the law of the land, would grind Twitter⁠—and the rest of the interactive side of the Internet⁠—to a complete halt.

You may want to see the Internet become a one-way broadcast medium. You won’t find anyone here willing to back that play⁠—least of all me. Again: You’re trying to sell your shit to someone who isn’t even remotely interested in buying it even after your best possible sales pitch. That you’re still trying is your problem. Me? I’ve got “The Bullet or the Blade” on my mind and I’d rather listen to that on repeat for an hour while I do some offline stuff than keep pushing your shit in so hard that I’m performing a virtual colonoscopy. You haven’t said anything that would change my mind and I doubt you ever will, so⁠—and I say this with all the respect you’re owed⁠—please fuck all the way off. 🖕

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:34

Sorry, this is a waste of my time. Your arguments are all circular.

You: “Cite a case that says this”
Me: I cite a case.
You: “But Section 230 overruled that”
Me: Actually, it only says they are not treated as publishers. It has no effect on the criteria state law uses to determine a publisher. Criteria which are still used offline.
You: “Cite any law or case.”
Me: I did.
You: “Yeah, but Section 230…”.

You just keep chasing your tail like a confused dog.

You: “Section 230 is good because Twitter is not a publisher”.
Me: According to state definitions, they are.
You: “Yeah, but Section 230 says they are not. Therefore Section 230 is good. Because they are not publishers. Don’t you understand? Section 230 is good because they are not publishers and they are not publishers because of Section 230. Man, I cannot believe I have to explain this to trolls. This is simple stuff, man!”
Me: I am out of here.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:25

230 is right because it gives online services like Twitter “the ability to exercise what they deem to be appropriate editorial discretion within that open forum” (to quote Manhattan Community Access Corp. v. Halleck) without facing legal liability for choosing what speech it will or will not host post-publication.

That’s a state action case, totally unrelated to intermediary liabilities. These are completely separate claims. It’s also a bad decision. It was 5-4, with the 5 conservative justices holding that there was no state action and the 4 liberal justices dissenting. You just took the side of Kavanaugh, joined by Roberts, Thomas, Alito, and Gorsuch. Sotomayor filed a dissent, joined by Breyer, Ginsburg, and Kagan.

The 5 conservative justices reversed the federal appeals court holding, somehow finding that a NY city government entity was not a state actor and was therefore allowed to ban creators from its network who had produced material criticizing the entity. Forgive me if I puke.

Genevieve Lakier, a Section 230 proponent, criticized this opinion here: https://www.acslaw.org/analysis/acs-supreme-court-review/manhattan-community-access-corp-v-halleck-property-wins-out-over-speech-on-the-supposedly-free-speech-court/

Again, good job.

I can cite Manhattan Community Access Corp. v. Halleck, Prager University v. Google LLC, and NetChoice v. Attorney General, State of Florida. And speaking of binding legal precedent…

I already covered the first one. PragerU is also about state action, so also unrelated to intermediary liabilities. NetChoice is about forcing social media sites to carry certain speech, which is also not related. It’s not possible for it to be related because it was filed recently. With Section 230 the law online, it’s not possible to make an intermediary liabilities claim.

None of your cases provide a basis for Section 230. At all. Zero support for it. They don’t even have anything to do with it.

…can you make a single citation of such that backs up this claim? Because I don’t think you can.

If you do not understand what a speech distributor is, this conversation will go nowhere. None of your Section 230 heroes, including Masnick or anyone else, will deny that websites like Twitter are speech distributors. This is fundamentally what they do.

Twitter doesn’t exercise editorial control on third-party posts prior to their being posted on Twitter.

Again, this is not the legal test. Scholastic Press doesn’t alter any of the Harry Potter text, yet they are the publisher and have full publisher responsibility.

The only speech for which it could be considered a “publisher” is its own speech⁠—and I hope you’re aware that 230 doesn’t cover first-party speech.

1st party speech makes you an author. Section 230 has nothing to do with 1st party speech because intermediary liability laws have nothing to do with 1st party speech.

Craftsman doesn’t get sued if someone uses one of its hammers to commit a murder. Twitter shouldn’t get sued if someone uses its service to commit defamation. We sue the people who used the tools, not the companies behind those tools.

If a brand new hammer breaks in half the first time you swing it and injures someone, we do indeed sue the hammer maker. Because it was defective. This is the same for intermediary liabilities. If they distribute or publish speech in a negligent manner, they have liability. If they don’t, they are fine. Do the activity reasonably, non-negligently, and you are fine. Just like the hammer maker.

Even if it is: The fact that no one, even the trolls, is willing to back your play should be a sign that maybe your play isn’t all that good. I mean, I’m a fucking dumbass with more problems than Jay-Z and I’m still able to shut your shit down.

Doubling down on the bandwagon fallacy is a bold move.

…Twitter doesn’t pre-vet third-party posts before they go live.

Again, this is not the test. You are so fond of telling me to cite my sources. Cite your source for this. It’s not the test at all. The vast majority of publishers do pre-vet materials – not because they are required to by law – but because it’s a minimum standard for being a publisher. If you cannot meet this standard, you should not operate as a publisher. If you have never run more than 1 mile, don’t enter a marathon. And don’t tell everyone a marathon needs to be redefined as only 1 mile because that’s all you can do. That’s not how it works.

No, it doesn’t. + a bunch of conclusory statements with no grounding in the law.

Sigh.

You would have to instead sue the person who actually wrote the post⁠—which is how the law should work.

If you think this is how the law should work, do let all the states, judges, professors, and everyone else who knows intermediary liabilities law know that they are all wrong and you are right. Make your case. Argue for the complete abolition of intermediary liabilities law. Good luck!

You haven’t even come close to explaining why the law should work any other way…or, at least, in a way that manages to convince anyone that you’re anything but ignorant.

There are many pages in my paper devoted to this. It’s clear I will not convince you, because you cannot convince someone who doesn’t have an open mind. You obviously haven’t even read the paper. How am I going to convince someone who can’t even be bothered to read my arguments, and who misrepresents the few he does read or just repeats talking points back? I am basically talking to a wall. Or, more accurately, to a Rush Limbaugh-style dittohead.

And if Twitter were a publisher, you might have a point. But it’s not, so you don’t. And you can’t cite any-fuckin’-thing that says it is.

Stratton Oakmont literally said that a site like Twitter would be a publisher. Prodigy was so mad about this they got the law changed. But in intermediary liabilities law, Prodigy absolutely is the publisher. And so is Twitter. Which is why 230 says ‘Even though they are according to state law, you cannot treat them as such.’ State laws define Twitter as a publisher. Section 230 says they are immune from that treatment, even though they meet the definition.

By your logic, Twitter should be held responsible as the publisher for all CSAM on the service even if Twitter employees didn’t post it, create it, or know it was on the service until after being informed of that fact.

Yes. Because that’s the responsibility all publishers have. If you don’t like it, argue against that. Tell states to revoke their intermediary liability laws. See how far you get. But please record it because I want to see their laughter.

I don’t know how you fail to fathom how absolutely fucked up that sounds to anyone who isn’t you, but that’s your problem to solve. I can’t fix your shit for you.

Of course, this may sound odd to someone who knows nothing about the law or this area of the law. But what you sound like is a Creationist telling an evolutionary biologist why he is wrong about Evolution. You sound like a flat-Earther explaining why the Earth is flat.

Stephen T. Stone (profile) says:

Re: Re: Re:26

Not gonna bother with all that shit, just gonna pick out this one thing:

that’s the responsibility all publishers have

Publishers pre-screen speech. Twitter only screens speech after someone has posted, not before. While it may have automated tools to help with moderation, none of them amount to exerting the kind of editorial control over speech that a publisher has.

Hell, Twitter has a CSAM problem precisely because it doesn’t pre-screen speech. It can’t know exactly how much CSAM is on the platform because it can’t find all the CSAM. To believe Twitter should be held responsible for all the CSAM because it found only a portion of that content is to believe something that would fundamentally change how the Internet works. I mean, you’re arguing that 4chan should’ve been held liable for every threat of violence ever made on its boards despite all evidence that 4chan’s owners and moderators have no pre-posting control over what someone chooses to post on that shitpit of a site. You’re trying to apply liability for third-party speech onto services that don’t do anything to earn that liability other than exist and accept third-party speech.

JFC, dude, you’re essentially saying that if you were to threaten my life in a comment on this site, I should hold Mike Masnick responsible for that threat because Techdirt has spamfilters. What in the actual godforsaken fuck.

Toom1275 (profile) says:

Re: Re: Re:23

By contrast, when a private entity provides a forum for speech, the private entity is not ordinarily constrained by the First Amendment because the private entity is not a state actor. The private entity may thus exercise editorial discretion over the speech and speakers in the forum. This Court so ruled in its 1976 decision in Hudgens v. NLRB. There, the Court held that a shopping center owner is not a state actor subject to First Amendment requirements such as the public forum doctrine….

In short, merely hosting speech by others is not a traditional, exclusive public function and does not alone transform private entities into state actors subject to First Amendment constraints.

If the rule were otherwise, all private property owners and private lessees who open their property for speech would be subject to First Amendment constraints and would lose the ability to exercise what they deem to be appropriate editorial discretion within that open forum. Private property owners and private lessees would face the unappetizing choice of allowing all comers or closing the platform altogether. “The Constitution by no means requires such an attenuated doctrine of dedication of private property to public use.” … Benjamin Franklin did not have to operate his newspaper as “a stagecoach, with seats for everyone.” … That principle still holds true. As the Court said in Hudgens, to hold that private property owners providing a forum for speech are constrained by the First Amendment would be “to create a court-made law wholly disregarding the constitutional basis on which private ownership of property rests in this country.” … The Constitution does not disable private property owners and private lessees from exercising editorial discretion over speech and speakers on their property.

This comment has been flagged by the community. Click here to show it.

Ryan says:

Re: Re: Re:24

‘long line of quotes that has nothing to do with intermediary liabilities law’

The text you quote is about whether all speech forums automatically become public squares subject to First Amendment restrictions equal to those of the government – simply for distributing speech. The answer to that question is “No”. But it has nothing to do with intermediary liabilities law.

Publishers are always free to exercise their First Amendment rights – they are not prohibited in the way that this quoted text discusses. But in exchange for that exercise, they must accept greater liability. This has always been the law.

Ben Franklin is an interesting example in that text because Franklin did operate his printing press as “a stagecoach, with seats for everyone.” He was not required to do so, but Franklin felt it was not appropriate to use his privilege as a printing press owner to impose his views on everything he printed. He felt he had an obligation to publish everything so that everyone else could decide what they thought of the views. Who is he to decide what is and is not good speech? His life was dedicated not to imposing his views, but to convincing people with his arguments. Those principles guided how he operated his printing press. He defended printing views that others considered deplorable and unworthy of print in a 1731 essay titled “Apology for Printers”.

https://founders.archives.gov/documents/Franklin/01-01-02-0061

“The Constitution does not disable private property owners and private lessees from exercising editorial discretion over speech and speakers on their property”

Neither do intermediary liability laws. They merely impose greater liability for doing so when they control the printing press. They are still free to impose their editorial discretion. They just have to accept responsibility for it. Making you responsible for defects in your product is not equivalent to prohibiting you from making that product.

Stephen T. Stone (profile) says:

Re: Re: Re:25

And if Twitter were a printing press, you might have a point. But it’s more like a bulletin board where anyone can post whatever the fuck they want. Barring any direct role in the creation or publication thereof, Twitter isn’t a publisher of any third-party speech on its platform. You can’t cite a single law or binding legal precedent that says it is.

This comment has been flagged by the community. Click here to show it.

Ryan says:

Re: Re: Re:23

Does an owner of a cafe/club/pub have intermediary liability for all speech on their premises if they eject customers for their offensive speech?

No. Because they do not control any means of publication. The means of publication is the spoken word. That was done by the patron.

Interestingly, intermediary liability laws have been applied to bars in the form of notice liability. If someone writes defamation on a bathroom stall, the bar must remove the defamation when notified. There is one well known court case where this was held:

https://casetext.com/case/hellar-v-bianco

This is a bit harsh and there has been some criticism of it, which I don’t care to take a side on because it has no relevance to the Section 230 debate. The main issue is that a bar is not openly a distributor of speech, so it’s not fair to apply similar rules. The counter-argument is that once notified of the speech, their failure to remove it made them a willing distributor.

If not, why should social media face intermediary liability for all speech because they moderate some?

Because they are openly distributors of speech, and viewpoint-moderating some speech makes them a publisher by definition. Let me note once again that the claim that websites only moderate “some” speech is misleading in the extreme. If the US government announces a set of strict speech rules, 99% of speech will self-censor to meet those rules. The government will only have to moderate “some” of the speech – the rest will be self-moderated by the authors so they don’t get in trouble. It would be misleading in the extreme to claim that only the speech the government touches is “moderated”. It’s all moderated/censored. Publishing rules that force authors to conform to them is itself moderation/censorship. And the harsh punishment dealt if you violate their rules – suspension or ban – ensures that most authors will indeed self-censor. I know I have. I reduced my criticism of the US government after multiple suspensions for accurate content critical of the government. Maybe I’m just old school, but this seems bad for democracy.

Anonymous Coward says:

Re: Re: Re:24

I reduced my criticism of the US government after multiple suspensions for accurate content critical of the government. Maybe I’m just old school, but this seems bad for democracy.

And yet your entire spiel on this article has been bitching and whining about the EFF trying to keep the government OUT of the Internet. So what is it you want? Do you actually want the government involved or not? Because if your anecdotal experiences are to be believed, I’d think that you’d want LESS government involvement, not more.

Then again you anti-230 chucklenuts have always been consistently contradictory.

This comment has been flagged by the community. Click here to show it.

Ryan says:

Re: Re: Re:25

And yet your entire spiel on this article has been bitching and whining about the EFF trying to keep the government OUT of the Internet. So what is it you want? Do you actually want the government involved or not? Because if your anecdotal experiences are to be believed, I’d think that you’d want LESS government involvement, not more.

1) My tweets that were censored were criticisms of the government, but had nothing to do with government involvement with the Internet.

2) One can criticize some aspects of the government, praise others, and be neutral on yet others. I simply do not have an ideology that says “government = bad”. I criticize where things are bad and praise where things are good.

3) I don’t have an ideology that tells me “government should stay out of the Internet”. I don’t see the Internet as different from the rest of the world. It’s not. Early Internet leaders thought it was. They were wrong. I believe in democracy. This means that, like the rest of the country, the government should govern the Internet. Democratically elected leaders should make legal decisions and enforce laws on the Internet. We should not turn this over to vigilante corporations. And I adhere to the same principle online and offline: government should generally not censor speech. They should not give preference to some speech over others. Which is why I am against Section 230, which is an obvious speech privilege for speech distributors who operate online vs. all other speech distributors, who are forced to follow intermediary liability laws.

This comment has been flagged by the community. Click here to show it.

Ryan says:

Re: Re: Re:27

People have seen what happens when you turn over the Internet to the government. You get China. You get Russia. You get Iran.

Are you saying you view our government the same as the governments of China, Russia, and Iran? Do you not understand the differences between our governments? We are a democracy. We elect our leaders and they represent us. Our system has flaws, but it’s not China, Russia, or Iran. Section 230 in fact imposes a system on the Internet far worse than those countries: it creates an oligarchy where big, powerful companies are the legal system. Where what is “moral” and permitted is “Whatever is good for Google”.

Even worse, Section 230 enables China, Russia, and Iran to meddle in the Internet. We are seeing this right now with India. People are complaining that Twitter is cooperating with India to censor topics globally. This censorship is enabled by Section 230. Without 230, they would not be able to do it. The same is true of all the censorship our own government has been doing. An irony of 230 is that the goal of EFF & CDT was to freeze the government out of the Internet, but what it actually did was guarantee extensive government interference. The only way in which it barred the government from the Internet was in enforcing laws that protect people. The law is bad from all angles.

When you give companies the power to censor, anyone who has power over those companies will seize that power. Two groups have power over Internet companies: governments and advertisers. And governments globally are now seizing this power to silence criticism and erase inconvenient information. Censorship is also increasingly taking the form of censoring information that advertisers do not like. This trend will only continue. The FDA just last week announced the removal of a drug for pre-term birth from the market after years of approval when the drug never worked and was harming pregnant women. It took evidence-based medicine researchers years to get the FDA to admit its mistake. In our great future, such criticism will be impossible, as sites like Twitter will block it. That way we can all continue to use drugs that are harming us and the FDA can project an image of infallibility.

This comment has been flagged by the community. Click here to show it.

Ryan says:

Re: Re: Re:27

Should the government have the right to force a queer-friendly Mastodon instance into hosting speech that positively promotes the torturous and barbaric anti-queer practice of “conversion ‘therapy’ ”?

Already answered this multiple times. The government doesn’t force anyone to host anything. They provide a choice of liabilities based on what powers you want to exercise. This is how the law has always worked and it is Constitutional.

Stephen T. Stone (profile) says:

Re: Re: Re:28

The government doesn’t force anyone to host anything. They provide a choice of liabilities based on what powers you want to exercise.

Yes or no, shitbag: Should a queer-friendly Mastodon instance be held legally liable for all third-party speech hosted on the service⁠—speech that it didn’t have a hand in crafting, editing, or publishing in any way⁠—if it refuses to host any speech that positively promotes the torturous and barbaric anti-queer practice of “conversion ‘therapy’ ”?

This comment has been flagged by the community. Click here to show it.

Ryan says:

Re: Re: Re:29

Yes or no, sh*tbag

The use of profanity or namecalling does not make you right. It just makes you look dumb.

Should a queer-friendly Mastodon instance be held legally liable for all third-party speech hosted on the service⁠—speech that it didn’t have a hand in crafting, editing, or publishing in any way⁠—if it refuses to host any speech that positively promotes the torturous and barbaric anti-queer practice of “conversion ‘therapy’ ”?

1) I have answered clearly a million times. Yes: they should have publisher liability because this makes them a publisher.

2) Publisher liability is a lot less extensive than you think. It only covers defamation and similar charges.

3) If the instance is truly small and limited to people who share similar views, the likelihood of anything occurring that would bring liability is small. And the potential liability itself is small. If someone is defamed on that service, the damages would be limited to damages as a result of the defamation on the service. In many cases this would likely be close to zero, as the people likely to be defamed are likely to already be thought of quite poorly by users of the service and therefore the defamation does not do much harm.

4) Publishers can limit who is allowed on their service. If a “small” Mastodon instance wants to control speech on its service – acting like a publisher – then it should vet those it allows to use the service and limit it to only those who can be trusted. And/or take other legal steps to minimize its exposure (have users indemnify the service for their behavior – this would not block liability, but would force the user to pay the service back, if they can – and also incentivize not defaming anyone).

speech that it didn’t have a hand in crafting, editing, or publishing in any way

This is where you are totally wrong. Again, if you set boundaries for what ideas can be expressed, you absolutely have a hand in crafting the resulting speech. If the Chinese government says you cannot criticize China, that absolutely impacts what people will say. You will self-censor your speech to fit the rules. The speech is not fully a reflection of your thoughts, but rather your thoughts passed through the filter given by China. I challenge you to argue that such speech is “free” and that China has no hand in crafting it. Of course they do. The same is true of your Mastodon instance. If they set boundaries for what types of viewpoints they will allow, then they are participating in the speech. Just as if Amazon were to delete all reviews that are less than 3 stars. Through this policy they would be crafting a message that is deceptive and different from reality. The overall rating for a product is an important message for customers, and this policy might change an average rating of 2.6 to a rating of 4.6. All without Amazon writing a single word of the reviews. It is ignorant to say that Amazon has no “hand in crafting, editing, or publishing in any way”. They absolutely do. Again, if you don’t want the liability that comes with that, don’t exercise editorial control. This tradeoff has always existed in the law. The reality is that it rarely leads to liability, so most people have never thought about it. Basic advice: if you cannot control the message, do not assert the right to control the message. Asserting that right is for those who have the capability to do so. Absent that capability, you are simply taking on unnecessary liability.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Re: Re:30

No law on the books says a public accommodation business in meatspace must host any third-party speech, let alone all third-party speech. What you’re suggesting are onerous restrictions on free association and free speech that would only apply to interactive web services for the sole reason of “because Internet”. Around here, that kind of bullshit gets you mocked and laughed at, because we’ve all seen lawmakers pull that same shit⁠—it was ignorant and ridiculous then, and it’s ignorant and ridiculous now.

This comment has been flagged by the community. Click here to show it.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Re: Re:32

If no law exists, then why was 230 written to exempt them?

230 was written to defend the right of interactive web services to moderate said services however they wish⁠—yes, including for “viewpoints”⁠—without risking legal liability for those decisions. These services don’t vet their users the same way a newspaper would vet its writers or a book publisher would vet an author⁠—and the same lack of vetting applies to the content written by users of those services.

In your world, the owner(s) of a pro-queer Mastodon service of any size must be forced to host anti-queer propaganda or risk being legally liable for every post by a third party if it deletes even one instance of that propaganda. The reason this state of affairs doesn’t exist in reality (at least in the United States) is because of 230. 230 exists because its authors rightfully recognized that interactive web services don’t function like a traditional publisher of speech⁠—and they shouldn’t be treated as such for deciding that they don’t want to host all speech.

Your whole schtick is about looking for a way to force speech onto social media services without saying that’s what you’re trying to do. Under your shitty arguments, a service that doesn’t moderate speech to avoid legal liability isn’t being “forced” to host speech it otherwise wouldn’t even if that would be the actual effect of that situation. You’re trying to undo the freedoms of association and free speech for social media platforms by arguing that they don’t deserve those freedoms if they say “hey, so, we don’t want to host bigoted speech”. And you can hem and haw and go “aw shucks, that’s not what I was trying to do at all” until the heat death of the universe. But you’re not fooling anyone else here, least of all me⁠—because you’re not that clever and I’m not that stupid.

Now actually, finally, permanently fuck all the way off. If all the flags on your comments hadn’t been a clue, let me spell it out for you: You and your bullshit aren’t welcome here.

This comment has been flagged by the community. Click here to show it.

Ryan says:

Re: Re: Re:29

One more thing:

You act as if allowing someone to post “hateful” things on a pro-trans instance will cause everyone to melt and die. It would not. Everyone could easily block the person and no one would see their ranting anymore. If a person did this repeatedly, the messages could be deleted as spam. Not only this, but there is nothing stopping users of the service from opting in to a censorship system as I describe in my paper and in other posts here. If users want the instance to censor on their behalf, they can make that choice. There are even more avenues than I have mentioned here. Sites would have numerous tools to deal with this issue that would not make them publishers.

As I mentioned, in the case where users are of the same mind as the site, my proposed changes will have little impact. Those users can continue designating the site to censor on their behalf. In such a scenario, the site is acting as an agent of the user, not as a publisher.

There is also the issue of sites having a mix of functions. Sites are both printing presses and distributors. Their distributor function is not restricted because distribution does not impact editorial control. Under my proposed changes, Twitter would not be forced to promote content it doesn’t like. It would just be forced to allow it to be posted. The point of what I propose is that users would be in charge of what they see. The site would be responsive to users, not antagonistic toward them.

Stephen T. Stone (profile) says:

Re: Re: Re:30

Just so we’re clear…

Yes or no, shitbag: Should the government have the right to force an association between a pro-queer social media service and anti-queer speech by way of placing liability for all third-party speech onto the service’s owners if they delete even one instance of anti-queer speech?

If “yes”: For what reason should the government be unable to force that association onto traditional/offline publishers?

This comment has been flagged by the community. Click here to show it.

Ryan says:

Re: Re: Re:31

Ever heard of a game called whack-a-mole?

What would this accomplish? On a small site that vets its users, how would you even do this? The point of trolling is to get attention. If no one gives this kind of trolling attention, why would anyone do it? And I already stated that if someone does do this, the site is able to take whatever steps it needs to stop it. Because at that point it’s not viewpoint censorship but rather maintaining the usefulness of the site. It’s a time, place, or manner restriction.

People have been trained to overthink this and assume that unless sites have unlimited power, bad things will happen and they will be powerless to stop it. This is not the case. It’s what they want you to think because it leads you to the conclusion that you have to support their total immunity.

Anonymous Coward says:

Re: Re: Re:32

What would this accomplish?

You misinterpret what I meant, in that it is those hiding other people’s speech who would be whacking the mole as it pops up with a new name. Also, banning a user is an extreme form of moderation, and under your rules it opens the site to a defamation case.

The people who would be hit by such a ban are those who are most likely to bring a suit out of pure spite. While the site would probably win such a case, doing so would bankrupt a small site, and it would not take too many such cases to bankrupt even medium-sized sites. Section 230 is a way of limiting the damage that spiteful people can do via suits, and there are a lot of spiteful people with access to the money to bring such suits, some of them being politicians.

This comment has been flagged by the community. Click here to show it.

Ryan says:

Re: Re: Re:33

Also, banning a user is an extreme form of moderation, and under your rules opens the site to a defamation case.

Banning a user doesn’t necessarily make you a publisher. It depends on why you banned them.

The people who would be hit by such a ban are those who are most likely to bring a suit out of pure spite.

The only liability this brings is liability for defamation. A “spiteful” user has no claim against a website for being banned. Only someone who has been defamed on that site could bring a suit. Since the “spiteful” user was not defamed, there is no suit to bring.

there are a lot of spiteful people with access to the money to bring such suits, some of them being politicians.

This is what 230 proponents want you to believe. It’s never been the case offline. It will not be the case online. Because our legal system is not insane. If it ever did become an issue, the same people who say we need 230 have all the power they need to get Congress to address this problem. These people have so much power that they got Congress to give them total immunity. Getting Congress to fix any issues with lawsuits would be a snap. Google practically owns all of academia and a good portion of Congress. It would not be difficult. But they don’t want you to think about that. They want you to swallow their fearmongering and nod your head – the same way Dick Cheney wanted you to swallow his fearmongering on Iraq and march to war with him. It’s the same game. Just different players. And in both of these games, the marks insisted that the people who didn’t fall for the game were crazy and that they themselves were the smart ones.

This comment has been deemed insightful by the community.
Toom1275 (profile) says:

Re: Re: Re:35

It’s telling that those who lie pathologically about Section 230 because they’re incapable of understanding it have yet to show even a single instance of Section 230 being the cause of any harm whatsoever.

And they never will, since such a thing is pretty much impossible by design.

This comment has been flagged by the community. Click here to show it.

Ryan says:

Re: Re: Re:36

they’re incapable of understanding it have yet to show even a single instance of Section 230 being the cause of any harm whatsoever.

Delusional. The two most famous cases – Zeran & Batzel – involve extreme harm. Even Chris Cox expressed deep regret over what happened to Ellen Batzel. 230 “experts” have you so crossed up they even got you to defend a MAGA-judge decision that was an obvious case of legislating from the bench. You are so emotionally attached to this law that not even rolling with the Trumpsters gives you pause. Iraq War supporters felt the same way.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Re: Re:37

they even got you to defend a MAGA-judge decision that was an obvious case of legislating from the bench

Halleck was correctly decided. That Kavanaugh wrote the decision is, as far as I’m concerned, irrelevant. Also: It’s telling that your whining still doesn’t include a single example of Section 230 being the direct cause of any kind of harm. And it’s funny that you should bring up Batzel v. Smith, because the Ninth Circuit’s decision in that case contains a line that I found interesting:

Absent § 230, a person who published or distributed speech over the Internet could be held liable for defamation even if he or she was not the author of the defamatory text, and, indeed, at least with regard to publishers, even if unaware of the statement.

Keep in mind, friend, that everything you’ve said here up to this point says you want that state of affairs to be reality.

Another interesting bit:

[C]ourts construing § 230 have recognized as critical in applying the statute the concern that lawsuits could threaten the “freedom of speech in the new and burgeoning Internet medium.” “Section 230 was enacted, in part, to maintain the robust nature of Internet communication, and accordingly, to keep government interference in the medium to a minimum.” Making interactive computer services and their users liable for the speech of third parties would severely restrict the information available on the Internet. Section 230 therefore sought to prevent lawsuits from shutting down websites and other services on the Internet.

That’s what we 230 proponents have been telling you all along: If you place liability for third-party speech onto the people who run interactive web services, those services probably won’t last long.

And what’s this? Even more interesting bits!

[T]here is little doubt that the Cox-Wyden amendment, which added what ultimately became § 230 to the Act, sought to further First Amendment and e-commerce interests on the Internet while also promoting the protection of minors. Fostering the two ostensibly competing purposes here works because parents best can control the material accessed by their children with the cooperation and assistance of Internet service providers (“ISPs”) and other providers and users of services on the Internet.

Without the immunity provided in Section 230(c), users and providers of interactive computer services who review material could be found liable for the statements of third parties, yet providers and users that disavow any responsibility would be free from liability. …

Although Stratton was a defamation case, Congress was concerned with the impact such a holding would have on the control of material inappropriate for minors. If efforts to review and omit third-party defamatory, obscene or inappropriate material make a computer service provider or user liable for posted speech, then website operators and Internet service providers are likely to abandon efforts to eliminate such material from their site.

All of that reads like, y’know, everything we’ve been telling you that you fully believe is wrong.

I could keep quoting from the decision, but it’s pretty clear that the Ninth Circuit agrees not with you, but with everyone who’s been taking potshots at you the past few days. Man, what is it with you anti-230 people citing cases that actually work against your arguments? Swear to God, it’s almost hilarious how you keep aiming that gun at your foot…

This comment has been flagged by the community. Click here to show it.

Ryan says:

Re: Re: Re:38

Halleck was correctly decided. That Kavanaugh wrote the decision is, as far as I’m concerned, irrelevant.

I wasn’t talking about Halleck. I was talking about Zeran. No one will defend Halleck – except you, apparently. Masnick will not defend it. No 230 supporters will defend Halleck. It’s a grossly anti-First Amendment decision. And has nothing to do with Section 230.

Zeran, on the other hand, now that’s a MAGA decision that 230 supporters love. MAGA judges are the best! At least when they make up decisions that do things I like! This is the logic of 230 supporters. You are rolling with MAGA if you support Zeran.

Keep in mind, friend, that everything you’ve said here up to this point says you want that state of affairs to be reality.

Guess who else said that state of affairs should be reality? The Batzel court! In the first sentence of the opinion! You are quoting Batzel reciting intermediary liabilities law. But what you don’t realize is this: the court approves of this. Because every court approves of it. It makes sense. It doesn’t make sense to you because you are ignorant and refuse to recognize it because that cuts against your law – Section 230. You are in denial.

Another interesting bit:

[C]ourts construing § 230 have recognized as critical in applying the statute the concern that lawsuits could threaten the “freedom of speech in the new and burgeoning Internet medium.” “Section 230 was enacted, in part, to maintain the robust nature of Internet communication, and accordingly, to keep government interference in the medium to a minimum.” Making interactive computer services and their users liable for the speech of third parties would severely restrict the information available on the Internet. Section 230 therefore sought to prevent lawsuits from shutting down websites and other services on the Internet.

That’s what we 230 proponents have been telling you all along: If you place liability for third-party speech onto the people who run interactive web services, those services probably won’t last long.

Indeed, that’s what some courts have said. Because it’s what 230 proponents have said. But do you know who disagrees? The Batzel court. That’s right: remember the first sentence of the opinion? The Batzel court states right off the bat that all of this stuff is nonsense. Even though courts have said these things, the Batzel judges disagree. As they say, “There is no reason inherent in the technological features of cyberspace why First Amendment and defamation law should apply differently in cyberspace than in the brick and mortar world.”

All of that reads like, y’know, everything we’ve been telling you that you fully believe is wrong.

Those statements are made in the context of 230 existing. They are not saying what the Ninth Circuit believes, as you seem to think, but rather what the stated intentions of Section 230 are. A court is not allowed to second guess the intentions of Congress. If Congress says that was their intent, that was their intent. Whether they failed in that intent is not relevant to this case, so the court says nothing about it.

To repeat: the Batzel court totally disagrees with this intent. They say there is no reason for 230 to exist. But this is not their decision, so they are forced to interpret 230 even though they think it’s a bad law. The opening sentence of Batzel is unusual. Judges usually do not criticize Congress so openly. It indicates the Batzel judges were worried about being perceived as responsible for this mess. It’s a statement, ‘Hey, don’t blame us for this. This is not our mess. Congress did this.’

I could keep quoting from the decision, but it’s pretty clear that the Ninth Circuit agrees not with you, but with everyone who’s been taking potshots at you the past few days.

LOL! The opposite is true. The Ninth Circuit said that 230 is a bad law! Right in the opening sentence. You don’t know how to read an opinion. Or, more likely, your bias just causes you to skip right over that important line. And read everything else in a way that supports what you believe.

It’s obvious you just read Batzel for the first time. Congratulations. It’s a good idea to know the major cases in an area of law that you advocate so strongly for and pretend to know. I encourage you to read Zeran as well. And then read Doe v. America Online, particularly the dissent, where they rip the MAGA judges who wrote Zeran to shreds. You might actually learn something. But given the way you read Batzel, that’s not looking promising.

Anonymous Coward says:

Re: Re: Re:39

No one will defend Halleck – except you, apparently. Masnick will not defend it. No 230 supporters will defend Halleck. It’s a grossly anti-First Amendment decision. And has nothing to do with Section 230.

It’s funny in this comment you call Stephen clueless, and then you write this. Masnick has regularly written in praise of Halleck. I mean, jfc, you couldn’t even do the slightest bit of research?

https://www.techdirt.com/2019/06/18/supreme-court-signals-loud-clear-that-social-media-sites-are-not-public-forums-that-have-to-allow-all-speech/

This comment has been flagged by the community. Click here to show it.

Ryan says:

Re: Re: Re:40

It’s funny in this comment you call Stephen clueless, and then you write this. Masnick has regularly written in praise of Halleck. I mean, jfc, you couldn’t even do the slightest bit of research?

Well, fck me. I guess even I gave Masnick too much credit. He really is willing to side with the conservative justices over the liberal ones just because their holding protects his pet law – even though it smashes the First Amendment. That’s amazing. And you all are just rolling with the MAGA crowd on this? What a clown show.

Reading through what Masnick wrote, I’m not actually sure he is endorsing the opinion here, but rather just using it to bolster other claims. As he sort of notes at the end, he doesn’t even need to agree with this in regards to social media, as the dissenters feel the same way about that particular area as the majority.

This decision is straight up crazy though. The government created a “private” entity (actually, it was a ruse – the entity for all purposes is the government). That entity was put in charge of official government business. The majority held that this government business was not part of the “traditional” function of government, and that the entity was therefore not a state actor. In other words, the government can create sham private orgs and censor speech at will, so long as the activity involved is something that didn’t exist when the Constitution was ratified. This is Originalism taken to an insane level.

Let me reiterate: the government in this case was basically operating a TV channel and censored someone for criticizing the government. The majority said that’s okay because TV channels didn’t exist when the Constitution was passed. That’s nuts. N-U-T-S. Under this logic, the First Amendment has eroded substantially over the years and will eventually erode almost completely, as society changes and more and more government functions become ones that didn’t exist when the Constitution was passed. NUTS. If Masnick endorses this, that’s nuts. I’m not convinced he does. If any of you endorse this, that is also nuts.

Toom1275 (profile) says:

Re: Re: Re:41

In the real world:

What it means is that only someone at the cognitive level of a vegetable† could possibly assert the claim that a private platform where people can conditionally publish their own speech is in any way even remotely similar to a “company town” where a corporation has control over people’s lives and rights.

†Halleck, Prager, Ryan

This comment has been flagged by the community. Click here to show it.

Ryan says:

Re: Re: Re:42

What it means is that only someone at the cognitive level of a vegetable† could possibly assert the claim that a private platform where people can conditionally publish their own speech is in any way even remotely similar to a “company town” where a corporation has control over people’s lives and rights.

I agree with this assertion, so putting me with Prager is false and misrepresents my position. The problem is that in Halleck, the Court was not dealing with a “private platform”. It was dealing with a “platform” that was a creation of the government, for the government, and funded by the government. It was “private” in the same sense that I am the King of England just because I say so. The 4 “liberal” judges dissented. They are not crazy. Let me just quote the opening of Justice Sotomayor’s dissent, which is to the point and brutal for a SCOTUS opinion:

The Court tells a very reasonable story about a case that is not before us. I write to address the one that is.

This is a case about an organization appointed by the government to administer a constitutional public forum. (It is not, as the Court suggests, about a private property owner that simply opened up its property to others.) New York City (the City) secured a property interest in public-access television channels when it granted a cable franchise to a cable company. State regulations require those public-access channels to be made open to the public on terms that render them a public forum. The City contracted out the administration of that forum to a private organization, petitioner Manhattan Community Access Corporation (MNN). By accepting that agency relationship, MNN stepped into the City’s shoes and thus qualifies as a state actor, subject to the First Amendment like any other.

Anonymous Coward says:

Re: Re: Re:12

The 1st Amendment has never been applied as-written in its entire history. That’s why there’s an entire area of jurisprudence on “1st Amendment exceptions” despite the fact that there *is* no “except” or “unless” in it.

But people realized, quite swiftly, that unless they did allow regulation of speech, they’d have a disaster on their hands. So instead of redrafting the 1st to make that plain, they just pretended it did in fact allow them to make laws about speech if they *really* needed to. But it’s not supportable in the text, and the idea that it doesn’t apply to “things that governments normally restricted in 1791” makes the drafters look like idiots.

Tanner Andrews (profile) says:

Re: Re: Re:12 criminalize everything

Do recall, and never stop recalling, that 1A protects any and all non-criminal speech as protected from government interference or proscription

Right. And the work-around is to criminalize speech you do not like. The common example is anti-draft speech, Schenck v. U.S., 249 U.S. 47 (1919), but criminalization is also used against things like sex or education.

You can also criminalize opposition to war generally, Abrams v. U.S., 250 U.S. 616 (1919), or criticism of govt officials, Craig v. Hecht, 263 U.S. 255 (1923), or labor activism, Whitney v. California, 274 U.S. 357 (1927).

You just never know what will fall in or out of official favor, but the trick is to make it criminal so that speech urging it can be suppressed.

Stephen T. Stone (profile) says:

Re: Re: Re:9

I think there are circumstances where forcing a company to allow speech they’d prefer to remove is, in fact, going to be the correct choice — just as I think forcing a company to remove speech they would prefer to host can be the correct choice.

By all means, please detail what you think some of those circumstances would be.

I insist.

Stephen T. Stone (profile) says:

Re: Re: Re:11

Show me the law that says any/every social media service operating within the United States must carry an Emergency Broadcast System alert.

I’ll wait.

…oh, what’s that, you can’t?

I’mma level with you here: Some speech is of such vital importance that legally requiring some outlets to carry that speech is understandable. A legal requirement for a cable/satellite service to carry an EBS broadcast is one such example. You probably think that’s a “gotcha”, but it’s not. I openly acknowledge (and even approve of) such exemptions to the idea of “people/companies shouldn’t be forced to host speech”.

But this discussion is, first and foremost, about social media platforms on the Internet. They’re not exactly the same thing as a cable/satellite provider, and they’re under no obligation to carry those kinds of emergency signals. That holds true for social media services of any size⁠—to claim otherwise is to claim that even the smallest open-to-the-public Mastodon instance must be forced to carry an EBS broadcast, and I know you don’t have any citation of fact to back up such a claim.

Therein lies the issue with your attempt at a “gotcha”: You’re still arguing that a social media service should be forced by the government to carry (or not carry) certain kinds of third-party speech. But if the government can’t force Twitter to carry an EBS broadcast, it can’t (and shouldn’t be able to) force Twitter to host racist speech. Not one law or binding legal precedent is on your side in this argument; if you can find one, you’d be the first.

This comment has been flagged by the community. Click here to show it.

Ryan says:

Re: Re: Re:14

And I’d like you to show me the law that says the government can make any social media service carry speech it otherwise wouldn’t host. Neither of us are getting what we want today, son.

Your characterization of the law as “forcing” companies to host speech they don’t want is not correct, but even if we accept your characterization, this is exactly what intermediary liability laws have done forever. Returning to BookSurge, according to you they were “forced” to carry speech they didn’t want to carry. Their policies explicitly told customers they would print all speech – and they did so as a means to maintain their distributor status. They had a choice: and they made the choice to restrict their “right of association” and remain a distributor. This choice is legal and approved by the US Supreme Court. Few, if any, challenged this idea prior to the Internet and Section 230.

Anonymous Coward says:

Re: Re: Re:14

I reiterate: You are asking a different question now. Your original question was whether it should be done, not whether it is legal to do so.

You asked me for circumstances in which I thought speech would be worth compelling. I answered. Now you’re banging on about showing you a means by which it could be compelled, as if that were the same question.

Pay attention to what you’re saying, and don’t try to pretend that these questions are answered in the same way, as if the current legal structure of your particular country cannot be improved.

Oh, and furthermore even if I did provide such a law I am quite certain you would immediately disavow it as inapplicable (cf. the ‘but it doesn’t count if it’s being applied to cable companies as long as it doesn’t apply to social media!’ idea) or unAmerican or whatever.

And even beyond that, this whole ‘if a government can compel one company to do this in one instance, it will therefore force all people to do it all the time’ is just wrong. There are lots of examples of compelled speech already, in other forums and circumstances, and yet as you insist there’s nothing in U.S. law (yet) that specifically compels Twitter to be a cesspool, right? So it doesn’t inevitably follow — which means the whole slippery slope argument is not particularly valid.

Stephen T. Stone (profile) says:

Re: Re: Re:15

even if I did provide such a law I am quite certain you would immediately disavow it as inapplicable

If the law applies to something that isn’t an interactive web service, yes, it would likely be inapplicable. An interactive web service isn’t a cable provider or a radio station or any other kind of broadcaster/publisher/distributor of speech you think would be analogous in this discussion.

this whole ‘if a government can compel one company to do this in one instance, it will therefore force all people to do it all the time’ is just wrong

And if I were saying that, you might have a point. But I’m not, so you don’t.

There are lots of examples of compelled speech already

How many of them involve the United States government forcing an interactive web service operating within its borders to host speech it would otherwise refuse to host? 🤔

This comment has been flagged by the community. Click here to show it.

Ryan says:

Re: Re: Re:12

But if the government can’t force Twitter to carry an EBS broadcast, it can’t (and shouldn’t be able to) force Twitter to host racist speech.

“If the government can’t force Twitter to do one totally unrelated thing, then it can’t force them to do something completely different, with an entirely different justification.”

These two things are completely different. Aside from the fact that an EBS is forcing them to carry something and intermediary liability laws are not – they are forcing them to make a choice that impacts their freedom in choosing their customers, which is substantially different – it’s not true that the government cannot require Twitter to carry an EBS message. The government has simply chosen not to. If they can provide sufficient evidence that it’s justified, they could require Twitter to carry it. I think that’s highly unlikely, but there is nothing that bars them from doing so other than the same exact rule by which every 1A restriction is judged. The same rule that has been used to evaluate intermediary liability laws and found them justified.

Anonymous Coward says:

Re: Re: Re:13

The problem with your interpretation of third-party liability laws is that it would prevent social media from existing: manually examining every post before it is made public is impossible, algorithms cannot detect libel, and notice-and-takedown would allow the puritanical to censor everybody else.

This comment has been flagged by the community. Click here to show it.

Ryan says:

Re: Re: Re:14

algorithms cannot detect libel

You don’t need algorithms to detect libel. With notice liability, you are not required to detect anything.

notice and take down would allow the puritanical to censor everybody else

This is a common refrain, but it’s not based on any evidence. It’s super simplistic. First, this has not happened offline. Second, Congress has the power to address any problems caused by the “puritanical”. Our current system assumes that false claims of defamation will not be made. This assumption changes somewhat when so many people can make allegations online. Which is why it would behoove Congress to provide safe harbor to websites that develop a reasonable system of conflict resolution to handle allegations. This is not a difficult task. It’s a simple change that would be far preferable to the sweeping powers granted by Section 230. Third, the current system actually already allows “puritanical” censorship, and it occurs a lot. It’s just that you agree with that censorship and are possibly one of the puritans. Section 230 has not resolved this problem, but rather just allowed the websites to choose which puritans get their way and which ones do not. While at the same time allowing actual defamation to be distributed even after it has been identified. 230 literally gets it wrong from every angle. Because its solution to the problem is: make the companies the judge. That’s a terrible idea, and liberals would normally be the first to say so in any other area of law.

This comment has been flagged by the community. Click here to show it.

Ryan says:

Re: Re: Re:16

The DMCA is abused because it was written by industry. It makes industry the arbiter of what is a violation. This was always going to be abused. I would encourage Congress to establish a different set of rules for defamation: a safe harbor that protects sites that employ a reasonable conflict resolution system.

There is a wide gap and range of legal solutions between the DMCA’s “accuser as judge” and Section 230’s “wipe out the law entirely”. Both are unacceptable extremes.

This comment has been flagged by the community. Click here to show it.

Ryan says:

Re: Re: Re:18

The same applies to your interpretation of intermediary liability and the “notice-and-staydown” idea.

No, it doesn’t. Because Congress can legislate a different system. They have not done so for the DMCA because copyright owners have too much power. In fact, copyright owners are so powerful that the authors of Section 230 put a copyright exception into 230 because they assumed copyright owners would have the power to cause problems for 230.

But 230 supporters are quite powerful themselves. If they need legislation to create a workable safe harbor system, that is not difficult. And unlike 230, it has the advantage of being Constitutional.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re: Re:19

In fact, copyright owners are so powerful that the authors of Section 230 put a copyright exception into 230 because they assumed copyright owners would have the power to cause problems for 230. But 230 supporters are quite powerful themselves.

What you are proposing is essentially what a lot of anti-230 voices want, because those anti-230 voices are also pro-copyright voices. They look at Section 230 as a threat to their enforcement and penalty regimes. So maybe… consider that there are very good reasons for supporting Section 230. Safe harbors are not going to happen without a good amount of power backing them up because pro-copyright, anti-230 folk hate those, too.

This comment has been flagged by the community. Click here to show it.

Ryan says:

Re: Re: Re:21

The sole motive behind getting rid of section 230, is the malicious desire to employ fraud to harm the innocent. That 230 leads exclusively to justice is what they see as the “problem” with it.

It’s amazing you can read minds and know my motives. I find it funny that your claim is backwards. 230 does all sorts of nasty things, for example, it encourages blackmail as a business model. This is discussed in my paper. The scam works like this: since Zeran says you don’t have notice liability, you can leave defamatory statements on your website as long as you want. You can even reach out to the defamed party and offer to remove the defamation in exchange for money. This is blackmail. It looks an awful lot like a mafia protection racket. Some sites do this. It’s legal under Zeran.

Anonymous Coward says:

Re: Re: Re:22

blackmail as a business model

Right… so all of a sudden anti-defamation laws don’t work anymore because Section 230 exists. Where have we heard that before? Oh yeah, by the same guy who threatened to sue Techdirt into the ground, claiming that Section 230 allows Russians to review bomb doctors and convince Americans that they suck.

It’s funny that you mock others for their imaginary scenarios but just like every other pro-repeal fanatic, you keep trotting out this bogeyman of “Section 230 legalizes defamation!” Pull the other one.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:23

Right… so all of a sudden anti-defamation laws don’t work anymore because Section 230 exists.

They don’t. The issue is that the person who commits defamation is sometimes not reachable by the law. This is especially true online. They live in another country or they are anonymous and have made themselves too hard for a defamed person to discover them (especially since a victim doesn’t have the power of law enforcement).

Intermediary liability laws alleviate this situation somewhat. We may not be able to bring the defamer to justice, but we can at least force the distributor to stop distributing the defamation. Section 230 destroys this ability. The distributor is allowed to keep distributing defamatory material. This means that all a defamer has to do is cover his tracks and choose an outlet that will continue spreading their defamation, and the victim cannot even stop it.

This is exactly what happened in the case that established all this nonsense: Zeran v. AOL. An anonymous person defamed Zeran on AOL. AOL didn’t even track IP addresses of users, so they had no idea who he was. After being made aware of the defamation, AOL told Zeran they would remove it, then didn’t bother. They kept distributing the defamation for quite some time. Meanwhile, Zeran’s life was melting down. AOL didn’t care. They basically gaslit him – telling him they would take care of it, then doing nothing.

The anonymous person pretended to be Zeran on AOL and advertised that he was selling t-shirts mocking the Oklahoma City bombing victims. He left Zeran’s phone number, and because that phone was his business phone and was rendered unusable, it destroyed his business. He received death threats and had to go into hiding. AOL didn’t care. It was insane.

So Zeran sued them. And he lost: the court unbelievably created immunity for distributors – immunity that every commentator had said was not part of Section 230, and that the legislative history made clear was not part of Section 230. They legislated from the bench, to ensure that AOL would not pay for their gaslighting and neither would any website in the future. Gaslight away, 4chan! Gaslight away, Kiwi Farms! Etc, etc.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:24

Who are the judges who made the Zeran ruling?

J. Harvie Wilkinson III wrote the opinion. He was nominated by Reagan and was at one point a favorite for George W. Bush to replace Rehnquist on the Supreme Court. Judge Wilkinson has been controversial in his career as a conservative, pro-business judge. Notably, he wrote the Hamdi opinion that said the US government could hold Yaser Esam Hamdi indefinitely without access to legal counsel or a court. This opinion was so radical it was ultimately overturned by the Supreme Court in Hamdi v. Rumsfeld. Thomas dissented in support of the government, so Wilkinson aligns with Thomas’s jurisprudence in this case; Scalia also dissented, but on the opposite ground that Hamdi had to be charged or released. Rehnquist & Kennedy were in the majority.

Donald S. Russell was a Southern Democrat nominated by Nixon. As governor of South Carolina, he opposed the Civil Rights Act. I don’t know a lot about his jurisprudence, but it’s not clear how much input he had into this case. He was being treated for cancer around this time, undergoing radiation therapy, and he died a year later.

Terrence Boyle was not officially on the Fourth Circuit at the time, but was later nominated to it by George W. Bush. His nomination was fiercely opposed by many liberal interest groups, as he had a terrible track record. The Fourth Circuit had reversed him more than 150 times. He was terrible on civil rights and was a protege of racist Senator Jesse Helms. Helms personally blocked African-American judges nominated by President Clinton to prevent the first black judge from serving on the Fourth Circuit. When Bush became President, he nominated this racist judge who was a protege of Helms. What an insult. It’s difficult to describe how bad Boyle is: he is hostile to civil rights, gender rights, and even the Americans with Disabilities Act. The Leadership Conference on Civil and Human Rights has more here: https://civilrights.org/resource/the-nomination-of-terrence-boyle

When 230 supporters cheer the judicial lawmaking in Zeran v. AOL, these are the judges they are getting in bed with. These are not good judges. The decision was not a good decision. It was garbage. My question for 230 supporters is: do you know whose side you are on? Do you?

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re: Re:25

They live in another country or they are anonymous and have made themselves too hard for a defamed person to discover them (especially since a victim doesn’t have the power of law enforcement).

It really is the same, tired old trope with you anti-Section 230 fearmongers. Someone magically untraceable or out of the country will somehow irreparably ruin another person’s life. And you bring up the same threadbare examples to go along with it. Doctors in America who get review-bombed. Waitresses who get harassed by incels for not being willing to sleep with random men. The point about women being harassed by landlords and strangers is particularly funny, because half the cases you bring up pre-date widespread Internet use.

You keep trying to pitch the death of Section 230 as the death of harassment and defamation – but that won’t be the case. Never mind the fact that you keep insisting more lawsuits won’t happen as a result of Section 230’s removal, even as you keep citing Section 230’s protection from defamation lawsuits as a key reason it needs to go away in the first place. It was very clear from the get-go that making it easier for defamation cases to go forward was always the goal, so your claim that removing Section 230 would not lead to a significant increase in lawsuits was always a falsehood.

Gaslight away, 4chan! Gaslight away, Kiwi Farms! Etc, etc.

And yet, Kiwifarms was taken down despite Section 230 existing.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:26

It really is the same, tired old trope with you anti-Section 230 fearmongers. Someone magically untraceable or out of the country will somehow irreparably ruin another person’s life.

This is reality. The most famous case – Zeran – involved an anonymous person. After Zeran, the law was settled so that you could not bring a similar case, so no doubt this has continued. It’s impossible to know because the victims have to suffer in silence, with no legal recourse.

You keep trying to pitch the death of Section 230 as the death of harassment and defamation – but that won’t be the case.

Straw man. I have never said this. Defamation and harassment both exist offline, despite laws punishing them. What intermediary liability laws do is discourage defamation and punish those responsible. Of course, if you remove disincentives for bad behavior, you will get more of that bad behavior. Which is what we get online. Many of the same people who claim Section 230 is indispensable also complain about the Wild West nature of Internet discussions. Without realizing the two are related.

And yet, Kiwifarms was taken down despite Section 230 existing.

They were briefly taken offline by anticompetitive and abusive behavior that should scare anyone with a brain. Giving Internet infrastructure providers the power to decide who can access the Internet is not the win you think it is. It’s also futile because the sites can eventually find their way back online with different providers.

We have a choice: a system of justice run by an actual legal system, established by democratically elected leaders. Or a system of justice where large corporations dole out “justice” according to their own needs, unelected and unaccountable to anyone but themselves. I prefer democracy. Section 230 supporters seem to prefer this bizarre, authoritarian, vigilante justice system that 230 creates online. A system that mirrors the one used in China in many ways. Good company.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Re: Re:27

What intermediary liability laws do is discourage defamation and punish those responsible.

And yet, to quote Batzel v. Smith:

Absent § 230, a person who published or distributed speech over the Internet could be held liable for defamation even if he or she was not the author of the defamatory text, and, indeed, at least with regard to publishers, even if unaware of the statement.

What you’ve been suggesting is a state of affairs where Twitter employees (up to and possibly including Musk himself) should be held legally liable for defamation even if they didn’t know about the defamatory content being on Twitter until after being notified of its existence. That you can’t see any issue with that idea is your problem. Solve it yourself.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:28

And yet, to quote Batzel v. Smith:

Absent § 230, a person who published or distributed speech over the Internet could be held liable for defamation even if he or she was not the author of the defamatory text, and, indeed, at least with regard to publishers, even if unaware of the statement.

What you’ve been suggesting is a state of affairs where Twitter employees (up to and possibly including Musk himself) should be held legally liable for defamation even if they didn’t know about the defamatory content being on Twitter until after being notified of its existence. That you can’t see any issue with that idea is your problem. Solve it yourself.

You have been arguing this whole time that no such law exists. Yet you finally read Batzel and see the judge reciting the law and now you are reading it back to me as if you knew it existed all along? Which is it, flat-Earther? Is there no law that holds these things, or is Batzel correct that this was the law everywhere prior to 230? Your obvious ignorance of this law is on display. And now – having finally read Batzel – you think you know what it means? Like you have suddenly discovered some flaw in intermediary liability laws that no one else ever noticed? What a fool.

Since you have now read Batzel, you must have also read the first two sentences of the opinion. Something 230 supporters always ignore and don’t tell people about. Here it is:

There is no reason inherent in the technological features of cyberspace why First Amendment and defamation law should apply differently in cyberspace than in the brick and mortar world. Congress, however, has chosen for policy reasons to immunize from liability for defamatory or obscene speech “providers and users of interactive computer services” when the defamatory or obscene material is “provided” by someone else.

Let me translate this for you: ‘Congress is nuts. This law makes no sense. But courts do not second-guess Congress, so we have to interpret their silly laws as written.’ She then went on to misinterpret the law, issuing an opinion that everyone, including Chris Cox, says was not intended. Why? Because 230 was so poorly drafted that they used the word “user” outside its plain meaning and didn’t bother saying so in the statute. This judge was unaware of what they intended in the statute and thus assumed “user” had its plain meaning, resulting in this bad opinion. But if she had her way, this law wouldn’t exist, because she doesn’t think it makes any sense.

This is the great Batzel opinion that 230 supporters mention all the time. Everyone should love 230 because it protects “users” in addition to big companies. Except the authors of the law say this was never intended, the legislative history indicates it was never intended, and the judge who wrote this opinion says the law shouldn’t even exist. 230 supporters leave all that out when they tell you 230 protects “users”. Seems like relevant information – why would they withhold all that important information from you? Are you being played? The question answers itself.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Re: Re:29

There is no reason inherent in the technological features of cyberspace why First Amendment and defamation law should apply differently in cyberspace than in the brick and mortar world.

Except there is: The technology of the Internet allows services like Twitter to work in a way that literally no other service in meatspace can ever work. Go ahead and show me any kind of offline service that is as close as possible to a 1:1 translation of what Twitter does. I’ll wait.

The whole point of 230 is that yes, online services like Twitter should be held liable for speech, but only their own speech (or speech that they had enough of a hand in creating/publishing that it qualifies as theirs). They shouldn’t be held liable if someone writes and publishes defamatory speech without the knowledge of the service⁠—or if that defamatory speech stays up until after the service is notified of its existence. What you’re asking for is the right to sue Twitter for speech it didn’t post, doesn’t control, and shouldn’t be held liable for unless you’re looking to sue the deepest pockets instead of the people who defamed you.

230 was so poorly drafted that they used the word “user” outside its plain meaning and didn’t bother saying so in the statute

And yet, 230 is what protects users of interactive web services from being sued for retweeting/boosting/whatever-ing speech that could be defamatory. Imagine that~.

This is the great Batzel opinion that 230 supporters mention all the time.

I’ve literally never seen anyone cite Batzel in my years commenting here, and most of the regular commentariat is staunchly pro-230 (for obvious reasons).

Seems like relevant information – why would they withhold all that important information from you?

Because the plain reading of the text of 230 is clear enough: “Users” refers to anyone who actively uses the service. If Cox and Wyden really thought that one word could be misinterpreted in a way that completely nullifies any and all liability for defamation “because Internet”, they would’ve worded the statute differently.

You’re literally the only person in this comments section who actively desires to see 230 torn down so people can file Steve Dallas lawsuits against social media services that had no active role in defaming someone. Again: If you can’t see how that’s an issue for even the largest interactive web services, that’s a “you” problem, and only you can solve that shit.

Now take the hint that all your flagged comments have been dropping and go somewhere that’ll kiss your ass for your anti-230 views. I suggest Truth Social.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:30

Except there is: The technology of the Internet allows services like Twitter to work in a way that literally no other service in meatspace can ever work. Go ahead and show me any kind of offline service that is as close as possible to a 1:1 translation of what Twitter does. I’ll wait.

I agree with this. The Internet allows a scale not possible offline. The problem is: this has no impact on the law. In fact, this unprecedented scale makes intermediary liability laws more necessary. This is discussed in my paper. The potential for abuse and consequences are far greater online, leading to a greater need for legal protections for victims.

And yet, 230 is what protects users of interactive web services from being sued for retweeting/boosting/whatever-ing speech that could be defamatory. Imagine that~.

1) It’s not supposed to. As I stated, everyone involved with 230 has said that. It’s dishonest to claim this as a feature when the authors all say it was not intended. When this goes in front of SCOTUS, this “feature” will fall. It will not survive review because it’s nonsense.

2) This is not needed to protect “retweets”, etc., because users do not control the means of publication and thus can never be seen as publishers. They could be seen as distributors, which means that once they had notice they would need to take something down. But how often will that happen, and why would it be burdensome to require that? If you share something awful about someone and it turns out to be defamation, you should be required to delete it when you find out it’s defamatory. Why should you be allowed to keep spreading it? In many ways intermediary liability laws are anti-rumor-spreading laws. If you spread rumors, you run the risk of incurring liability. All humans inherently understand that rumor-spreading is bad and most people dislike “gossips”. Not 230 supporters! Gossips are awesome. Gossip away!

I’ve literally never seen anyone cite Batzel in my years commenting here, and most of the regular commentariat is staunchly pro-230 (for obvious reasons).

230 supporters routinely cite the “230 protects users, too!” line. That comes solely from Batzel. It’s telling that you swallowed that line and repeat it all the time, yet don’t even know where it comes from. Someone is keeping you in the dark.

Because the plain reading of the text of 230 is clear enough: “Users” refers to anyone who actively uses the service. If Cox and Wyden really thought that one word could be misinterpreted in a way that completely nullifies any and all liability for defamation “because Internet”, they would’ve worded the statute differently.

Little problem with your logic: Cox & Wyden say that’s not what they intended. I know you must think Cox & Wyden are omniscient forces for good. If they didn’t say something, it means they didn’t mean it! They cannot make mistakes! Even on a bill that was hastily thrown together in only a couple of weeks, on a subject about which they knew very little! All hail Saint Cox & Saint Wyden, who cannot make mistakes!

Except they have admitted to what you claim is not possible. Cox has admitted Batzel was not intended. They did not intend “user” in the plain meaning of the word. They had something specific in mind. They simply failed to put it in the bill. This kind of thing happens when you rush.

Stephen T. Stone (profile) says:

Re: Re: Re:31

I’m not gonna even bother with anything in that comment but this:

This is discussed in my paper.

Much like the now-silent ThorsProvoni and his endless citing of a legal case that he lost, nobody cares about your paper. You can go off about “my paper says this, my paper says that” all you want, but it won’t make a goddamn bit of difference here because you don’t have any credibility. I mean, I barely have more credibility than you, and that’s only because I make direct citations of legal cases and laws. I don’t direct people to a paper I wrote on a subject and expect to be taken seriously as an authority on that subject⁠—to the point where I expect people to kiss my ass⁠—only because I wrote a paper on it. Your “paper” literally means less to me than the toilet paper I use to wipe my ass.

I know I’m the pot calling the kettle black here, but damn, dude, all those times you said you were “done” with me and you keep coming back? I know I have shitty impulse control and whatnot. What the fuck is your excuse?

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:32

nobody cares about your paper.

I don’t care what you think of my paper. You obviously haven’t read it, so it’s interesting you have thoughts on it. But I will reference my paper because it’s a highly researched and documented academic work in this field. It contains numerous citations to relevant caselaw. It’s far more comprehensive than anything I can post here. If you don’t want to read it, that just tells me you are not interested in anything that challenges what you believe. Which is beyond obvious at this point.

Anonymous Coward says:

Re: Re: Re:31

When this goes in front of SCOTUS, this “feature” will fall. It will not survive review because it’s nonsense.

You are not nearly the first person to have threatened this. There’s been one consistently dissenting troll on this site who’s been gloating about it for five years, claiming that what he has on hand is enough to destroy Section 230, Techdirt and Masnick several times over. Yet even during the Trump administration, possibly one of the most pro-copyright/anti-230/anti-consumer US administrations to date, even that didn’t happen.

If you had any kind of clout or influence at all, you wouldn’t be here bitching about it to one of Section 230’s most notable proponents. You’d have gone to some university professor or judge who’d be sympathetic to your claims and rallied up all the anti-Big Tech, anti-Google money there is. (And there is a lot of that!) But no, instead you’d rather come here, stand on your own personal soapbox and pretend it’s a high horse.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:32

If you had any kind of clout or influence at all, you wouldn’t be here bitching about it to one of Section 230’s most notable proponents

I have it all written down in my paper. Every question you hopeless EFF fans can possibly ask me is already addressed. The reason why none of you can see past your own noses shoved up Big Tech’s ass is because all of you are desperate lawbreakers who want to harass law-abiding citizens. And instead of respecting my magnanimity to lower myself and try to bridge the gap all you do is insult me.

All of you are going to go down with the sinking ship that is Section 230, and when you do, I’ll be laughing.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:34

I have bad news for you. If your paper consists of “everyone who disagrees with me is a criminal”, you’ve probably not only failed to write a useful paper, you’ve not even invented a compelling fantasy.

The previous comment was not me, but someone impersonating me. I do not accuse anyone of being a criminal in my paper. One section does describe blackmail as a business model, but there is no discussion of whether it is criminal or not. Intermediary liability laws are not criminal laws. They are civil laws meant to make whole a victim who has been harmed.

I also would not say things like “you hopeless EFF fans”. I still admire much of the work they do, although I view them differently now after doing research on Section 230. And I understand why others still view them in such a positive light. But the fact is you have been played for a fool on this issue.

It shows how devoid of arguments people are that very few legitimate arguments have been raised and instead people are resorting to impersonation and misrepresentation. Anything to avoid an actual discussion on the merits. Just like Mike Masnick, who has ducked me repeatedly on this topic. He doesn’t want any part of this discussion.

Anonymous Coward says:

Re: Re: Re:35

The previous comment was not me, but someone impersonating me

What proof do you have of that? You probably don’t need to include actual accusations in your paper. But from the first moment you started posting here you have shown nothing but contempt for the EFF and anyone who supports Section 230.

But the fact is you have been played for a fool on this issue.

Your only source has been your paper.

Just like Mike Masnick, who has ducked me repeatedly on this topic. He doesn’t want any part of this discussion.

Why does Masnick’s opinion matter this much to you, exactly? If Section 230 is as indefensible as you claim, your time would be far better spent taking your findings to friendlier publishers and lawyers, surely? Like… this is something you trolls have never been able to explain. You routinely mock whatever this website posts and does and call it an insignificant opinionated cesspool. But you keep begging Masnick to notice you and throw a tantrum when he doesn’t give you the attention you want.

Anonymous Coward says:

Re: Re: Re:27

It’s impossible to know because the victims have to suffer in silence, with no legal recourse.

For what it’s worth – and I’m just bringing up this for context, not accusing you of doing so – the same guy who brings up defamation victims existing as a reason to abolish Section 230 has, on multiple occasions, also called women whores for sleeping their way into power. So whenever chumps like you bring up victims of defamation as your excuses, those excuses get taken with a huge shaker of salt.

But there’s also the fact that removing Section 230 would not discourage defamation or harassment, because, again, defamation and harassment are not dependent on whether Section 230 exists or not.

What intermediary liability laws do is discourage defamation and punish those responsible.

Do you… read half the shit you spout, mate? What intermediary liability laws do is punish the platforms for not making you feel less offended. It gives you a scapegoat to chase after and demand money from. You yourself, and other anti-230 folk, have insisted that trolls and defamers can simply use other email addresses and accounts to continue harassment – so what do you think making the platform responsible for them is going to accomplish? If anything, trolls will go absolutely hog wild knowing that you won’t pursue them because it’s too hard, but will demand money from platforms who can’t keep up.

Giving Internet infrastructure providers the power to decide who can access the Internet is not the win you think it is.

And yet, this is precisely how chumps like you think copyright law should be enforced. You’ll go up the chain of logistics to demand your pound of flesh from anyone and everyone regardless of whether they were actually responsible.

I prefer democracy. Section 230 supporters seem to prefer this bizarre, authoritarian, vigilante justice system that 230 creates online.

Democracy told you that people would rather use websites without having to worry if some chucklenut afraid of Russian randombots will poke at some oversensitive snowflake.

What you want is authoritarian vigilantism that seeks out a pound of flesh because you can’t find and destroy one troll online.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:28

For what it’s worth – and I’m just bringing up this for context, not accusing you of doing so – the same guy who brings up defamation victims existing as a reason to abolish Section 230 has, on multiple occasions, also called women whores for sleeping their way into power. So whenever chumps like you bring up victims of defamation as your excuses, those excuses get taken with a huge shaker of salt.

Let me get this straight…a random person who I don’t know once called some women whores. And he also doesn’t like Section 230, therefore anyone who doesn’t like Section 230 is basically this guy. Well, okay then. What next? Calling me Hitler because Hitler used to wear pants and I also wear pants?

But there’s also the fact that removing Section 230 would not discourage defamation or harassment, because, again, defamation and harassment are not dependent on whether Section 230 exists or not.

Totally! I mean, drunk driving is not dependent on drunk driving laws. We should just eliminate these laws, since they have no impact on how much drunk driving occurs. That stuff they teach you in the first year of law school about deterrence? Rubbish! A bunch of made up garbage.

If anything, trolls will go absolutely hog wild knowing that you won’t pursue them because it’s too hard, but will demand money from platforms who can’t keep up.

You don’t even understand how this works. Trolls cannot demand anything. They don’t have a lawsuit. None of this is about trolls. And you claim that trolls would sue websites who cannot keep up. If a person is trolling, it means they don’t have a real defamation claim. Which kind of makes it hard to sue a website for not keeping up, since the only claim they could make against the website is a defamation claim.

And yet, this is precisely how chumps like you think copyright law should be enforced. You’ll go up the chain of logistics to demand your pound of flesh from anyone and everyone regardless of whether they were actually responsible.

You seem to know a lot about my stance on copyright, despite me never discussing it. I’m probably to the Left of you on copyright.

Democracy told you that people would rather use websites without having to worry if some chucklenut afraid of Russian randombots will poke at some oversensitive snowflake.

This gave me a good laugh. I have found that most 230 supporters have no idea how the law was passed. It’s an interesting tale that is far from what I would call “democratic”.

1) Industry wrote the law
2) It was attached to a massive, must-pass bill in the House.
3) The Senate refused to introduce it.
4) In the House, it was barely debated and almost no one understood it. Maybe a dozen Reps understood it to any degree. The authors convinced Newt Gingrich to support it. Since leadership supported it, many other Reps supported it – despite not knowing what it did. Rather than explain it to Reps who knew little about technology, they instead told them to vote for it to express displeasure with the Senate bill. So the votes in favor were not even on the merits of 230, but instead an expression of opposition to the Exon Amendment in the Senate.
5) Both 230 & Exon passed their chambers with neither voting on the other’s amendment. In conference committee, a deal was struck that saw both amendments included in the final bill together.
6) A coalition of 230 allies led by the ACLU & EFF then sued over the Exon Amendment, now the CDA. It was thrown out in Reno v. ACLU. The ACLU and its allies deceived the Supreme Court by not telling them about the legislative deal. This deal would have made severing the laws unconstitutional. Unaware of the deal, the Supreme Court severed the laws.
7) The House negotiators & 230 authors essentially reneged on their legislative deal. They knew the CDA was going to be challenged and encouraged it. Then deceived the Supreme Court, making them an unwitting pawn in subverting the democratic process.
8) From there, multiple courts then expanded 230 beyond what was intended. The parts that 230 proponents consider most essential – such as Zeran – were actually judge made expansions by MAGA judges.

I have never seen a law with such an undemocratic history. Maybe a dozen or so Congressmen knew what this law did when it was passed. Out of 535. And those dozen cut a deal that they reneged on and deceived the Supreme Court to get their way. The law was then expanded beyond what they even passed by corrupt judges. 230 supporters look at this and say, “This is great!” Well, in reality, most are unaware. And those who are aware are careful not to talk about it.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re: Re:29

Let me get this straight…a random person who I don’t know once called some women whores. And he also doesn’t like Section 230, therefore anyone who doesn’t like Section 230 is basically this guy. Well, okay then.

The bulk of complainants against Section 230 by and large have made it clear that destroying Section 230 is intended to facilitate more defamation lawsuits going forward, while simultaneously insisting that more lawsuits wouldn’t happen. So when someone trots along and makes those exact arguments, they tend not to be taken seriously.

We should just eliminate these laws, since they have no impact on how much drunk driving occurs. That stuff they teach you in the first year of law school about deterrence?

Drunk driving laws are a deterrent because they go after the person actually driving drunk. Making websites liable for what trolls do will not have that effect. Because the trolls know that you won’t bother to go after them, when instead you can go after the websites they post on.

It’s genuinely amazing that you do not understand this.

Trolls cannot demand anything. They don’t have a lawsuit. None of this is about trolls. And you claim that trolls would sue websites who cannot keep up.

Learn to read. Trolls are not the ones demanding lawsuits. Armed with the knowledge that you want to sue any website they post on, they’ll simply intensify their defamation and harassment knowing full well that you’re not interested in holding them responsible. They’ll do it because they know you’re desperate for a payout from every platform they post on.

You seem to know a lot about my stance on copyright, despite me never discussing it. I’m probably to the Left of you on copyright.

Until evidence is shown proving the contrary, I have no reason to take your word for it. The truth is that anti-Section 230 trolls here have made their intentions clear that removing Section 230 is motivated by the idea that no Section 230 would make it easier for them to subpoena websites accused of facilitating copyright infringement without any evidence.

I have found that most 230 supporters have no idea how the law was passed. It’s an interesting tale that is far from what I would call “democratic”

In short, you’re angry because you don’t find anyone here who agrees with you. Democracy doesn’t just mean judges, even though I personally doubt that your claims hold any water given your vested interests to bitch about Section 230.

Seriously, if your paper and research is so groundbreaking and damning to anyone who supports Section 230, why don’t you put your money where your mouth is? Go find a friendly judge, some Republican rag, anyone who has the money to be a patron of your work and nuke Section 230 the way you wanted. You’d be far happier that way than having to deal with people on a website you hate.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:30

The bulk of complainants against Section 230 by and large have made it clear that destroying Section 230 is intended to facilitate more defamation lawsuits going forward, while simultaneously insisting that more lawsuits wouldn’t happen. So when someone trots along and makes those exact arguments, they tend not to be taken seriously.

I cannot comment on what the “bulk of complainants” say because I am not them. Nor do I share the views of many of the people I get compared to on here. My critique of 230 is a liberal critique. In many ways it is the opposite of common conservative ones, and most conservative critiques I have seen are wrong. Which is probably why people like Masnick are so fond of talking about them. Dunking on an 8-foot rim is the most popular tactic online. It builds your following and makes you seem smart to those who are not experts in the field. The Harlem Globetrotters look really good against the Washington Generals, but put them against an NBA team and they don’t look so good anymore.

Drunk driving laws are a deterrent because they go after the person actually driving drunk. Making websites liable for what trolls do will not have that effect.

Wrong. Websites will revert to being distributors. And distributors will be required to remove defamation when they receive notice. This will in turn discourage defamation because it will not remain online. The current system allows websites to do whatever they want. They don’t have to remove it. This encourages defamation. Sites like 4chan/8kun traffic in defamation. And it stays up forever. If you can get away with defaming someone without consequence, you are more likely to do it. 230 serves as a barrier to even track down anonymous defamers. Sites have no obligation to track people and no obligation to help people find them. This encourages more defamation. “Trolls” will be a lot less aggressive in a world where sites have to help track them down. Will this solve all the problems? Of course not. But it will have a major deterrent effect.

Because the trolls know that you won’t bother to go after them, when instead you can go after the websites they post on.

You can only go after the website if they are negligent – if they fail to remove on notice or otherwise fail to fulfill basic requirements. If the website removes on notice and even helps you track down the troll, the only case is against the troll. Websites can protect themselves simply by not being jerks.

Armed with the knowledge that you want to sue any website they post on, they’ll simply intensify their defamation and harassment knowing full well that you’re not interested in holding them responsible. They’ll do it because they know you’re desperate for a payout from every platform they post on.

Except the site won’t be responsible if they follow basic rules of civil society. So your troll will be playing a useless game. Which is why this won’t happen. Not to mention it’s bizarre to assert that a “troll” would target someone with defamation so that person could get rich suing a website. The “defamed” person would have to be in on it and such a scheme would be easily uncovered and they would go to prison.

Until evidence is shown proving the contrary, I have no reason to take your word for it. The truth is that anti-Section 230 trolls here have made their intentions clear that removing Section 230 is motivated by the idea that no Section 230 would make it easier for them to subpoena websites accused of facilitating copyright infringement without any evidence.

This makes no sense because 230 is already exempted from copyright. Anyone who says this obviously does not know what they are talking about. Removing 230 has no effect on copyright because 230 already doesn’t affect copyright. “Some people have made dumb legal arguments, therefore I am going to assume you believe the same” is not a credible or fair position.

In short, you’re angry because you don’t find anyone here who agrees with you.

I’m not really angry. I have remained quite calm and measured in my comments, while others here have insulted and cursed me.

Democracy doesn’t just mean judges, even though I personally doubt that your claims hold any water given your vested interests to bitch about Section 230.

I don’t have any vested interests at all. All of the 230 supporters you follow do, yet you don’t have an issue with that. My interest is that I am friends with many leaders in evidence-based medicine. They privately complained about being censored and worried about a future where pharma and government gets to determine what is “truth” and what gets censored. I was curious about how this situation came to be and I found Section 230. I read all the claims about how glorious this law is, but decided to research for myself and found a completely different story. I have nothing “vested” in this. I simply wanted to know how this law worked. And I found that people were being told things that were not honest.

Seriously, if your paper and research is so groundbreaking and damning to anyone who supports Section 230, why don’t you put your money where your mouth is? Go find a friendly judge, some Republican rag, anyone who has the money to be a patron of your work and nuke Section 230 the way you wanted. You’d be far happier that way than having to deal with people on a website you hate.

I just might. And you might regret daring me. I am at the beginning of this process. But I don’t “hate” this website or the people here. I just disagree with them. I don’t have to hate everyone I disagree with. It seems a lot of people here do hate everyone who disagrees with them. This is becoming more widespread in our polarized society. It seems impossible to have civil disagreement these days. I have seen people like Masnick encourage this. His behavior is certainly nothing to be proud of in many cases. Look at how he handled me when I presented this argument on Twitter: insulted, cursed, and blocked me. That’s the lowest form of response possible. Not becoming of someone who fancies themselves an “expert” on a topic. But I do not “hate” him. Nor anyone here, no matter how misguided they are.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re: Re:31

If you can get away with defaming someone without consequence, you are more likely to do it.

And if you remove Section 230, trolls will simply ramp up their efforts because they know your attention will be not on them, but the websites they post on. There may very well be a deterrent effect, but not on the people causing the behavior you find problematic. Removing Section 230 will not magically grant you powers to hunt down trolls in Russia.

even helps you track down the troll, the only case is against the troll

And when the website fails to track down the troll, it opens them up for liability. Anti-Section 230 folk have made it absolutely clear that they will sue websites who fail to find trolls who defame them because the site must “obviously” be protecting the troll with their failure.

This makes no sense because 230 is already exempted from copyright. Anyone who says this obviously does not know what they are talking about. Removing 230 has no effect on copyright because 230 already doesn’t affect copyright.

You’d think so. But that’s the argument a lot of people on your team are clamoring for. They made the same argument during the SOPA fight and the passage of FOSTA. If you’re not happy with that, you should be getting your team to fix their crappy arguments.

My interest is that I am friends with many leaders in evidence-based medicine

Ah, yes, the man behind the man. Considering the one person relevant to that field who you’ve cited as an example is the guy who complained about lockdowns, this claim does not exactly put you in the best of lights.

I just might. And you might regret daring me. I am at the beginning of this process.

The same guy who made the “Section 230 protects men who call women whores” argument has been threatening an exposé on this website for five years, claiming that everyone who posts here is an enabler of massive financial fraud connected to several levels of corruption that go all the way into the legal system. You’re going to have to forgive the rest of us for not holding our breath waiting for your groundbreaking finishing blow.

And if Section 230 was as dead as you claim it is, sure? Go ahead and gloat about it. Go find your tribe and murder the shit out of the law you hate so much. If the death of Section 230 is as inevitable as you claim, then your input is not as valuable as you think it is, since it was going to happen anyway. But keep posting your delusions here thinking that you’re a champion for lording over Internet nobodies.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:32

You generally seem obsessed with people I don’t know and keep calling them my “team”. In fact, it’s clear you view anyone who disagrees with you as belonging to an opposite “team”, even when they hold totally different and incompatible views. You lump FOSTA in with anti-230 people when FOSTA people were generally supportive of 230, which is why they amended it. They were just concerned with sex trafficking online. Many of the groups behind FOSTA are large, international anti-sex-trafficking NGOs. They have nothing in common with anti-230 trolls on this site – yet you lump them together on some imaginary “team”. The world is easy when you pretend it’s black and white.

It’s clear you will not change your mind. It’s not open to anything that would change it.

Considering the one person relevant to that field who you’ve cited as an example is the guy who complained about lockdowns, this claim does not exactly put you in the best of lights.

Even on this you misrepresent his views. Ioannidis was never against lockdowns – at least not early in the pandemic. He supported them. He simply called for gathering data to see if they were truly necessary and if they were effective. He called for us to measure whether what we were doing was correct. Imagine that. And for that he was smeared as a right-wing lunatic (Ioannidis is actually far Left).

And he was pretty damn right, too. He questioned whether we should close schools. Why? Because pandemic prep plans said this should never be done. And guess what? That turned out to be an absolute disaster. School closures are probably the longest-lasting negative from the pandemic – worse in the long run than the deaths caused by the virus. The worst part is it was all for nothing. School closures had no positive impact on the spread of the virus. It was pointless and catastrophic. Because we panicked.

And while Ioannidis initially supported lockdowns, he didn’t support them forever. Because they are huge interventions and need to be justified. And we never did that. In fact, he was attacked just for telling people they need to justify it. Because no one wanted to justify it. Why? They didn’t want accountability. Gathering data would mean it could be proven wrong, which politicians & advocates did not want. But it’s clear now that lockdowns also had very little effect, if any. Places that had them did no better than places that did not, despite huge efforts to fool people to the contrary.

This is exactly what happens when you refuse to back your interventions with evidence. This kind of stuff is what led to the birth of evidence-based medicine. And in all these years, there has been little progress. In some ways, we have gone backwards. The only successes of evidence-based medicine seem to be getting everyone to agree they should be evidence-based and everyone claiming they are, but in practice few have changed how they behave and few are actually evidence-based. It’s become a slogan people throw around because it sounds good. But in practice, they largely ignore it. And attack those who practice it.

Anonymous Coward says:

Re: Re: Re:33

You generally seem obsessed with people I don’t know and keep calling them my “team”. In fact, it’s clear you view anyone who disagrees with you as belonging to an opposite “team”, even when they hold totally different and incompatible views.

People who share your views have, by and large, made it very clear that they think anyone who disagrees with them is a criminal, a pirate, a thief, or any combination of negative descriptors et al. Their arguments don’t go much beyond “Masnick enables crime and I wish he would go die in a fire”. Same goes for your bitching, I’d say.

You lump FOSTA in with anti-230 people when FOSTA people were generally supportive of 230, which is why they amended it

Amendments to laws such as SOPA, FOSTA and Article 13/17 don’t come as a result of any respect for consumers. They happen because enough members of the general public complain to a point where it becomes politically inconvenient to ignore the dissent at a critical mass. Though realistically, at best it just means that it forces an official statement that can be walked back afterwards. To wit, proponents of Articles 13/17, which make platforms responsible for copyright infringement, insisted that automatic, instant content filters would not be required as a result of Articles 13/17… only for them to later admit, after Articles 13/17 were passed via a non-transparent vote, that filters were not only intentional, but mandatory.

The one thing stopping every platform from being forced to implement “notice and staydown” is that the tech simply doesn’t exist and isn’t actually feasible. Of course, had the pro-copyright lobbies actually paid attention to the geeks – you know, the “Big Tech” folk you regularly denigrate – they’d have already known this.
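To make the point concrete, here is a toy sketch (purely hypothetical, not any platform’s actual filter) of why exact-match fingerprinting cannot deliver “staydown”: change a single byte and the fingerprint no longer matches.

import hashlib

def fingerprint(data: bytes) -> str:
    # Exact-match fingerprint: any change to the bytes changes the hash.
    return hashlib.sha256(data).hexdigest()

original = b"some infringing upload"
blocked = {fingerprint(original)}        # hashes of works the platform was told to block

reupload = b"some infringing upload "    # the same content with one trailing byte added
print(fingerprint(reupload) in blocked)  # False: the naive filter misses the re-upload

Perceptual matching tolerates small changes, but it is expensive, error-prone, and gameable in its own ways, which is the infeasibility being pointed to here.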

They were just concerned with sex trafficking online. Many of the groups behind FOSTA are large, international anti-sex-trafficking NGOs.

And I’m sure many of the “artists” supporting “life + 70 years” were just concerned with Internet pirates, not suing innocent children and grandmothers and corpses. And yet here we are.

This kind of stuff is what led to the birth of evidence-based medicine.

All your evidence so far has been repeatedly, insistently ramming your finger on your paper.

Stephen T. Stone (profile) says:

Re: Re: Re:24

The issue is that the person who commits defamation is sometimes not reachable by the law.

That alone shouldn’t be a reason to put liability for defamation onto someone who had absolutely nothing to do with writing or posting that defamation. To put it another way: Elon Musk shouldn’t be held liable for defamation if some dickhead in Russia uses Twitter to defame an American celebrity.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:25

That alone shouldn’t be a reason to put liability for defamation onto someone who had absolutely nothing to do with writing or posting that defamation.

That alone is not enough. That alone, combined with the publisher exercising editorial control, is enough. These problems were worked out decades – even centuries – ago. If you claim the right to control the speech, you become morally culpable for that speech. It doesn’t matter if you didn’t exercise that right or did so poorly. If you claim that right, it’s yours. If you don’t want that responsibility, don’t claim that right. It’s a simple tradeoff – a tradeoff with enormous societal benefits. Which is why it has been the law for a very long time.

Stephen T. Stone (profile) says:

Re: Re: Re:26

If you claim the right to control the speech, you become morally culpable for that speech.

That’s just it, though: Twitter isn’t claiming the right to control the speech. It’s claiming the right to control whether that speech is allowed on its platform⁠—and only its platform. Twitter doesn’t (and can’t) control whether someone posts their speech on other platforms in addition to Twitter. It can only control whether that speech stays on Twitter if someone else reports it for potentially violating the rules. I fail to see how that decision alone should make Twitter legally liable for third-party speech that no employee either had a hand in writing/posting or knew was on the service until after the speech was posted.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:27

That’s just it, though: Twitter isn’t claiming the right to control the speech. It’s claiming the right to control whether that speech is allowed on its platform⁠—and only its platform.

Yet more ignorance. Under this definition, NO ONE would ever be a publisher. Guess what? Book publishers only claim the right to control what appears from the printing press, too! Same with newspapers! You need to think before you speak. Actually sit down and think about how these things work instead of just spouting off nonsense. If your analogy fails in the offline world, you need to go back to the drawing board. You don’t understand how this works at all. Read my paper and you might see not only how this works but why it works that way. If you dare to gain some knowledge.

Twitter doesn’t (and can’t) control whether someone posts their speech on other platforms in addition to Twitter.

Neither do book publishers! Or newspapers! Or literally any publisher. It’s obvious that publishers can only control speech that is published on their own printing press. Guess what: the law doesn’t care. That’s not a factor.

I fail to see how that decision alone should make Twitter legally liable for third-party speech that no employee either had a hand in writing/posting or knew was on the service until after the speech was posted.

Because you fail to understand the law. Maybe sit this one out. You are obviously not cut out for this. This is cringe city. Reading your posts is like watching NASCAR: it’s mildly entertaining, mainly for the crashes, which are frequent in your posts.

Stephen T. Stone (profile) says:

Re: Re: Re:28

Under this definition, NO ONE would ever be a publisher. Guess what? Book publishers only claim the right to control what appears from the printing press, too!

And if Twitter were a printing press, you might have a point. But it’s not. So you don’t.

Reading your posts is like watching NASCAR: it’s mildly entertaining, mainly for the crashes

Yes, yes, you’d like to see me receive brain damage (or worse) from a high-speed automobile collision. What, you think you’re special?

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:29

And if Twitter were a printing press, you might have a point. But it’s not. So you don’t.

They are. More ignorance from you. Twitter is a printing press, just as radio and television are. You don’t literally have to print on physical paper to be a printing press. That was the original meaning. But as technology grew, new mediums emerged. A printing press is anyone who puts speech to a medium for others. The medium is Twitter’s website/network, which is a unique medium.

Stephen T. Stone (profile) says:

Re: Re: Re:30

Twitter is a printing press, just as radio and television are.

Radio and television—live programming notwithstanding—pre-approve all the materials that they air. Twitter doesn’t pre-approve anything that goes on its site. To wit…

A printing press is anyone who puts speech to a medium for others.

…no employee at Twitter directly “puts speech to a medium for others”⁠—partially because Twitter would be legally liable for that speech if they did. What Twitter does would be more like having an open-to-the-public bulletin board within a privately owned business: You can post what-fuckin’-ever you want, but that doesn’t mean it’s going to stay up once the owner hears about it, and the owner shouldn’t be considered the “speaker” of such speech only because they have the board up in their building.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:31

Radio and television—live programming notwithstanding—pre-approve all the materials that they air.

Your ignorance is boring. No, they don’t. In fact, the radio exception was articulated precisely because radio stations air live interviews where they don’t pre-approve everything that will be said. Pay attention. Know the law before you comment.

Twitter doesn’t pre-approve anything that goes on its site.

Not a part of the legal test at all. Their inability to pre-approve is what should tell them not to behave like a publisher. It doesn’t change their legal treatment, because the law looks at their behavior, not their capabilities.

…no employee at Twitter directly “puts speech to a medium for others”⁠—partially because Twitter would be legally liable for that speech if they did.

Again, not part of the test. And total nonsense, as I already covered. If this was part of the test, then newspapers and publishers might stop pre-screening just to avoid liability. But they cannot do this because it’s not part of the test.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re: Re:28

Newspapers, book publishers, film studios, and record labels all decide what is published under their imprint. Social media services remove posts after publication if an algorithm or a person brings the post to their attention and they decide to remove it. They cannot be expected to decide whether or not any post is defamation, or whether it refers to the complainant.

Also, if you want notice and takedown, how long is a reasonable time for a site to take action, noting that some sites are run by a single person? What mechanism do you propose to prevent abuse, including campaigns to fill a site with notices, and who pays? A bad mechanism can become too expensive for a small site to employ. One foreseeable outcome of a simple notice-and-takedown system is for crooks to offer a service to flood a site with takedowns, just as DDoS is available as an illegal service.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:29

A newspaper, book publisher, film studio, or record label all decide what shall be published under their imprint. Social media services remove posts after publication if an algorithm or a person brings the post to their attention and they decide to remove it. They cannot be expected to decide whether or not any post is defamation, or whether it refers to the complainant.

This has been discussed at length. And the court in Stratton Oakmont made clear that whether you exercise editorial control early or late does not matter. This is correct. It’s not when you do it, but what you do, that matters. If Twitter is not capable of pre-screening materials, the wise choice is for it to not exercise editorial control. Prodigy made bad choices and rather than face up to them, they got an exemption. That’s not a good way of writing laws. Like if Pfizer made a drug that killed a bunch of people and rather than pay damages they got Congress to pass a law exempting them. You would complain about that. But Section 230? You have been trained otherwise.

Also, if you want notice and take down, how long is a reasonable time for a site to take action, noting that some sites are run by a single person?

This is for a court to determine, as they do in every other negligence case. Congress can provide guidelines. Litigation is expensive and a last resort. Cases would rarely ever be brought unless the time period was obviously unreasonable. So much better for both parties to resolve it outside of court.

What mechanism do you propose to prevent abuse, including campaigns to fill a site with notices, and who pays, because a bad mechanism can become too expensive for a small site to employ?

There are many potential mechanisms, such as punishing people who submit false claims, etc. Too many to list here. These are not unsolvable problems. Quite easy. As for “small sites”, they would naturally not have a lot of user content, so very few claims, if any, ever. There is an assumption that the ability to claim defamation will lead to tons of people making false claims. Yet this ability already exists offline and has existed forever and no flood of false claims has ever occurred. Any famous person who is discussed in a book could call up bookstores any time they want and say they have been defamed in a book and demand it be removed from shelves. Yet, they do not.

This is not to say that false claims will not happen – or even claims designed to intimidate, suppress speech, or generally abuse the process. Those will happen, but they will be rare. They will not be something that generally concerns a small site. Larger sites like Twitter will have to deal with it more, but they already deal with this. People already report posts to Twitter today in insane numbers because Twitter’s broad rules encourage people to report just about any post they disagree with. Somehow Twitter deals with this today. I guarantee the number of reports Twitter has to deal with would substantially decrease in the absence of Section 230 because there would no longer be much that is reportable. Sure, things like violent content would still be reportable – and probably acted on much faster because they are not lost in a sea of reports about people who think transgender people shouldn’t use a certain bathroom.

One foreseeable outcome of a simple notice and take down system is for crooks to offer a service to flood a site with take downs, just like DDoS is available as an illegal service.

And a site could easily ignore these takedowns. This would never happen because it doesn’t make sense. Small sites would know it’s obviously fake, and large sites already accept reports today and would already be experiencing this if it were a real possibility. Not to mention this would not really be possible because sites do not have standard mechanisms for reporting. On one site it may be email. On another, a form. On one site you may have to log in. On another, you don’t. DDoS attacks work precisely because they attack a standard web infrastructure that works the same across all sites – and is simple.

I do appreciate that you are actually asking relevant questions, as opposed to everyone else who is just in denial. But these questions appear to be driven by a need to justify 230’s existence. An unending series of “what ifs” that you have been led to believe are insurmountable problems but are actually easily solved. You can imagine all sorts of bad scenarios, the vast majority of which will never happen.

Anonymous Coward says:

Re: Re: Re:30

There are many potential mechanisms, such as punishing people who submit false claims, etc.

That is a disingenuous argument, as only the poster is in a position to challenge a take down, and few have the time and money to do so, especially if that means fighting a case in a foreign court. That is one of the problems with bad DMCA claims: the impacted party is not able or willing to challenge them. The dancing baby case shows how hard it is for an individual, as indicated by the number of articles Techdirt wrote on the topic.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:31

That is a disingenuous argument, as only the poster is in a position to challenge a take down, and few have the time and money to do so, especially if that means fighting a case in a foreign court.

Who says it would go to court? There is a simple process that could be followed, informal on small sites and formal on larger ones. Congress could even codify this in a safe harbor law like what I mentioned several times. First, a site could require some evidence in favor of the claim. Then, they could provide an opportunity for rebuttal. The safe harbor could even specify that in cases where evidence is not clear, sites should refrain from removing the posts and receive a safe harbor as long as they follow a reasonable process. Cases would rarely be complicated. 99% of websites would never encounter a case in their history.

The DMCA is problematic because it gives all power to the copyright holder. Things don’t have to work that way – they just do because the copyright holders wrote the law. Kind of the same way that Section 230 provided total immunity to sites because the sites wrote the law.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:32

Keep in mind also that Congress never intended 230 to get rid of distributor liability. This was invented by MAGA judges and will never survive Supreme Court review. So even if you think 230 is Constitutional, you have to admit that Zeran will not survive SCOTUS review. It was a MAGA judge pro-business wet dream and MAGA judges have switched sides because they don’t like their people being censored. Wilkinson & Boyle from the Zeran opinion are similar judges to Thomas, and Thomas HATES Section 230. You think he’s going to preserve Zeran?

When Zeran inevitably disappears, sites will have to figure out how to deal with claims. If 230 disappears altogether, that situation will be the same. So the real issue is that Zeran prevents this and Zeran is not good law. So 230 proponents better start thinking about what to do when Zeran disappears. Because it will.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Re: Re:33

230 proponents better start thinking about what to do when Zeran disappears.

We have. The overwhelming majority of sites on the Internet that accept user-generated content (UGC) will have three options in the wake of 230 being dismantled:

  1. Refuse to moderate anything not required by law (e.g., CSAM) to avoid legal liability since knowledge is required for that liability,
  2. Stop accepting UGC to avoid legal liability for speech the site didn’t generate or publish, or
  3. Shut down altogether.

And I’ll note that Option 2 would effectively count as Option 3 for plenty of sites that rely on UGC, be they a filehosting site with no social functions (e.g., catbox.moe), a filehosting site with limited social functions (e.g., Imgur), or a full-bore social media service. No site that doesn’t already have millions of dollars to fight back against legal threats will risk even the possibility of legal liability for third-party speech. Those sites that do have such funds on hand would still heavily consider Options 1 and 2, and a good number of them would likely go with 2 if they can find the right content partners. (YouTube, for example, would absolutely go “broadcast-only” in the wake of a repeal of 230.)

So if 230 is repealed, you can say goodbye to Imgur, 4chan, Neocities, An Archive Of Our Own, Tumblr (most likely), Bandcamp, every U.S.-based Mastodon instance, other small-time social media services (lookin’ at you, Truth Social), every comments section on every blog (including this one right here), every old-school forum, every search engine that isn’t Google, and…well, basically, the interactive side of the World Wide Web as we know it. That is the future of the Internet without Section 230: pre-approved content owned by huge corporations being fed to you through what was once a two-way communications network that became a one-way broadcast medium.

That is the future you want: An Internet that isn’t.

Own your dream, you son of a bitch. Then pray it never comes to pass.

This comment has been flagged by the community.

Anonymous Coward says:

Re: Re: Re:33

This was invented by MAGA judges and will never survive Supreme Court review.

I mean, beyond being wrong about just about everything else, I find it most perplexing that you keep randomly insisting that any judge that disagrees with you is a “MAGA judge” even though dozens of judges made rulings on 230 that say you’re completely wrong way before “MAGA” was even a thing.

What you’re really saying is that you think that any judge appointed by a Republican (even those pre-Trump) is illegitimate.

Which makes you sound like a fucking crazy person who needs serious help.

But, then again, the rest of your posts on this thread kinda support that theory.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:34

I find it most perplexing that you keep randomly insisting that any judge that disagrees with you is a “MAGA judge” even though dozens of judges made rulings on 230 that say you’re completely wrong way before “MAGA” was even a thing.

These are not “random” judges. They are hardcore conservative judges who have exactly the jurisprudence that Trump likes and nominated repeatedly to courts. Wilkinson wrote an op-ed praising Trump when he became President and talked about how Trump had the opportunity to heal the wounds of the 1960s. Boyle is so conservative he’s probably to the Right of Clarence Thomas.

What you’re really saying is that you think that any judge appointed by a Republican (even those pre-Trump) is illegitimate.

I had a feeling that defending Section 230 is such a strong emotional imperative that someone would feel the need to defend judges with a record of racism and hostility toward women. Fantastic job.

I brought up the ideology of these judges because it explains why they made such a disastrous decision in Zeran. I have shown that Zeran was totally wrong on the law. That is covered in my paper. Or, if you prefer, read the dissent in Doe v. America Online. There is just no counter-argument. Zeran was one of the worst decisions I have ever seen. It was not just wrong on the law, but dishonest. And the reason it was dishonest was the judges who decided it are MAGA type ideologues whose goal was to write a pro-business, anti-regulation opinion. It is exactly the kind of judge that everyone should be against and exactly the kind of opinion that should make people angry. But because the opinion expanded your favorite law, you love it. And you are willing to praise the MAGA judges who wrote it, even though it’s an example of everything wrong with the legal system. A subversion of democracy.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re: Re:32

It does not matter what measure you put in place to reverse bad decisions, because a third party is removing material, and it is then up to the poster to challenge the removal. Notice and take down gives all the power to those who want content removed, which includes politicians and companies that want bad reviews and stories removed. Satire like Devin Nunes’ cow would not be allowed to exist, nor would criticism of the cops. Would you trust Musk not to abuse such a system to protect the Tesla brand from any criticism?

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:33

Notice and take down gives all the power to those who want content removed, which includes politicians and companies that want bad reviews and stories removed.

It does not. In fact, you can give as much power to the accused as you want. The DMCA gives all the power to the accuser because the accusers wrote the law. That’s not how things have to be. This is a political choice.

Would you trust Musk not to abuse such a system to protect the Tesla brand from any criticism?

Musk already has this power and then some. 230 gives him the power to do anything he wants, including secretly censoring on behalf of China, which has substantial power over him through Tesla’s reliance on China. If China called him in tomorrow and demanded secret censorship, it could be done in such a way that it would be virtually undetectable, and it would also be legal in the US because 230 gives sites unlimited discretion. In fact, TikTok has been doing exactly this for years: censoring on behalf of China in ways that only came to light when employees came forward and discussed it. It is likely that their true censorship is more extensive than anything that has been revealed. There are no repercussions from this thanks to Section 230.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:35

In terms of moderation? Yeah, pretty much. What are you going to do about it besides whine like a toddler that he can’t be forced to host your speech⁠—or be held legally liable for your speech?

You said we need 230 to prevent Musk from getting this power and I pointed out 230 is what gives him that power and your response is to say I am “whining” about it. This is called The Art of Always Being Right.

Anonymous Coward says:

Re: Re: Re:34

It does not. In fact, you can give as much power to the accused as you want.

How, when the notice is given to a third party to act on, with the risk of being sued if they do not act on it? If the matter can be settled by a simple counter notice, which would be power to the accused, then there is no point in a notice and take down system. If getting the post restored requires the person issuing the take down to agree, then they have all the power.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:35

How, when the notice is given to a third party to act on, with the risk of being sued if they do not act on it? If the matter can be settled by a simple counter notice, which would be power to the accused, then there is no point in a notice and take down system. If getting the post restored requires the person issuing the take down to agree, then they have all the power.

There are systems other than “accused has all the power” and “accuser has all the power”. I specified the site would adjudicate disputes in a simple system. The site would ultimately decide, with emphasis on leaving speech up unless the evidence was clear. Congress could specify all of this, along with a safe harbor rule that says sites will not be held responsible if their decision is wrong – so long as they follow a reasonable process.

This takes care of edge cases, which can be difficult, while encouraging easy cases to be processed. And it’s the easy cases that everyone cares about. If someone thinks they are defamed but it’s not clear, let them take legal action and get a court to decide, with safe harbor for the site (to encourage speech). But if someone is defamed and it is clear, sites should be made to act swiftly to remove it. The rules can easily be designed to create a system that is fair and does not suppress speech.

Anonymous Coward says:

Re: Re: Re:36

I specified the site would adjudicate disputes in a simple system.

Describe such a system, one which does not involve the site carrying out research at its own expense, and which protects the site if it makes a mistake. It is simple to say such a system would exist; it is much more difficult to design a simple system that is not just take down on notice.

bhull242 (profile) says:

Re: Re: Re:26

That alone is not enough. That alone, combined with the publisher exercising editorial control, is enough.

If the ability to control the content wouldn’t, on its own, make them legally responsible for the content (which it doesn’t, despite your claims to the contrary), then adding, “the actual culprits are beyond the reach of the law,” will not be any more convincing. Which is the point being made here. Your complaining about not being able to enforce the laws against the real culprits has absolutely no relevance to this discussion.

These problems have been worked out decades – even centuries – ago.

You haven’t even demonstrated a problem to be worked out that is actually solved by your proposed solution. That the actual culprits are sometimes not capable of being held responsible under the law even though they did commit the crime is not solved by holding someone else responsible for use of their tools. That’s not how justice or equity works, nor should it be how the law works.

And if you’re referring to the actual problem with distributor liability—namely the moderator’s dilemma—then the only thing that actually solves that problem is §230, even if it is an imperfect solution. At least if you don’t want to make it so that only a select few are able to post or so that there is no moderation at all. Previous regimes only “worked” because of the limited scale any given corporation operated on compared to the internet, with relatively few voices being broadcast.

If you claim the right to control the speech, you become morally culpable for that speech.

You’re talking about morality. This is about legality. While there is some correlation, it’s not perfect. Lots of immoral things are and should be legal, and lots of otherwise moral things are and should be illegal.

If you want to put moral responsibility on the platform, fine, but that isn’t enough to put legal responsibility on them, too. That said, you are merely asserting moral responsibility. In my opinion, that isn’t necessarily the case.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:27

Your complaining about not being able to enforce the laws against the real culprits has absolutely no relevance to this discussion.

It is relevant. The law worked out ages ago that if the printer exercises editorial control – controls the speech – they have moral and legal culpability. This solves the problem of unreachable parties. The publisher is almost always reachable.

That the actual culprits are sometimes not capable of being held responsible under the law even though they did commit the crime is not solved by holding someone else responsible for use of their tools. That’s not how justice or equity works, nor should it be how the law works.

This is how the law of intermediary liabilities has always worked. Because asserting the right to control the speech gives you moral culpability. Based on this, legal responsibility is assigned. Has always been the case.

You’re talking about morality. This is about legality.

No. I am giving the moral justification for the legal rule that has always existed.

If you want to put moral responsibility on the platform, fine, but that isn’t enough to put legal responsibility on them, too.

This has always been the legal rule, so you disagree with the development of the common law across every state in the country.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:20

Copyright doesn’t have anything to do with 230 because it’s exempted. Copyright holders couldn’t care less about 230. People who care about 230 tend to be people who get censored. For example, I know many leaders in evidence-based medicine. Most were censored by different platforms during the pandemic for questioning “official” claims. This is what EBM is supposed to be about. They worry about the power of government and pharma to suppress criticism going forward where those two entities essentially control the behavior of sites through their power & money. It was the concerns of EBM leaders that originally motivated me to research Section 230.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:22

In other words, conspiracy theorists, quack doctors and their supporters, bigots, etc., who all think they know better than the actual experts in any field.

This is what many people in authoritarian countries think, too. Everyone who is censored must be a conspiracy theorist, quack, bigot, etc. Otherwise they wouldn’t be censored. You will find this sentiment to be widespread in China. And in Venezuela. And in Cuba. It’s damn near a universal sentiment in North Korea. All the censored deserve it. It’s amazing how many people who probably claim to be liberal agree with the authority’s characterization of those they censor.

I could provide many examples that disprove what you say. But you will reject all of them. Because in your mind being censored by definition means you are a quack. This is the authoritarian mindset at work.

One example: John Ioannidis was censored multiple times, including by YouTube. He is the most cited medical researcher in the world and a legend in evidence-based medicine.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re: Re:23

Everyone who is censored must be a conspiracy theorist, quack, bigot, etc. Otherwise they wouldn’t be censored

Ah, yes, the typical Techdirt troll style of argumentation. “You’re only censoring me because I’m right!”

I could provide many examples that disprove what you say. But you will reject all of them. Because in your mind being censored by definition means you are a quack

People read your tripe, then click on the flag button. You keep citing your paper as the authoritative source. People tend not to view your own personal source as an objective point of discussion, especially when you keep shilling it.

John Ioannidis

Mr. Anti-Lockdowns? Can’t say it’s a surprise that you’d be behind the champion of all the Republicans refusing to wear masks and demanding haircuts. For someone who claims to not be a Republican, you sure gobble up each and every one of their talking points while somehow also whiffing on what they’re thinking.

bhull242 (profile) says:

Re: Re: Re:5

The answers he wants are either “yes” or “no”. The only options he’s closed off—if any—would be non-yes-or-no responses or evasive replies.

So no, I’m not ignoring what he said; it just doesn’t contradict what I said. In the past, some have said “yes”, while others have said “no”. They have also given different reasons for their answers. Stephen isn’t trying to force a person who believes the answer is “no” to answer “yes” or vice versa.

He tailored the question to ensure that the answer—whatever it might be—gives him the information about their opinion that he wants, regardless of whether he agrees with their opinion or not. There is nothing strange or wrong with that; you tailor your questions so that the answer you get gives you the information on the subject you want to learn more about.

That he would expect a specific result and tailors it accordingly is irrelevant.

Anonymous Coward says:

Re:

Personally, no.

But governments everywhere that aren’t the US? Their answer is always YES. Why?

Because guess who gets to compel speech the government approves of? The government itself. Oh, and this means the government also gets to force privately owned WEBSITES to delete speech the government disapproves of.

Anonymous Coward says:

Re: Re:

Government-compelled speech, and government-compelled removal of speech, are both tools that should be employed with extreme caution. But they are both tools that can be, and are, used to effect good things. That’s why they’re legal even in places that explicitly say they shouldn’t be, like the U.S.

This comment has been flagged by the community.

JSpitzen (profile) says:

"What If" Scenario "Plan B"

The following question may have been asked and answered before but I could use a refresher. I assume that, for a small to medium operation like Techdirt, the anti-spam system mainly works in the cloud and that, as a practical and technical matter, the servers in the cloud could be based in some optimal (for certain purposes) non-U.S. location. So if Techdirt were forced to abandon its own anti-spam measures, could it not provide a raw data feed to a remote server that deployed the same, or equivalent, technology? Then couldn’t any of us who wished look at that filtered data instead of the raw data? Wouldn’t this avoid any liability on the part of Techdirt?

Someone please tell me the defect in this scheme.
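To make the scheme concrete, here is a minimal sketch of the pass-through arrangement being described, written in Python. Everything in it is hypothetical – the feed URL, the remote classifier endpoint, and the JSON response shape are placeholders for illustration, not anything Techdirt actually runs.

    # Hypothetical sketch: fetch a raw comment feed and ask a remote,
    # separately hosted classifier which entries look like spam.
    # The URLs and the {"spam": true/false} response shape are assumptions.
    import json
    import urllib.request

    RAW_FEED_URL = "https://example.com/comments/raw.json"     # placeholder
    REMOTE_FILTER_URL = "https://filter.example.net/classify"  # placeholder

    def fetch_raw_comments():
        """Download the unfiltered comment feed as a list of dicts."""
        with urllib.request.urlopen(RAW_FEED_URL) as resp:
            return json.loads(resp.read().decode("utf-8"))

    def looks_like_spam(comment_text):
        """Ask the remote filter about one comment; True means spam."""
        payload = json.dumps({"text": comment_text}).encode("utf-8")
        req = urllib.request.Request(
            REMOTE_FILTER_URL,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read().decode("utf-8")).get("spam", False)

    def filtered_view():
        """Return only the comments the remote service did not flag."""
        return [c for c in fetch_raw_comments()
                if not looks_like_spam(c.get("text", ""))]

    if __name__ == "__main__":
        for comment in filtered_view():
            print(comment.get("author", "anon"), ":", comment.get("text", ""))

Readers would then point their browsers (or a feed reader) at the filtered view rather than the raw feed.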

JSpitzen (profile) says:

Re: Re: Please elaborate

I am speculating about a scenario where sites that moderate (via anti-spam technology and/or a hybrid of technology and manual methods) have liability but sites that allow comments without moderation are not subject to that liability. As everyone agrees, in that scenario, the raw data feed of a site that does not moderate becomes unreadable because it is buried in spam. Why cannot a 3rd party filtering service, perhaps from outside the U.S., solve or mitigate that problem?

Stephen T. Stone (profile) says:

Re: Re: Re:

Why cannot a 3rd party filtering service, perhaps from outside the U.S., solve or mitigate that problem?

I would assume the site’s primary servers and owners being in the U.S. would still subject them to the laws of the U.S.

But again, that’s getting into a hypothetical that assumes Techdirt wouldn’t close comments if they were barred from filtering for spam. That remains the primary defect in your scheme, and you can’t avoid it by going “okay but what if [x]”.

This comment has been deemed insightful by the community.
TFG says:

Re: Re: Re:

This does nothing to protect them from the true cost of losing Section 230.

The whole benefit of Section 230, the reason it works so well, is that it quickly and cheaply halts litigation in its tracks.

It turns ruinously expensive lawsuits into simple open-and-shut legal motions that cost less than one percent of what a normal lawsuit would.

The move you suggest doesn’t help out Techdirt in avoiding the ruinous lawsuit. They’d still have to jump through the hoops for the court to determine whether this setup puts them outside the suit’s jurisdiction, etc. – discovery would probably be required, and that is capable of bankrupting companies.

So all it takes is a deep-pocketed person who wants to kill the small business and even if the lawsuit turns out meritless it can still end them.

PaulT (profile) says:

Re:

The “cloud” probably wouldn’t protect anyone if the target of the block was in the same country/state as TD, but that’s where 1A/230 come in.

The “cloud” means you run software on other people’s hardware, not that you’re immune from its effects. The most important thing is that a venue that doesn’t want assholes causing trouble is allowed to block them, and I don’t think I’ve ever seen a venue as lenient, online or offline, as this one.

cpt kangarooski says:

Well, spam and language, I’d say. Family friendly services like Prodigy also didn’t want people swearing online.

(Going way way back, once in a while in the 70s and 80s someone would try to buy or sell something online, like a used bike or something, and would often be chided for using a government owned network for private commercial purposes, but it wasn’t really spam or moderation as we think of them)

Ninja (profile) says:

That’s a huge volume. The current captcha seems quite harmless nowadays (I barely ever see any window asking for input). Wouldn’t something like this ease the burden on TD servers?

I wonder where they originate from as well. If I were in charge, I’d probably blanket-ban certain IP ranges belonging to the worst offenders from commenting without an account. I mean, you can still not reveal any identity even if you sign up. I certainly would understand if you took such actions and my country was hit (I wouldn’t be surprised if Brazil is one of the main sources of spam content).
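For what it’s worth, a blanket ban on ranges is mechanically simple. Below is a rough sketch using Python’s standard ipaddress module; the CIDR blocks are made-up documentation ranges, not real offender data.

    # Rough sketch: reject anonymous comments from blocked IP ranges.
    # The CIDR blocks below are placeholder documentation ranges, not real data.
    import ipaddress

    BLOCKED_RANGES = [
        ipaddress.ip_network("203.0.113.0/24"),   # placeholder range
        ipaddress.ip_network("198.51.100.0/24"),  # placeholder range
    ]

    def allow_anonymous_comment(ip_string):
        """Return False if the address falls inside any blocked range."""
        addr = ipaddress.ip_address(ip_string)
        return not any(addr in net for net in BLOCKED_RANGES)

    print(allow_anonymous_comment("203.0.113.55"))  # False: inside a blocked range
    print(allow_anonymous_comment("192.0.2.10"))    # True: not in any blocked range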

This comment has been flagged by the community.

Koby (profile) says:

Backup Plan

Hopefully, Gonzalez will lose the case. But even if they win, a spam filter is different from a recommendation algorithm. Gonzalez is arguing over the promotion of material regarding Section 230(c)(1), while the restrictions imposed by spam filters would fall under Section 230(c)(2)(A). Even if recommendation algorithms become unusable, spam filters fall under a different legal category.

TFG says:

Re:

For now.

And it would depend on the ruling, if the Supreme Court ruled to strike down Section 230 in whole or in part.

And there’s the risk of a ruinous lawsuit to figure out what the new status quo is in the aftermath of a ruling that kills 230. Regardless of whether spam filters actually fall under a different section of it, there will be blood in the water as the legal sharks fight it out.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re:

Gonzalez is arguing over the promotion of material regarding Section 230(c)(1), while the restrictions imposed by spam filters would fall under Section 230(c)(2)(A).

Do you really think that anybody should listen to what you have to say about section 230 when you posted that Facebook could use §230 to dismiss a lawsuit against Facebook’s own speech?

Just to remind you of what you said Koby:

Instead, they will seek a dismissal based on … or perhaps 230. (You will have to click to reveal)

spam filter is different from a recommendation algorithm.

How so? A spam filter essentially recommends that certain posts/emails/etc. should be considered junk and recommends that others are not junk.
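To illustrate that point with a toy example: the same scoring step can be used both to filter (hide likely junk) and to rank what remains. The keyword weights below are invented for illustration and bear no relation to how any real filter works.

    # Toy sketch: one spam score, used both to hide likely junk and to
    # order what remains. Keyword weights are invented for illustration.
    SPAM_WORDS = {"viagra": 5, "crypto": 3, "winner": 2}

    def spam_score(text):
        """Crude score: sum the weights of spammy keywords found in the text."""
        words = text.lower().split()
        return sum(weight for word, weight in SPAM_WORDS.items() if word in words)

    def moderate(comments, threshold=3):
        """Hide comments scoring at or above the threshold, rank the rest."""
        kept = [c for c in comments if spam_score(c) < threshold]
        return sorted(kept, key=spam_score)  # least spammy shown first

    print(moderate(["Great article", "crypto winner click here", "I disagree"]))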

Cat_Daddy (profile) says:

Re: Keep dreaming dude

If we’re judging by how the arguments in February played out, I’m sure you would be very disappointed. It’s probably unlikely that Section 230 will die or be reformed from the Gonzalez case (emphasis on probably). Could the Supreme Court still screw up? I mean, it’s the Roberts Court, so I am not discounting that possibility. But judging by how they’re cautiously treading this particular case, I’m actually cautiously optimistic that Section 230 can survive, even unscathed. And I don’t say that often about the Roberts Court.

This comment has been flagged by the community.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re: Re:

demand that freedom of speech should be removed from someone who doesn’t conform to the accepted regime is noted.

Ain’t it amazing how often these “freeze peach” warriors really only mean free speech for those with whom they agree, everybody else should be canceled and silenced.

PaulT (profile) says:

Re: Re: Re:2

Yeah, that’s it, and it seems to be just another case of “spoiled kids” being confused because they face opposition and being in a minority for the first time. It’s a big generalisation, but the people complaining don’t seem to be the goth/RPG/horror/sci-fi/nerd/whatever crowd who spent their high school years being bullied; it’s the jock crowd who used to attack those who were different and are now experiencing opposition for the first time.

The “freeze peach” guys are just confused because they entered a space larger than when they grew up, and don’t like that people they never gave a chance to have an equal voice.

That One Guy (profile) says:

Well, on the ‘plus’ side, if the courts and/or legislators really are stupid enough to strip protection from algorithms, then it sure sounds like the impact will be immediate and significant enough that the reason it was such a monumentally stupid idea will be glaringly obvious, to the point that the push to walk it back will likely be near instantaneous.

Kinda hard to ignore ‘every platform operating in the US just became overrun by spam overnight’ after all.

Nemo_bis (profile) says:

Re: Distributed harm

Unfortunately, I believe your expectation is way too optimistic.

If spam filters become de facto prohibited (by means of making them a huge legal cost), people will surrender to spam in less visible ways, such as shutting down comment sections, web forums, chats, etc. (even more than has already happened in the past decade). The few remaining small email providers will vanish. The online communication oligopolies will become even more concentrated and will find other ways to protect their business, such as an increase in pay-to-play.

All this is very harmful, but it’s harmful in a way similar to restrictive copyright laws: diffuse harms to nearly everyone, more concentrated here and there, while an extremely small set of people profits handsomely. Those who propose to cause this kind of catastrophe may enjoy it.

PaulT (profile) says:

Re:

Which indicates the filter is working correctly. You might disagree with the fact that so much of your nonsense gets flagged as trolling/spam, but since it is, then the spam filter is where it belongs.

The actual question is how much gets allowed when the spam filter is moderated. If you end up with most comments coming through, then the system is working as intended; you might just have to work on the reason why you’re reported so often. We’ve all been caught by it, but if 90% of your comments land in the spam filter and 90% get allowed when it’s checked, the problem might not be the filter. Hey, at least you’re not on one of those right-wing sites, where you’ll often be blocked and have your posting history removed if you dare go against the status quo.

Rekrul says:

Re: Re:

Which indicates the filter is working correctly. You might disagree with the fact that so much of your nonsense gets flagged as trolling/spam, but since it is, then the spam filter is where it belongs.

Almost every one of my comments that gets filtered, shows up on the site a day later.

It could be all of them, but since the custom search no longer allows you to actually search comments, I often don’t have the ambition to manually go back through the previous stories looking for ones I’ve commented on, then checking the comments to see if my comment showed up.

THIS comment isn’t spam, but there’s a good chance that it won’t immediately show up when I click the post button.

PaulT (profile) says:

Re: Re: Re:

“Almost every one of my comments that gets filtered, shows up on the site a day later.”

Indeed, there’s nobody employed full-time to go through the spam box, and any decent site of any size gets thousands of actual spam comments. I used to have a blog that got maybe 200 daily hits but some days saw 60 spam comments in the filters. I can only imagine what these guys have to trawl through.

But, that doesn’t change the issue – if you regularly fall afoul of the filter, that probably means you’re doing something to trigger it…

“I often don’t have the ambition to manually go back through the previous stories looking for ones I’ve commented on, then checking the comments to see if my comment showed up.”

The fact that you don’t wish to check the box requesting an email when the article is updated or register an account to keep track of history is not the site’s fault. In fact, they’re already ahead of many because they allow you to post without an account in the first place.

This comment has been flagged by the community.

Ryan says:

This claim is not correct

Because you do not need 230 to protect spam filtering. I explain this and much, much more in my paper on Section 230, which gives a detailed history of intermediary liabilities law, Section 230, and the serious Constitutional issues raised by Section 230.

https://twitter.com/therealrthorat/status/1643270588864032768

Also see this article I published outlining one of the main Constitutional issues:

https://davidhealy.org/twenty-six-words-and-the-internet/

This comment has been flagged by the community.

Ryan says:

Re: Re: Also not true

Claims about long lawsuits that cost millions of dollars are common from 230 proponents because it encourages readers to not think about the merits of the law, but rather be fearful about the process. Our legal system can handle these cases just fine – but 230 proponents don’t want you to know that. It’s a key part of their strategy.

Anonymous Coward says:

Re: Re: Re:

Nobody is worried about the legal system being able to handle the cases. How well equipped the legal system is to handle lawsuits tells you nothing about what it means for the average site owner. Section 230 could disappear tomorrow and it won’t make the process any easier, better, etc., for the average person.

I will note that your entire Twitter thread of pissing and moaning has been nothing more than whining about Section 230 as a bogeyman preventing the government from controlling the Internet at all – which means, if you want your dreck to have any sort of meaningful relevance, you’ll have to update your talking points for the unthinking Trumpians. They’re all convinced that Section 230 is what allows for more government regulation, not less.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:3 Those are not "facts"

They are opinions. The opinions of someone who works with and for the tech industry. Of course, that industry loves the idea of a total liability shield. I would like a liability shield for myself, too. When someone writes a paper claiming that an obscure law written by lobbyists and passed with almost zero debate in Congress is “better than the 1st Amendment”, that should be a sign that you are dealing with a biased party – a fanatic.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:4 An Example

I will highlight a passage from the linked post to demonstrate the lack of knowledge on this topic displayed by people like Mike Masnick. Here is a quote:

“At times, in the past, I’ve argued that in a reasonable world we shouldn’t even need a CDA 230, because the proper application of liability should obviously be with the person posting the law-breaking content, rather than the platform hosting it.”

What is wrong with this statement? This has never been the law. And no one who understands intermediary liabilities law will ever agree with this statement. Intermediary liability laws exist for good reasons: they exist for the same reason that laws requiring grocery stores to clean up spills exist. When you undertake an activity that can harm the public, you have certain obligations to mitigate those harms. A grocery store has to clean up spills immediately. A speech distributor has to remove harmful speech immediately. A publisher is not even allowed to publish harmful speech.

If Masnick thinks intermediary liability for speech is a bad area of law, he should argue for its complete abolition – offline and online. He will get crushed in this attempt, but he can at least try.

Anonymous Coward says:

Re: Re: Re:4

I take it you’re new to this, eh?

Mike is so biased Google cut their funding and tried to sabotage Techdirt’s GoogleAds to the point that the site had to migrate to a WordPress instance.

Mike is so biased he takes funding from anyone as long as they let him do his thing. (It is NOT a slur against Mike.)

Meanwhile, you…

Think that the American legal system can handle the surge in cases that will happen when 230 is repealed.

I’m sorry, I can’t take you seriously.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:5 My statement about bias

Was about Eric Goldman, who wrote the paper Mike was discussing. Goldman’s whole career is side by side with Google. Aside from that, Masnick has an obvious industry bias but it’s beside the point because he is just wrong on this issue. He doesn’t understand the basics of intermediary liabilities law, as indicated by the passage I quoted in another post.

And again, you make an assumption that there will be a “surge” of cases when 230 is repealed. You provide no evidence for this assertion. It’s based on the same wishful thinking that sold us the Iraq War. No doubt there would be more cases – after all, 230 has shut down lawsuits even where behavior is egregious – but there is no evidence there will be a “surge” of cases that will be problematic. In the unlikely event that did occur, Congress has many tools it can use to handle that. It can even preemptively use those tools. 230 is not the only option. It’s just the most extreme, worst option. But it’s preferred by industry because it’s quite a huge privilege. If I invited a friend to my house and he got injured by some negligent maintenance of my property, I would absolutely LOVE to be immune to that liability. That sounds fantastic to me. So, of course, the tech industry loves their immunity to intermediary liabilities. Who wouldn’t? But we don’t normally let the recipients of huge legal privileges tell us whether those privileges are a good idea for the public as a whole. They are just a tad bit biased in that regard.

Anonymous Coward says:

Re: Re: Re:6

you make an assumption that there will be a “surge” of cases when 230 is repealed

What about that statement is an assumption? There’s been no shortage of anti-Techdirt dissenters gleefully claiming that if Section 230 were gone tomorrow, they’d waste no time filing lawsuits against websites and website owners for reposting or sharing publicly available information that they find inconvenient. John Smith, F230, and the huge number of people who claim that Section 230 has directly harmed them by making it difficult to sue.

But we don’t normally let the recipients of huge legal privileges tell us whether those privileges are a good idea for the public as a whole

And yet, the RIAA got FOSTA approved.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:7

“And yet, the RIAA got FOSTA approved.”

I think you mean DMCA? FOSTA is about sex trafficking exceptions to 230. This is precisely my point: DMCA is a bad law that everyone hates precisely because it caters to industry at the expense of the public. Section 230 is the same, only the public has been fooled to think otherwise. That industry waltzed in and wrote a law to protect consumers. LOL.

Anonymous Coward says:

Re: Re: Re:8

Section 230 regularly gets put up as a bogeyman by pro-copyright interests, because anything that’s interpreted as a protection for intermediaries and platforms is seen as a threat by them. As long as it makes it harder for them to go after Google or Cloudflare, they hate it – which is why for the longest time Richard Bennett insisted that net neutrality and consistent high-bandwidth Internet connections were only used by people who wanted to pirate movies.

FOSTA was essentially the RIAA’s revenge for failing to ram SOPA down everyone’s throats.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:2 Address the merits of my arguments

I urge 230 supporters to address the merits of the arguments I make in my paper and elsewhere. None are willing to do so because they know they will lose any debate. The strategy is instead to ignore and not give the arguments any oxygen. Lean on their expertise. I have invited and continue to invite anyone with expertise to discuss this topic. What are they so afraid of?

Why does Masnick hide behind a Twitter block? When I raised this issue with him the first time he insulted me, cursed me, then blocked me. Nowhere did he address the merits. I doubt he understands the legal issues involved. If he thinks he does, he can debate on this any time he wants. I welcome debate on the topic – something that has been sorely lacking due to the tech industry’s stranglehold over the issue.

sumgai (profile) says:

Re: Re: Re:3 Stone, revisited

So let me ask you One Question, rephrased specifically for you:

How many times has Masnick appeared before Congress in the guise of an expert on the topic of 3rd party liability, and how many times have you appeared in those halls?

None of us for a minute believe that such appearances are an automatic “I’m right, you’re wrong” card to be played as desired, but then again, none of us believe that a loose lugnut should be allowed to espouse unknowledgeable legal concepts without viable retort. I refer to your use of the word “never”, as in ‘This has never been the law.’, in reference to intermediary liability. It is now obvious that you haven’t been around long enough to have “lived” in history; you’re only cognizant of your current generation having been forced to take some history classes in school, and you avoided those as much as possible.

At my age, I can tell you exactly when tortious 3rd party liability came into being, at least in my state, because I was one of the early recipients of said legal concept. But you, you have your mind made up, and from where the rest of us are standing, it sure as hell looks like you believe heart-and-soul that primary liability is nonsense, and that lawyers should have a perpetual payday by suing everyone in sight, and let the courts sort it out.

Final thought: The moment you opened with “Was[sic] about Eric Goldman…”, you tossed all credibility you may have had right into the dumpster. What-about-ism is the most sure sign that one’s tenets are rooted in emotion, and not in reality. As noted in analyses above, your so-called ‘papers’ would not earn a passing grade in any credible law school. Well, the Google Law School might give you the time of day, but insofar as I’ve seen, none of those graduates have won a court case, let alone actually tried one.

Credit where it’s due: At least you one-upped most of our resident trolls by providing links, we’ll give you that.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:2 Have you even read my Twitter thread?

You say the thread is whining that 230 prevents the government from “controlling the Internet”. In fact, I say the opposite. I don’t want the government controlling the Internet and 230 has allowed it to do so. You imply that I am a “Trumpian” or speaking to those people, which could not be further from the truth. My criticism is a liberal criticism of 230. I am probably far more liberal than you or others on this forum.

I go into great detail in my paper about how 230 enables government abuses. Others have done so in the past. The censorship powers that 230 provides will always be a target for governments and always abused. People right now are complaining about Twitter cooperating with the government of India to censor topics globally. They could not do that without 230. It enables and encourages that kind of censorship and government cooperation. What is Twitter going to do, withdraw from India? 230 puts them in a bind. Do what India says or lose the market. In the absence of 230, India’s ability to dictate censorship would not reach US shores. Thanks to 230, it does. This is just one small way 230 allows governments to meddle in the Internet – and control speech.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:2 You seem so sure that they cannot

Yet there were no issues prior to 230’s passage. Of course, the Internet was much smaller back then and we didn’t really have social media. So no one can be sure of what would happen in the absence of 230 because the situation has never really existed. Yet 230 supporters speak with such certainty about the things they KNOW would happen. The sky will fall. Cats would marry dogs. Elephants and mice would become best friends. It’s easy to make up a whole list of fearful things that you imagine in your mind would happen to scare people into not thinking critically about what you are advocating. That’s how we got the Iraq War. Like weapons of mass destruction, the claims made by 230 supporters are made up in their head. Industry made up these claims to get the total liability shield they wanted. EFF & CDT helped them to get what they wanted – to freeze the government out of the Internet.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:4 Try arguing honestly

I never said the things you claim. The history is well known. Prodigy & AOL made up claims to get a total liability shield. As the Internet evolved and more sites enjoyed the privileges of 230, those sites of course agreed with Prodigy & AOL. This is all discussed in my paper. Are you allergic to honest discussion?

This comment has been deemed insightful by the community.
Rocky says:

Re: Re: Re:5

From your previous comment:

Industry made up these claims to get the total liability shield they wanted. EFF & CDT helped them to get what they wanted – to freeze the government out of the Internet.

Short memory?

Prodigy & AOL made up claims to get a total liability shield.

They did? There were two separate cases (Stratton Oakmont v. Prodigy Services & Cubby v. CompuServe) with two different claims leveled at them and two different outcomes. The cases created a legal catch-22 for any interactive service. For them to have made any claim up, they would have had to be in cahoots with each other and with those suing them, in an effort to create a situation where section 230 could be created.

Zeran v. AOL, which you refer to, was a later challenge to section 230, which makes it hard for anyone to have “made claims up” in an effort to create section 230.

If you make such basic errors in your argument, what other errors have you made?

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:6

Industry is a generic term. Industry at the time 230 passed was Prodigy, AOL, & CompuServe. Prodigy & AOL helped write 230 & were instrumental in creating misleading arguments to justify it. Of course, industry today is a lot larger than that.

As for this “legal catch-22” – read my paper. There was no legal catch-22. This was dreamed up by Prodigy and pushed hard by EFF & CDT – all of whom wanted to freeze government out of the Internet and were willing to misrepresent a lower court decision to get their way. The two cases had different outcomes because the two companies had vastly different behavior.

230 proponents have misrepresented Stratton for decades, claiming that it held that any moderation would make a site a publisher. This is false, and I quote key lines in the opinion demonstrating this. What Stratton held was that viewpoint moderation makes you a publisher. Of course, Prodigy had a motive to give the opinion an expansive reading, hoping to get a huge bailout, which they did. And EFF & CDT played along because it provided an excuse to get what they wanted: freezing the government out of the Internet. The public got taken for a ride.

Industry won, consumers lost – a story as old as time. The only difference here is that groups that represent consumer interests betrayed consumers. Because their ideology (government is always bad) conflicted with consumer interests in this case. Intermediary liability laws are a rare case where government rules are meant to protect the public from the powerful. EFF & CDT worked with industry to ensure that the public is powerless online. Instead, they empowered industry. This follows their radically libertarian ideology, expressed by EFF’s founder, Barlow, in his manifesto, which drips with mad scientist vibes. Barlow’s manifesto can be summarized as: democracy sucks, the rule of law sucks, the Internet will become a utopia if you just leave us, your benevolent dictators who always have good intentions, in charge. It honestly reads like someone who was high on drugs wrote it – which may actually be the case.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:8

From Page 5 of the opinion, discussed on Page 29 in my paper:

“By actively utilizing technology and manpower to delete notes from its computer bulletin boards on the basis of offensiveness and “bad taste”, for example, PRODIGY is clearly making decisions as to content (see, Miami Herald Publishing Co. v. Tornillo, supra ), and such decisions constitute editorial control. (Id.)”

This is the stated reason for the finding of editorial control. And both “offensiveness” and “bad taste” are subjective judgments that allow Prodigy to delete anything it wants based on its view of that content. That is the essence of viewpoint control. Prodigy asserted the right to delete or edit anything it wanted based on whether it viewed that content as “offensive” or in “bad taste”. This is the role of a publisher. It is editorial control.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:9

The Stratton Oakmont opinion is not perfect. It could be written more clearly. And its reliance on Tornillo is a problem because Tornillo gives a bad, overly broad definition of editorial control (in dicta). But it’s also a lower NY state court opinion that had virtually no precedential value and could have (and would have) been clarified and expanded upon by higher court decisions. Section 230 short-circuited this natural process, opting instead to obliterate this entire area of law online.

But the expansive reading given to Stratton Oakmont by 230 supporters is not correct. It was a weak interpretation favored by Prodigy, AOL, & EFF. Because this weak, expansive interpretation served their needs. It was self-serving. The more accurate interpretation is that since the case only mentioned forms of viewpoint control, the opinion was limited to viewpoint control. This also has the advantage of being the true purpose of editorial control, as I explain in great length in my paper.

Stephen T. Stone (profile) says:

Re: Re: Re:10

since the case only mentioned forms of viewpoint control, the opinion was limited to viewpoint control

“Trans people are literal demons who should be eradicated.” That’s a viewpoint⁠—a heinous and barbaric one, granted, but still a viewpoint.

Yes or no: Do you believe a platform should be considered a publisher, and thus legally liable for all third-party speech on said platform, if it deletes any post expressing that viewpoint?

If “yes”: Why?

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:11

No – but only because the last part is arguably advocating for violence – which is not a viewpoint, but an action. But the first part – yes. You may disagree with it, but unless you want to be a publisher, you have to leave it up. You don’t have to promote it. Newspapers wrestle with this same issue. They don’t print letters to the editor that say “trans people are demons” because they disagree with that viewpoint. But in exchange for this, they accept publisher liability. This has always been the tradeoff in the law. My paper has pages and pages discussing this tradeoff, why it exists, and why it must exist.

This doesn’t mean there is nothing you can do. For example, a site could offer a service to users to screen out that content. The service would be opt-in. If the service is opt-in, then the site is not censoring the content, but rather acting as an agent of the user, who is delegating the decision-making to the site and reserves the right to terminate that delegation at any time.

One of the odd things about Section 230 is that 2 of its 5 stated purposes claim to want to empower users/parents to make content decisions. But the powers 230 gives completely eliminate the power of users/parents to make these decisions. When a site uses 230 to censor, the user is unable to override that decision – the user has lost the power to choose. Section 230’s powers are at odds with its own stated purposes. This is largely because the people who wrote Section 230 didn’t really understand how it would operate in practice or how it intersected with intermediary liabilities law. They didn’t even know much – if anything – about that law anyway.

Stephen T. Stone (profile) says:

Re: Re: Re:12

No – but only because the last part is arguably advocating for violence – which is not a viewpoint, but an action.

That’s cute, that you made that distinction. But here’s the thing: The law doesn’t see it that way. Hate preachers around the country take to their pulpits every Sunday and say the equivalent of “queer people need to be executed by the government”. Nobody arrests them for it. Their speech is legal⁠—as is the viewpoint I described.

unless you want to be a publisher, you have to leave it up

A Mastodon instance that prides itself on being pro-queer⁠—to the point where it has a rule banning anti-queer speech⁠—is a publisher because it bans anti-queer viewpoints and should therefore be held liable for every bit of third-party speech on the instance.

Yes or no: Do you believe the previous statement?

They don’t print letters to the editor that say “trans people are demons” because they disagree with that viewpoint. But in exchange for this, they accept publisher liability.

They accept liability for first-party speech. And if you wrote a paper on 230, you should already know that 230 doesn’t have protections for first-party speech.

a site could offer a service to users to screen out that content

They already do⁠—it’s called “moderation”. And most platforms also have some form of user-side moderation in the form of blocking functions (be they for users, tags, or specific words/phrases).

When a site uses 230 to censor, the user is unable to override that decision – the user has lost the power to choose.

A site doesn’t “use 230 to censor”. 230 doesn’t give them the right to moderate⁠—that honor falls to the First Amendment⁠—but it does give them the legal protection they need to moderate without being held legally liable for the speech they don’t moderate.

As for “the user has lost the power to choose”: Tough shit. If they got banned, they should’ve thought about that before they broke the rules of the service. If they lost the privilege of seeing someone else’s speech, they can go find that someone else on a different platform. No one⁠—and I emphasize, NO ONE⁠—has a right to demand that a platform host any kind of speech, be it theirs or someone else’s. You wanna talk about legal arguments, so go find one that says someone does have that right; if you can’t, that’s your problem.

the people who wrote Section 230 didn’t really understand how it would operate in practice or how it intersected with intermediary liabilities law

They absolutely understood those things. That’s why they wrote 230 the way they did. You’re the one who seems to think the owner of a Mastodon instance should be held responsible for every post on their instance that they didn’t make.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:13

The law doesn’t see it that way.

You are mixing 1st Amendment law with intermediary liability rules. The government has greater restrictions than a company because the government is the ultimate authority. If the government blocks something, it is blocked everywhere. They have tighter restrictions for this and other reasons. The rules around intermediary liability are similar, but not the same. They are based on the same principles. But they are used in an entirely different manner. These rules merely determine whether you are a publisher or distributor. They don’t say whether you can or cannot be restricted from speaking. Different areas of law have different rules.

Yes or no: Do you believe the previous statement?

YES. Why shouldn’t you? If you assert the right to control what can be said, then you are the one speaking. If there are 50 comments that are pro-trans and 50 anti-trans, and you delete all 50 anti-trans, then the remaining speech is YOUR speech by any reasonable measure. You have used the speech of others to form your own message. Here’s the kicker: websites admit this. In the Texas and Florida cases involving the laws passed there to compel sites to carry certain speech, the sites’ legal argument against those laws has been “When we censor, that is protected speech”. You cannot have it both ways. You cannot claim it’s protected speech in one instance and then turn around and say it’s not your speech in the case of intermediary liability laws. 230 proponents argue both sides of issues frequently because they don’t really understand the law, so they talk around in circles.

They accept liability for first-party speech.

No. They accept liability for third party speech. Letters to the editor are written by third parties. Newspapers accept responsibility for them. Books are written by third parties. Publishers accept responsibility for them. This is how the law has always operated.

They already do⁠—it’s called “moderation”.

No. They do not “offer” moderation. They impose it. Users have no choice. If something you want to see is “moderated”, you have no recourse. You cannot see it. This is a key distinction.

A site doesn’t “use 230 to censor”. 230 doesn’t give them the right to moderate⁠…but it does give them the legal protection they need to moderate without being held legally liable for the speech they don’t moderate.

If 230 is what gives them the legal protection to censor, then they in fact “use 230 to censor”.

As for “the user has lost the power to choose”: Tough shit.

The Policy section of 230 explicitly states that 2 goals (of 5) of 230 are to empower users/parents to choose. You are saying “tough shit” to the stated goals of 230. That’s a problem.

If they lost the privilege of seeing someone else’s speech, they can go find that someone else on a different platform.

This is not a consideration in the law. If a printing press distributor were to refuse to publish a book for editorial reasons, the law does not say, “That’s okay, they can just find another printing press.” No, the law says, “That’s okay, but that printing press is now a publisher.” One reason for this is that if a printing press rejects a work, it is very likely that other printing presses will reject it for similar reasons – even all printing presses. This results in suppression of minority views. The public loses because minority/controversial views are snuffed out and unable to be printed.

The same happens online. You say someone can find a censored person on another site. Except that’s not really the case. If someone gets censored on one site, they get censored on others, too. In fact, the TOS of the Google & Apple app stores allow them to dictate that anything Google & Apple find unacceptable is censored in every app in their stores. Amazon uses the same logic to deny the use of their essential web services. Similar for Microsoft. Minority/controversial views have a very difficult time finding a home anywhere on the Internet. And when they do, they are constantly under threat.

China famously censors many things on the Internet. We do not excuse that by saying, “It’s okay because Chinese citizens can just use a VPN to get around it.” This argument is lazy and simplistic.

No one⁠—and I emphasize, NO ONE⁠—has a right to demand that a platform host any kind of speech, be it theirs or someone else’s. You wanna talk about legal arguments, so go find one that says someone does have that right; if you can’t, that’s your problem.

No one is asserting this “right”. Instead, distributors have a choice. That choice determines their liability. They are not forced to do anything. They simply have different responsibilities depending on their choice. And this choice is the heart of intermediary liabilities law. It has been around for ages and ratified by the Supreme Court.

Stephen T. Stone (profile) says:

Re: Re: Re:14

Just gonna pick out one thing from your horribly misinformed Gish Gallop screed and hope this gets through your thicker-than-steel skull:

You cannot claim it’s protected speech in one instance and then turn around and say it’s not your speech in the case of intermediary liability laws.

Except you can.

You cite the First Amendment, but you forget that the First Amendment⁠—by way of centuries of related jurisprudence⁠—protects what is known as “the right of free association”. What that means is someone is free to associate with whomever they wish; what that also means is that people must be free to not associate with someone, even if the association is only implicit. (It’s like freedom of religion: Being free to practice your specific religion means you must be free to not practice someone else’s.)

The funny thing⁠—at least as it regards your dumbassed argument⁠—is that the freedom of association also applies to speech. A grocery store can put up a bulletin board for people to post flyers and such, but that store has no legal, moral, or ethical obligation to host every flyer that someone wants to put up. Whether a given grocery store would willingly host a flyer advocating for White supremacy is a decision only that store’s owner(s) can make. The government can’t force them to host that flyer⁠—or take it down, for that matter. (Again: Freedom requires the freedom to refuse.)

That same principle applies to social media services: A service like Twitter or Truth Social has the freedom to refuse hosting certain kinds of speech. They can also refuse to moderate speech that the other service would. (Example: Twitter might not want to host anti-Black racist rants, but Truth Social might.) Neither one should be punished for those decisions⁠—and that includes having legal liability for the speech they leave up.

But under your twisted, Freeze Peach–leaning bullshit beliefs, any platform must be punished for its refusal to associate with speech (and people) that said platform deems reprehensible. I mean, you outright said that a hypothetical platform that decides to bar anti-queer speech should be held legally liable for all other speech. You also misunderstand liability by saying that moderation decisions are akin to the (hopefully) rigorous editing standards of newspapers and book publishers, as if deleting a post with the N-word in it after it’s been posted is the same thing as seeing that speech beforehand and approving it for publication. It’s not⁠—and either you know it’s not (in an attempt to gin up outrage at your bullshit) or you don’t (which would make you incredibly ignorant of a lot of things), but neither one speaks well of you.

You’re pissing me off, and you’re clearly trying to piss off a whole lot of other people here. That’s a good sign that I should stop replying to you⁠—which, after this comment, is exactly what I’m going to do. But before I go, I have one more thing to address:

The Policy section of 230 explicitly states that 2 goals (of 5) of 230 are to empower users/parents to choose. You are saying “tough shit” to the stated goals of 230. That’s a problem.

Parents and users have a right to choose what services they’ll use/allow in their homes. What they don’t have is a right to make their own rules for those services. That’s what “tough shit” is about: Losing their spot on a given service because they broke the rules sucks for them⁠—but they lost a privilege, not a right, and I’m not about to weep for the ignorant because I’ll be crying all day.

Now please go fuck yourself with an inanimate carbon rod. 🖕

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:15

Except you can.

You are saying it’s okay to make one argument in one context and the opposite argument in another? That’s certainly bold. Will not impress many judges.

You cite the First Amendment, but you forget that the First Amendment⁠—by way of centuries of related jurisprudence⁠—protects what is known as “the right of free association”.

This is where knowing the law helps. Again, the Supreme Court has already looked at these laws and says they do not violate any right of free association. Why? Because these “rights” are not absolute. No rights are. They can be restricted if the government can provide the appropriate level of justification for the given right. And intermediary liability laws do this. In fact, in Smith v. California, the Court said that assigning strict liability to distributors crosses the line and fails their balancing test. But assigning it to publishers does not. In a later case, they confirmed that notice liability for distributors also passes scrutiny.

You cannot reopen these debates online and say intermediary liability violates your “right to free association” because the Court has already said it does not. And few, if any, legal experts ever disagreed with it. You are reopening this debate because you are ignorant of the prior debate. You are a non-expert coming into a well-settled area of law and asserting it’s wrong while totally lacking any of the requisite knowledge to make that assessment. It’s honestly embarrassing. And you feel justified in doing this because 230 “experts” have told you this makes sense. They fed this to you knowing that you are not an expert and would not understand the deception. Now you are in a pickle, making ridiculous arguments that were long ago discarded, because you foolishly trusted that 230 experts were being honest with you. Sorry, but they were not. They continue to not be honest about this.

A grocery store can put up a bulletin board for people to post flyers and such, but that store has no legal, moral, or ethical obligation to host every flyer that someone wants to put up. Whether a given grocery store would willingly host a flyer advocating for White supremacy is a decision only that store’s owner(s) can make. The government can’t force them to host that flyer⁠—or take it down, for that matter.

As I already stated, the government cannot force them to host every flyer. But if they censor flyers they don’t like, they might take on legal liability for all flyers posted. In practice, this has never been an issue because it’s rare that someone posts legally actionable material on a bulletin board. And if they did, the amount of damages is likely too small to be worth pursuing a legal case. There is an interesting question in this particular case of whether the bulletin board is a means of publication or simply a distribution channel. After all, everything posted to the board is already published – someone has already typed it up and printed it out. The bulletin board is merely a means of distribution. This difficult question is not present online, where a website is clearly a means of publication.

Again: Freedom requires the freedom to refuse.

You ever heard of common carriers? Airlines are explicitly prohibited from refusing customers except for very limited criteria, such as safety. If they don’t like you, they still have to fly you. Same is true of the postal service and other common carrier mail delivery services. Laws actually require that they serve you. So much for “freedom to refuse”. Common carriers are a type of intermediary – a type with the most stringent government controls. These controls are legal. They do not violate the 1st Amendment. Just as the looser controls placed on distributors and publishers do not violate the 1st Amendment. Understand the law before you comment on it.

That same principle applies to social media services: A service like Twitter or Truth Social has the freedom to refuse hosting certain kinds of speech. They can also refuse to moderate speech that the other service would. (Example: Twitter might not want to host anti-Black racist rants, but Truth Social might.) Neither one should be punished for those decisions⁠—and that includes having legal liability for the speech they leave up.

This is just based on your own belief and not the law. Again, BookSurge did not have this freedom. They were sued, and the court intensely scrutinized their policies to see if they exercised editorial control. They did not, so they were found not liable. But only because they scrupulously guarded their distributor status by refusing to exercise the type of editorial control you think is essential.

You also misunderstand liability by saying that moderation decisions are akin to the (hopefully) rigorous editing standards of newspapers and book publishers, as if deleting a post with the N-word in it after it’s been posted is the same thing as seeing that speech beforehand and approving it for publication. It’s not⁠—and either you know it’s not (in an attempt to gin up outrage at your bullshit) or you don’t (which would make you incredibly ignorant of a lot of things), but neither one speaks well of you.

It is the same thing! Legally, it’s exactly the same thing. In both cases the speech was censored. The law doesn’t care whether it got posted first and then censored. That’s irrelevant. Much of the censorship in China occurs after the fact. That doesn’t make it not censorship. This is dumb.

You’re pissing me off, and you’re clearly trying to piss off a whole lot of other people here.

When people are emotionally attached to something and you criticize it, they tend to get angry. That’s not on me. When they cannot refute your criticisms, they tend to get even angrier. Again, that’s not on me. You should be mad at the people who have misled you. This is Mike Masnick’s website. He replies to a lot of comments. He has not replied to any of these. His silence is telling.

Parents and users have a right to choose what services they’ll use/allow in their homes. What they don’t have is a right to make their own rules for those services.

This is not what those policies in 230 are about. If this were the goal, there would be no need for them to appear in 230, because parents already could choose what services they use. You don’t need sweeping federal legislation for that. The intent – and this is clear from the legislative history – was to empower parents to decide on the content their kids see, not the service. And I can even tell you why this was put into 230: because the competing Exon Amendment (the CDA) allowed the government to decide for parents. Wyden & Cox (rightly) thought that was unconstitutional, so they put a policy in their bill that said it should be up to the parents. The only problem is that the powers in their bill didn’t do that. In fact, they blocked it. But instead of the government blocking it, the companies block it. Same result: parents are disempowered.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:17

I’m just going to stop you right there. Stephen is known to not take kindly to people putting words in his mouth that he didn’t put there first.

I’m just going to save him the trouble and point out that you have grossly misunderstood what he said.

Not at all. It’s exactly what he said. And in supporting that, he went off on a long discussion of “free association” that has nothing to do with anything. A stream of consciousness from someone who obviously knows nothing about this law. Anyone who does know this law rolled their eyes at it because it was just a waste of time and space.

This situation is a dishonest one created by 230 supporters. When arguing for 230, they have always argued that websites should not be responsible for someone else’s speech. This rests on the assumption that moderation decisions are not themselves speech by the sites. This has always been wrong. Texas and Florida called their bluff. They said: okay, if moderation decisions are not speech, then we can regulate how you moderate. Then the websites turned around and said, “Wait just a minute! Our moderation is speech!” This argument is winning in those cases because it is in fact speech. They just don’t want you to notice that they argued the opposite to justify 230’s powers. You cannot argue that it’s not speech to justify the law and then argue it is speech to block regulation.

bhull242 (profile) says:

Re: Re: Re:12

[T]he people who wrote Section 230 didn’t really understand how it would operate in practice or how it intersected with intermediary liabilities law.

They fully understood that. Indeed, in later interviews, they fully agreed with the courts’ interpretations of the law. You’re just imposing your rationalizations on people who didn’t have them.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:13

“They fully understood that. Indeed, in later interviews, they fully agreed with the courts’ interpretations of the law.”

1) That has nothing to do with whether they understood how the law would operate or how it interacted with intermediary liabilities law.

2) It’s also not true that they agreed with courts. Cox is on record in Kosseff’s book saying that Batzel v. Smith was a bad decision they did not intend.

3) Zeran v. AOL is a decision that Wyden says they did intend. Yet, it’s obvious from the legislative history they did not. If they had intended it, they would not have used the word “publisher”, they would have started their initiative after Cubby, not after Stratton Oakmont, and they would have mentioned their desire to overturn Cubby in the Congressional Record. They did not, so it’s obvious they did not intend the result in Zeran. Legislators often want to give expansive readings to their bills – far more expansive than what was passed. That’s what happened here. And this confusion again demonstrates they didn’t really know much about how intermediary liabilities law works.

This comment has been deemed insightful by the community.
bhull242 (profile) says:

Re: Re: Re:14

1) That has nothing to do with whether they understood how the law would operate or how it interacted with intermediary liabilities law.

Yes, it does. They literally said that they agree with the court’s determinations on how it operates and how it interacts with intermediary liabilities law, saying that that’s exactly how they intended it to work. How is that not them understanding how it would operate and how it interacted with intermediary liabilities law? It’s literally what they were going for.

2) It’s also not true that they agreed with courts. Cox is on record in Kosseff’s book saying that Batzel v. Smith was a bad decision they did not intend.

Yeah, and he also said that it was an outlier that was later corrected by the same court. That doesn’t disprove my point.

3) Zeran v. AOL is a decision that Wyden says they did intend. Yet, it’s obvious from the legislative history they did not. If they had intended it, they would not have used the word “publisher” […]

Nothing about the use of “publisher” disproves that Zeran was intended. It only says that, for the purposes of liability for third-party content (and only for those purposes), an ICS provider is not to be treated as the publisher or speaker of that content. It does not say or imply that the immunity from liability requires that the cause of action literally use the word “speaker” or “publisher” when describing the ICS provider, nor does it suggest that the immunity was intended to be excluded from ICS providers who are also publishers.

[…] they would have started their initiative after Cubby, not after Stratton Oakmont, […]

It was after both, and it was intended to address them both. Legislators can address more than one thing at a time. This isn’t an either-or scenario. It can be both.

[…] and they would have mentioned their desire to overturn Cubby in the Congressional Record.

They did. They explicitly said from the start that it was intended to address both cases. This is extremely well established. You literally have no idea what you’re talking about.

They did not, so it’s obvious they did not intend the result in Zeran.

The premise is false (one of the three things you claimed didn’t happen actually did), so the conclusion is undemonstrated. Not to mention that the conclusion doesn’t even necessarily follow from the premises (as I stated above).

Legislators often want to give expansive readings to their bills – far more expansive than what was passed.

[citation needed]

In my experience, it’s usually the other way around.

And this confusion […]

The only confusion here is yours. Again, they were well aware of what they were doing, and you have failed both in rebutting the evidence for that and in providing evidence that actually shows the contrary.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:15

Yes, it does. They literally said that they agree with the court’s determinations on how it operates and how it interacts with intermediary liabilities law, saying that that’s exactly how they intended it to work. How is that not them understanding how it would operate and how it interacted with intermediary liabilities law? It’s literally what they were going for.

As I said, many years after the fact Wyden claimed that Zeran was what they intended all along. Thankfully we have a thing called legislative history that tells us he is pulling our chain. I detail this in my paper. So does the excellent dissent in Doe v. America Online. I will discuss the legislative history below, but the Doe v. AOL dissent cuts to the heart of the matter: we know this claim is wrong because it undercuts the stated purpose for writing 230. ALL of the legislative history and ALL subsequent statements make clear that the driving force behind 230 was the “catch-22” that they believed sites were in, where “moderating” made them a publisher. They wanted to end this catch-22 for the explicit purpose of encouraging sites to “moderate”. As Doe v. AOL points out, one of the five stated purposes of the statute was to promote “decency” on the Internet. But the elimination of distributor liability granted by Zeran undercuts this goal. If a distributor is not required to delete illegal content, even when it knows it is illegal, the law has given sites the power to be indecent rather than promote decency.

The Doe v. AOL dissent calls the above situation an “absurd interpretation” that is “totally unwarranted”. It also notes that all commentary prior to Zeran assumed that distributor liability was not covered. Because that’s obviously the case. One paper stated explicitly: “[n]otably, the legislation did not explicitly exempt ISPs from distributor liability, and its specific reference to `publisher or speaker’ is evidence that Congress intended to leave distributor liability intact.”

The Zeran case is an obvious example of legislating from the bench. The pro-business court decided it wanted to exempt distributors and set about finding ways to rationalize it. To do so, it had to ignore legislative history and misrepresent the Restatement, but it accomplished the goal. 230 supporters cheered this, demonstrating their lack of principles, as they supported obvious judicial lawmaking just because the outcome was one they liked.

Yeah, and he also said that it was an outlier that was later corrected by the same court. That doesn’t disprove my point.

No, he didn’t. Please show me where he did. Batzel has never been “corrected” by any court. It is still good law. Batzel is the source of the claim by 230 proponents that 230 covers “users”. Cox said this was never intended. So did virtually everyone else involved with drafting & promoting 230. No one has said otherwise.

Nothing about the use of “publisher” disproves that Zeran was intended. It only says that, for the purposes of liability for third-party content (and only for those purposes), an ICS provider is not to be treated as the publisher or speaker of that content. It does not say or imply that the immunity from liability requires that the cause of action literally use the word “speaker” or “publisher” when describing the ICS provider, nor does it suggest that the immunity was intended to be excluded from ICS providers who are also publishers.

The Zeran decision was about distributors, not publishers. None of this really speaks to the problems with Zeran.

It was after both, and it was intended to address them both. Legislators can address more than one thing at a time. This isn’t an either-or scenario. It can be both.

If legislators intend to do two things at once, they typically mention both things, not just one of them. And their stated purpose for passing the law usually doesn’t contradict the thing they failed to mention.

They did. They explicitly said from the start that it was intended to address both cases. This is extremely well established. You literally have no idea what you’re talking about.

They didn’t. You are just making things up. The Congressional record includes a single mention of Cubby by Chris Cox. And that mention indicates he approved of the decision. There was no discussion about overturning it. The only discussion mentioned that they wanted the same liability for publishers, not that they wanted to get rid of notice liability entirely. Again, see Doe v. AOL for citations to the Congressional record.

[citation needed]

In my experience, it’s usually the other way around.

No citation needed. Congressmen do not work for ten years to pass a single bill and then tell the media, “Actually, my bill is quite small and does almost nothing.” The fact that Congressmen inflate their achievements and the reach of their bills is so common it’s joked about among lawyers.

Anonymous Coward says:

Re: Re: Re:16

Congressmen do not work for ten years to pass a single bill and then tell the media, “Actually, my bill is quite small and does almost nothing.”

So exactly why should people believe you? You’re obviously not going to work for ten years writing a paper and then say, “Actually, my plan to destroy Section 230 is quite small and does almost nothing.”

You keep trying to downplay the consequences but it simply doesn’t work.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Re: Re:9

And guess what? 230 makes no distinction between “platform” and “publisher”. (Neither does the First Amendment.) All 230 does is put the responsibility for speech where it belongs: on the shoulders of whoever wrote it.

Under the idea that “Prodigy is a publisher and should therefore be held liable for third-party speech”, the government would ironically have more control over what speech, say, Twitter could or couldn’t host. After all, if Twitter were legally liable for the speech of its users in any and every instance, the mere threat of legal action—which would involve the courts, which are extensions of the government—would make Twitter moderate as little speech as possible. That would allow bigots, spammers, and other such assholes to run rampant on Twitter with no consequences for their speech. After all, if “viewpoint moderation” would make Twitter liable for their speech, Twitter won’t moderate it.

The whole point of Section 230 is to provide platforms like Twitter, Facebook, Parler, Gab, Mastodon instances, and even a shitpit like 4chan with the legal leeway they need to moderate those platforms as they see fit. Remove that protection and legal jeopardy will attach to any act of moderation, which means those platforms will either shut down or stop moderating speech. If you can’t figure out how that’s a bad thing, you have much bigger problems than not being taken seriously on this comments section.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:10

“All 230 does is put the responsibility for speech where it belongs: on the shoulders of whoever wrote it.”

This is a misleading talking point that takes advantage of the fact that most people don’t know this law. This has never been the law. If the responsibility should be on who wrote it, why is Stephen King’s publisher responsible for what he writes? Why are newspapers responsible for letters to the editor? Why are bookstores responsible if they don’t pull a defamatory book from their shelves?

The answer to these questions is that you are being misled by carefully crafted talking points meant to distract you from the real issues. In intermediary liabilities law, publishers and distributors are NOT responsible for the speech, they are responsible for THEIR OWN BEHAVIOR WITH REGARDS TO THAT SPEECH. They have certain duties to minimize the harm that 3rd party speech can cause to others. These duties are inherent in the activity they undertake – distribution of speech. When they are held liable, it is not for the speech itself, but FOR THEIR FAILURE TO FULFILL THEIR DUTIES TO THE PUBLIC IN REGARDS TO THE OFFENSIVE SPEECH. This is a key distinction that groups like EFF have misled the public about for decades.

“After all, if Twitter were legally liable for the speech of its users in any and every instance, the mere threat of legal action⁠—which would involve the courts, which are extensions of the government⁠—would make Twitter moderate as little speech as possible.”

1) This is not technically true. As I said, only viewpoint moderation would be prohibited. And that’s a good thing.

2) You are making the upside-down argument that the government enforcing laws that incentivize companies not to censor amounts to the government “controlling” speech. This is bizarre: that by creating an environment that encourages speech, the government is “controlling” speech. This is actually the purpose of intermediary liability laws! To increase the quality and quantity of speech! It’s the purpose of every Supreme Court balancing test around speech. Free speech is built on this concept.

And yes – intermediary liability laws do limit the 1A rights of speech distributors. I explain in my paper why this is a good thing. If you think it’s a bad thing, argue for eliminating it offline and online. You will not get anyone with expertise to agree with you offline because it’s a disaster. In short: this tradeoff maximizes speech. The government has to make a tradeoff: without these laws the speech of large companies is preferenced. With them, the speech of individuals is preferenced. In both scenarios, someone gets their speech restricted. With these laws, that is a single, often giant company that has ample opportunity to speak anyway. Without these laws, it’s individuals who often don’t have opportunities to speak. This is how it works online thanks to Section 230. The speech of giant mega corporations is preferenced over the speech of their billions of users. This is a massive net loss for free speech.

“legal jeopardy will attach to any act of moderation”

False. As explained, legal jeopardy would only attach to viewpoint moderation, which is really not necessary, and is precisely the “moderation” that people hate (“moderation” is a vague word with no legal meaning that serves to obscure the issues – I avoid using it). And Congress could easily pass a safe harbor that allows companies to declare whether they intend to be publishers or distributors and provides a legal process through which censorship decisions can be overturned rather than the site losing distributor status.

“If you can’t figure out how that’s a bad thing”

This is the whole thing. These groups have deliberately scared people into thinking the sky will fall without 230. It’s just not true. They base this on misinterpretations of cases like Stratton Oakmont and on completely misrepresenting intermediary liabilities law. I go through this in dozens of pages in my paper. There is one highly cited Section 230 paper that claims 230 is not a big deal because intermediary liability was trending toward “no liability” for publishers anyway. They provide citations to 5 cases and NONE of the cases actually support the claim. This kind of deceptive practice is typical in 230 papers. It’s a house built on a foundation of lies. All to convince individuals that it’s for their benefit, when it’s really for the benefit of industry – who are after all the people who pay the bills for groups like EFF (and EFF refuses to disclose its donations because they don’t want you to know this).

Stephen T. Stone (profile) says:

Re: Re: Re:11

I could spend all night replying at length to you, but I’m not willing to wallow in your pigsty for that long. I’ll ask you One Simple Question, be on my way, and let everyone else tear you apart piece by piece:

For what reason should the law hold the owner(s) of a social media service responsible (i.e., legally liable) for a death threat made by a third-party user of that service if, unlike newspapers and book publishers, the service didn’t know about the threat until well after the user posted it?

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:12

I’m glad you brought this up. This is a common talking point from 230 supporters. I discuss it in my paper. It shows that 230 supporters have no idea what they are talking about. They fundamentally do not understand intermediary liabilities law. Here is why:

Publisher liability for newspapers does not arise from their ability to screen all content. At all. If a newspaper stopped screening its content tomorrow, it would still have publisher liability. This liability arises from the assertion of the right to control the viewpoints expressed. Assertion of the right to control viewpoints is the essence of being a publisher and it alone brings liability. Whether you are capable of screening all speech you distribute has no impact on this: what it does impact is your decision to behave like a publisher. If you are incapable of screening all content, then you should decide not to be a publisher because you are not capable of publishing responsibly or effectively.

Prior to the Internet, no similarly positioned printing press had ever been stupid enough to assert editorial control without having the means to properly screen material and ensure it was not harmful. Prodigy was uniquely stupid. So was AOL.

In a related argument, it is often asserted that since websites do not screen 99% of content (and they don’t pre-screen it), they should not be held liable for that content they do not screen. Again, this fundamentally misunderstands intermediary liability law. Publishers cannot screen only a portion of materials and claim distributor liability over the rest. Because the liability stems not from the screening and censoring of material, but from the assertion of the right to screen and censor all materials. By asserting the right to censor all materials, all materials are controlled by the publisher – through either direct censorship or self-censorship meant to satisfy the publisher’s requirements. And all materials remain under threat of censorship at any time. Put another way: the site maintains control over all speech, even if it doesn’t exercise any control or it does so at a future time. It is always in control of the speech. That is a publisher.

Book publishers are actually similarly situated. Most book publishers probably do not censor 99% of the text in the books they publish. Yet they have never argued they should only be responsible for the text they do censor. They are responsible for all of it. As they should be. Because regardless of the actions they take, they assert the right to control it all.

These are key points in intermediary liabilities law, and it’s clear prominent 230 supporters don’t have the first clue how this works. It’s a sad spectacle. It speaks to the total dominance of this topic by industry and the deterioration of what passes for academic discussion due to that domination – which has left the discussion stale, with everyone repeating the same talking points and thinking the same way – the way Google thinks.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:14

“If that’s the metric by which you judge everyone else, what makes your argument – parroted by the vested interested ranting about “Big Tech” to end users – superior?”

I actually didn’t make a judgment about that. I merely pointed out that legal discussions of 230 are weak and have been for decades. And I blamed it on the lack of diversity in views. The control industry asserts over debate has been effective, but also resulted in poor scholarship. My judgment that the views are weak is not based on their ubiquity, but rather my assessment of the arguments themselves.

Anonymous Coward says:

Re: Re: Re:13

Book publishers are actually similarly situated. Most book publishers probably do not censor 99% of the text in the books they publish. Yet they have never argued they should only be responsible for the text they do censor. They are responsible for all of it. As they should be. Because regardless of the actions they take, they assert the right to control it all.

So are you just like, totally unaware of the long line of cases, like Winter v. Putnam, that says you’re full of shit?

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:14

Try reading Winter v. Putnam. First paragraph: “Putnam neither wrote nor edited the book.”

This means Putnam was acting as a distributor in this case. They did not exercise editorial control. They were redistributing an already published book and had no ability to control its contents. Putnam had no liability in this case because they had no capability to edit or change the book. They would only be liable if they had notice of problems ahead of time and kept selling it.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:15

I have had a chance to review this case now – which I had not seen before because it’s not important to this law; it’s tangential. The case doesn’t turn on the publisher/distributor issue – though it could have.

Instead, the court finds that a publisher has no duty to verify everything in the book is accurate. Assigning duties to parties is a balancing test and making a publisher verify everything in a book would be too burdensome. Imagine requiring a publisher to verify that Einstein’s Theory of Relativity was correct before publishing it. Or Darwin’s Theory of Evolution. How would a publisher verify that a cookbook’s method for cooking a salmon is accurate? If the cook time is too short and someone gets food poisoning, is the publisher responsible? The court says ‘no’.

In footnote 9, the court explains that if the case involved “libel or fraudulent, intentional, or malicious misrepresentation”, the plaintiff would have a stronger case. Because those duties are required of publishers. But those are not alleged.

So, I initially spotted that the case involved a publisher acting as a distributor, but that turned out to not be the main reason the case failed. The main reason it failed is because the publisher didn’t have a duty to prevent the main injury caused. Contrast this with defamation, which publishers do have a duty to prevent.

Anonymous Coward says:

Re: Re: Re:16

Assigning duties to parties is a balancing test and making a publisher verify everything in a book would be too burdensome. Imagine requiring a publisher to verify that Einstein’s Theory of Relativity was correct before publishing it. Or Darwin’s Theory of Evolution. How would a publisher verify that a cookbook’s method for cooking a salmon is accurate? If the cook time is too short and someone gets food poisoning, is the publisher responsible? The court says ‘no’.

Imagine typing all that out and insisting that it’s irrelevant as to whether or not websites should be liable for hosting content from billions of users.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:17

Imagine typing all that out and insisting that it’s irrelevant as to whether or not websites should be liable for hosting content from billions of users.

You don’t get it. The court in that case was confronted with whether publishers – who have the greatest liability – should have that particular liability. It has nothing to do with whether they should be seen as publishers or distributors. They were publishers.

The issue with websites is whether they should be seen as publishers. And this question is answered with a different test. Once you reach that answer, you then ask what exactly publishers are liable for. And defamation is one of those things – as this case reiterated by saying that if the plaintiff had alleged defamation, they would have a better case.

Your argument would make sense if the case involved an attempt to determine whether they were a distributor or publisher. But it did not. It involved determining what types of activities publishers are responsible for. Defamation: yes. The things alleged in this case: no.

Publisher/Distributor/Common carrier status is not determined by arbitrary things like your capabilities. It is determined by your behavior. What matters in the law is behavior.

A grocery store cannot argue they should be immune to cleaning up spills because they are understaffed. If you undertake the activity, you are responsible. So before you undertake the activity, it is your job to make sure you are capable of doing it properly.

Anonymous Coward says:

Re: Re: Re:18

So before you undertake the activity, it is your job to make sure you are capable of doing it properly.

And yet, here we are saddled with endless entities established for copyright enforcement, who have not only failed to stop piracy but have developed such a reputation for false positives that judges are no longer willing to give them free subpoenas.

This is a joke of a metric you’re using, because it simply doesn’t align with how the world works.

Rocky says:

Re: Re: Re:7

Industry is a generic term. Industry at the time 230 passed was Prodigy, AOL, & CompuServe. Prodigy & AOL helped write 230 & were instrumental in creating misleading arguments to justify it. Of course, industry today is a lot larger than that.

You better tell Ron Wyden and Chris Cox this, they’ll certainly be very surprised.

As for this “legal catch-22” – read my paper. There was no legal catch-22. This was dreamed up by Prodigy and pushed hard by EFF & CDT – all of whom wanted to freeze government out of the Internet and were willing to misrepresent a lower court decision to get their way. The two cases had different outcomes because the two companies had vastly different behavior.

You better tell Ron Wyden and Chris Cox this, they’ll certainly be very surprised.

230 proponents misrepresent Stratton for decades, claiming that it held that any moderation would make a site a moderator. This is false, and I quote key lines in the opinion demonstrating this.

You can’t yank lines out of a conclusion; it’s the whole context that matters. Plus, you don’t seem to understand how any kind of forum or social media actually functions. Any interactive site available to the public that moderates UGC falls within the conclusion made in Stratton.

What Stratton held was that viewpoint moderation makes you a publisher.

There’s no mention of viewpoint moderation anywhere in the case. Why do you feel the need to lie about what’s in the case? Dishonest much?

Industry won, consumers lost – a story as old as time.

Only if you are a drooling idiot. I’ll just ignore the rest of your rant, because it has little to do with factual reality.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:8

“You better tell Ron Wyden and Chris Cox this, they’ll certainly be very surprised.”

I don’t think so. They invited the Prodigy & AOL lawyers to draft the law. This is not a secret. It’s discussed in Jeff Kosseff’s excellent history of 230. In fact, when Kosseff wanted someone to comment on the powers of 230, he asked Prodigy’s lawyer. Because that was their job on the team: the Prodigy & AOL lawyers focused on the main text of the law – the powers they would give themselves. As reported by Kosseff, the Policy section was written by Wyden & Cox and the Findings section was adapted from a CDT report. They only had a couple weeks to write the law, so they divided and conquered, which I believe is why the Policy section does not align with the text of the law. It was written hastily, haphazardly, and the Policy section is largely wishful thinking. The attitude of everyone was that if you just give the companies immunity, they will figure out how to make it work. They will benevolently fulfill the Policy goals by using their unlimited freedom. This is a stupid way to make legislation, and it spectacularly failed, but few people have noticed because the people who wrote the law control the public discussion of it.

“You better tell Ron Wyden and Chris Cox this, they’ll certainly be very surprised.”

They would probably be surprised by this. By their own admission, they didn’t know jack about intermediary liabilities law when they wrote 230. They just read a newspaper article in the Wall Street Journal and “intuitively” thought the Stratton Oakmont case didn’t make sense. They didn’t know the judge’s reasoning. They didn’t understand the caselaw or why it came down the way it did. They just decided it had to be wrong. And Prodigy, AOL, and EFF/CDT were all too happy to agree. And by their own admission, they saw 230 as an opportunity to pass bipartisan legislation that everyone would love and would raise their profile. They had a Republican (Cox), a Democrat (Wyden), industry (AOL & Prodigy), and consumer groups (EFF/CDT). Everyone supported it! It was too good an opportunity. Indeed, it helped propel Wyden to the Senate (he is actually one of my favorite Senators in general).

“Any interactive site available to the public that moderates UGC falls within the conclusion made in Stratton.”

Again, it depends on what you mean by “moderate”. That’s a vague, useless word. Lots of moderation does not fall under Stratton. Viewpoint moderation does. Viewpoint moderation is simply not necessary. Especially because sites can simply leave that to users. As I stated, sites can even offer an opt-in service to do this for users. So long as the user drives the choice. Hardly anything would change in practice for users who like the site’s censorship; they could continue to trust its decisions. But it would be a sea change for other users, who could decide for themselves what to see.

“There’s no mention of viewpoint moderation anywhere in the case.”

The court doesn’t use those magic words, no. But the court describes the policies that made them publishers, and those policies were viewpoint moderation policies. As I said, the opinion was not perfect. It has flaws – as do many opinions. My paper includes a long discussion on editorial control and describes how courts struggled and failed to properly define editorial control for decades. The Supreme Court failed badly in Tornillo. One of the main advances of my paper is that I synthesize what the courts have been doing (without quite understanding it) and what the common law and state laws are doing. And I arrive at a definition of editorial control that perfectly describes the intention of those laws and the actions of courts in interpreting those laws. It’s a major advancement in the field.

Anonymous Coward says:

Re: Re: Re:9

And I arrive at a definition of editorial control that perfectly describes the intention of those laws and the actions of courts in interpreting those laws. It’s a major advancement in the field.

So why are you here bitching to randos on the Internet instead of currying favor with the moneyed institutions who have the motivation and alignment to get your paper more traction?

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:8

Outclassed? No one has even made an argument yet. They just make vague claims that are not based in the law, but based in necessity. “There will be lots of lawsuits” is not a legal argument and it’s also not substantiated. It’s just opinion – and it’s an opinion based on a negative view of our justice system, which is a strange position for law professors to take. If you believe there will be too many lawsuits in an area, the proper way to address that is with targeted legislation. It’s not by completely obliterating that area of law – in the process sidestepping discussion of whether that’s a good idea on the merits. “We cannot have the normal responsibility in this area of the law because our legal system sucks” is not a great legal argument.

Stephen T. Stone (profile) says:

Re: Re: Re:9

“There will be lots of lawsuits” is not a legal argument and it’s also not substantiated.

“A handful of lawsuits could be enough to sink a smaller company” is a substantiated argument, though. Get rid of 230 and smaller platforms⁠—like, say, Mastodon instances⁠—would face immense pressure to either stop moderating (which would chase off a lot of users) or shut down (which would chase off every user) so they can avoid even the threat of a platform-destroying lawsuit.

As I said elsewhere, 230 is what gives platforms the leeway they need to moderate without risking a legal threat. If you can think of a better way to give interactive web services that immunity without Section 230, you’d be the first.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:10

I say it’s not a legal argument because it doesn’t address the legal merits. Should the companies have liability? Depends on what they did. If a company distributes defamatory speech and refuses to take it down, why shouldn’t they face liability? That’s bad. It’s immoral behavior. A bookstore that did that would face liability. A person would, too. A bar that doesn’t remove defamation from a bathroom stall is liable. Why not websites? Why do they get special treatment to behave like animals? The discussion of the number of lawsuits sidesteps the merits of those lawsuits.

Zeran v. AOL is a great example. They basically gaslit Kenneth Zeran and ruined his life. They didn’t care because they didn’t have to. It was egregious, evil behavior by AOL. “That’s great!” say Section 230 supporters like Eric Goldman. Personally, I don’t want to live in that world. And it’s why the Internet is often a shithole. People complain about Kiwi Farms or similar sites. Shithole sites exist because of 230. We don’t have to and should not have to tolerate sites that traffic in defamation or other bad speech.

Intermediary liability laws establish a good legal system for encouraging moral behavior by publishers & distributors while maximizing free speech. Section 230 casts that system aside in favor of a vigilante system where companies are the judges of what is proper behavior. Section 230 is at its heart anti-democratic. It believes that large, for-profit companies are a better judge of what is moral than our legal system, established by politicians who are elected by the People. 230 is fundamentally undemocratic. Because early EFF was fundamentally undemocratic. They were against anything that involved the government, instead believing that they and their friends – the early leaders of the Internet – knew better and would do a better job than democratically elected leaders. You may agree with that, but it is fundamentally undemocratic and authoritarian.

Rocky says:

Re: Re: Re:11

Section 230 casts that system aside in favor of a vigilante system where companies are the judges of what is proper behavior. Section 230 is at its heart anti-democratic. It believes that large, for-profit companies are a better judge of what is moral than our legal system, established by politicians who are elected by the People. 230 is fundamentally undemocratic.

Their property, their rules, their 1A rights. This isn’t hard to understand unless you feel entitled to use someone else’s property against their will.

And I see you confuse what is moral and what is legal. You can be morally right and still break the law and vice versa.

The reality is that section 230 has been a boon to democracy because it has allowed many more voices to be heard. My guess is that you don’t understand why that is considering your rhetoric.

This comment has been flagged by the community.

Ryan says:

Re: Re: Re:12

“Their property, their rules, their 1A rights.”

Another common false talking point. Again, this is not true for offline distributors. No one ever argued “their property, their rules, their 1A rights” for BookSurge. They had to maintain policies that scrupulously guarded their distributor status, which meant they were not able to viewpoint censor the books they printed. And they won their court case because of this.

No one says “it’s the grocery store’s property, their rules, their rights” when someone sues a grocery store for slipping on the wet floor. Because when you open your property to the public, you owe duties to that public. It’s not the same as someone sitting in their home saying “my house, my rules”. You are opening your “house” to the public. A grocery store invites the public in. A speech distributor invites the public into their distribution medium (store, website, etc). And they undertake activities that can put the public at risk (defamation, etc). They have a moral and legal obligation to the public. Or at least they did before 230 – and offline publishers/distributors still do.

“This isn’t hard to understand unless you feel entitled to use someone else’s property against their will.”

It’s not about feeling entitled. It’s about their actions. When you become a distributor of speech, you surrender your right to censor. That’s the tradeoff. If you want to censor, you can accept the higher responsibility. No one is “entitled” to have you distribute their speech. You can always choose to accept publisher responsibility. But if you choose to be a mere distributor, you cannot discriminate because that’s the definition of a publisher. Make the choice. Live with the consequences. And I devoted pages of my paper to explaining the philosophical reasons why this choice exists and how it promotes free speech.

“And I see you confuse what is moral and what is legal. You can be morally right and still break the law and vice versa.”

That’s a simplistic view. Of course, we don’t outlaw everything immoral. But the law is in fact based on morality and primarily encourages moral behavior. And intermediary liability laws in particular are about morality/justice and promotion of free speech.

“The reality is that section 230 has been a boon to democracy because it has allowed many more voices to be heard.”

This belief rests on the fundamental misrepresentation of Stratton Oakmont and intermediary liability law that I already described. Without these misrepresentations, this claim collapses. Then you are left with the reality: that 230 preferences the speech of a few large mega corporations at the expense of its users, who are all subject to censorship by them. Single mega corporations vs billions of users is not a difficult calculation to do.

Anonymous Coward says:

Re: Re: Re:13

Ah, yes, this magical anti-big corporation pill that would work if only Section 230 didn’t exist. But somehow nothing would change for ordinary users because Section 230 couldn’t possibly protect them either, or a lack of Section 230 wouldn’t change existing law. But somehow Section 230 is an undefeatable shield or something because Google and the EFF.

Putting your stock in Trump might have got you FOSTA, but it turns out he couldn’t follow through with his bid to end Section 230. You chucklenuts sure know how to pick a champion for your causes.

Toom1275 (profile) says:

Re: Re: Re:3

Yet there were no issues prior to 230’s passage. Of course, the Internet was much smaller back then and we didn’t really have social media.

The above is malicious deliberate revisionist bullshit.

In the real world, 230 was passed to address the existing problems of idiot judges inventing novel liabilities like in Prodigy, and the intent behind the law was specifically to encourage the growth of large social platforms.

This comment has been flagged by the community. Click here to show it.

Ryan says:

Re: Re: Re:4 You have been misled

There were a grand total of two cases at that time. One where the Internet service won and one where they lost. As I indicated, that is not evidence of a lawsuit crisis.

And the judge in Stratton Oakmont invented nothing. His decision was grounded in NY state intermediary liability law and also referenced SCOTUS’s Tornillo opinion. His holding was correct, although misrepresented for decades by 230 supporters. You have bought it. I don’t blame people for buying it: they have been sold this stuff by a lot of people they trust.

I myself always thought of EFF as a “liberal” org dedicated to free speech. It wasn’t until I did the research on this that I realized they were actually radically libertarian, anti-government, and mostly Republicans. Barlow, one of their founders, heaped praise on Dick Cheney until a falling out over neoconservatism. EFF was not a pro-consumer group, but rather an anti-government group whose libertarian ideology usually aligned with pro-consumer interests, but not always, particularly in the case of Section 230. To get around this, they worked with industry and academia to bamboozle the public into believing 230 is pro-consumer. They were phenomenally successful at this trick.

Anonymous Coward says:

Re: Re: Re:5

I myself always thought of EFF as a “liberal” org dedicated to free speech. It wasn’t until I did the research on this that I realized they were actually radically libertarian, anti-government, and mostly Republicans

You should refresh yourself on actual Republican talking points. But then they’ve never been able to decide how much government they actually want.

This comment has been flagged by the community. Click here to show it.

Anonymous Coward says:

Re: Re: Re:

I urge anyone here – especially Masnick – to take up my arguments

From what I see your claim that the removal of Section 230 would not lead to a surge in lawsuits isn’t grounded in anything more than you saying “No, it won’t happen, I pinky promise.” It’s the same argument made when FOSTA proponents insisted that “No, this law will be great for women! Taking down Backpage will be great for women!” The results? Law enforcement found it even harder to police sex crimes, and Backpage was taken down before FOSTA was even implemented. The supposed protections against FOSTA abuse? Didn’t exist.

So you’re going to have to do quite a bit more to justify your EFF fearmongering.

This comment has been flagged by the community. Click here to show it.

Ryan says:

Re: Re: Re:2

“From what I see your claim that the removal of Section 230 would not lead to a surge in lawsuits isn’t grounded in anything more than you saying “No, it won’t happen, I pinky promise.”

Everyone is speculating in regards to what will happen when 230 disappears. I base my speculation on facts. 230 proponents base theirs on invented talking points that are not accurate reflections of the law. Who do you think is more likely to be right?

“It’s the same argument made when FOSTA”

I did not support FOSTA and do not support FOSTA because Section 230 sets up a catch-22. You cannot amend 230 to carve out areas because of the way it operates. This is explained in my paper. 230 induces sites to act like publishers. If you carve out areas where 230 no longer applies, those sites now have publisher liability and cannot even retreat back to distributor liability because their whole business is based around 230 giving them publisher powers. This forces them to essentially drop the exceptional areas entirely. And we saw this with FOSTA. 230 is all or nothing. The problem is that 230 has these obvious issues, so legislators will keep coming back trying to fix them over and over again. This problem will recur. It’s baked into 230.

Stephen T. Stone (profile) says:

Re: Re: Re:3

230 has these obvious issues

No, it doesn’t. The only things you seem to have an issue with in re: 230 are…

  • how 230 puts liability for third-party speech on the people who wrote the speech instead of the platform hosting it
  • how 230 allows platforms to choose what speech they’ll host instead of being forced by law to host all legally protected speech

…and those are issues that only entitled assholes would have.

Yes or no: Do you believe the government should have the right to compel any interactive web service into hosting any third-party speech that it would otherwise refuse to host?

This comment has been flagged by the community. Click here to show it.

Ryan says:

Re: Re: Re:4

You are just repeating talking points you have been taught to memorize and do not understand the law. Let me walk you through this again:

“how 230 puts liability for third-party speech on the people who wrote the speech instead of the platform hosting it”

Again, as I already stated, this is a false claim. Intermediary liability does not make you responsible for other people’s speech. It makes you responsible for failing to fulfill your duties with regards to that speech. If a customer in a grocery store spills water on the floor, you are not responsible for the spill, but rather responsible for failing to clean it up if it hurts someone. The grocery store has a responsibility to make sure the activities it undertakes do not harm the public. Likewise, distributors & publishers of speech have a responsibility to make sure the activities they undertake do not harm the public. THERE IS NO DIFFERENCE BETWEEN THEM. Every person or entity has responsibilities to the public to ensure their activities do not harm the public…except websites under 230. They alone are exempted from the standards of civil society.

“how 230 allows platforms to choose what speech they’ll host instead of being forced by law to host all legally protected speech”

Intermediary liability laws don’t “force” anyone to distribute speech. First, they have chosen to distribute speech. They can always choose not to do so. Second, these laws don’t force anyone to carry anything – instead they force a choice. A choice between carrying everything and receiving lesser liability or discriminating and receiving more liability in exchange for that privilege. The ability to screen and censor speech you distribute is a great privilege. It allows you to enforce your own beliefs on the speech of everyone you distribute. That is massive power. With that power comes responsibility. If you claim the right to control it, you own it for liability purposes. This is the way things must be. It is the way things work offline.

If you think only “entitled assholes” would believe in this law, you have essentially called every state, every court, every expert in this field an “entitled asshole” because this is how the law has operated essentially forever. In fact, given that Section 230 deviates so sharply from the law offline and prior to 230, it would seem to me that people who promote 230 are “entitled assholes” who think they shouldn’t have to follow the rules that everyone else follows. That is what an entitled asshole would look like.

“Yes or no: Do you believe the government should have the right to compel any interactive web service into hosting any third-party speech that it would otherwise refuse to host?”

No. Because intermediary liability laws don’t compel any speech. These laws have been around forever and have been litigated in the US Supreme Court. The Court did not agree with you that they amount to state compulsion of speech. The Court has ratified that they are compatible with the 1st Amendment. Indeed, these laws exist partly to promote free speech.

Stephen T. Stone (profile) says:

Re: Re: Re:5

Ah, another Gish Gallop. I could go through everything there, but I’mma just pick out a handful of things and let someone else handle the rest if they’d like.

distributors & publishers of speech have a responsibility to make sure the activities they undertake do not harm the public. THERE IS NO DIFFERENCE BETWEEN THEM.

You would have a point if Twitter, Facebook, and the like were either distributors or publishers of any third-party speech. So please, show me the law that says they are⁠—not the outline of a law that doesn’t exist, not a court ruling that would’ve said that if not for 230, but an existing and currently active on-the-books law (or binding legal precedent).

Intermediary liability laws don’t “force” anyone to distribute speech.

But removing the immunity from legal liability for third-party speech would force social media companies to decide whether to host all legal speech⁠—including the really heinous shit⁠—or shut down altogether. A small pro-queer Mastodon instance shouldn’t have to decide whether it will host anti-queer speech or shut down to avoid a potential lawsuit.

The ability to screen and censor speech you distribute is a great privilege.

And we finally hit on your fundamental problem: You believe Twitter and any service like it screens/distributes both first- and third-party speech. Your fundamental misunderstanding of how social media platforms work isn’t, and shouldn’t be, grounds to rescind Section 230 and thus force interactive web services of all kinds into choosing between “no moderation” and “shut down everything”.

And since you refuse to accept your wrongness despite several people pointing it out, I’m done being nice.

If you claim the right to control it, you own it for liability purposes.

I have another question you can answer, you Freeze Peach dipshit:

Assume I run a Mastodon instance with a thousand users⁠—nothing a couple of moderators and myself can’t handle, but nothing that would impress the average Internet user. My instance bans all the usual kinds of speech that a service like mine would: bigotry, threats of violence, all that bullshit. Now assume that Section 230 is repealed. In the wake of such a move, my choices are simple:

  1. I can risk having legal liability for third-party speech thrust upon me by choosing to continue my moderation efforts.
  2. I can refuse to take that risk by refusing to moderate any speech (since knowledge is required for legal liability) but risk running off my userbase.
  3. I can both refuse to take the legal liability risk and intentionally run off my userbase by shutting down my instance altogether.

Which one should I choose?

intermediary liability laws don’t compel any speech

But they would compel sites to stop moderating speech if they want to stay open and avoid legal liability for speech they didn’t write or screen or publish, which essentially means the law would compel a site into hosting any and all legally protected third-party speech.

Your implied desire to take down social media services by destroying Section 230 is noted. Please go fuck yourself at your earliest convenience. No one here but our troll brigade will ever be willing to support your ridiculous bullshit. Door’s to your left; don’t let it hit you where the Good Lord split you. 🖕

This comment has been flagged by the community. Click here to show it.

Ryan says:

Re: Re: Re:6

You would have a point if Twitter, Facebook, and the like were either distributors or publishers of any third-party speech…

I’m sorry, but this is hilariously wrong. Distributing third party speech is literally their whole business. They invite you to join the site and post your third party speech for them to distribute to others. That’s their whole business model. You clearly don’t understand this law.

But removing the immunity from legal liability for third-party speech would force social media companies to decide whether to host all legal speech⁠—including the really heinous shit⁠—or shut down altogether.

Indeed. Notice you just agreed they are not forced to host anything. But rather forced to choose between different liability levels. By your own admission, they can choose not to host speech, they just have to accept more liability. This does restrict their ability to speak – they cannot speak as freely as they would desire. But the Supreme Court has said this is fine. This satisfies the 1st Amendment. It is not a violation.

A small pro-queer Mastodon instance shouldn’t have to decide whether it will host anti-queer speech or shut down to avoid a potential lawsuit.

Why not? Because you say so? What is your legal reasoning for this? Do you know anything about intermediary liability and why these laws exist? Obviously not. This is the problem. 230 “experts” never understood this law. They changed it based on their total ignorance. And then they spent decades propagandizing laypeople – who have no way of knowing this law. Congratulations: you are a mark just like conservative voters are marks for Republicans. You have been so taken advantage of that you will attack anyone who demonstrates it to you, rather than those who took advantage of you. I would bet that your politics generally are quite skeptical of corporations, yet those corporations have convinced you to defend them when it comes to Section 230. I find this appalling.

You believe Twitter and any service like it screens/distributes both first- and third-party speech.

They screen 3rd party speech. None of this has anything to do with 1st party speech. And I believe they do pre-screen speech: they use automated tools. And after that pre-screening, they reserve the right to screen the speech at any time in the future. None of this is about screening speech, but about asserting the right to screen & censor that speech.

rescind Section 230 and thus force interactive web services of all kinds into choosing between “no moderation” and “shut down everything”.

As I have already shown, this is not the choice they face. You have been propagandized to believe this is the choice because it induces fear and leads you to defend Section 230 in an emotionally charged manner, as is evident by your repeated cursing and name calling, and saying you can no longer be “nice” just because I disagree with you. Witness how emotional you have become. It is not lost on me that the degree of anger from you is no doubt related to your inability to refute what I say coupled with how emotionally attached you are to your belief. Must be frustrating. Now you know how people who supported the Iraq War feel. They were duped, too. They were emotionally attached to the war. They respond with anger to rational arguments, just as you do. This is the point of triggering the fear response.

Which one should I choose?

I don’t know. That’s a business decision. You choose them based on your goals and tolerance for liability.

If your goal is to provide an echo chamber where no one can say things you personally disagree with, then you should choose to be a publisher, accept the greater liability, and accept that your forum will be smaller than you hope because you need to be able to ensure posts are legal.

If your goal is to distribute the speech of everyone and provide an open forum for legitimate discussion – actual free speech – then you should choose to be a distributor and not censor the speech.

If having responsibility for your actions is too much for you to handle, you should just shut down. Similarly, if a grocery store feels that cleaning up customer spills is not worth the hassle, they too should close down. There will be plenty of other grocery stores that do not find this to be a great burden.

It is amazing to me that you insist a website controlling what can be said on their site promotes “free speech”. If everything you say has to meet the approval of someone, that is literally the definition of speech that is not free. It’s speech. But it’s not free speech. Of course, people who already agree with the site owner will not find the restrictions limiting at all. But that is not how we define free speech. There are plenty of people in China who agree with the government and would never be censored. That doesn’t mean China has free speech.

But they would compel sites to stop moderating speech

There is that vague word again that has no legal meaning. They don’t compel anything. They do give sites a choice. If the site wishes to have lower liability, they are compelled to stop viewpoint moderation (not all moderation). That if is a big ‘if’. Any restriction can be described as a compulsion if you remove all of its conditions. Every law could be described as a compulsion in this way. If you can legitimately choose something else, you are not in fact compelled to do something (if that choice is not realistic, then it becomes a compulsion, but the choice here is realistic – it’s not a mirage).

This comment has been flagged by the community. Click here to show it.

Ryan says:

Re: Re: Re:8

Uh, the 3rd party would be the author. The legal test of whether a speech distributor is a publisher is whether they exercise editorial control. As discussed in my paper, legal tests in many states often add the criteria that they control the means of publication. But this requirement is actually redundant because controlling the means of publication is the mechanism by which editorial control is exercised. Thus, an entity that does not control the means of publication is incapable of exercising editorial control.

This comment has been flagged by the community. Click here to show it.

Ryan says:

Re: Re: Re:10

LOL. I do and TechDirt does. We both do. That does not determine who is a publisher. Obviously, an author always determines when their speech is published. If I decide not to write something, it’s not published. Stephen King decides when his books are published, but he’s the author. His publisher also decides. If they don’t want to publish it, they don’t. Similarly, if TechDirt decides not to publish my comments, they don’t have to. They can remove them. Because they are a publisher. This is the power of publishers.

This comment has been flagged by the community. Click here to show it.

Ryan says:

Re: Re: Re:12

This is like saying that people who support the 1st Amendment “really want to force racist speech everywhere.” No, we just believe the legal principle is more important than suppressing speech. I believe in the marketplace of ideas this country was founded on. According to that theory, racist speech will be unpopular and lose out in the marketplace. Thus, when you leave racist speech on Twitter, it will get little traction and even show people the speaker is a racist, which is useful information.

Sadly, many people who call themselves “liberal” have abandoned the principles of free speech. Instead, they believe that certain speech is “dangerous” or “reprehensible” and thus must be squashed. They claim this is not speech suppression by hiding behind a claim that it’s okay as long as the government is not doing it. That shows a fundamental lack of understanding of the concept of free speech. If that is your position, you support the 1st Amendment, but not free speech. The 1st Amendment is merely one rule enforcing free speech against the government. Free speech as a concept is much broader than that. And all the reasons why free speech is desirable in the context of the government also apply to private actors.

The reality is that suppression of speech in a free society always backfires. Unless you are willing to use a gun or the power of the state to enforce speech suppression, your efforts will be counter-productive. When not punished for wrong thoughts, people will naturally see censors as bad actors. Because people intuitively understand that bad actors are the ones who censor speech. If your intentions are not nefarious, you would allow people to see the speech and think for themselves. Only bad actors think speech must be suppressed and kept from people. This is why it always backfires. Attempts to censor “anti-vaxxers” for example will only create more. After all, if they are being censored, they must be saying some truth that powerful interests don’t want you to hear. Censorship is not just authoritarian, but foolish.

Rocky says:

Re: Re: Re:13

Sadly, many people who call themselves “liberal” have abandoned the principles of free speech. Instead, they believe that certain speech is “dangerous” or “reprehensible” and thus must be squashed.

Sadly, many people (like you) think that the right to free association is something bad. They disguise this as a free speech issue, putting forth the notion that those choosing whom they want to associate with are “liberals” who want to censor “dangerous” or “reprehensible” speech. This shows a fundamental lack of understanding of the concept of free association and how it works in conjunction with free speech. Without freedom of association you can’t have freedom of speech; what you get instead is forced speech.

This comment has been flagged by the community. Click here to show it.

Ryan says:

Re: Re: Re:14

You are just demonstrating your ignorance. Freedom of association is of course a good thing. But when you undertake certain activities, you are necessarily restricted. As I noted, common carriers have no right of free association. They have to serve everyone. And this is compatible with the 1A right of free association. Similarly, speech distributors are limited in their right of “free association”. If you have an issue with this, take it up with the Supreme Court and hundreds of years of legal jurisprudence.

This comment has been flagged by the community. Click here to show it.

Ryan says:

Re: Re: Re:16

Why do you misrepresent what I say all the time? I never said platforms are common carriers. I simply used common carriers as an example of extreme restrictions on “right of free association” that do not in fact violate the 1st Amendment. If common carriers can have their right of free association restricted so sharply, why is it suddenly a problem for a lesser restriction on distributors? The answer is that it’s not. The right of free association is not absolute. It can be restricted if justifiable. And the Supreme Court has found that restrictions on common carriers and speech distributors are justifiable.

Anonymous Coward says:

Re: Re: Re:15

I admit I am ignorant of the caselaw, but what would prevent an author from being the same as the publisher and/or the distributor? I assume much of the caselaw is based on the circumstance that most authors do not run their own printing press, rather than on strictly defining them all as necessarily separate entities. Is it possible the Post Comment button essentially combines author, publisher, and distributor roles?

This comment has been flagged by the community. Click here to show it.

Ryan says:

Re: Re: Re:16

The author’s behavior does not really determine who is a publisher. When there is no publisher, the author is said to “self-publish”, although legally this doesn’t mean much because the author already has all the liabilities of a publisher – publisher liability merely assigns the liabilities of an author to another entity.

Whether a distributor of speech is a publisher is determined by whether that distributor exercises editorial control over the speech it distributes. That in turn is based on whether the entity controls the means of publication. The means of publication is simply how the speech enters the world. The printing press, the website, etc. An easy way to ask if an entity controls a means of publication is to ask whether denying distribution affects the status of the publication. If this distributor denies the distribution, is the work still published? Note: don’t ask whether it can be published elsewhere. For example, if a bookstore declines distribution of a book, it is still published. Copies were already printed. They don’t control the printing press. In contrast, if Twitter denies distribution, the speech is not published. You can always go publish elsewhere, but that’s not what this question asks – book authors can also go publish elsewhere.

An entity that controls the means of publication must be careful if it wants to have distributor liability. Any form of censorship, such as declining to print a book, etc, will turn it into a publisher because their control over the means of publication means that any decisions they make will constitute control over the speech – editorial control.

Years back there was a company called BookSurge. They operated online, but their business was an offline printing press. You submitted a PDF of your book – they printed it. Their TOS was very specific in notifying customers they do not screen books and would not deny printing of a book based on its content. The only right they reserved was the right to refuse printing a book if required to do so by law. They eventually were sued because someone alleged they printed a defamatory book. The court went through their policies and saw that they maintained their distributor status and the case was dismissed. But the point is that to maintain that status they were required to refrain from exercising editorial control. Bookstores, as a counter-example, do not have this issue because they lack the control over means of publication. Lacking this control, their editorial decisions can never operate to control the speech. So they are more free to do what they want than a printing press.

Websites like Twitter are both printing presses and distributors. They perform both functions. So they have to be careful what they do with regards to the printing press function – or at least they had to before 230 gave them the law’s greatest gift.

Interestingly, BookSurge was bought by Amazon, shuffled around repeatedly, renamed, and eventually basically shuttered. It appears to be a case of Amazon buying what it saw as a competitor just to shut it down. It may be the case that Amazon realized BookSurge operating offline was a legal liability. At one point they offered BookSurge printing capabilities to people who sold self-published eBooks on Amazon. But Amazon’s eBook business is protected by Section 230, so they claim the right to deny publication to any books they don’t like. If an author used the BookSurge feature to print their book, Amazon would then be liable for the contents, because they acted like a publisher in regards to the content (since they had 230 immunities) and then transferred the book to the offline world, where 230 does not operate. So allowing eBook authors to print their books is more trouble than it is worth because it brings publisher liability back into play.

Anonymous Coward says:

Re: Re: Re:15

Churches and political associations, amongst others, distribute speech, and they are allowed to decide what speech they allow in their meetings and on any website that they control.

Also note that common carriers have a duty to keep what they are carrying for you private, whereas you use social media to make statements that can be and are seen by total strangers, so naturally common carriage rules do not apply.

This comment has been flagged by the community. Click here to show it.

Ryan says:

Re: Re: Re:16

Churches and political associations, amongst others, distribute speech, and they are allowed to decide what speech they allow in their meetings and on any website that they control.

Churches & political associations don’t exercise editorial control over these materials because they don’t actually print them. If they do, then they become the publisher and are responsible for their contents in respect to defamation, etc.

Also note that common carriers have a duty to keep what they are carrying for you private, whereas you use social media to make statements that can be and are seen by total strangers, so naturally common carriage rules do not apply.

You have the reasoning for this backwards. The reason common carriers are not allowed to inspect your stuff is precisely because they are critical services. These businesses exist on a spectrum: publishers, distributors, common carriers. At one end, publishers are required to inspect your stuff. In the middle, distributors are not required to inspect your stuff. At the other end, common carriers are prohibited from inspecting your stuff. These rules are based on the function that you choose to undertake.

Again, these rules have nothing to do with the determination about whether you are a common carrier. For example, an ISP is likely a common carrier with regards to this law (whether the FCC treats them as one is a different question). I use my ISP for, among other things, posting messages that everyone can read on Twitter. Whether the service is private to me or allows me to reach other people publicly is irrelevant. Twitter is not deemed a common carrier because they are not a critical service.

This comment has been flagged by the community. Click here to show it.

Ryan says:

Re: Re: Re:18

Publication is the act of making public, and does not include the copyright provision of fixed form. A speech in a public space is a publication, and the speaker can be held responsible for slander, which, by your insistence, means that intermediary liability makes the organization also responsible.

The issue is that the bar is not providing the means of publication. Only the location. The means is the spoken word, provided by the author. You do not become a “publisher” of speech simply by someone speaking on your property. You become a publisher by providing the means and exercising editorial control.

A bar might be held liable under distributor notice liability if a person repeatedly defames someone publicly, the bar is notified of this defamation, and continues to allow that person to do it. To my knowledge, no such claim has ever been made. Generally, such claims are foreclosed because 1) the potential for damages in such cases is almost zero and litigation is expensive, and 2) recovering punitive damages requires “actual malice” be shown.

This comment has been flagged by the community. Click here to show it.

Ryan says:

Re: Re: Re:20

So all churches and political associations etc. which provide the forum, and limit what is acceptable speech on their premises then.

No. A church is not a means of publication. It’s a location. A church is likely already liable for speech in the church anyway, as people who speak in the church are almost certainly agents of the church in some capacity.

You seem to be attempting to poke holes in intermediary liability law by trying to find edge cases where it would generate outrage. Intermediary liability law has been around a long time. You are not the first person to do this. And it does get complicated in edge cases, but the Internet is not one such case.

Even if a church were to be found to be liable, there is something called the radio exception, discussed in my paper. A radio station who is a publisher invites a guest on for a live interview. During the interview, the guest defames someone. The radio station had no way of knowing beforehand that the guest would defame someone and thus no way of preventing it. Courts resolved this by only holding the radio station to notice liability, not publisher liability. This was written as an exception to the normal rules, but it actually was not. Rather, it was a recognition that in the case of interviews, the radio station surrenders editorial control and reverts to a mere distributor. Stations allow guests to say whatever they want in interviews. They do not control it (they guide it through questions, but that is merely influence, not control).

Anonymous Coward says:

Re: Re: Re:13

I believe in the marketplace of ideas this country was founded on.

The marketplace exists so long as you have a platform on which to publish your speech. The marketplace has never required a one-stop shop selling competing products, so stop trying to achieve the free speech equivalent of forcing a Ford dealer to also sell Trabants.

This comment has been flagged by the community. Click here to show it.

Ryan says:

Re: Re: Re:14

stop trying to achieve the free speech equivalent of forcing a Ford dealer to also sell Trabants.

I see these analogies all the time. They fundamentally misunderstand the legal issues. Distributing speech is not the same as selling cars. It comes with special legal responsibilities. If you sell propane, you also have special legal responsibilities with that. You cannot just set up a big propane tank and start filling people up. There are special laws you have to comply with. Distributing speech is no different. Just as distributing propane can harm the public, justifying special laws in that industry, distributing speech can also harm the public, justifying special laws in that industry.

These are simple concepts that are so basic they are the root of society. 230 “experts” sidestep all of this. They don’t want you to know or think about any of it. And it makes you look like a clown as a result. You should be really mad about this – not at me – but at these so-called experts who have duped you into looking like a fool. It’s rather the way Donald Trump dupes his followers. 230 supporters get to know what it feels like to be a Trump supporter because you have been duped in the same ways by those you trust. And when challenged on it, you lash out at those who know what they are talking about – just like a Trump supporter. Congratulations. I would start thinking about course correcting. 230 is a sinking ship. Do you want to go down with the ship?

Anonymous Coward says:

Re: Re: Re:15

230 is a sinking ship. Do you want to go down with the ship?

What do you propose happens when Section 230 goes down? How do you envision Section 230 proponents “going down with the ship”? Are you suggesting lawsuits or other forms of legal pursuit? Will there be fines? Under what charges?

You complain that Section 230 advocates leverage fearmongering to garner their support, but you seem to do the same thing.

This comment has been flagged by the community. Click here to show it.

Ryan says:

Re: Re: Re:16

What do you propose happens when Section 230 goes down?

Most likely the Supreme Court will strike it down for one or both of the reasons I give in my paper. It’s unavoidable. To avoid it, the Supreme Court would have to 1) completely obliterate its severability jurisprudence, AND 2) completely obliterate its equal protection/first amendment jurisprudence.

And the justices who have even a tiny chance of doing this are the ones who already dislike Section 230. It will go down – and the decision is likely to be unanimous. It may take a while before a proper case is brought.

How do you envision Section 230 proponents “going down with the ship”?

Reputation damage. Especially for groups like EFF. Vast majority of people don’t realize EFF sat down and drafted a law with industry for industry, then lied to the public about it. That won’t sit well. Why do you think EFF won’t disclose its donors? It’s heavily industry funded. They pretend to represent consumers, but are funded by industry. When the interests of those two clash, who do you think they choose? I know how money works. EFF gets funded because industry is happy with their work.

This pains me because EFF has always been one of my favorite interest groups. I admire them. But not after this. My feelings changed a lot after researching 230. What I thought was a liberal group devoted to freedom was actually a Dick Cheney-loving anti-government libertarian outfit devoted to industry.

You complain that Section 230 advocates leverage fearmongering to garner their support, but you seem to do the same thing.

I’m not promoting fear at all. I’m being straight with you: 230 cannot survive Constitutional challenge and supporters need to come to grips with that and figure out what a post-230 world will look like. Ruth Bader Ginsburg warned that Roe v. Wade was going to fall and no one got angry about that. She was right. Why? Her point was that even if you agree with Roe – which I do – it was a political decision decided by judges. This would always keep it under threat of reversal. She urged legislation to codify Roe. It never happened. And Roe fell. I am warning similar for Section 230. It will fall at some point. It has to. It’s far shakier than Roe was. Impossible to defend.

The solution I present for this at the end of my paper is not even a drastic departure from the way things work now. In practice, little will change. People who like censorship can still use it. The big change is that people who don’t like it will not be forced to use it. Otherwise, not much else changes.

Anonymous Coward says:

Re: Re: Re:17

Reputation damage. Especially for groups like EFF. Vast majority of people don’t realize EFF sat down and drafted a law with industry for industry, then lied to the public about it. That won’t sit well. Why do you think EFF won’t disclose its donors? It’s heavily industry funded.

The only one who should have reputation damage here is you.

  1. EFF did not write 230
  2. 230 is about consumer protection, not industry protection
  3. EFF does disclose its financials, and it is not at all heavily corporate funded. Only $500k of its $15 million in funding came from companies.
  4. You’re full of shit.

https://annualreport.eff.org/

  • Signed, someone who knows.

This comment has been flagged by the community. Click here to show it.

Ryan says:

Re: Re: Re:18

EFF did not write 230

EFF President Jerry Berman had fairly recently left EFF to found CDT – a similarly minded organization. Berman was one of the 5 people at the table writing 230. The “Findings” section of the bill was adapted from a CDT report. EFF was a strong public backer of the bill and helped push it in Congress. They were deeply involved in the process.

230 is about consumer protection, not industry protection

So we have been told. You certainly bought it.

EFF does disclose its financials, and it is not at all heavily corporate funded. Only $500k of its $15 million in funding came from companies.

They disclose only categories of donations, which tell us nothing. They don’t disclose individual donors. A substantial portion of donations come from “Foundations”. We know many of them are foundations run by tech giants. The largest portion comes from individuals. What individuals? Many of them are also tech giants.

Interestingly, CDT does disclose its donors, so we can see that it’s a list full of tech giants. EFF doesn’t disclose theirs because they don’t want us to know.

  • Signed, someone who knows.

Rocky says:

Re: Re: Re:19

What relevance does the EFF’s recent leadership and funding have to something the EFF was involved in nearly 30 years ago?

You make the same shitty mistakes that all the self-professed “smart people” do when they claim they know better: you take an amalgam of your superiority complex and current information interpreted as something nefarious, and apply it to a timeframe to which it has zero relevance, to build up an argument that defies factual reality.

This comment has been flagged by the community. Click here to show it.

Ryan says:

Re: Re: Re:20

What relevance does the EFF’s recent leadership and funding have to something the EFF was involved in nearly 30 years ago?

This is a fair point. And something I want to be more clear about. While EFF’s current funding comes heavily from industry and this will no doubt impact how they view their issues, I feel that EFF has become less pro-industry over time and more focused on consumers. However, institutions have a long memory and their past tends to control their future. The people who used to work there hire the people who come after them. If you applied for a position at EFF, you were not going to get hired if you said you didn’t agree with Section 230.

This is why institutions tend to hold on to the same positions forever and never change. There is also sunk cost: EFF’s identity is substantially built around 230. They cannot change their position now. This is why institutional corruption can be so damaging. People eventually die. Science, as they say, advances one funeral at a time. But institutions can live forever. And their ideas tend to live beyond the lifetime of one group of leaders. The institutions become like a person with infinite lifespan.

I do not intend to comment on the precise politics of the current iteration of EFF. But I know for sure – based on my research – that early EFF was deeply libertarian and anti-government. Their goal with 230 was to freeze government out of the Internet. They sincerely believed this was helping consumers because their ideology told them consumers and industry were on the same side against the Big Bad Government. Liberals who are not libertarian should see this for the foolish nonsense it is. But EFF, industry, and (later) academia sold them a misleading story.

One thing I touch on in my paper that I could write an entire paper about is the shocking corruption of tech academia. The situation is as bad – or worse – than anything seen during the peak of pharma’s powers. All the universities compete for tech industry money. Stanford basically is the tech industry – Google, more specifically. Stanford Law is Google’s law firm. Some of their leaders even came straight from Google. People known as the “top” legal experts in tech – especially on 230 – have had their careers goosed by Google & industry. Did they become leaders organically? Or did they become known as leaders because their friends in industry promoted their career? This same situation happened in medicine, where the “leaders” in academia were helped and promoted by pharma. It all appears organic to the untrained eye. But when one really digs into what occurs, you find that X person became professor at X big school based on a recommendation from Google, whose money the school was courting. From there, Google helped arrange them speaking engagements to raise their profile (and income) and got them in front of Congress, etc. At some point their profile is high enough that a top school comes calling. Over decades, this is how the ranks of top schools are filled out. This is what happened in pharma/medicine, and it’s what happened in the tech industry, too. The people in top academic positions all share Google’s views. And it’s not an accident. It’s baked into a rotten, corrupt system, where everyone is chasing Google (or Meta, etc) money.

Anonymous Coward says:

Re: Re: Re:19

EFF President Jerry Berman had fairly recently left EFF to found CDT – a similarly minded organization.

And if you knew why Jerry left and formed CDT you’d learn why EFF was not a part of the process and did not write the bill. While the two organizations now work together, your mistake is thinking that the split was amicable at the time and that the two organizations were aligned.

You have no clue. You weren’t there. Those of us who were know that you’re completely out of your depth.

This comment has been flagged by the community. Click here to show it.

Ryan says:

Re: Re: Re:20

And if you knew why Jerry left and formed CDT you’d learn why EFF was not a part of the process and did not write the bill. While the two organizations now work together, your mistake is thinking that the split was amicable at the time and that the two organizations were aligned.

You have no clue. You weren’t there. Those of us who were know that you’re completely out of your depth.

You were like, totally there, man. Difficult to judge that claim since you are anonymous.

I am aware why Berman left EFF – they thought he was too close to the government. You probably think that means EFF was not a supporter of 230. That’s wrong. Despite disagreeing with how close Berman was to government (since government is the Big Bad Wolf), EFF was one of 230’s biggest supporters from the beginning. Mike Godwin was their legal counsel at the time and he has always been one of the most outspoken 230 supporters.

Barlow’s famous Manifesto was written in response to the Exon Amendment, which became the CDA. As I have stated repeatedly, EFF did not directly participate in the drafting of the bill. But they promoted it and the bill was a reflection of their ideology, with their former President taking an active role in drafting and the bill’s powers reflecting precisely what EFF wanted.

Anonymous Coward says:

Re: Re: Re:17

Reputation damage

Ah, yes, the one thing that Section 230 opponents say is protected by Section 230, because Section 230 somehow makes defamation laws not work anymore because you can’t sue someone in another country. You guys really need to keep your talking points consistent.

It’s funny that you bring up industry funding, because it’s somehow always dirty when it’s done by the EFF, yet never brought up as a problem when the RIAA does it. In fact, there were trolls in the era of SOPA mentioning that the industry’s funding of laws like SOPA was a drop in the bucket of the money it spent every year (of course, what they didn’t realize was that this implied they had a lot more money to spare than they claimed was being “stolen” by pirates). Who do you think people are going to perceive as anti-consumer? It’s not going to be the organization that stood in front of the SOPA threat.

This comment has been flagged by the community. Click here to show it.

Ryan says:

Re: Re: Re:18

Believe it or not, the Internet and RIAA are in different industries. The RIAA has its power, its interest groups, and its politicians. Internet companies have theirs. Both groups tend to represent the interests of their industry funders. It just so happens that for EFF, industry interests typically align with consumers and against groups like RIAA. Not always, but typically. Net neutrality was a similar deal. You can tell which way tech academia will come down on an issue by asking yourself: what is Google’s position? It’s pretty much baked in when a large number of “top” professors either came from Google or work with them in some capacity. When Stanford Law School is swimming in Google money, it’s not difficult to see where they will come down on an issue. Just ask Google.

Stephen T. Stone (profile) says:

Re: Re: Re:14

I remain amazed at the ability of the Freeze Peach/“free reach” crowd to find new and fascinating ways of being wrong just so they can attempt to justify forcing Twitter, Facebook, etc. to host their speech. It’s fucking astounding at times, really. You’d think they would’ve learned from the failures of jackasses like Laura Loomer and the PragerU dipshits that the law doesn’t give them the right to force themselves (and their speech) onto any social media platform no matter how large.

This comment has been flagged by the community. Click here to show it.

Ryan says:

Re: Re: Re:15

It is not lost on me that your response lacks an argument. This is typical. Since you have no arguments, you resort to insults (“Freeze Peach”), guilt by association (Laura Loomer & PragerU), and attacking what you imagine my motives to be. I guess you are in good company because the first time I raised this issue with Mike Masnick on Twitter he: 1) cursed me, 2) insulted me, and 3) blocked me. So, I guess you learned from the best.

FWIW, I don’t even know who Laura Loomer is. I am familiar with some of the conservative criticisms of Section 230 and I find most of them wrong and lacking any basis in the law. My criticisms are liberal criticisms of 230, which is why I tend to think it’s a big deal the law was written by and for industry, whereas a conservative critique might not care. It’s bizarre that so many alleged liberals defend this law, which was written by deeply libertarian, conservative activists, which is apparent in its cooperation with industry. Barlow – the EFF co-founder – was a lifelong Republican whose worldview was dominated by anti-government sentiment. At times he called himself an anarchist because he essentially didn’t believe in government. That aligns nicely with the needs of industry and the wealthy, as it did for Section 230.

PaulT (profile) says:

Re: Re: Re:16

“the first time I raised this issue with Mike Masnick on Twitter he: 1) cursed me, 2) insulted me, and 3) blocked me”

So, are you against free speech, freedom of association, or both?

“FWIW, I don’t even know who Laura Loomer is”

Yet, thanks to section 230 allowing genuine criticism, you can search her name and find out. FWIW, she’s the impotent troll who got angry about being suspended from Twitter due to violating their terms of service, and handcuffed herself to the door of their HQ in a way that allowed everyone to walk past her without interruption.

“It’s bizarre that so many alleged liberals defend this law”

The part of 230 that’s actually under debate is the part that says, essentially, “if someone says something bad, you have to sue that person and not the platform they used to say it”, and “the platform can moderate without being held liable for the things they missed”.

I find it bizarre that any side of the political spectrum would oppose those things. Yes, “liberals” are OK with not being sued for things they didn’t do, and being allowed to moderate their own property….

Anonymous Coward says:

Re: Re: Re:17

So, are you against free speech, freedom of association, or both?

Give Ryan enough time, and I suspect he’ll try to pull a Shiva Ayyadurai lawsuit, suing Masnick because he didn’t get the attention he wanted. The difference being that Ryan is visibly even more of a nobody than Shiva was, and Shiva has had decades of experience shilling a fabrication as his persona.

You’d think that if Ryan had something substantial to bring to the table in the anti-Google fearmongering market he’d have brought his story to… literally anywhere else besides the website of a guy he absolutely loathes.

Anonymous Coward says:

Re: Re: Re:3

I base my speculation on facts.

How is your claim that the removal of Section 230 would not lead to a surge in lawsuits based on anything but your own expectation of what would happen? Exactly what facts are you using here, besides claiming to believe in the good faith that the people who have historically threatened lawsuits in the absence of Section 230 suddenly won’t follow through?

This comment has been flagged by the community. Click here to show it.

ECA (profile) says:

Just a QUESTION

Whoever is ruling on this should be ASKED:
Ok, how many jobs are we going to need to cover this manually? And who is going to pay for it?
1 good computer spam monitor can catch how many per hour? More than 1 per second? 10? 50?
That’s 3,000 per minute, and we have how many computers watching this?
And because we can’t ban directly by site and person, we have to keep doing this constantly?
3,000 × 60 minutes × 24 hours? And humans take 4 times as long, and use the computer to tag recent locations and data from the net of incoming spam? How many are you willing to pay? THIS is a freebie we are doing. And now we get to charge MORE for it…

thanks.
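A rough back-of-the-envelope sketch of that arithmetic, purely illustrative and using only the hypothetical rates floated in the comment above (a filter catching roughly 50 spam comments per second, and a human taking four times as long per item):

```python
# Purely illustrative arithmetic, not Techdirt's actual numbers.
# Assumptions taken from the comment above: a filter catches ~50 spam
# comments per second, and a human reviewer is ~4x slower per item.

filter_rate_per_sec = 50                  # assumed filter throughput
human_slowdown = 4                        # assumed: humans are 4x slower
seconds_per_day = 60 * 60 * 24            # filter runs around the clock
seconds_per_shift = 8 * 60 * 60           # one 8-hour human shift

spam_per_day = filter_rate_per_sec * seconds_per_day
human_rate_per_sec = filter_rate_per_sec / human_slowdown
human_per_shift = human_rate_per_sec * seconds_per_shift

moderators_needed = spam_per_day / human_per_shift

print(f"One filter running 24/7:      {spam_per_day:,.0f} messages/day")
print(f"One moderator, one 8h shift:  {human_per_shift:,.0f} messages/day")
print(f"Moderators needed to keep up: {moderators_needed:.0f}")
```

Even under that implausibly generous assumption about human speed, it takes about a dozen full-time moderators to match a single filter; at realistic human review speeds (a few items per minute), the headcount runs into the thousands.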

This comment has been flagged by the community. Click here to show it.

Benjamin Jay Barber says:

Mike gaslighting again

Mike is mentioning “algorithms”, but is citing a lawsuit challenging a completely different part of the law, and a completely different legal theory.

i.e. whether algorithms that promote speech are content created by another, or content created by the company.

More interesting is the fact that there are so many spam messages on here to begin with, which makes me wonder why that would be. Given the volume of spam, I am suspicious that someone is trying to run a SQL injection attack or something on the site.

Anonymous Coward says:

Re:

It really is very telling that despite bipartisan opposition to Section 230, and a lot of moneyed interests who would be very keen on a “Fuck Google” anti-230 proposal, they haven’t actually generated anything beyond vague threats and angry rape claims by the likes of Ryan, John Smith and F230.

It’s almost as if their anti-230 arguments aren’t the slam dunk they think they are.
