Yes, Actually, The 1st Amendment Does Mean That Twitter Can Kick You Off Its Platform, Wall Street Journal

from the columbia-law-students-should-get-a-refund dept

Back in February, we did a thorough debunking of a Wall Street Journal op-ed by Columbia Law Professor Philip Hamburger arguing (bizarrely, and blatantly incorrectly) that Section 230 violates the Constitution. It was a nearly fact-free opinion piece that got so much wrong I was vicariously embarrassed for anyone who ever got a law degree from Columbia University. In the intervening months, it does not appear that Prof. Hamburger has done anything to educate himself. Instead, he appears to be digging in, with the help of the Wall Street Journal again. Given that the Wall Street Journal’s parent company has been lobbying against Section 230, and that its various news properties have been among the most vocal in spreading blatantly false information about the law, I guess this is no surprise. But if the Wall Street Journal really believes this nonsense, then why won’t it let me publish my op-ed in its pages about how the WSJ is the worst newspaper ever, and regularly prints lies and nonsense to please its scheming owner in his hatred of the internet?

Anyway, Hamburger’s latest is, I guess, in some ways a response to everyone pointing out that he was wrong in his first op-ed. A key argument actual experts made was that what Hamburger was really mad at regarding content moderation was not Section 230 (as he claimed), but the 1st Amendment, which gave websites all the leeway they wanted to moderate content. So Hamburger’s new response, written with former Trump DOJ official Clare Morell, tries to argue that the 1st Amendment doesn’t actually protect website content moderation choices. It’s almost difficult to believe, but it’s even more wrong than his February article.

Does the Constitution require Americans to accept Big Tech censorship? The claim is counterintuitive but the logic is clear: If you submit a letter to this newspaper, the editors have no legal obligation to publish it, and a statute requiring them to do so would be struck down as a violation of the Journal’s First Amendment rights. Facebook and Twitter, the argument goes, have the same right not to provide a platform to views they find objectionable.

The claim is not counterintuitive. It’s one of those basic “property rights” things that most of us believe in. And the logic is clear, because… uh… that’s how the 1st Amendment works. But Hamburger has just discovered the other hammer that clueless Twitter pretend-lawyers reach for when someone points this out: he thinks “antidiscrimination” laws are the answer. Again, this is mildly embarrassing when @JoeBob2354192081 on Twitter comes up with it. It’s horribly embarrassing when a law professor at one of the top law schools in America comes up with it.

Another reason to doubt the First Amendment claim: Antidiscrimination laws are familiar limits on speech. The U.S. has a range of local, state and federal antidiscrimination laws with significant speech consequences, and courts haven’t held that they violate the First Amendment. One has a First Amendment right to bigoted speech, but not, according to the courts, in circumstances that, for example, amount to discrimination in employment or public accommodations.

Antidiscrimination laws apply to protected classes, and they are designed in response to long histories of systematic oppression. Your political party is not a protected class. And being an elitist fool is not a systematically oppressed class.

From there, Hamburger pulls out another debunked tool from the Twitter lawyers’ playbook: “common carrier.”

Yet another reason is that large tech platforms and services function as common carriers. The states and the federal government have the power to regulate common carriers, and this certainly includes the authority to ban discrimination. The common-carrier tradition can be traced to the common law, which viewed persons serving as common carriers as privileged by government. At the same time, it barred them from discriminating.

We’ve been through this before. Common carrier designations are extremely limited, and they serve a particular purpose: they cover natural-monopoly, interchangeable commodity services whose business is transporting things (that’s the “carrier” part), whether people, goods, or communications. But social media isn’t just about transporting information from here to there. It’s about hosting it — forever. Nor are social media platforms interchangeable commodity services. That’s why common carriage laws make no sense here at all.

Also, it appears that Professor Hamburger does not actually understand the relevant caselaw, especially rulings from the Supreme Court on this topic. Again, this is embarrassing. If a student handed Prof. Hamburger’s article in to a good law professor, it would easily get a failing grade.

The large tech companies meet both definitions. They serve a public function, providing the public square or conduit for the information age. We meet and communicate on their services or platforms much more than on the grass of the village green.

Just two years ago, in Manhattan Community Access Corp. v. Halleck, Justice Brett Kavanaugh made it abundantly clear that this argument is nonsense: social media websites are not the public square and are not subject to the 1st Amendment. From Kavanaugh’s opinion, which perhaps Prof. Hamburger should look up:

By contrast, when a private entity provides a forum for speech, the private entity is not ordinarily constrained by the First Amendment because the private entity is not a state actor. The private entity may thus exercise editorial discretion over the speech and speakers in the forum. This Court so ruled in its 1976 decision in Hudgens v. NLRB. There, the Court held that a shopping center owner is not a state actor subject to First Amendment requirements such as the public forum doctrine….

The Hudgens decision reflects a commonsense principle: Providing some kind of forum for speech is not an activity that only governmental entities have traditionally performed. Therefore, a private entity who provides a forum for speech is not transformed by that fact alone into a state actor. After all, private property owners and private lessees often open their property for speech. Grocery stores put up community bulletin boards. Comedy clubs host open mic nights. As Judge Jacobs persuasively explained, it “is not at all a near-exclusive function of the state to provide the forums for public expression, politics, information, or entertainment.”

Notably, this ruling was not controversial, because it is so obviously supported by tons of earlier precedent. The kinds of precedent one would hope a law professor at Columbia University Law School would know about.

The entire piece is so amateurish, and so devoid of any connection to reality, that it makes you wonder if the former DOJ official who co-authored it (now a “policy analyst” at a DC think tank) did pretty much all of the writing and somehow got this well-known law professor to sign his name to it without realizing how embarrassingly bad it is. I mean, this nonsense is basically unforgivable:

That Big Tech is subject to common-carrier regulation is especially clear because Section 230 already recognizes the tech companies as akin to common carriers. Along these lines, Section 230(c)(1) protects Big Tech from being treated as “the publisher or speaker of any information provided by another information content provider.”

That, um, is not Section 230 “recognizing tech companies as akin to common carriers.” Yes, common carriage also limits the carrier’s liability for the speech it carries, but it’s not transitive: having that immunity does not automatically make you a common carrier. And that’s not why 230 was put in place. This wouldn’t be difficult to look up. Hell, the two authors of Section 230 still speak regularly about what they intended. Just last year they debunked this nonsense idea that 230 made websites into common carriers. As they pointed out, the intent of the law was the exact opposite — to encourage websites to moderate their own communities as they saw fit:

The first is that Section 230 does not require political neutrality. Claiming to “interpret” Section 230 to require political neutrality, or to condition its Good Samaritan protections on political neutrality, would erase the law we wrote and substitute a completely different one, with opposite effect. The second is that any governmental attempt to enforce political neutrality on websites would be hopelessly subjective, complicated, burdensome, and unworkable. The third is that any such legislation or regulation intended to override a website’s moderation decisions would amount to compelling speech, in violation of the First Amendment….

And then, the article gets even worse. I honestly have no clue what Hamburger is trying to say when he suddenly starts talking about fair use, as if fair use is a “public privilege.”

The public privileging of these companies is extraordinary. Consider the fair-use doctrine in copyright law. A teacher can copy a small number of pages to show to a class. Google can copy whole books, and even when it shows only a snippet to the public, it can use the entire volumes to develop its algorithms and offer the public an online index. This appears to have been important for Google’s early enticement of the public into its services.

Wut? Fair use is a right held by the public, not a “public privilege.” And we all get it: what Google did, anyone can do. Hamburger seems to be arguing that Google somehow gets different fair use rights than anyone else? But that’s… just not true. Everyone gets the same fair use rights. And I don’t understand what any of this has to do with the rest of his argument.

Section 230 privileges tech communication over print and in-person communication by excusing tech companies from liability in the courts. In contrast, paper and in-person communication is still fully subject to liability. The result has been to accelerate and accentuate tech dominance over other modes of speech.

This is just wrong. Again, embarrassingly so. Everyone — including tech companies — is still fully liable for speech they create. They are not liable for speech someone else creates. It’s that simple. Though it appears that an actual law professor from a top law school doesn’t understand this simple fact.

So it isn’t true that the large tech services and platforms reached their dominance merely by private effort. Their dominance is partly the product of public privileging, and this reinforces the conclusion that tech dominance over speech is not only private enterprise. It is also the result of enterprising capture of government.

I honestly don’t understand how we get from the previous sentence to this one. Suddenly he seems to be dropping a “you didn’t build that” kind of argument into the middle of this. And, that makes no sense at all. I mean, sure, every business relies on some elements of public infrastructure — roads, plumbing, etc. — but that doesn’t make them all state actors. And, again, Hamburger seems totally oblivious that Section 230 applies equally to everyone online (including the Wall Street Journal, which hosts user comments).

Section 230 relieves the large tech services and platforms of liability for restricting a congressionally specified list of materials, even when the materials are “constitutionally protected.” As one of us has explained in these pages, this is privatized censorship — a license to censor, free of concerns about ordinary legal duties that would apply to anyone else, including newspapers and individuals.

No, it is not a “congressionally specified list of materials.” Section 230 makes it clear that websites can moderate however they see fit. And they can do that because it’s their website. Just as Professor Hamburger will not let me come into his classroom at Columbia to spend a lecture explaining to his students (in great detail, with annotated charts and graphics!) why they will come out of any such class dumber than they started, and should ask Columbia University for their tuition back, any website can tell any user that they are violating its policies, and cannot post there any more.

Meanwhile, WHY IS PHILIP HAMBURGER CENSORING ME IN NOT LETTING ME TEACH HIS CLASS? I mean, Philip Hamburger clearly relies on public privileges to get to Columbia University, so it’s not like he’s a private individual. His censorship must violate the 1st Amendment, according to the logic of (oh look) Philip Hamburger.

The large tech companies are private, and the point isn’t that they violate the First Amendment when they censor users’ speech. But they have participated in the censorship secured by Section 230’s privileges. It therefore is not unreasonable for states to protect Americans from the tech company’s government-sponsored censorship.

I’ve read this paragraph four times. It makes less sense each time.

Columbia Law School: this is embarrassing. I understand why the Wall Street Journal would publish this cynically dishonest nonsense because of Rupert Murdoch’s crusade against the internet. But, it’s still embarrassing.

Companies: facebook, twitter, youtube


Comments on “Yes, Actually, The 1st Amendment Does Mean That Twitter Can Kick You Off Its Platform, Wall Street Journal”

96 Comments
This comment has been deemed funny by the community.
Chris Brand says:

Thanks, Philip

"Section 230 privileges tech communication over print and in-person communication by excusing tech companies from liability in the courts. In contrast, paper and in-person communication is still fully subject to liability. The result has been to accelerate and accentuate tech dominance over other modes of speech"

I always wondered why online communication was so predominant these days – whether it was the speed, or the reach, those "network effects", or something else. Now I know – it was Section 230.

This comment has been deemed insightful by the community.
Samuel Abram (profile) says:

WTF?

Now I know a law degree from Columbia University isn’t worth the paper it’s printed on. I think a bit of the Ivy League élitism got to him, considering that nobody on the Supreme Court–including Trump’s appointees–(except Clarence Thomas) agrees with his fruitcake opinions regarding the constitutionality of §230.

Anonymous Coward says:

For the sake of argument, let’s invent an exaggerated scenario where Social Media Platform X wanted to absolutely destroy Person A’s reputation. If X published any libel, they would of course be liable for it. If, however, they instead deleted all positive words about A and amplified all invented and contrived negative stories about A, they have accomplished the same evil act, but are protected from the consequences.

Is there a solution to this problem, aside from hoping that competitors don’t behave the same way? (Or hoping that competitors will exist at all?)

This comment has been deemed insightful by the community.
Samuel Abram (profile) says:

Re: Re:

If, however, they instead deleted all positive words about A and amplified all invented and contrived negative stories about A, they have accomplished the same evil act, but are protected from the consequences.

I would assume that deleting all positive words about Person A and amplifying defamatory stories about them would mean that the Platform X has agency, and is not acting in good faith. But I’m not a lawyer, so my word is meaningless.

Anonymous Coward says:

Re: Re: Good Faith

I agree they would not be acting in good faith. I guess my issue is the blurry line between necessary content moderation and less-than-necessary editorializing the availability of content as a traditional publisher would. Currently, SM platforms are protected regardless of how they moderate (which is fine, I’m all about property rights). I think the concern comes when they moderate in a defamatory way (as in my example above) but cannot be held liable since they aren’t considered "publishers" despite taking editorial privileges. How ought such a malicious act be addressed, or even diagnosed?

This comment has been deemed insightful by the community.
Samuel Abram (profile) says:

Re: Re: Re: Good Faith

How ought such a malicious act be addressed, or even diagnosed?

I think that’s up for the courts to decide. The protections provided by §230 aren’t absolute and in rare cases, they don’t apply, such as the Roommates case. I’m sure Mike Masnick (and especially Ken "Popehat" White) could be more informative.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re:

deleted all positive words about A and amplified all invented and contrived negative stories about A

This is what traditional "publishers" and "broadcasters" do … All. The. Time.

The solution there is … more venues for speech, so that people who are interested in hearing another perspective can … and people who would be offended by another perspective don’t have to be.

Any of the large social media sites carries a wider range of perspectives than any old-media site does. So tech is not introducing a new problem unknown in the old-media world; it is building a solution (granted, an imperfect one, as all human constructs are) to the problem of the Wall Street Journal. Naturally, people who consciously benefited from the WSJ problem will be outraged, in the wallet, by any attempt to solve it.

Anonymous Coward says:

Re: Re: Solution is...

This is what traditional "publishers" and "broadcasters" do … All. The. Time.

Not entirely accurate. Social Media can amplify false or libelous information if they want to, and receive no penalty for doing so because they aren’t technically "publishers", right? But if a traditional publisher printed something outright libelous, they can be held accountable, obviously, for libel.

If you publish that person A is a space alien, you can be held legally accountable. But if you merely amplify a thousand faceless individuals who claim A is an alien, then the public opinion is still tarnished, but Social Media gets to skip all the responsibility.

Am I missing something? This is my picture of how things currently work, if I am wrong please help me understand.

Samuel Abram (profile) says:

Re: Re: Re: Solution is...

If you publish that person A is a space alien, you can be held legally accountable.

I think their libel suit would be dismissed for the same reason Jerry Falwell’s suit against Hustler was dismissed: it’s so absurd nobody could believe it.

Then again, considering what the morons on January 6 believed…

Anonymous Coward says:

Re: Re: Re:2 Solution is...

it’s so absurd nobody could believe it

I was exaggerating to make the point: A social media platform can cause just as much damage to reputation as outright Libel, but face 0% of the consequences. All they have to do is amplify the libelous voices out there and tell the courts "but I didn’t say it, AnonDoe123 said it! Section 230!".

Maybe I’m missing something (please help me if I am), but I kind of think there IS a problem here and our legislature needs to create a better framework than blanket immunity for all editorialized content. If you’re a platform, great, how do we draw the line between content moderation and malicious editorializing? If you’re a publisher, great, how do we protect you from the pitfalls of content-moderation-at-scale without granting blanket immunity?

This comment has been deemed insightful by the community.
Samuel Abram (profile) says:

Re: Re: Re:3 Solution is...

If you’re a publisher, great, how do we protect you from the pitfalls of content-moderation-at-scale without granting blanket immunity?

§230 doesn’t provide blanket immunity; e.g.:
-the roommates case
-IP law
-federal criminal law
-SESTA/FOSTA

And that’s off the top of my head! There are more if I cared to look…

Anonymous Coward says:

Re: Re: Re:4 Solution is...

Thanks, I’ll check it out. I’m not familiar with The Roommates case (I’ll look it up), but for the rest I was referring mostly to the particular issue of Libel and other forms of defamation. IP Law and Criminal Law I agree are pretty clear-cut exceptions, but do you think (pending my understanding of the roommates case) that SM should be immune to prosecution for promoting defamatory content? (or are they? I’m still learning the details)

Anonymous Coward says:

Re: Re: Re:5 Solution is...

IANAL, but I think if you could prove in court that the platform was making a concerted effort to remove positive commentary and amplify negative commentary, there would (or should) be liability in some form placed on the platform. Perhaps not for the comments themselves, for which 230 would apply, but for the actual malice present in the platform’s acts. False light laws, perhaps?

Anonymous Coward says:

Re: Re: Re:6 Prove in court

but I think if you could prove in court

Here is the crux, I think. How would you prove this, since moderation criteria and methods are typically proprietary information? Would some kind of transparency law be beneficial? Or could we perhaps make a standard for how "systemic discrimination" against an individual could be proven?

It isn’t a public square, but Social Media controls enough of public opinion (as Russia very kindly showed us) that we really should take close consideration of how to regulate their power over opinion through gratuitous use of moderation. It doesn’t happen horribly often, but enough that legislators should be taking notes.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re: Re:7 Prove in court

controls enough of public opinion (as Russia very kindly showed us) that we really should take close consideration of how to regulate their power over opinion through gratuitous use of moderation

It was pointed out somewhere else, but mainstream media does this too. Any outlet can "moderate" which stories to broadcast/publish, and which to ignore. How do you regulate any media outlet’s ability to manipulate public opinion without running straight into the First Amendment?

This comment has been flagged by the community.

Anonymous Coward says:

Re: Re: Re:7 Prove in court

but Social Media controls enough of public opinion (as Russia very kindly showed us)

First off, Russia merely showed us how easy it is to game Facebook, Twitter and social media, because humans are easily manipulated creatures, and all the news outlets know it.

From content farms to actually paying Facebook money to boost Russian psyops, their actions simply showed us how easy it is to sow fear, uncertainty and doubt. On a budget, I might add.

we really should take close consideration of how to regulate their power over opinion through gratuitous use of moderation. It doesn’t happen horribly often, but enough that legislators should be taking notes.

Are you confusing the US with actual authoritarian countries like Singapore AND China? Because what you just said and suggested is done often enough in these places that it stopped being funny.

Maybe you should move to one of those places, since you clearly want to be a jackbooted thug.

This comment has been deemed insightful by the community.
Scary Devil Monastery (profile) says:

Re: Re: Re:3 Solution is...

"All they have to do is amplify the libelous voices out there and tell the courts "but I didn’t say it, AnonDoe123 said it! Section 230!". "

…and then it first of all boils down to whether it’s protected speech – like an opinion or a factual account. It can only be libel if it’s a credible and factually false claim. Only then can you even successfully drag a libel or defamation case through court.

The reason publishers get hit by liability is because they write their own stories from whole cloth – they are the acting party writing the allegations.

The reason social platforms get off scot-free is because they allow anyone to post and then moderate posted comments against their own Terms Of Service every poster has agreed to adhere to.

"how do we protect you from the pitfalls of content-moderation-at-scale without granting blanket immunity?"

You can’t, because any attempt to do so is also the old question; "How do we prevent undesirable speech?"

At scale we either allow people to speak freely in private and public gatherings…or we fire two torpedoes amidships in whatever free speech provision our respective national charters provide. The fact that any attempt to curb online speech will violate every free speech provision in any national charter in the free world is telling.

This comment has been deemed insightful by the community.
This comment has been deemed funny by the community.
Nick-B says:

Re: Re:

If, however, they instead deleted all positive words about A and amplified all invented and contrived negative stories about A, they have accomplished the same evil act, but are protected from the consequences.

Ah, the Fox News effect.

This comment has been flagged by the community.

Koby (profile) says:

No 1st Amendment Right To Censor

So Hamburger’s new response, written with former Trump DOJ official Clare Morell, tries to argue that the 1st Amendment doesn’t actually protect website content moderation choices.

Prior to the CDA of 1996 that established section 230, there was no established first amendment right to moderate content. Aside from reading the first amendment and finding no such language, there were lawsuits that occurred prior to 1996. Among the most notable was Stratton Oakmont v Prodigy, in which Prodigy was found liable for defamation because it engaged in content moderation. Prior to that was Cubby v Compuserve, in which Compuserve escaped from liability because it did not engage in moderation. No first amendment right could be used in these cases.

-Getting censored proves that your opinion is the strongest.

This comment has been deemed insightful by the community.
Bloof (profile) says:

Re: Re: No 1st Amendment Right To Censor

‘Getting censored proves that your opinion is the strongest.’ is a weird thing to throw around if you look at it. He’s openly admitting conservative hate-laced claptrap is just opinion, not fact, yet is for some reason proud of how strongly conservatives cling to their ignorance.

People telling you to shut up when you say stupid things doesn’t make the stupidity correct, and your refusal to even consider why reflects badly on you, not them.

This comment has been flagged by the community.

This comment has been deemed insightful by the community.
Mike Masnick (profile) says:

Re: Re: Re: No 1st Amendment Right To Censor

There was no internet in 1959. Smith ran a bookstore. No moderation was occurring.

Koby: a bookseller choosing what books to stock and not to stock is moderating. And it is protected by the 1st Amendment. Because editorial choices are protected by the 1st Amendment.

-Getting censored proves that your opinion is the strongest

Why do you keep posting this bizarre support of child porn, spam, and ISIS?

This comment has been flagged by the community.

Koby (profile) says:

Re: Re: Re:2 No 1st Amendment Right To Censor

a bookseller choosing what books to stock and not to stock is moderating.

If you aren’t even aware of the speech in question, you cannot moderate it one way or another.

Why do you keep posting this bizarre support of child porn, spam, and ISIS?

Just like the section 230 exception, you are not acting in good faith. Child porn, repetition, and death threats are not opinions.

-Getting censored proves that your opinion is the strongest.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re: Re:3 No 1st Amendment Right To Censor

Child porn, repetition, and death threats are not opinions.

Then the repetition of "The election was stolen" and "Getting censored proves that your opinion is the strongest" means they aren’t opinions either. Thanks for clearing that up.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re: Re:3 No 1st Amendment Right To Censor

Just like the section 230 exception, you are not acting in good faith. Child porn, repetition, and death threats are not opinions.

But racist, xenophobic, homophobic, misogynistic, anti-semitic, Nazi-sympathizers, and plain ol’ assholery are all 1A protected opinions. Why should social media, or anybody for that matter, be forced to host those opinions if they do not want to be associated with them?

And of those racist, xenophobic, homophobic, misogynistic, anti-semitic, Nazi-sympathizers, and plain ol’ assholery opinions, why do you think that they are the strongest opinions if they get moderated?

Also, you still haven’t answered if you think "Hang Mike Pence" was a true death threat or political opinion. Why is that?

This comment has been deemed insightful by the community.
Scary Devil Monastery (profile) says:

Re: Re: Re:5 No 1st Amendment Right To Censor

"It should not be too hard to induce if he thinks "Hang Mike Pence" isn’t a death threat or not."

Considering that the mob bludgeoned an officer to death in the capitol entrance we might as well realize that Koby is willing to explain away murder.

As long as it’s done by white people, at least.

Anonymous Coward says:

Re: Re: Re:3 No 1st Amendment Right To Censor

-Getting censored proves that your opinion is the strongest.

Whatever helps you cope.
They’re just all out to get you – we get it.

If they can kick you off social media, what’s next?

It’s almost as if, for some strange reason, people like me just don’t want to be around people like you. I get it. You want to be included so you can free speech your fucking head off. And you don’t want to waste all that free speeching just telling each other whatever you think is important.

After all, if an asshole free speeches and there’s no one to listen but other assholes, how can you ever be truly free?

This comment has been deemed insightful by the community.
Mike Masnick (profile) says:

Re: Re: Re:3 No 1st Amendment Right To Censor

If you aren’t even aware of the speech in question, you cannot moderate it one way or another.

A bookseller CHOOSES what books to stock, Koby. That they weren’t aware that the book was deemed obscene is a different issue. The Supreme Court still found that a bookseller has the right to moderate their own property. Your claim that the 1st Amendment did not apply to moderation is blatantly, stupidly, false.

You come off as a foolish dupe, Koby.

Just like the section 230 exception, you are not acting in good faith.

Section 230 does not require good faith. And I am speaking to you in good faith. I have tried for however long you’ve been on the site to engage you in good faith, and you lie, dissemble, avoid, and then disappear. I’m trying to get through to you that you are an ignorant fool.

Child porn, repetition, and death threats are not opinions.

You said getting censored proves your opinion is the strongest (which you repeat incessantly). By your own ridiculous standards, your statement is not opinion, since you keep repeating it.

I also see you’ve now given me permission to delete your repetition (not that I needed it, but thanks anyway).

In the meantime, since you are finally responding to my questions: why do you hate private property?

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: No 1st Amendment Right To Censor

Getting censored proves that your opinion is the strongest.

So every time somebody advocates that Hitler was right and that the white race is the master race, and that post gets moderated, you think it’s the strongest opinion?

Or when someone calls black people the n-word and it gets moderated, that’s the strongest opinion?

Or posts that call for death to all Jews (or Muslims, for that matter) that get moderated, those have the strongest opinions?

That’s some pretty fucked up shit there.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re: No 1st Amendment Right To Censor

Getting censored proves that your opinion is the strongest.

Judge Orders Laura Loomer and Her Company to Pay More Than $120,000 in Attorneys’ Fees Over Lawsuit Against Muslim Rights Group

After she was permanently suspended from Twitter in 2018 due to years of anti-Muslim posts.

And this is the kind of person that you support with that statement. You are a real piece of garbage, and you appear to be proud of that.

This comment has been deemed insightful by the community.
Thad (profile) says:

But if the Wall Street Journal really believes this nonsense, then why won’t it let me publish my op-ed in their pages about how the WSJ is the worst newspaper ever, and regularly prints lies and nonsense to please its scheming owner in his hatred of the internet?

I mean LBR it’s not like the WSJ editorial pages were a bastion of sanity before Murdoch.

Anonymous Coward says:

"public privileging"

"privilege" is, of course, derived from a latin phrase meaning "PRIVATE law"–that is, oen that allows some people rights that other people do not have. Targeted anti-discrimination laws (which allow some specified people legal rights what are denied to others) are one example; another is the Australian "link-to-news tax" designed to benefit the owner of the WSJ at the expense of, well, the public.

It is obviously absurd to regard the right needed by everyone who allows comments on his blog, as in any way "private". The WSJ feedback page (if and when they bother to create one), MySpace and new competitors (like Facebook), Altavista and new competitors (like Google), Techdirt–all equally share in the constitutional rights of free speech, and in various federal/state statutes discouraging frivolous lawsuits against their exercise of their first amendment rights.

But — "public privilege"–what could that even mean? How would you draw a picture of it, to add to the dictionary definition of "oxymoron"?

jimb (profile) says:

About that name...

Poor Prof. Hamburger. He should have picked different parents, to get a different name… unfortunately it appears that poor Prof. Hamburger has ‘ground beef’ for brains. My theory is that the Trump lackey now flacking for some un-named DC think tank (two guesses which ideology they support… not that two are needed) wrote the article, and poor Prof. Hamburger is along for, or being taken for, a ride.

This comment has been flagged by the community. Click here to show it.

carl636 says:

Mr. Masnick needs to read 1984 again

Do you advocate for a new Minitrue and respect the new goodthink?

Maybe all the people who disagree with the current political correctness / "wokeness" need to go silent … like before the "contract with America" and the President Trump election.

Then the current political leadership will claim voter fraud or suppression when the suppression is actually from advocates like you.

Mike Masnick (profile) says:

Re: Mr. Masnick needs to read 1984 again

lol wut?

Literally nothing in this comment responds to the points raised in the article.

Maybe all the people who disagree with the current political correctness / "wokeness" need to go silent … like before the "contract with America" and the President Trump election.

No one has said that. And that has nothing at all to do with the issues being debated here.

Go back and learn something before you shit on my site again.

This comment has been flagged by the community. Click here to show it.

carl636 says:

Re: Re: Mr. Masnick needs to read 1984 again

The topic above was does the first amendment apply to "private" Social media.

The left and various ABC government agencies tell Social media that something or someone is posting misinformation and the "private" social media company moderates the informational post or individual out of existence. (1984/Minitrue Ministry of Truth)

Then according to your position; there does not exist any "public forum" Social media where the 1st amendment applies.

How would you feel if some company moderated your hosting service or internet connection out of existence because you posted misinformation?

This comment has been deemed insightful by the community.
Mike Masnick (profile) says:

Re: Re: Re: Mr. Masnick needs to read 1984 again

The topic above was does the first amendment apply to "private" Social media.

And the answer is no. And your original comment didn’t touch on any of that. It just spewed nonsense.

The left and various ABC government agencies tell Social media that something or someone is posting misinformation and the "private" social media company moderates the informational post or individual out of existence. (1984/Minitrue Ministry of Truth)

Which is not how any of this actually works in practice. It’s made up by conspiracy theory morons. Don’t follow morons, Carl.

Then according to your position; there does not exist any "public forum" Social media where the 1st amendment applies.

I have made clear in the past, that IF a gov’t official orders content taken down, then that gov’t official violates the 1st Amendment. I’ve written extensively about that. THIS is not about that. THIS is about a weird claim that social media is magically a gov’t actor even for choices it makes individually.

How would you feel if some company moderated your hosting service or internet connection out of existence because you posted misinformation?

I’ve had stuff moderated before (this site is full of examples). If I break someone’s rules, then I live with it. If the government did it to me, then it’s a 1st Amendment violation and I know how to fight for my rights. But if a private company asks me to go elsewhere, then I go elsewhere.

Scary Devil Monastery (profile) says:

Re: Mr. Masnick needs to read 1984 again

"Do you advocate for a new Minitrue and respect the new goodthink?"

Says the man who in earlier comments tried to feed us the narrative that because his wife had fled Cuban censorship the only right thing to do would be to have the US government impose similar censorship on private entities.

One of two things is proven: your lack of good faith in this debate, or your lack of the cognitive logic required to partake in it. And as per usual, the only thing you have to argue with is "what ifs" which no sane person would credit at this point.


This comment has been deemed insightful by the community.
sumgai (profile) says:

Rather than make everyone go back up and re-read the comment, I’ll repeat it here, with some emphasis for my point:

For the sake of argument, let’s invent an exaggerated scenario where Social Media Platform X wanted to absolutely destroy Person A’s reputation. If X published any libel, they would of course be liable for it. If, however, they instead deleted all positive words about A and amplified all invented and contrived negative stories about A, they have accomplished the same evil act, but are protected from the consequences.

Is there a solution to this problem, aside from hoping that competitors don’t behave the same way? (Or hoping that competitors will exist at all?)

Sorry to disappoint you (and just about everyone who responded) but there is no problem here. Why not? Because there was no evil act. You might think there was one, but it should go without saying that not everyone agrees with you on this particular definition of evil.

Let’s see if we can defuse your misguidance. Sit back and toke another bowlful, we’ll get to the bottom of this together.

The exact aim of a privately held source of news (i.e., not one that is owned or sponsored by the government) is to spread the opinions of the ownership to other people. While altruism usually plays a small part in the "why do this", the major reasons are either ego or profit… and usually both. If you can provide another reason that stands the test of logic, then please do so.

That said, you have singularly failed to lay the blame for your example’s perceived problem where it belongs: at the feet of the actual party doing the disparagement, not at the "carrier" of the statement. In fact, what you propose would make every letter carrier liable for any mailing perceived by the recipient as a threat. Since you posed an extreme example, I have done so as well.

Stating that the media platform "amplified" the disparagement is a non-starter. Amplification, in this case, means nothing more than providing a platform (a "soapbox" if you will) whereby it may be safe to assume that a great number of people are nearby, and likely will give your mutterings some measure of attention.

As with print media, platforms come in all sizes, and have all manner of proclivities as to whom they may wish to associate with. Even if they were to do everything by human hand (meaning, no AI), they are still doing nothing more than expressing an opinion by virtue of associating with the original speaker, something that’s even more important in the First Amendment than the part about free speech. And yes, just as speech can have consequences, so can association with others. For starters, witness the January 6th participants.

tl;dr:
Expecting a platform to be neutral, or worse, to be mandated by law to be so, is nothing more than a sad exposure of naivety.

Endgame: stop trying to salve your "feelz" by attacking the nearest/largest pockets you can find. If you can’t find the proper party to sue for relief, then suck it up, pal. Life will go on, trust me.

Anonymous Coward says:

Re: Re:

In fairness, if the OP really meant words and not posts, he might actually have a point. Editing individual words out of posts in order to change the meaning of the overall post could, I think, make the editor liable for their edits.

Fortunately, no one will probably ever test that. I hope.

This comment has been deemed insightful by the community.
Mike Masnick (profile) says:

Re: Re: Re:

In fairness, if the OP really meant words and not posts, he might actually have a point. Editing individual words out of posts in order to change the meaning of the overall post could, I think, make the editor liable for their edits.

It would. Section 230 says that you become an information content provider (an ICP) if you "in whole or in part" contributed to the development of the content. Removing words in a way that changes the meaning would clearly qualify, meaning 230 does not protect that scenario at all.

John Thacker says:

Yes, it’s pretty much the "you didn’t build that" argument combined with the same sort of net neutrality / Title II / common carrier arguments that (I believe) were weak for ISPs but are certainly weaker here. (If they do apply to ISPs, they could, I suppose, apply to certain other infrastructure services like DNS, generic hosting, or Cloudflare-type services, but not social media.)

This comment has been flagged by the community. Click here to show it.

restless94110 (profile) says:

WSJ

Tech Dirt has sunk to new lows.

Now, the anti-Free Speech writers at Tech Dirt are claiming their good buddy (/sarc) the Wall Street Journal is the authority on Free Speech issues, especially in regard to the now-openly-fascist social media corps.

Wow. Please do tell… what else is the WSJ, your newest BFF, right about? Surely you will now need to embrace all of their policy stances like your lost twin brother, separated in the Cave of the Clan Bear by the wolves who raised you.

Certainly the God-given right to free speech given to all humans did not reach your species yet. I hear that old copies of the WSJ burn well when there’s nothing else to keep warm by.

Good luck!
