Republicans Blame CDA 230 For Letting Platforms Censor Too Much; Democrats Blame CDA 230 For Platforms Not Censoring Enough

from the which-is-it? dept

It certainly appears that politicians on both sides of the political aisle have decided that if they can agree on one thing, it’s that social media companies are bad, and that they’re bad because of Section 230, and that needs to change. The problem, of course, is that beyond that point of agreement, they actually disagree entirely on the reasons why. On the Republican side, you have people like Rep. Louie Gohmert and Senator Ted Cruz, who are upset that Section 230’s protections allow platforms to moderate content that those platforms find objectionable. Cruz and Gohmert want to amend CDA 230 to say that’s not allowed.

Meanwhile, on the Democratic side, we’ve seen Nancy Pelosi attack CDA 230, incorrectly saying that it’s somehow a “gift” to the tech industry because it allows them not to moderate content. Pelosi’s big complaint is that the platforms aren’t censoring enough, and she blames 230 for that, while the Republicans are saying the platforms are censoring too much — and incredibly, both are saying this is the fault of CDA 230.

Now another powerful Democrat, Rep. Frank Pallone, the chair of the House Energy and Commerce Committee (which has some level of “oversight” over the internet), has sided with Pelosi in attacking CDA 230, arguing that companies are using it “as a shield” to avoid removing things like the doctored video of Pelosi.

But, of course, the contrasting (and contradictory) positions of these grandstanding politicians on both sides of the aisle should — by itself — demonstrate why mucking with Section 230 is so dangerous. The whole point and value of Section 230 was in how it crafted the incentive structure. Again, it’s important to read both parts of part (c) of Section 230, because the two elements work together to deal with both of the issues described above.

(c) Protection for “Good Samaritan” blocking and screening of offensive material

(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

(2) Civil liability

No provider or user of an interactive computer service shall be held liable on account of—

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

It’s these two elements together that make Section 230 so powerful. The first says that we don’t blame the platform for any of the actions/content posted by users. This should be fairly straightforward. It’s about the proper application of liability to the party who actually violated the law, and not the tools and services they used to violate the law. Some people want to change this, but much of that push is coming from lawyers who just want the bigger pockets to sue. It involves what I’ve referred to as “Steve Dallas lawsuits,” after the character in the classic comic strip Bloom County who explains why you should always focus on suing those with the deepest pockets, no matter how tangential they are to the actual lawbreaking.

But part (2) of the law is also important. It’s the part that actually allows platforms the ability to moderate. Section 230 was an explicit response to the ruling in Stratton Oakmont v. Prodigy, in which a NY state judge ruled that because Prodigy wanted to provide a “family friendly” service, and moderated out content it found objectionable in support of that goal, it automatically became liable for any of the content that was left up. But, of course, that’s crazy. The end result of such a rule would be one of two extremes: either platforms wouldn’t do anything to moderate content — meaning everything would be a total free-for-all, you couldn’t have a “family friendly” forum at all, and everything would quickly fill up with spam/porn/harassment/abuse/etc. — or platforms would restrict almost everything, creating a totally anodyne and boring existence.

The genius of Section 230 is that it enabled a balance that allowed for experimentation, including the ability to experiment with different forms of moderation. Everyone focuses on Facebook, YouTube and Twitter — which all take moderately different approaches — but having Section 230 is also what allowed for the radically different approaches taken by other sites, like Wikipedia and Reddit (and even us at Techdirt). These sites use very different approaches, some of which work better than others, and much of which is community-dependent. It’s that experimentation that is good.

But the very fact that both sides of the political aisle seem to be attacking CDA 230 but for completely opposite reasons really should highlight why messing with CDA 230 would be such a disaster. If Congress moves the law in the direction that Gohmert/Cruz want, then you’d likely get many fewer platforms, and some would just be overrun by messes, while others would be locked down and barely usable. If Congress moves the law in the direction that Pelosi/Pallone seem to want, then you would end up with effectively the same result: much greater censorship as companies try to avoid liability.

Neither solution is a good one, and neither would truly satisfy the critics in the first place. That’s part of the reason why this debate is so silly. Everyone’s mad at these platforms for how they moderate, but what they’re really mad at is humanity. Sometimes people say mean and awful things. Or they spread disinformation. Or defamation. And those are real concerns. But there need to be better ways of dealing with it than Congress stepping in (against the restriction put on it by the 1st Amendment), and saying that the internet platforms themselves either must police humanity… or need to stop policing humanity altogether. Neither is a solution to the problems of humanity.



Comments on “Republicans Blame CDA 230 For Letting Platforms Censor Too Much; Democrats Blame CDA 230 For Platforms Not Censoring Enough”

Anonymous Anonymous Coward (profile) says:

Seriously though

Wouldn’t either move be a 1st Amendment violation? After all, "Congress shall make no law" is not just prominent; those are the first words of the Amendment. Either proposal would be engaging in ‘prohibiting the free exercise’. If platforms do it, it is not the government. If the government tells platforms what to do, it is the government.

"U.S. Constitution – Amendment 1"

"Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances."

Thad (profile) says:

Re: Seriously though

I don’t know what amending 230 would do, but repealing it entirely would simply move the status quo back to what it was before the CDA was passed — viz., the Stratton Oakmont v. Prodigy decision that if platforms moderate content, they’re liable for whatever content they don’t remove.

This would be disastrous, for reasons Techdirt has covered repeatedly and at length. There would be legal challenges, there would be lobbying, there would be an awful lot of frivolous suits, and most US sites would either shut down comments entirely or not moderate them at all (including spam filters).

As for constitutional challenges? Maybe. The Prodigy case wasn’t appealed to the Supreme Court, so there’s always the possibility that a new challenge could make it up to SCOTUS and the precedent could be reversed. But that would take years.

Thad (profile) says:

Re: Re: Re: Seriously though

Did you quit posting as John Smith just to get around my filter? I know that’s why Blue started changing his name.

On the one hand, there’s something weirdly flattering about that. On the other, that’s some kinda creepy stalker shit, Johnny.

But then, when has "that’s some kinda creepy stalker shit" ever stopped you before?

Scary Devil Monastery (profile) says:

Re: Re: Re:3 Seriously though

"All we need is for Jhon to say "copyright terms prevent publishers from murdering authors" and I’ll have finished my bingo sheet!"

You must have been absent when he made that claim.

Although I’m pretty sure that was the claim made by his nickname "Bobmail" at torrentfreak, quite some time back.

Anonymous Coward says:

Re: Re: Re:2 Seriously though

You’re talking about the guy whose response to Masnick upon Shiva Ayyadurai failing to destroy this site was this:

"Your ugly POS wife is a better laugh. Your shit stain children even better. You backed down like the little pussy you are. The one who can’t get top-shelf women."

Stalker is Jhon Herrick Smith’s middle-middle name!

Scary Devil Monastery (profile) says:

Re: Re: Seriously though

"The first amendment isn’t an absolute right. If it were, then child porn wouldn’t be illegal."

The first amendment is certainly absolute until someone sees fit to rewrite it.

There are plenty of exceptions to free speech, all of which have in common that it has taken one or more supreme court decisions to formulate them. Until such rulings exist, however, the first amendment is indeed an unassailable absolute right.

Glenn Wright says:

Re: Seriously though

Not necessarily. CDA 230 specifically protects companies from the consequences of speech that’s not protected by the First Amendment, like threats and libel. For example, true threats are not protected by the First Amendment; if someone sends you a death threat on Facebook, CDA 230 makes it so you can sue the person who sent it, but you can’t sue Facebook.

That said, removing CDA 230 would seriously jeopardize the ability of social media platforms to exist, and I wouldn’t be surprised if the Supreme Court stepped in and ruled that they still don’t count as publishers.

Mason Wheeler (profile) says:

Either proposal would be engaging in ‘prohibiting the free exercise’.

Umm… huh? The only place that those words appear in the First Amendment, it’s immediately followed by the word "thereof", making it clear that it refers to the thing that was discussed immediately prior, which is religion, not speech.

(You’re not wrong about this being a likely violation of the First Amendment; only about how it applies here.)

The Real Dick Ottomy says:

False Dichotomy. Too little moderation AND target conservatives

This is Masnick’s usual attempt to position Un-Constitutional Section 230 as "opposed by both, therefore must be good".

But in fact, both complaints are true.

Masnick also wants Section 230 to provide corporations with absolute immunity AND government-conferred authority to control all speech.

That’s just his wishes for corporations. It’s not the law.

"Good Samaritarans" must be GOOD. Inarguable. It’s right there, black letter law.

Masnick’s duplicity on this is shown by the fact that, when arguing with me, he simply DELETED the "in good faith" requirement! — And then blew it off as not important:

https://www.techdirt.com/articles/20190201/00025041506/us-newspapers-now-salivating-over-bringing-google-snippet-tax-stateside.shtml#c530

Now, WHERE did Masnick get that exact text other than by himself manually deleting characters? — Go ahead. Search teh internets with his precious Google to find that exact phrase. I’ll wait. … It appears nowhere else, which means that Masnick deliberately falsified the very law under discussion. Probably because he’s trying to keep me from pointing out that for Section 230 to be a valid defense of hosts, they must act "in good faith" (to The Public), NOT as partisans discriminating against those they believe are foes.

The Real Dick Ottomy says:

Re: False Dichotomy. Too little moderation AND target conservati

Forgot to point out that mere statute CANNOT empower any entity to violate Constitutional Rights. Section 230 is therefore null and void. — Yes, no matter how often used in cases to get immunity (usually rightly), it STILL cannot empower corporations on the "material is constitutionally protected" point.

That’s the actual crux of argument. Masnick tries to buttress the censorship with non-controversial parts. — Because there’s BIG money in being able to control all speech. If corporations are able to shunt opposition into tiny outlets, they automatically win.

Scary Devil Monastery (profile) says:

Re: False Dichotomy. Too little moderation AND target conservati

"This is Masnick’s usual attempt to position Un-Constitutional Section 230 as "opposed by both, therefore must be good"."

By "un-constitutional" you mean as in "protects the constitutional rights of both comenters and platform owners"?

Section 230 is nothing other than the online equivalent of the right a home owner has, in the real world, to decide for themselves whether a visitor gets to shout their opinions while standing in said home owner’s living room.

As usual, Baghdad Bob, you conflate the United States Constitution with the ruleset adopted by Borat the Dictator.

Anonymous Coward says:

Also in 230:

(3)
The Internet and other interactive computer services offer a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity.

Was that a hint of the FCC’s abolished Fairness Doctrine? While all the big platforms certainly started out with minimal censorship, the vise keeps tightening ever so slowly. Will Conservatives and other Wrongthinkers be boiled alive like the proverbial frog, or will an online equivalent of Fox News emerge that splits social media in much the same way as cable news, along political, cultural, and ideological lines?

Thad (profile) says:

Re: Re:

Was that a hint of the FCC’s abolished Fairness Doctrine?

I read it as more of a riff on the "marketplace of ideas".

Remember, Section 230 was a direct reaction to Stratton Oakmont v Prodigy, a decision which held that because Prodigy moderated content, it was legally liable for content it didn’t remove.

(I was a Prodigy kid. I can assure you that Prodigy moderated content aggressively.)

230 was explicitly built on the premise that platforms can moderate content as they see fit.

Anonymous Coward says:

Re: Re: Re:2 Re:

Why should a platform be forced by law to host any kind of content?

If they are a state actor who should be treated as a common carrier, they should be.

I support a law that treats federal contractors as common carriers, meaning if they want to censor, don’t do it while playing with federal money.

Stephen T. Stone (profile) says:

Re: Re: Re:3

If they are a state actor who should be treated as a common carrier, they should be.

Please explain the reasoning behind your belief that any website that hosts user-generated content should be classified as a government-controlled “common carrier” and thus forced to carry any content it otherwise would not host.

FurAffinity, a site wholly dedicated to UGC, chooses not to host certain kinds of artwork. For what reason should the government have a right to make FurAffinity do otherwise?

Anonymous Coward says:

Re: Re: Re:2 Re:

The terms of service are biased and subjectively enforced.

The public tolerates this by NOT boycotting companies who sponsor it, however, so the marketplace has spoken.

If the public demanded that USENET rules apply or they won’t buy anything advertised on the site, that’s what we’d have, or USENET itself would still be populated more heavily.

Anonymous Coward says:

Re: Re: Re:5 Re:

There is no law, not even 230, that requires fairness in protected speech. Moderation has been recognized as a form of speech. So long as that moderation isn’t performed by a state actor there is no violation of any law, rule, restriction or anything else. The government can’t even intervene here and kill 230, a rule protecting free speech, without running afoul of the constitution.

The world is unfair. You’ll get used to it eventually. This is particularly funny since it was right wingers who first called left-wingers "snowflakes" but look how the right wing melts when their "unique and beautiful" views aren’t well accepted.

Anonymous Coward says:

Re: Re: Re: Re:

The presumption was that such moderation would be politically neutral

Citation needed.

However, should I be similarly questioned for a citation, I offer this transcript of the Congressional Record when the amendment was read and several speakers commented – and none of them spoke of a political neutrality requirement for the immunity conferred upon service providers. The relevant section starts with "amendment offered by mr. cox of California"

Anonymous Coward says:

Re: Re: Re:3 Re:

They mentioned "good faith" moderation.

Yes, they did. But that’s got nothing to do with political neutrality. The commenters were quite clear that they didn’t want providers to do nothing, but they also didn’t want them to face liability if they did something. There’s also a few bits in there about how there really isn’t a definition for what should or shouldn’t be removed, and it’s probably not a good idea for the government to create a definition.

Now, if you want to argue that a particular platform’s moderation is not being done in good faith, you could certainly do that. I’ve not heard of any CDA 230-related cases that make that argument, so it might be an interesting angle to take, but it would probably be quite hard to convince a judge that it was actually happening.

the law doesn’t have to require neutrality for Congress to impose the condition

The law (CDA230) doesn’t have to require neutrality for Congress to impose the condition (a neutrality requirement) on a law (CDA 230)….?

Um, what?

circumvents two centuries of precedent in this country, and runs counter to all other countries.

So? Just because it was done that way before, or is done that way elsewhere, doesn’t mean it’s the right way to do it.

Stephen T. Stone (profile) says:

Re: Re: Re:3

the law doesn’t have to require neutrality for Congress to impose the condition

Any imposition of neutrality would constitute a breach of the First Amendment. Until the Supreme Court says otherwise, corporations have the right of association — and that includes the right to avoid association with certain people/kinds of speech. Imposing content neutrality on social interaction networks would violate that right.

Anonymous Coward says:

Re: Re: Re:4 Re:

Any imposition of neutrality would constitute a breach of the First Amendment.

You mean like the way we do with the phone company or the USPS violates the First Amendment? Common carrier is unconstitutional now?

Until the Supreme Court says otherwise, corporations have the right of association — and that includes the right to avoid association with certain people/kinds of speech. Imposing content neutrality on social interaction networks would violate that right.

Congress has the right to pass a law which says that internet sites who have UGC and are federal contractors shall be treated as common carriers.

PaulT (profile) says:

Re: Re: Re:3 Re:

How about we stop with the idiotic nicknames and just say that the community can set its own standards for who is acceptable in their particular community? Then, those people negatively affected can go to communities where they are accepted, rather than whine that someone else isn’t forced to play with them.

There’s plenty of places out there that will accept you, but you don’t have the right to use someone else’s property just because they’re more popular than you are.

Anonymous Coward says:

It’s these two elements together that make Section 230 so powerful. The first says that we don’t blame the platform for any of the actions/content posted by users.

The harm inflicted by a platform in amplifying/spreading defamation is separate from the harm inflicted by the user. Every country in the world EXCEPT the US recognizes this, and even the US did with distributor liability.

Section 230 allows people to weaponize search engines, and if IP addresses don’t prove authorship, or people use a "burner" IP that can’t be traced (or are judgment-proof or posting from another country), the target of defamation is defenseless.

I’m sure if someone ever used an untraceable IP address to post reviews of pro-230 lawyers and claim that they sexually abused children or female clients, the lawyers would scream bloody murder and their pro-230 position might change. Of course I’m not recommending anyone DO this but instead just demonstrating the potential for harm.

The other problem is that people believe what they read online, then repeat it in their own words without linking to the original post, and that makes them a publisher, liable for being dumb enough to believe and repeat what they read. Sometimes the defamation is on a questionable site (like a white-supremacist or anti-Semitic site), so they can’t quote it, but by not quoting it they become liable.

Someone who wanted to game the system could easily have defamation about themselves planted online, go around arguing with people, wait for the people they argue with to Google them, then let nature take its course and sue those people for libel once they repeat what they’ve read (they can plant the defamation on a site the pawn wouldn’t want to link to for maximum impact).

Employers who believe defamation and deny someone a job because of it should be sued into bankruptcy.

Section 230 is fatally flawed.

Anonymous Coward says:

Re: Re: Re:2 Re:

And yet, here you are, expressing an idea about “weaponizing” defamation and Section 230 that has not, and will never, become a reality. Sounds like that “ad hom” has more truth to it than you care to admit.

There are entire forums and websites devoted to weaponizing 230 by posting content for the explicit purpose of defaming people and having that defamation turn up when one’s name is searched.

The "don’t date that guy" type of site is one (no, I’ve never been named on one).

Stephen T. Stone (profile) says:

Re: Re: Re:3

There are entire forums and websites devoted to weaponizing 230 by posting content for the explicit purpose of defaming people and having that defamation turn up when one’s name is searched.

The people being (allegedly) defamed can sue over the content. They can have it declared defamatory and ask for its removal via court order. The existence of CDA 230 doesn’t prevent either action.

PaulT (profile) says:

Re: Re: Re:3 Re:

"There are entire forums and websites devoted to weaponizing 230"

There are entire forums devoted to the idea of zombie invasion, that doesn’t mean that it has actually happened.

Why are you always unable to give actual proof of your claims? If it doesn’t happen with any kind of regularity, it doesn’t justify stripping the rights of millions and the employment of thousands as you are demanding.

Wendy Cockcroft (profile) says:

Re: Re: Re:3 Re:

So you don’t think Ripoff Report exploits Section 230 or relies on sites like Google to spread their words?

As one who was defamed on ROR, no. It has the option to moderate or not. If they get sued, they will only remove the words deemed defamatory in a court of law; the rest of the negativity (the parts that are purely opinion) remains up. Section 230 isn’t responsible for this; they are. Search engines don’t spread anything, they simply index content. I blame no one but the troll who posted that content for what was posted there.

In ROR’s defense, they allowed me to post a rebuttal, so every time someone reads the troll post, they can read the rebuttal too.

Gwiz (profile) says:

Re: Re:

Section 230 allows people to weaponize search engines, and if IP addresses don’t prove authorship, or people use a "burner" IP that can’t be traced (or are judgment-proof or posting from another country), the target of defamation is defenseless.

So what? The right of anonymous speech existed prior to the Constitution of the United States and has been consistently upheld by our courts as a First Amendment right. The one difference between the right to anonymity and other 1A rights is the fact that once you give it up (or it’s taken from you), you can never reclaim it. Once again to paraphrase Blackstone: "I’d rather 100 defamation cases go unpunished than have one person’s right to anonymity stripped from them."

The other problem is that people believe what they read online…

Yes, some people are gullible and stupid, but that doesn’t mean I have to give up my rights because of their shortcomings.

Section 230 is fatally flawed.

No, it’s not. It’s because of Section 230 that we are able to have this discussion in the first place; this comment section wouldn’t exist without it.

Personally, I made the decision to keep my personal/professional identity and my online identity separate back in the late ’90s and have never regretted it.

Anonymous Coward says:

Re: Re: Re:

The other problem is that people believe what they read online…
Yes, some people are gullible and stupid, but that doesn’t mean I have to give up my rights because of their shortcomings.

Exactly, and these people are sitting ducks for being manipulated into being sued by those who would use them as pawns.

Some 4chan idiot wants to poison my coworker against me, the coworker grabs the bait, winds up sued, and then blames ME. Pathetic.

Anonymous Coward says:

Re: Re: Re:2 Re:

If these gullible people are being duped into believing something untrue about you by some bad actor, then aren’t they victims of the bad actor as well as you?

Yes they are. People are predisposed to believe the worst about those with whom they disagree. Experienced internet users know how to manipulate this predisposition to turn these people into unwitting pawns.

Blaming you for suing them isn’t that unreasonable – after all, you are choosing to sue them.

Yes, in that situation I would be choosing to defend my rights, and the lawsuit would be caused by the pawn’s willingness to believe something defamatory about someone they don’t like written by someone they never met. In fact, trying to warn them of this will often just empower them to make even more defamatory statements.

Smart people won’t fall into this trap, but not everyone is smart. The trap is not set by the plaintiff, who was simply targeted by those who didn’t like him or her, but was set by instigators who cannot be located but who write very serious-sounding posts designed to induce third parties to grind their axe.

Perhaps if enough people fall into this trap, or the wrong person does, it will be prevented, but we’re not there yet. While it’s not Section 230’s "fault," the law definitely makes it possible, and without 230 it wouldn’t happen, because ISPs wouldn’t let themselves be tricked, though I’d imagine if some admin didn’t like a poster they might go out on a limb and get sued.

Anonymous Coward says:

Re: Re: Re:2 Re:

these people are sitting ducks for being manipulated into being sued by those who would use them as pawns
If you are so sure this plan of yours would work and is obvious to anyone with half a brain, please show us one instance of it working. At all. On any level.

You mean name names and put targets on people’s back. Not necessary.

I did cite a case where "reiterating" content was a key element in proving one was a publisher rather than a distributor, and that should be sufficient.

We know that if someone posts, without attribution, a defamatory statement that they are a publisher and not a distributor.

We also know that there are people who will repeat what they find in Google without bothering to link to the original source, which makes them a publisher.

One need not jump off a building to know that doing so is likely to cause death. The demand for specifics is therefore more indicative of a desire to target the people named.

Anonymous Coward says:

Re: Re: Re:3 Re:

You mean name names and put targets on people’s back. Not necessary.

Yes, it’s truly not. In such cases you can simply blank out the people’s names and offer a court document instead.

Nobody needs to know names. What needs to be known is whether your interpretation of the law exists outside of your fevered, fanciful imagination.

PaulT (profile) says:

Re: Re: Re:3 Re:

"I did cite a case"

Did you? Would you mind linking again, since you refuse to offer people a way to search your previous posts?

"We know that if someone posts, without attribution, a defamatory statement that they are a publisher and not a distributor."

Yes – but the person who posts that would be liable, not the platform they used to post it, and certainly not someone showing them where that platform is.

"We also know that there are people who will repeat what they find in Google without bothering to link to the original source, which makes them publisher."

No, the fact that stupid people exist does not change the nature of a business.

That One Guy (profile) says:

Be careful what you wish for...

The ‘funny’ part of this is that if they do get their way, neither side is going to be happy with the result.

Reinstate a penalty for moderation and sites are likely to go one of two ways: either moderating nothing, which will anger the idiots who think they’re not doing enough, or heavily moderating anything that even might be objectionable, angering the idiots who think that sites are already moderating too much.

In their rush to throw tantrums because social media platforms aren’t doing what they want them to, they’ve completely missed that, even if they win, social media platforms still won’t do what they want them to.

cpt kangarooski says:

I’m inclined to agree with the Democrats here, to an extent. While sites absolutely don’t have to delete content, and should not be compelled to, that doesn’t mean that sites shouldn’t want to. Certainly, were I running a site that allowed user content to be posted, I would be greatly concerned about not providing any assistance to harmful speech: not providing a platform for it or for the people who engage in it, not offering them connections to my users, and not doing business with businesses that did tolerate it. Let malicious users go elsewhere to exercise their right of free speech.

That One Guy (profile) says:

Re: Re:

While sites absolutely don’t have to delete content, and should not be compelled to, that doesn’t mean that sites shouldn’t want to.

If that were as far as it went, most people on TD would likely agree with you (with the discussion then shifting to what should be removed, how it would be done, how to minimize collateral damage…); it’s when ‘should’ shifts to ‘should be required to’ that the problems and objections crop up.

PaulT (profile) says:

Re: Re: Re: Re:

"assuming that’s when the moderation kicks in’

Well, that’s the big problem here – it’s not. They’re not trying to hold platforms responsible for not dealing with reports properly. They’re trying to hold them responsible for anything that ends up on the site. Which means that pretty much any site of any size would need to use some kind of automated filter – even if you can personally deal with the normal level of traffic you get, can you really deal with any potential spikes, or deal with it when you’re asleep?

Which means the end of most sources of user interaction. We’ll be left with a few sites with deep enough pockets to deal with lawsuits (read: the already entrenched giants) and everybody else reduced to a broadcast model.

Anonymous Coward says:

Re: Re: Re:3 Re:

"Which would reduce our ability to interact with each other just because some people can’t behave themselves."

The solution, it seems, is to quit whining about content you don’t like and move on to content you do. People whining about a few bad actors are going to fuck it up for everybody.

It is no different than the stories Mike forces people to read on this site that they had no desire to read and are wondering what the Techdirt angle is. Get over it. Or build a bridge and get under it would seem more appropriate for the whiners.

cpt kangarooski says:

Re: Re: Re: Re:

I suspect that the worst posters are comparatively few in number; a social graph is probably the way to go. Don’t just delete posts, delete posters.

Given the financial resources of the major sites, I’d suggest coordinating with anti-hate groups (SPLC, ADL, etc.) to basically dox the people in question so that they can be excluded en masse, and infiltrate their private boards so that you can avoid having to always be reactive.

cpt kangarooski says:

Re: Re: Re:3 Re:

I don’t mind anonymous or pseudonymous speech, but if someone is abusive, and a platform claims to be serious about not allowing such things, trying nothing and being all out of ideas is not a great plan.

If you’re serious about not providing support to neo-nazis or whomever, you’d better know who they are.

That said, it shouldn’t be mandatory. But effectively shunning the dregs of society is not so far out there that good citizens should be unwilling to do it of their own free will.

cpt kangarooski says:

Re: Re: Re:5 Re:

What I suggest is that the platforms take advantage of section 230 to voluntarily and effectively identify and boot such users from the platforms.

They should not engage in harassment or publicly doxxing the users. But I am not averse to them comparing notes or seeking assistance from above-board groups who are apt to be better at connecting the dots and staying on top of trends, and who are, within certain boundaries that would need to be understood, unlikely to themselves be penetrated or corrupted.

I admit, it is kind of like Red Channels except for assholes, and this gives me pause, but I think people can agree that this is a more serious problem that does not seem to have good solutions. The Hollywood Ten were not running people down with cars, shooting people, spreading communicable diseases because they refused to get vaccinated, etc.

It’s not a panacea, and it shouldn’t be the only thing that is done, but I think platforms have a social, though not a legal, responsibility to keep their platforms from being used maliciously, and that they should do something effective to accomplish this.

Given how easily any existing measures have been circumvented, it’s time to take it up a notch. But if you have a suggestion that goes beyond what’s being done now, please make it.

Stephen T. Stone (profile) says:

Re: Re: Re:6 Re:

What I suggest is that the platforms take advantage of section 230 to voluntarily and effectively identify and boot such users from the platforms.

No, what you said was:

Given the financial resources of the major sites, I’d suggest coordinating with anti-hate groups (SPLC, ADL, etc.) to basically dox the people in question so that they can be excluded en masse, and infiltrate their private boards so that you can avoid having to always be reactive.

That isn’t “tak[ing] advantage of section 230”, that is outright authoritarian bullshit — and it is bullshit for which you openly and unapologetically advocate. I mean, have you thought through the consequences of Twitter, Google, and Facebook pooling together resources to effectively spy on the entire goddamned Internet so they can keep assholes off Twitter, Google, and Facebook?

cpt kangarooski says:

Re: Re: Re:7 Re:

I would imagine that it’s about as much spying as they do now in order to advertise to people. I am skeptical that people are good at maintaining totally separate identities online, and if the ad companies are as good as they’re made out to be, it only takes a little information to irreversibly connect a person’s commercial identity (for ordering things online) to their “anonymous” or “pseudonymous” posting identity as a troll, nazi, etc. So the information is likely already known to Google and almost certainly to a collaboration of Google, Facebook, and Amazon.

Other than that people are already creeped out about it, is there a major consequence that isn’t already happening? If you’re worried about intelligence agencies doing the same thing or piggybacking, that ship has probably already sailed.

Deplatforming of this nature should be done with a light touch, but at the end of the day it is relatively harmless. No one is kicking anyone off the net, no one is preventing assholes from making their own version of Google and Facebook with hookers and blackjack (like Conservapedia) and there’s probably few enough of them that a modicum of civility and reason could be restored by kicking out a small number of hard-core troublemakers.

While I get that it is a distressing idea that we may have come to this point (and we certainly do not want to go further and let the government get involved in deplatforming people) there is a sickness and it’s not clearing up on its own. Some sort of affirmative treatment is called for before things get worse.

What’s your suggestion for the malicious malaise afflicting society these days? Make popcorn? I’m still happy to hear about milder yet effective alternatives. And you didn’t actually say what harms you anticipate from my suggestion, either.

Stephen T. Stone (profile) says:

Re: Re: Re:8

You are calling for the major tech companies to spy on the entire Internet, with the help of third-party companies, so they can effectively punish assholes if they post bullshit on a platform owned by a major tech company. Imagine if you could be banned from Twitter because of something you said here, or vice versa.

If you see no issues with that proposition, I cannot help you.

PaulT (profile) says:

Re: Re: Re:9 Re:

I think the problem is a typical one when dealing with normal, decent people – they call for tools but do not consider the way the tools can be abused. It makes sense if people who think in a similar way are given those tools. Unfortunately, people in the real world will not always think that way.

It is sadly better for society to put up with trolls, abuse, hatred, etc. than to face the alternative where good people are attacked with the tools we would use to stop that.

Wendy Cockcroft (profile) says:

Re: Re: Re:10 Re:

Yes, but that’s what mute and block buttons are for. I use them all the time when people annoy me. We don’t have to put up with trolls, etc., at all.

Honestly, it seems to me that refusing to engage with them is the better way. Too many people see a need to interact with them and have the last word. It’s a stupid way to behave. Ignore, mute or block, and move on.
