Incentivizing Better Speech, Rather Than Censoring 'Bad' Speech

from the there-are-other-solutions dept

This has gone on for a while, but in the last year especially, the complaints about “bad” speech online have grown louder and louder. While we have serious concerns with the idea that so-called “hate speech” should be illegal — in large part because any such laws are almost inevitably used against those the government wishes to silence — that doesn’t mean we condone or support speech designed to intimidate, harass or abuse people. We recognize that some speech can, indeed, create negative outcomes, and even chill the speech of others. However, we’re increasingly concerned that people think the only possible way to respond to such speech is through outright censorship (often to the point of requiring online services, like Facebook and Twitter, to silence any speech that is deemed “bad”).

As we’ve discussed before, we believe there are alternatives. Sometimes that involves counterspeech — a wide spectrum of ideas ranging from making jokes, to community shaming, to simple point-for-point factual refutation. But that’s on the community side. On the platform side — for some reason — many people seem to think there are only two options: censorship or a free-for-all. That’s simply not true, and focusing on just those two solutions (neither of which tends to be that effective) shows a real failure of imagination, and often leads to unproductive conversations.

Thankfully, some people are finally starting to think through the larger spectrum of possibilities. On the “fake news” front, we’ve seen more and more suggestions that the best “pro-speech” way to deal with such things is with more speech as well (though there are at least some concerns about how effective this can be). Over at Quartz, reporter Karen Hao recently put together a nice article about how some platforms are thinking about this from a design perspective… and uses Techdirt as one example, pointing to the small incentives we’ve built into our comment system to encourage better comments. The system is far from perfect, and we certainly don’t suggest that every comment we receive is fantastic. But I think we do a pretty good job of hosting generally good discussions in our comments that are interesting to read — certainly a lot more interesting than on many other sites.

The article also discusses how Medium has experimented with different design ideas to encourage more thoughtful comments as well, and quotes professor Susan Benesch (who we’ve mentioned many times in the past), discussing some other creative efforts to encourage better conversations online, including Parlio (which sadly was shut down after being purchased by Quora) and League of Legends — which used some feedback loops to deal with abusive behavior:

In one experiment, Lin measured the impact of giving players who engaged in toxic behavior specific feedback. Previously, if a player received a suspension for making racist, homophobic, sexist, or harassing comments, they were given an error message during login with no specifics on why the punishment had occurred. Consequently, players often got angry and engaged in worse behavior once they returned to the game.

As a response, Lin implemented “reformation cards” to tell players exactly what they had said or done to earn their suspension and included evidence of the player engaging in that behavior. This time, if a player got angry and posted complaints about their reformation card on the community forum, other members of the community would reinforce the card with comments like, “You deserve every ban you got with language like that.” The team saw a 70% increase in their success with avoiding repeat offenses from suspended users.

However, the key thing, as Benesch notes, is getting past the idea that the only responses to speech a large majority of people think is “bad” are to take it down and/or punish the individual who made it:

“There is often the assumption in public discourse and in government policymaking and so forth that there are only two things you can do to respond to harmful speech online,” says Benesch. “One of those is to censor the speech, and the other is to punish the person who has said or distributed it.” Instead, she says, we could be persuading people not to post the content in the first place, ranking it lower in a feed, or even convincing people to take it down and apologize for it themselves.

Obviously, there are limits on all of these options — and anything can and will be abused over time. But by at least thinking through a wider range of possibilities than “censor” or “leave everything exactly as is” we can hopefully get to a better overall solution for many internet discussion platforms.

Meanwhile, Josh Constine at TechCrunch recently had some good suggestions as well, specifically for Twitter and Facebook, on ways they can encourage more civility without resorting to censorship. Here’s one example:

Practically, Twitter needs to change how replies work, as they are the primary vector of abuse. Abusers can @ reply you and show up in your notifications, even if you don’t follow them. If you block or mute them, they can create a new throwaway account and continue the abuse. If you block all notifications from people you don’t follow, you sever your connection to considerate discussion with strangers or potential friends — what was supposed to be a core value-add of these services.

A powerful way to prevent this @ reply abuse would be to prevent accounts that aren’t completely registered with a valid phone number, haven’t demonstrated enough rule-abiding behavior or have been reported for policy violations from having their replies appear in recipients’ notifications.

This would at least make it harder for harassers to continue their abuse, and to create new throwaway accounts that circumvent previous blocks and bans in order to spread hatred.
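Constine’s gating rule is essentially a boolean filter over account state. As a rough sketch (the field names, the 30-day threshold, and the `Account` type are all illustrative assumptions, not Twitter’s actual data model):

```python
from dataclasses import dataclass

@dataclass
class Account:
    phone_verified: bool      # completed registration with a valid phone number
    good_standing_days: int   # history of rule-abiding behavior (assumed metric)
    open_reports: int         # unresolved reports for policy violations

def reply_reaches_notifications(sender: Account, min_standing_days: int = 30) -> bool:
    """Gate @ replies out of recipients' notifications for unproven accounts.

    The reply itself is still posted and publicly visible; only its
    delivery into the recipient's notification feed is suppressed.
    """
    if not sender.phone_verified:
        return False
    if sender.good_standing_days < min_standing_days:
        return False
    if sender.open_reports > 0:
        return False
    return True
```

Note the asymmetry: the reply still exists publicly, but its push into the recipient’s attention is withheld, which is what would make throwaway-account evasion unprofitable under this scheme.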

There may be concerns with that as well, but it’s encouraging that more people are thinking about ways that design decisions can make things better, rather than resorting to just out-and-out censorship.

Companies: facebook, medium, twitter


Comments on “Incentivizing Better Speech, Rather Than Censoring 'Bad' Speech”

aerinai says:

I was thinking about this exact same problem with Facebook and Twitter’s ‘trolls-for-hire’ issue (namely the Russian government-backed kind). Technically they aren’t breaking any rules or laws, but they are ‘fake’ accounts spreading misinformation in a nonconstructive way; so still ‘bad actors’ in a community.

Blocking them just sends them a sign that you are on to them. All that does is make them create yet another new anonymous account and start spouting off again and again and again. Sounds like the same solution that MPAA and RIAA are employing with copyright censorship and we all know how well that is going….

I was pondering the idea of creating ‘echo chambers,’ where members of a given vitriolic nature interact in their own little ‘walled garden,’ if you will. Your comments will show up in your feed, your posts aren’t taken down… they just might not be seen on someone else’s account.

Who cares what you spout when we’ve identified you as a troublemaker (e.g. troll/racist/abuser) and only you can see it? If you aren’t in a community where the behavior can be corrected, the silent treatment might be a great way to keep the community whole. With things like Facebook and Twitter being so vast that you can’t possibly see every message someone posts, they already make decisions on what you want to see. They can just weight these troll accounts to the bottom and no one would be the wiser.
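That kind of downweighting is straightforward to express as a ranking tweak. A minimal sketch, assuming a per-author trust score in [0, 1]; all field names and numbers here are hypothetical:

```python
def ranked_feed(posts, trust, troll_penalty=0.05):
    """Rank a feed so posts from accounts flagged as likely trolls sink.

    `trust` maps author -> float in [0, 1]; flagged accounts get a value
    near 0, which multiplies their score down without removing the post.
    `troll_penalty` is a floor so nothing is zeroed out entirely.
    """
    def score(post):
        t = trust.get(post["author"], 1.0)  # unknown authors get full trust
        return post["engagement"] * max(t, troll_penalty)
    return sorted(posts, key=score, reverse=True)
```

The appeal of this over outright blocking is exactly what the comment describes: nothing is deleted, so the flagged account gets no signal that anything changed.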

Wendy Cockcroft (user link) says:

Re: Re:

You’re describing muting. Twitter does that; I’m not sure about FB. The best thing about it is that the twerp you’ve muted doesn’t know they’re muted (unless you tell them) so they don’t feel the need to make alternate accounts.

Please note: muting only works if you stop responding before you mute them, or they’ll realise what you’ve done.

Rich Kulawiec (profile) says:

It's too late for Twitter and Facebook

MUCH too late.

Those of us who’ve been around for a while know — from the first-hand experience of our own failures — that you can’t retrofit abuse prevention. It has to be designed-in before the first line of code is written or the first server plugged in. Belatedly trying to slap band-aids on after the fact has never worked, it’s not working, and it’s not going to work.

Not that this will stop them from trying, or that it’ll stop them from claiming success when it’s obvious to everyone that they’ve failed. But when they take the podium and make those claims, ignore what they say and look instead at what they’ve done: if it’s just another set of tweaks to a system design that was fatally flawed before it was built, then what you’re listening to isn’t progress: it’s just bullshit.

Anonymous Coward says:

Yep...

“A powerful way to prevent this @ reply abuse would be to prevent accounts that aren’t completely registered with a valid phone number, haven’t demonstrated enough rule-abiding behavior or have been reported for policy violations from having their replies appear in recipients’ notifications.”

Ha ha… now you have to “work/put in your time” to have a voice. I like this: you start off without a say at all. Just exactly what democracy is all about!

This will only create a more powerful platform of bullying than you can imagine. Black Mirror “Nosedive” anyone?

Your entire article seems to be promoting a Global Public rating system for everyone as the cure for “bad speech”!

How is this a good suggestion? I saw right through it from the get go!

TKnarr (profile) says:

Re: Yep...

The suggestion still allows those without completely-registered accounts or with no prior history to judge them on to have their say. It just doesn’t let them shove their way into my notification feed and my attention. That’s the way the world works: when you’re a newcomer to a community with no history in it people don’t pay nearly as much attention to what you say as they do to a long-time member with a rich history of making good points. If you intend to be a long-term member of the community the lack of history remedies itself in relatively short order. If all you want is to have other people notice you screaming and react to you… sooooo not my problem.

Anonymous Coward says:

I don’t get why Tech Dirt keeps pushing the idea that moderating on a site is a free speech violation when in practice we can all setup our own sites, forums, and the like to discuss matters. It would be better for Tech Dirt, in the pursuit of free speech, to promote more distributed and decentralized hosting/sharing of people’s views. Meaning, stop saying Facebook and Twitter needs to accommodate users and start saying users should be able to host an application like Twitter or Facebook to connect with people they have an inkling of interest in communicating with. Also, promote laws to ban ISPs from blocking self-hosted services/apps as part of Net Neutrality. Not only do you get away from the fear that moderation will lead to some soft censorship, it would make it physically impossible. If someone doesn’t want to hear your drivel they don’t go to your account and you can’t force them to do otherwise (and vice versa). It’s a win-win and it’s less thorny than coming up with Fairness Doctrine 2.0.

Anonymous Coward says:

Re: Re:

“I don’t get why Tech Dirt keeps pushing the idea that moderating on a site is a free speech violation”

huh??? they are not saying that. where do you get that from?

Are you one of those that whined about the NFL going after knee takers and saying they had a right to kneel during the anthem but brow beat all of the posters on this forum whining about getting their posts flagged?

Puleeze!

Anonymous Coward says:

Re: Re: Re:

I don’t care if the NFL commissioner decides to regulate what players do, since it’s the NFL and not the US government. It’s likely he chose the right action in not making it a league reg, since that would invite lawsuits.

Also, getting the ban hammer or your posts flagged isn’t a free speech issue for the same reasons as to whether or not the NFL chooses to regulate the behavior of its players since both are PRIVATE PLATFORMS.

It’s really weird that folks like you want to create Fairness Doctrine for the Internet when the courts already ruled it doesn’t work for the airwaves. Like it or not, if I host a website I don’t have to give you a comment section to screech at me. The freedom to speak or to express yourself doesn’t mean you get a subsidy on its dissemination nor a built-in audience for it to be received. Once you admit to those two points then we can have a fruitful discussion. Until then, you’re no better than those alt-right clowns that try to force everyone to “debate” them.

Mike Masnick (profile) says:

Re: Re:

I don’t get why Tech Dirt keeps pushing the idea that moderating on a site is a free speech violation when in practice we can all setup our own sites, forums, and the like to discuss matters. It would be better for Tech Dirt, in the pursuit of free speech, to promote more distributed and decentralized hosting/sharing of people’s views.

These two things are not mutually exclusive. Indeed, we have REGULARLY advocated for a more distributed and decentralized system, just as you say. But the fact is, as it stands today, most people still use these services. And it’s perfectly reasonable to argue that they should be designed better.

Anonymous Coward says:

Re: Re: Re:

You can ask for better designed services, but it won’t happen until there’s money in it. Right now, they (SV execs) don’t see the cost of proper moderation and better social networking as worth it. The only way you’re going to get them to think it’s worth it is hurt them. And the only two ways I know of hurting a site like Twitter is either not using it or blocking any/all ads. There’s really no third option here.

IMO, the better option is to not use the services and build our own. It’s slow and it takes time to develop, but the fact that Mastodon continues to grow despite its flaws is a good example of this. It does so without pandering to political views; it just promises federated social networks and self-hosting options.

That Anonymous Coward (profile) says:

o_O
Yeah, Twitter’s not getting my phone number.

Perhaps a better way for Twitter to handle it is to use /dev/null.
Rather than show the ZOMG YOUR BLOCKED message… show them nothing.
Let them think the target is just ignoring them.

You tell them they are blocked they rev up the attacks because they got a cookie for the bad behavior. The reward is I was so awful they blocked me.

Much of the “hate speech” I see people bitching about on Twitter really is OMG THEY CALLED ME A NAME I CALL OTHER PEOPLE ALL THE TIME!!!!

We have the professional “victims” who thrive on showing the world how poor innocent them is under attack, while they keep doing things to provoke people.
We have the jackholes who say horrible things trying to get a response.
Twitter has given people the ability to have a list of ‘forbidden words’ for their time line, yet some people are still screaming that someone said a bad word.

Twitter has made the system so stupid trying to please everyone. I saw a RT of someone who reported 3 accounts who said exactly the same threats, and she got 3 different responses from Twitter ranging from suck it up to we removed it & banned them.

Twitter is inconsistent in what they do.
Someone with a blue check has WAY more leeway in what they say to people & if someone uses the exact same words back they end up with a timeout.
Elevating some users over others ALWAYS ends well.
Their insane idea for a safety team to protect people was hyper one sided and enraged people more.
Twitter needs to decide do they just want a happy place for blue checks to decide who is worthy of speaking to them or a platform for all.

So much could be handled if instead of playing fairy godmother they said use the tools we gave you before demanding new ones.

Look at the happy happy psychos we have here: while they like to talk about the conspiracy keeping them from commenting… it’s the community who looks at the rant, decides it’s of no value, presses the button and waits for it to take its course. Occasionally one of us has snorted one too many Pixy Stix and responds, even though we know it’s a mistake… because we think we might say the thing that finally stops the insanity. More often than not we read one line and hit the report button… and life is better.

That Anonymous Coward (profile) says:

Re: Re: Re:

The problem is people can’t “feel” they won until that message appears.
They called me a mean name, I can’t let this stand and I must slap them with my glove. I need to report them, call them out, send my team after them, and keep doing things until I win.

No one wants to be the grown-up first, they want to clap back and win… most of them aren’t that creative.

Dude is bothering you, hit the button… don’t obsessively refresh their feed to see if they are still talking about you. Don’t keep talking about them after you block them trying to rile up people to be your attack dogs & get ahead.

More than once I’ve said “bye, Felicia” and blocked someone, and not looked back. I’m sure other people have blocked me for not living up to their expectations; my heart bleeds. I’m not always nice people, but I’m pretty open about that fact.

Anonymous Coward says:

more important

America has a sworn enemy who has made it clear they plan on nuking us, while our government, which just spent trillions of American dollars on an underground network of hardened bunkers, just sits back and waits for this enemy to develop every planned missile and weapon capable of doing everything on their hit list. I don’t know about you, but I haven’t gotten an invitation to my bunker yet. Have you?

Anonymous Coward says:

Re: Re: more important

Ok, I’ll take the pills and inquire elsewhere while you nice people here at Techdirt discuss why it’s so important not to cuss online when blogging… rather than demand answers to fucking important issues like why your government is going underground and leaving us to roast here on the surface after we’ve bent our backs double appeasing those dicks all our lives…

Anonymous Coward says:

Re: Re: Re: more important

I’m trying to save lives and you attack my character? Maybe there is some nobility here at Techdirt, listening to and reading people’s benign topics of discussion in the potentially catastrophic face of mortality caused by a handful of very bad people on the planet. Like the quartet that played on the Titanic right up to the straight-up stance it took before it sank. But to attack my sanity? I mean, WHY DRAG MY DOCTOR INTO IT?

Stephen T. Stone (profile) says:

Re: Re: Re:4 more important

If the North Korea situation had any actual relevance to the article at hand, maybe you would have a point. But it does not. So you do not.

And if you want to save lives, get off your ass, then do something tangible and productive. Complaining here is as productive as a date with Rosey Palms—and it is equally as self-serving.

Anonymous Coward says:

Re: Re: Re:5 more important

I guess it’s all clear now. I have no idea who this Rosey Palms is, so I’ll leave her to you… You don’t get the fact that all the legislation being pushed by the government concerning hate speech is so you and I and everyone else will not be able to lift our fist up and pound it down when saying anything, however factual, if it concerns our dislike and disdain for some political figure doing or saying something that is outright wrong, even treasonous. YOU SIT DOWN.

Stephen T. Stone (profile) says:

Re: Re: Re:6 more important

all the legislation being pushed by the government concerning hate speech

I cannot recall any such legislation. Even if there were, the hatefulness of a given expression of ideas and thoughts does not make it illegal, even in a public setting. “Hate speech”, like “obscenity”, lacks a narrow definition that would allow for the suppression of such speech without collateral damage to protected speech. This is why “hate speech” is still protected speech under the First Amendment.

you and I and a bunch of blah blah blah yakety shmakety

This has nothing to do with the original article. It has barely anything to do with the comment that started this chain. Please try to stay on a specific track of thought.

YOU SIT DOWN.

Unless and until you can put an actual gun to my actual head and give me that order, I will remain non-compliant. Do what you must; I have no fear of you.

Stephen T. Stone (profile) says:

Re: Re: Re:8 Oh, and one more thing...

Rosey Palms. It’s Rosey Palms. If you are going to feign outrage at a euphemism older than I am, at least have the goddamn decency to spell it right.

Unless you happen to be on a date with her while you talk about…whatever the hell it is you were talking about in this thread. I wanna say “Korean cuisine”—was it something like that?

Anonymous Coward says:

Re: Re: Re:9 Oh, and one more thing...

You are so very scary and perverse. I suppose I have never had an enemy say the things to me, especially in a public forum, such that I have sought counsel from my attorney. He said you crossed lines of decency and provoke unconscionable acts of animal eroticism, and that you should be locked away at some kind of zoo… I lost track of what he was saying; my jaw was on the floor.

boomslang says:

> that doesn’t mean that we condone and support speech designed to intimidate, harass or abuse people. We recognize that some speech can, indeed, create negative outcomes, and even chill the speech of others.

I’m curious as to how you would apply this to the argument against strong encryption. It’s easy to treat encryption as a binary issue (there’s either strong encryption, or broken, i.e., no encryption).

But what if strongly-encrypted comms involve speech that creates negative outcomes and chills the speech of others?

Should freedom of speech be a binary issue? And if not, can we/should we use the 1st Amendment as an argument in favor of strong encryption?

Anonymous Anonymous Coward (profile) says:

Re: Re:

There are, or used to be, other investigative techniques. Relying solely on encrypted comms fails to make use of all those other techniques that used to be the way of law enforcement, who appear to be too lazy to employ those other techniques because they require leaving the office.

Then there is the whole intel-sharing issue — you know, the one where some agencies knew about people taking flight training who didn’t want to learn how to land, but didn’t tell the other agencies because it’s ‘our’ intel?

Anonymous Coward says:

Re: Rectifying encryption with nuanced free speech

Let’s see…

Encryption is a binary issue, and the experts waste a lot of breath trying to explain this to politicians. I understand politicians being skeptical of it being a binary issue, as most things aren’t, but that’s what it is. And just because a computer receives an encrypted communication (or downloads a publicly published message) doesn’t mean that communication needs to be shown.

As for freedom of speech, that means everyone should be able to say pretty much whatever they want, but no one is obliged to listen. Basically, this is handled exactly the same way as with encryption.

So there’s nothing to reconcile between their stances on encryption and free speech.

Stephen T. Stone (profile) says:

Re: Re:

what if strongly-encrypted comms involve speech that creates negative outcomes and chills the speech of others?

We punish the outcomes instead of either the speech itself or the way it was delivered. We allow encryption to remain intact instead of outlawing or weakening it because “bad guys” use it. We recognize that hateful or offensive speech may create negative outcomes without outlawing such speech.

John Doe says:

How TechDirt Can Do Better

I’ve basically stopped commenting here for one very simple reason – you guys trash comments from IP addresses you don’t like. What’s worse is that you do not give any indication ahead of time that you are going to trash a comment.

I use a VPN service and there are obviously spammers also using this VPN service. It’s reasonable (but not necessarily the best option) to give extra scrutiny to posts from IP addresses used by this VPN. But my experience is that Techdirt doesn’t just give extra scrutiny; you guys frequently just blackhole those posts.

So I spend a lot of time writing a post, only to get a message that it is going to be reviewed by editors. I come back the next day and my completely appropriate comment still hasn’t been made public. That’s the kind of behavioral conditioning that quickly teaches people that their contributions are not valued, and so only a fool would continue making contributions that have an unknown chance of being tossed aside.

The very least you could do is tell your users up front if they might be wasting their time.

And yes, I recognize the irony of making this comment, which will probably also be tossed aside. It’s the first one I’ve written in about half a year, and I’m only chancing it because it’s highly relevant and maybe, just maybe, you guys will take note and do something about it.

That One Guy (profile) says:

Re: How TechDirt Can Do Better

Unless you comment on a Friday night or weekend, they’re generally pretty good about going through the ‘Held for moderation’ comments fairly quickly, so I find it kinda hard to believe that they are blackholing ‘completely appropriate’ comments simply because they were caught by the spam filter.

I’ve seen the kinds of comments that make it through moderation, so you’ll excuse me if I’m hesitant to accept the ‘my completely appropriate comments were completely blocked’ claim simply on your word.

Anonymous Coward says:

Re: Re: How TechDirt Can Do Better

For the sake of argument, I’ll accept that is true.
But even so, when it takes days for a post to be published, that significantly reduces the utility of that post, because after a few days everybody has moved on. Nobody is paying attention anymore.

Imagine what it’s like to expend the time and energy to contribute, only to have your contributions effectively ignored. It’s not obvious from the editorial side, but it is extremely discouraging to put in that effort for naught. It would be one thing if it were published and nobody responded, but being forced to be invisible until nobody GAF anymore is disheartening in the extreme.

Maybe it is impractical on your end to work through all comments in a timely fashion, but when the process treats actual people as just another cog in the wheel, it is inevitable that anyone with self-respect will simply opt out. The least you can do is alert people up front that the work they put in won’t get the same treatment as everyone else’s.

I like to say that the "war on terror" is really a war on dignity. The same thing applies to the war on comment spammers/trolls. Any actual people who get caught in the crossfire get treated as less than human and more like bots. If that happens enough (where "enough" is a pretty low threshold) then any self-respecting person will just stop trying. At which point they won’t even show up in your site metrics.

If you can, try putting yourself in ‘our’ shoes, send all of your comments to an admin and have them randomly wait hours or days before making them visible. I think the experience of being ignored despite your best intentions will be eye-opening. It is literally dehumanizing.

That One Guy (profile) says:

Re: Re: Re: How TechDirt Can Do Better

If you are using a method of submitting comments that you know is likely to cause your comments to be caught by the spam filters (VPN, Tor), then they shouldn’t need to ‘alert’ you at all; you should know already that your comments stand a good chance of being flagged and caught. What more do you want them to do, check every IP address pre-post before allowing you to post so they can give an additional warning?

Regarding your ‘try it from my perspective’, I’d suggest doing the same thing. Picture, if you will, what the comment section would look like without a spam filter in place. I can tell you that the one time I saw it fail on a notable level was bad enough that I do not even want to think about how bad it would be without it in place, such that having some comments caught by the spam filter is very much the lesser of the two evils.

Mike Masnick (profile) says:

Re: Re: Re: How TechDirt Can Do Better

But even so, when it takes days for a post to be published that significantly reduces the utility of that post because after a few days everybody has moved on. Nobody is paying attention anymore.

It rarely, if ever, takes "days." We check the filter every few hours during weekdays, and we try to check once a day on weekends. If stuff gets busy we don’t always get that far.

But trust me, the system is much better than if we just allowed all those posts to go through. These days, the spam filter probably catches 1000 spam comments per day… and maybe catches anywhere from 2 to 10 legit comments per day. That’s about it. On a cost benefit basis, there’s really no other solution.

Many other sites refuse to allow VPN or Tor posting at all. That seems like a worse solution to us. And not all VPNs or Tor are blocked. But if the system senses a service that has a history of spamming…

Narcissus (profile) says:

Re: How TechDirt Can Do Better

@John Doe: Perhaps it might be an idea to drop Techdirt tech support a mail. I experienced similar problems despite not (always) using a VPN and even being a Techdirt Insider. Also, I think my posts are not generally offensive (unless you are offended by stupidity, perhaps).

After I dropped them a mail asking why, they looked into it and found some problem with my account. They fixed it, and since then all my posts go through without being held for review.

Anonymous Coward says:

I think this really may be a problem of protocol vs. platform, as some soul (author?) on Techdirt already mentioned.

If social media networks were protocols with a client, usable by anybody on any OS and curated by inclusion instead of a general morass, it’d be a little more like the dying IM clients, but not bad.

Opt-in social networking, instead of exposing every user to the general morass of people, is a net gain, and you only have yourself to blame for letting the wrong users in. Strangers need permission to message you personally, period.

It may sound wrong or exclusionary, but when it’s the client users’ prerogative who gets to see what, it can be much easier to manage.

As for trolls, well… I try not to care about them. I believe that 99.999% of death/harm/rape threats are from people eager to send you pizzas more than bullets, and just get a laugh out of making somebody squirm or cry because of typed characters on a screen.

If those threats were all or even somewhat legitimate, we’d be having a very different kind of conversation.

On the other hand, if you’ve said something “wrong”, and they show proof that they know who you are and what your workplace number is, you bet your job might be in jeopardy.

**That** I have no solution for yet. 😛

John85851 (profile) says:

Hide comments from the trolls

I don’t remember where I saw this, but it’s similar to what another poster said:

The idea is that if a user gets too many downvotes from the community then all of his posts are hidden from the community. The user can still post comments and he’ll still see his posts, but no one else will.
This way, he can post all the insults that he wants *and* he can’t play the victim by saying “ohmergerd, my right to free speech is being censored!”.

After a while, he’ll get bored because his insults aren’t having any effect and no one’s responding to him.
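The scheme described above (visible to the author, hidden from everyone else) amounts to a per-viewer visibility check rather than a deletion. A minimal sketch, with hypothetical field names and downvote threshold:

```python
def comment_visible_to(comment, viewer, downvote_threshold=10):
    """Decide whether `viewer` sees `comment` under a hide-from-others scheme.

    Authors always see their own comments, so nothing looks censored to
    them; everyone else stops seeing an author's posts once community
    downvotes cross the threshold. The comment is never actually removed.
    """
    if viewer == comment["author"]:
        return True
    return comment["author_downvotes"] < downvote_threshold
```

Because the check runs at render time per viewer, the hidden poster has no "I was censored" artifact to point at, which is the whole point of the proposal.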

Alan says:

The problem is, it's really hard to even have a civil public conversation

Everyone likes to see their comments/replies posted everywhere for all to see, but that’s at the core of the problem. It’s a lot more fun to flame or troll when you know what you write will be seen by thousands of people, rather than by one person.

To really have a productive online conversation, it should be mostly private (and potentially anonymous), so that you’re conversing with just one person, not 1,000. To encourage civil behavior, there can be a rating system in which the participants rate each other, and third parties can occasionally “rate the raters” to keep participants honest.

For all the communication tools on the Internet, there is a gaping hole: there is no way to choose the characteristics of who you want to communicate with. If I’m a liberal but pro-life male Democrat from the midwest, I should be able to choose to “converse” about abortion with a pro-choice female Republican Trump supporter from the northeast. Sadly, there’s no good way to arrange such a conversation, yet it would certainly foster more productive outcomes than you typically get from even a moderated group discussion on today’s social media offerings.

Wendy Cockcroft (user link) says:

Re: Re: The problem is, it's really hard to even have a civil public conversation

That’s a lot of effort to go to just to be a jerk since you’d have to either flag from various browsers or clear cache and cookies every time you wanted to flag.

I’ve accidentally voted comments funny that I wanted to vote insightful. Clicking Insightful again removed the vote.
