Can A Community Approach To Disinformation Help Twitter?

from the experiments dept

A few weeks ago Twitter announced Birdwatch, a new experimental approach to dealing with disinformation on its platform. Obviously, disinformation is a huge challenge online, and one that doesn’t have any easy answers. Too many people seem to think that you can just “ban disinformation” without recognizing that everyone has a different definition of what is, and what is not, disinformation. It’s easy to claim that you would know, but it’s much harder to put in place rules that can be applied consistently by a large team of people dealing with hundreds of millions of pieces of content every day.

Facebook has tried things like partnering with fact checkers, but most companies just put in place their own rules and try to stick with them. Birdwatch, on the other hand, is an attempt to use the community to help. In some ways it’s taking a page from (1) what Twitter does best (enabling lots of people to weigh in on any particular subject), and (2) Wikipedia, which has always had a community-as-moderators setup.

Birdwatch allows people to identify information in Tweets they believe is misleading and write notes that provide informative context. We believe this approach has the potential to respond quickly when misleading information spreads, adding context that people trust and find valuable. Eventually we aim to make notes visible directly on Tweets for the global Twitter audience, when there is consensus from a broad and diverse set of contributors.

In this first phase of the pilot, notes will only be visible on a separate Birdwatch site. On this site, pilot participants can also rate the helpfulness of notes added by other contributors. These notes are being intentionally kept separate from Twitter for now, while we build Birdwatch and gain confidence that it produces context people find helpful and appropriate. Additionally, notes will not have an effect on the way people see Tweets or our system recommendations.
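To make that “consensus from a broad and diverse set of contributors” idea concrete, here is a minimal sketch of one way a note-visibility rule along those lines could work. To be clear, Twitter has not published Birdwatch’s actual scoring algorithm: the rating threshold, the helpfulness ratio, and the rater “clusters” below are all assumptions invented purely for illustration.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Rating:
    rater_id: str
    helpful: bool  # did this rater find the note helpful?

@dataclass
class Note:
    note_id: str
    tweet_id: str
    text: str
    ratings: List[Rating] = field(default_factory=list)

def note_is_visible(note: Note, rater_clusters: Dict[str, str],
                    min_ratings: int = 5, min_helpful_ratio: float = 0.8) -> bool:
    """Hypothetical rule: surface a note only when enough raters found it
    helpful AND those raters span more than one "cluster" (a stand-in for
    Birdwatch's "broad and diverse set of contributors"). rater_clusters
    maps rater_id -> a cluster label, e.g. derived from how raters voted
    on past notes. None of this is Twitter's published algorithm."""
    if len(note.ratings) < min_ratings:
        return False  # not enough signal yet
    helpful = [r for r in note.ratings if r.helpful]
    if len(helpful) / len(note.ratings) < min_helpful_ratio:
        return False  # raters mostly found the note unhelpful
    # Require helpful ratings from at least two distinct clusters, so a
    # single like-minded mob can't push a note to visibility on its own.
    clusters = {rater_clusters.get(r.rater_id, "unknown") for r in helpful}
    return len(clusters) >= 2
```

The interesting piece is the cluster requirement: if “diversity” means agreement across groups that usually disagree, then the brigading scenario discussed below (a coordinated crew rating in lockstep) counts as a single cluster and fails the check. Whether Twitter’s real system does anything like this is, again, an assumption.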

Will this work? There are many, many reasons why it might not. Wikipedia itself has spent years dealing with these kinds of questions, and has had to build a kind of shared culture, along with informal and formal rules, about what kind of content belongs on the site. It’s a lot harder to retrofit that kind of thinking onto a platform like Twitter, where pretty much anything goes. There is also, of course, the risk of brigading and mobs — whereby a crew of people might attack a certain tweet or type of information with the goal of getting accurate information declared “fake news” or something along those lines.

Twitter, I’m sure, recognizes these challenges. The details of how Birdwatch is set up certainly suggest that the company is going to watch and iterate as it goes, and it recognizes that if it can get this right, it could be quite useful. That’s why, even if there’s a high risk of failure, I still think it’s an interesting and worthwhile experiment.

Some of the initial results, however… don’t look great. A bunch of clueless Trumpists have been trying to minimize the traumatic experience that Alexandria Ocasio-Cortez recently described living through during the insurrection at the Capitol on January 6th. Because these foolish people don’t understand that the Capitol complex is a set of interconnected buildings, they are arguing that AOC was “lying” when she talked about the fear she felt while initially hiding in her office during the raid — since her office is in the connected Cannon Building, and not in the domed part of the Capitol complex. Some of that fear, it turned out, came from a Capitol police officer yelling “where is she?” and barging into the office; AOC, not realizing at the time that it was a police officer, spoke movingly about how afraid she was that it was an insurrectionist.

After they started making this argument on social media, AOC responded, pointing out that the entire Capitol complex was under attack (and even if it wasn’t, being in a building across the street from a riotous mob that clearly wouldn’t mind killing you is a perfectly good reason to be afraid). She also mentioned the two pipe bombs that were found near the Capitol, not far from the Congressional office buildings.

However, if you go to Birdwatch, it shows a bunch of disingenuous people trying to present AOC’s statements as disinformation.

Of course, this just shows exactly the problem of trying to deal with “disinformation.” It is often used as a weapon against people you disagree with, where you might nitpick or argue technicalities, rather than the actual point.

I am hopeful that this experiment gets better at handling these situations, but I recognize the huge difficulty in doing this with any sort of consistency at scale, when you’re always going to be dealing with disingenuous and dishonest actors trying to game the system to their own advantage.

Companies: twitter


Comments on “Can A Community Approach To Disinformation Help Twitter?”


This comment has been flagged by the community.

Koby (profile) says:

Good for the Gander

Of course, this just shows exactly the problem of trying to deal with "disinformation." It is often used as a weapon against people you disagree with, where you might nitpick or argue technicalities, rather than the actual point.

Of course, you were all smiles when the "fact checks" were used against people with whom you disagreed during the 2020 election cycle.

You are concerned that the Birdwatch feature won’t work, and you’re probably right. But the reason it won’t work is bias, on both sides. When it comes to political arguments, the only way for a platform to build trust is to remain neutral and not take sides by becoming an arbiter. For social media, that means remaining hands-off and letting the political actors build or perhaps destroy their own reputation. Don’t do fact-checks or bird watching, and no one gets to be the Ministry of Truth.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re:

When it comes to political arguments, the only way for a platform to build trust is to remain neutral and not take sides by becoming an arbiter.

Person A: I believe gay people should be able to live out and proud.

Person B: I believe the government should execute gay people only for being gay.

Twitter, under legal orders to be “neutral”: Both of these viewpoints are valid and should be respected as such.

This comment has been flagged by the community.

Stephen T. Stone (profile) says:

Re: Re: Re:

Let’s take an example and make it as extreme as possible, where no one can disagree with you as you beat it to the ground.

That’s the whole point: You have to deal with that kind of extreme speech if you insist on some form of “neutrality” for yourself or others. Either you’re fine with treating both of those positions as equally valid or you’re not.

And technically, only one side is extreme. If I had said the A opinion is “I believe straight people should be put to death only for being straight”, that would’ve been the polar opposite of the B opinion. But I chose an opinion that sits right in the middle of the Overton Window, then swung to one extreme — which, I should add, is an actual position for which actual people have actually, seriously, legitimately advocated.

I wanted to expose, in as simple a way as I could, the inanity of wanting forced “neutrality” — and how refusing to take a side will likely still leave people thinking you’ve taken a side anyway. That you see what I’ve done as a “strawman” instead of a moment to reflect on the idea of forced “neutrality” is a “you” problem.

crade (profile) says:

Re: Re: Re:2 Re:

I don’t think expressing extreme views in and of itself is a problem; neither of those examples is disinformation. Saying you think gay people should be executed should be kept in check by being shunned by decent people everywhere. Being extreme doesn’t make you right or wrong.

The disinformation problem comes when people’s views are unsupportable in reality and they start lying to convince people. When person B tries to do anything to back up their position, they lie, because they have nothing to back it up other than "just cuz".

When they pay or forge some pseudo-scientist to put out a study saying that gay people rigged the election and stole it from Trump, and start spreading links around to that, or even just make up some bullshit story to support their position, that’s what you need to get in front of.

Stephen T. Stone (profile) says:

Re: Re: Re:3

I don’t think expressing extreme views in and of itself is a problem

In a vacuum? Sure. But we’re not living in a vacuum. Advocacy for extreme positions can end up overwhelming a social media service if not kept in check via moderation. People targeted by such positions (e.g., gay people) will do their best to avoid it until either they can no longer avoid it or they tire of doing the work necessary to avoid it. Those people will leave the service for something better. Eventually, that service will end up trying (and failing) to solve the “Worst People Problem”.

crade (profile) says:

Re: Re: Re:4 Re:

But that’s because the advocacy for extreme positions you are talking about is bullshit. If you were a Newton or Einstein of our time advocating for an extreme position that isn’t based on bullshit, you wouldn’t be overwhelming social media with bullshit; you would be quietly stating truths and letting society absorb them until your position wasn’t extreme anymore.

Stephen T. Stone (profile) says:

Re: Re: Re:5

the advocacy for extreme positions you are talking about is bullshit

I hate to tell you this, but people really do advocate for things like reinstating segregation, forced birth (by way of outlawing abortion), the government enacting a death penalty for homosexuality, and other similarly bullshit positions. For what reason should the law force Twitter to host such advocacy besides a ridiculous fealty to the concept of “neutrality”?

PaulT (profile) says:

Re: Re: Re:5 Re:

"But thats because the Advocacy for extreme positions you are talking about is bullshit"

You think they’re bullshit because current moderation tactics and "cancel culture" are protecting you from them. Take a wander down the less savoury paths of the internet, and you’ll see these views widely expressed. They’re just not allowed on mainstream platforms at the moment.

"If you were a Newton or Einstein…you would be quietly stating truths…"

…and never be heard above the noise of anti-science morons proudly proclaiming that they know more than scientists because they watched a YouTube video… and that’s how it is now, with moderation allowed.

crade (profile) says:

Re: Re: Re:6 Re:

They are bullshit because they are unsupportable under scrutiny and people have to lie or resort to other things like spamming or other fallacies to support them. By definition their views wouldn’t be prejudice if they weren’t based in ignorance.

".and never be heard above the noise of anti-science morons proudly proclaiming that they know more than scientists because they watched a YouTube video"
by spamming, linking to a youtube video with a bunch of untrue debunked stuff in it or otherwise doing things that objective rules could handle.
We aren’t talking about one opinion vs another here, we are talking about valid arguments on one side vs a bunch of argumentative fallacies and uncivil discourse on the other. You don’t need to take sides about who is right, you just need to keep your discourse civil. Don’t let people spam without making a real argument, don’t let people get away with lying, linking to or spouting stuff that is thoroughly debunked just to try to drown out the face that it’s thoroughly debunked

PaulT (profile) says:

Re: Re: Re:7 Re:

"by spamming, linking to a youtube video with a bunch of untrue debunked stuff in it or otherwise doing things that objective rules could handle"

Yes, so the important thing is to concentrate on the rules and methods by which disinformation gets spread. You can’t battle it with correct information alone, because the problem is that people do gravitate toward comforting lies rather than hard truths. There’s an old saying that a lie can spread halfway across the world before the truth has put its shoes on – and that saying far predates the internet.

"You don’t need to take sides about who is right"

Well, I disagree there. The correct approach when, for example, dealing with someone spreading dangerous misinformation about vaccines and PPE that are literally getting people killed is not to pretend that they are even remotely welcome at the same level as the truth. Sometimes there’s room for discussion and argument. Sometimes, one side is simply wrong. You can deal with that in a more civil way, of course, but they can never be right – and it’s dangerous to entertain them as if they can be.

"Don’t let people spam without making a real argument, don’t let people get away with lying, linking to or spouting stuff that is thoroughly debunked"

…and herein lies the issue. What you end up with there is that people intent on spreading truthful information have to waste most of their time debunking stuff that’s already been debunked hundreds of times, and while you can get others to see sense, the people spreading the misinformation may have financial or religious reasons for lying (in which case you can never simply get them to stop by stating the truth).

We see that here. How often in these threads does someone dive in with bad arguments that were debunked years ago, then whine about some conspiracy against them whenever they’re debunked again? Now, imagine that on much larger forums, where it’s not as easy to recognise the names of people who are right vs. wrong and less easy to spot the patterns of debunked arguments.

crade (profile) says:

Re: Re: Re:8 Re:

I fully admit it’s a tough problem, but I don’t think the answer is to switch from ref to arbiter.

I disagree that it’s dangerous to entertain them as if they could be right.
The biggest trouble I have is suppressing that stuff systematically just lends it legitimacy. It just gives them a legitimate argument when they had none before: now they can claim they do have good arguments but can’t post them. "trust me, unlike other places, the system only ever suppresses the bad stuff here" just doesn’t cut it for me.

I quite like the way Techdirt handles things, but it’s still subject to mob justice and encourages the protective wuss bubble phenomenon. The same system on Parler would flag anyone trying to point out simple contradictions in their conspiracy theories.

I’m leaning more towards combining that with what Twitter was doing: basically, systematically keeping track of viral fallacies, flagging them as such and getting them out of the way, but still having a way for anyone curious to find out why they did so, with an opt-in of some sort.

PaulT (profile) says:

Re: Re: Re:9 Re:

"The biggest trouble I have is suppressing that stuff systematically just lends it legitimacy. It just gives them a legitimate argument when they had none before"

I’d say the opposite. It might give them an air of legitimacy among a certain minority that’s already on their side or close to it. But, they will congregate outside of the mainstream. On the other hand, if you give them free rein on a platform, whether that’s a social media platform or a "both sides" interview on TV, that gives them an air of mainstream legitimacy they don’t have if they’re pushed to one side.

What’s concerning here is not just the effect this has on real discourse, but the effect it has in legitimising fringe groups. I don’t have time at this moment to locate the sources that have been mentioned, but I have heard it claimed on several podcasts I listen to that these fringe groups are growing because of their initial air of legitimacy, and that studies back this up.

What I mean here is that if you think back 15-20 years ago, extreme sites and communities did exist. You had 4chan/8chan/etc., you had Stormfront, you had a number of toxic cesspools. But, most people not already looking for such things would never encounter them. You’d never accidentally come across them by scrolling through your normal news and social sites.

But, newer groups are smarter. They don’t come straight out and say "hey, let’s kill all the Jews, and wasn’t Hitler great?". They’ll start by "asking questions" and "offering alternative viewpoints". Then, when you’re engaging with them and the algorithms kick in, you start getting recruited by people to go and storm the government. I’ve heard individual stories of people going from relatively sane to full-on Q cultists in less than a year, and it all starts with those "alternative" viewpoints that are clearly dogwhistles to those in the know, but might pass as genuine to people who don’t know better.

There’s no easy solution here, but from what I’ve seen over the years I guarantee that letting them speak wherever and however they want with the hope that honest debate will drown them out is somewhat naive.

"I’m leaning more towards combining that with what twitter was doing"

What Twitter was doing was clearly not working well enough. The only thing that’s worked appears to have been kicking off Trump and a number of other prominent morons allied with Nazis and QAnon. Before that, those groups were recruiting…

crade (profile) says:

Re: Re: Re:10 Re:

I don’t think so; this is basically the system we have right now, where a lot of the system encourages sticking to certain points of view instead of encouraging discussion and understanding and actually resolving anything.

What you are talking about, to me, seems like trying to double down and have the platform not only encourage certain viewpoints but actively manipulate the discussion in certain directions; it’s basically trying to influence the discussion towards whatever your platform wants or decides. You need to assume "platform knows best" in those cases, or even worse, "government knows best" if it comes from regulations. Platform’s motives vary wildly, you can’t assume how they are going to use that manipulation.

I think of it like this: what would the same system look like on Parler or in China? Could the same system be used to make the fanatics worse, or to pull the wool over everyone’s eyes?

PaulT (profile) says:

Re: Re: Re:11 Re:

"a lot of the system encourages sticking to certain points of view instead of encouraging discussion"

I’m sorry to say, that’s not just the "system". It’s the tendency for people to gather in tribes, and to favour their preferred conclusions over challenging facts.

I tried many times IRL to discuss the relative benefits or otherwise of Brexit, and half of what I got parroted back to me were Daily Mail/Express talking points about immigration and how they need to punish people who refuse to integrate and "take back the country", even though that makes no sense from a pragmatic point of view. Those points were often repeated back to me by people who work, retire or otherwise depend on a lifestyle in Spain while themselves refusing to learn the local language or integrate. This may have been made worse by the effect of social media, but I assure you that wasn’t the problem. It was a problem with a certain type of person not applying critical thinking to their sources, and a tendency to reject honest retorts as "project fear". This type of crap is accelerated by easy access to propagandists on social media, but it’s not going to be fixed by a few sites changing their algorithms.

"it’s basically trying to influence the discussion towards whatever your platform wants or decides"

Well, in a way yes. We’ve tried putting up with these groups either unchallenged or left to be challenged with facts, and they overran certain areas of these platforms. At some point, you have to kick out the bad actors. This is true of a disruptive group who keep picking fights in your local pub, and it’s true of people who do the same on social media.

Just as a quick data point – Trump was tolerated on Twitter for years despite constantly breaking the rules of the platform. This came to a head when Trump and his sycophants spread false claims about election fraud, something which directly led to the insurrection attempt at the Capitol on Jan 6th. Twitter tried supplying these people with facts, fact checks, and warnings that they were spreading lies, and it did no good. They dug their heels in and claimed that the attempts to fact check were themselves a form of censorship.

But, after the storming of the Capitol, Twitter decided it had enough and kicked him off at long last. Guess what happened?

https://www.cnet.com/news/after-twitter-banned-trump-misinformation-plummeted-says-report/

Misinformation about the election immediately dropped by up to 73%. Playing nice, dealing with facts and allowing both sides to have a say did not work. Kicking off the most frequent and brazen abusers of the platform worked. The majority of users can continue to use the platform as they did before – they just aren’t having to wade their way through propaganda by bad actors who will not try to have a fair and balanced conversation in any way.

"Platform’s motives vary wildly, you can’t assume how they are going to use that manipulation."

…and as long as people are free to choose which platform they frequent, rather than being forced to put up with people in the community they belong to, that should be a lesser problem than allowing them to be overrun by people who have no interest in honest debate.

I understand some of these concerns, but given the lesson of the last few years, which has been a massive rise in damaging rhetoric, white supremacy and actual cults getting an easy recruitment vector, it’s time we tried something else.

crade (profile) says:

Re: Re: Re:12 Re:

Yeah, I wasn’t trying to say it’s just the system causing it; I was trying to say it’s not something we want to set up a system to encourage, or to make even worse than the natural tendencies we start with.

Really the problem I see we have isn’t "what" to filter out, it’s how to identify it at scale. Whether you are trying to fact check stuff, or just check things to ideologically match what your platform wants, you face the same problem: how can you actually get it done? The Techdirt system works well at scale, but I think the Wikipedia system also works pretty well at scale.

Kicking off people who violate the rules isn’t an issue from my standpoint. You could set up your rules to say people can’t lie too much or people must support my viewpoint, but either way you have to figure out who is breaking the rules and enforce them at scale.

From my viewpoint Trump was tolerated for years because he was the president and Twitter felt like they didn’t have the moral authority to just override the electorate when he was using it for presidential business, without a better reason than just violating their TOS rules. The problem there was that you guys elected Trump, more than that Twitter wasn’t "controlling" him enough.

I agree that more should be done, and actually I think the result is the same: basically, crack down harder on garbage. I just think the decision on what is garbage should try to be based on what lines up with objective reality rather than a particular philosophy. In this case they are one and the same, since the right wing has abandoned truth and reality completely.

PaulT (profile) says:

Re: Re: Re:13 Re:

"Really the problem I see we have isn’t "what" to filter out, it’s how to identify it at scale"

Well, no, I’d argue the first is the most important thing. If you’re filtering based on certain criteria (reports from users, links to known false propaganda), it’s actually relatively easy to filter.

"Whether you are trying to fact check stuff, or just check things to ideologically match what your platform wants you face the same problem.. how can actually get it done?"

These are very different things, though. If you’re fact checking, then you do what Twitter do – label things with warnings that certain facts are disputed, and take further action if accounts seem to be requiring a lot of fact check warnings. Ideology is vastly more complicated, but what that should essentially boil down to is community guidelines.

Let’s use a non-political example. I’m a member of a number of movie buff groups, some focussing on mainstream stuff, some on the cult and horror stuff that’s my preferred genre. Because I’m not an asshole, I keep to the general community guidelines, and don’t try to force more divisive and controversial subjects onto the mainstream sites. If I were to be an asshole and kept posting screenshots of my favourite scenes from Cannibal Holocaust in between people trying to discuss the latest Pixar and Marvel movies, I’d expect to get a lot of complaints, to have my posts removed, and even to be banned if I kept doing it. That’s not difficult to understand or for the community to enact, but then I’m not the kind of asshole who insists that just because I want something in a specific community, the community has to accept me.

That’s all this boils down to, really – people with niche, unpopular, even offensive views are trying to force them onto mainstream communities that don’t want them. It’s up to each community to work out what’s acceptable, and you can’t please everyone, so the larger, more mainstream sites will stick to what’s least controversial. It’s not their fault if certain types of people deliberately pretend that their opinions override those of the community they’re trying to engage with.

"Techdirt system works well at scale"

I very much doubt that. It works here because it’s a relatively small community, with the most popular threads getting a few hundred posts. But, already we have problems with threads being derailed by the deliberately ignorant, repeating oft-debunked lies. That won’t scale to threads of tens of thousands of posts with only "more speech" in response to the lies. Something else needs to happen at that scale.

"Wikipedia system also works pretty well at scale"

Largely because of things like locking pages that keep getting edited with misinformation, and pages upon pages of angry discussion behind the scenes as to what’s acceptable on a specific page.

"From my viewpoint Trump was tolerated for years because he was the president and twitter felt like they didn’t have the moral authority"

LOL, no. He was tolerated because he generated a huge amount of traffic to the site, both from the tweets themselves and from the media breathlessly reporting on every stupid thing he said there. They ditched him once he became too controversial and the impending backlash would have led to both a drop in traffic and legal issues (wrong-headed as it might be, you can guarantee that their role in the insurrection as a result of tolerating election misinformation is going to be used against them in attempts to both remove their legal protections and add new liabilities).

Despite the whining from the Q types, this is all about business at the end of the day, not morality or politics.

"I just think the decision on what is garbage should try to be based on what lines up with objective reality rather than a particular philosophy."

The inability of some people to define objective reality or understand that their subjective understanding of it might not mesh with the needs of the communities they’re trying to gatecrash is what got us here. I prefer the idea of each community being able to moderate as their community sees fit, and for people to go to a community that actually wants them there if they dislike what that community does.

Scary Devil Monastery (profile) says:

Re: Re: Re: Re:

"Nice strawman! Let’s take an example and make it as extreme as possible, where no one can disagree with you as you beat it to the ground. Gee, there are no logical fallacies here."

That "extreme example" will be the norm under Koby’s suggestions, so if the reply is a straw man it inherited that aspect from the original assertion.

A legal compulsion to keep communication "neutral" completely and utterly wrecks any and every semblance of free speech and the right to even have or communicate an opinion.

Every sane person realizes this. Alt-right trolls do as well but are apparently hoping if they just keep babbling newspeak for long enough eventually people will get confused enough to forget basic logic.

Anonymous Coward says:

Re: Re: Re: Re:

"Gee, there are no logical fallacies here."

Which logical fallacy might you be thinking of here?

Appeal to Extremes, Reductio ad Absurdum, or something else?
I fail to see how either of the two above-named fallacies is applicable to discussion of things that are absurd, extreme or otherwise friggin crazy.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Good for the Gander

For social media, that means remaining hands-off and letting the political actors build or perhaps destroy their own reputation. Don’t do fact-checks or bird watching, and no one gets to be the Ministry of Truth.

And the president becomes a medical expert that kills millions.

This comment has been deemed insightful by the community.
Rocky says:

Re: Good for the Gander

Of course, you were all smiles when the "fact checks" were used against people with whom you disagreed during the 2020 election cycle.

Can you show us some examples of those fact checks?

When it comes to political arguments, the only way for a platform to build trust is to remain neutral and not take sides by becoming an arbiter.

Even if said political argument is pure lies, deranged conspiracy theories or hate speech? Why should politicians be allowed to break a platform’s TOS? Are they above civil and federal law? Anyway, nobody trusts a platform that is filled with assholes running rampant. Well, the assholes do to some degree, I guess. See how well that turned out for Parler…

For social media, that means remaining hands-off and letting the political actors build or perhaps destroy their own reputation.

I have a better idea: let the government create a social media platform for politicians and those who want to interact with them, while letting private actors run the non-governmental social media. That’ll solve all the problems with political speech and social media. The politicians can swim in their cesspool while the public is free of all the assholes.

Don’t do fact-checks or bird watching, and no one gets to be the Ministry of Truth.

I don’t think you understand what the Ministry of Truth is; its whole purpose is to distort the truth, which is the opposite of fact checking. You should really read 1984, you would probably learn a thing or two.

This comment has been deemed insightful by the community.
Mike Masnick (profile) says:

Re: Good for the Gander

Of course, you were all smiles when the "fact checks" were used against people with whom you disagreed during the 2020 election cycle.

When was I all smiles about that? Looking back, I believe the only time I wrote about Facebook’s fact check feature was… to point out I didn’t think it would work? https://www.techdirt.com/articles/20161113/00431436029/let-them-eat-facts-why-fact-checking-is-mostly-useless-convincing-voters.shtml

Koby, I keep asking you this and you never answer. Why do you always lie?

It’s pathological.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Good for the Gander

"Of course, you were all smiles when the "fact checks" were used against people with whom you disagreed during the 2020 election cycle."

Can you point to any fact checks that were misleading or incorrectly used, or are you just butthurt because your lying friends were accurately shown to be liars?

"But reason why it won’t work because of bias, on both sides"

"Both sides" is bullshit on most issues. If you have one side that believes in medical science and pragmatic approaches to a pandemic, and another who believes that it’s all a hoax to allow communist chips to be implanted, then one side is objectively wrong. Your "both sides" shit is what has disproportionately cost nearly 500k American lives so far.

"For social media, that means remaining hands-off and let the political actors build or perhaps destroy their own reputation"

We tried that. That’s why we’re having discussions about whether a QAnon weirdo who believes in Jewish space lasers and stalks mass shooting victims should be on education boards.

"noone gets to be the Ministry of Truth"

Ministry implies government. Until they get involved, your whining is impotent.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Will this work?

Will this work?

In a word: No.

To Trumpists, everything anyone else says is disinformation, lies, "fake". To non-Trumpists, everything that comes out of a Trumpist’s mouth is garbage, bullshit, false. Given that the country is split almost down the middle, how could any algorithm choose what is truth based solely on the opinions of readers?

There is no automatic solution for this and no social solutions either. Platforms will simply have to choose what is "truth" on that platform for moderation purposes and continue handling the moderation themselves. For those whose political opinions differ from the position chosen by the platform, suck it up, buttercup (aka "fuck your feelings").

This comment has been deemed insightful by the community.
Blake C. Stacey (profile) says:

Wikipedia has a policy of "NPOV" — Neutral Point of View. That’s not neutral in the sense that Republicans would want, i.e. false balance, but rather an ethos of sticking to what the sources say. They’ve also developed a lengthy guideline for what can qualify as a "reliable source". And there’s a more specific guideline just for writing about fringe theories, and a guideline for the extra-careful standards for writing about medicine. The acronyms are thick on the ground, because Wikipedia editors are the sort of people who think that all the problems of education and epistemology can be solved by applying more acronyms. Mind you, this is just a slice through the rulebook — we haven’t even gotten to the guidelines for "notability", which say what topics deserve to have articles about them. Or the Manual of Style, or the rules for Conflict-of-Interest editing. In short, it ain’t simple.

The question raised by Birdwatch is, if you’re trying to do community moderation to build a site that is fact-based and isn’t a complete garbage fire, could the rules actually be any simpler? How do you write simple guidelines for a problem that is, itself, necessarily complicated?

This comment has been deemed insightful by the community.
Thad (profile) says:

Re: Re:

I saw a good article the other day about teaching media literacy: Lizard People in the Library

There’s a section where it talks about Wikipedia’s standards:

It’s ironic that Wikipedia, a social platform once demonized by educators as unreliable, now is a global avatar of strict adherence to a set of retro principles about how to properly establish and document information through objectivity, appealing to generally accepted facts, and references to authoritative sources.

(links omitted from copy-paste)

This comment has been flagged by the community.

Anonymous Coward says:

Section 230 destroyed Guy Babcock, but MANICK won’t admit it because he used his lawyer buddies to destroy me, and everyone he believes is a friend of copyright. Because MANICK loves pirates.

The veil will come off of the shadow group financing this cesspool of a site’s so-called "independent journalism". MANICK is going to get what’s coming to him, same for his whore wife and trailer trash kids. He’s going to learn what it means to anger a bear, a bear that knows which cocks to suck and which cocks to step on. I’m going to tear him a new asshole so big he’s going to fit Stephen T. Stone in it.

This comment has been flagged by the community.

This comment has been flagged by the community.

PaulT (profile) says:

Re: Re: Re:3 Re:

It’s impossible to work out what’s going on with these posts. Mailing list? That’s a blast from the past, but I thought it was a different deranged lunatic who whined about his spam for his fraud sales operation being less effective in the age of social media, and who we haven’t heard from in a while.

Was I wrong about that, or is this just a parody posting that mixes and matches previous troll posts? Poe strikes again…

This comment has been flagged by the community.

Anonymous Coward says:

Re: Re: Re:4 Re:

You would know, Paul. Thanks to you hounding my past accounts I’ve had to keep my own information closely guarded, because MANICK likes it when his attack dogs stalk me.

Don’t worry though. As soon as I reveal my name in court everything nasty you said about John Smith becomes actionable. You’ll rue the day you crossed me.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re: Re:5 Re:

"Thanks to you hounding my past accounts I’ve had to keep my own information closely guarded"

As usual, you make no sense. Because people kept replying to your random bullshit under the many pseudonyms you created in order to pretend we were replying to more than one person instead of the known idiot, you have to guard information you never provided to us?

"As soon as I reveal my name in court everything nasty you said about John Smith becomes actionable"

As soon as you reveal your name in court, things that people said about a fake character you created become actionable? Is the reason this is taking you so long that you’re trying to find a jurisdiction that doesn’t laugh at you as soon as you present your claims?

"You’ll rue the day you crossed me."

I doubt it. You’ve been threatening people for years and all we have from you is new and interesting ways to mock you for your disconnect from reality.

This comment has been flagged by the community.

Anonymous Coward says:

AOC was a half-mile away from the break-in, but she “almost died.” This is rich coming from a woman who was mostly silent during the BLM riots and lootings when she wasn’t supporting them. All while calling for “defunding” the police. Who cares about having police as long as the plebes are getting hammered and not me, right? Look up lying politician and you’re bound to see this flake’s mug staring back at you. No wonder the American people hate their politicians.

