Internet Content Moderation Isn't Politically Biased, It's Just Impossible To Do Well At Scale

from the stop-this-dumb-narrative dept

The narrative making the political rounds recently is that the big social media platforms are somehow "biased against conservatives" and deliberately trying to silence them (meanwhile, some in the liberal camp complain that sites like Twitter have not killed off certain accounts, arguing -- incorrectly -- that the platforms are now overcompensating by trying not to kick off angry ideologues). This has been a stupid narrative from the beginning, but the refrain has only been getting louder and louder, especially as Donald Trump has gone off on one of his ill-informed rants claiming that "Social Media Giants are silencing millions of people." Let's be clear: this is all nonsense.

The real issue -- as we've been trying to explain for quite some time now -- is that content moderation at scale is nearly impossible to do well. That doesn't mean sites can't do better, but the failures are not the result of some institutional bias. Will Oremus, over at Slate, has a good article up detailing why this narrative is nonsense, and he points to the episode of Radiolab we recently wrote about, which digs deep into how Facebook's moderation choices get made -- and quickly gives you a sense of why it's impossible to do this well. I would add to that a recent piece from Motherboard, accurately titled The Impossible Job: Inside Facebook’s Struggle to Moderate Two Billion People.

These all highlight a few simple facts that lots of angry people (on all sides of political debates) are having trouble grasping.

  1. If you leave a platform completely unmoderated, it will fill up with junk, spam, trolling and the like, thereby decreasing its overall utility and pushing people away.
  2. If you do decide to moderate, you have a set of impossible choices. So much content requires understanding context, and context may be very different, even for the same content when viewed by different people.
  3. If you're going to moderate at scale, you're going to need a set of "rules" that thousands of generally low-paid individuals will have to put into practice, reviewing each piece of content for just a few seconds (a recent report said that Facebook reviewers were expected to review 5,000 pieces of content per day -- which, assuming an eight-hour shift, works out to under six seconds per item).
  4. It is impossible to write rules like that so they apply cleanly to all content. A significant percentage of content falls into gray areas, where it becomes a judgment call by someone in a cubicle, partway through their 5,000 pieces of content.
  5. At that pace, many mistakes are made. They are the collateral damage of moderation at scale.
  6. People caught in the crossfire of collateral damage will rightly make a big stink about it and the social media companies will look bad.
  7. Meanwhile, some of the reasonable moderation decisions will hit trolls hard (see point 1 above) and those trolls will then take to other platforms and make a huge stink about how unfair it all is, and the social media companies will look bad.
Put this all together and it's a no-win situation. You can't leave the platform completely unmoderated, but any attempt at moderation at scale is going to have problems. The "scale" part is what's most difficult for most people to grasp. As Kate Klonick (again, author of an incredible paper on content moderation that you should read, as well as of a guest post here on Techdirt) notes in the Motherboard piece:

“This is the difference between having 100 million people and a few billion people on your platform,” Kate Klonick... told Motherboard. “If you moderate posts 40 million times a day, the chance of one of those wrong decisions blowing up in your face is so much higher.”
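To put those numbers in perspective, here's a quick back-of-the-envelope sketch in Python. Only the 40-million-decisions-per-day figure comes from Klonick's quote above; the accuracy rates are invented purely for illustration:

```python
# Even extremely accurate moderation produces an enormous absolute number
# of wrong calls at this volume. Accuracy rates here are hypothetical.
decisions_per_day = 40_000_000  # figure cited by Klonick above

for accuracy in (0.99, 0.999, 0.9999):
    mistakes = decisions_per_day * (1 - accuracy)
    print(f"{accuracy:.2%} accurate -> ~{mistakes:,.0f} wrong calls per day")
```

Even at a (wildly optimistic) 99.99 percent accuracy, that's roughly 4,000 wrong calls every single day, each one a potential outrage story.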

Later in the piece, Klonick again makes an important point:

“The really easy answer is outrage, and that reaction is so useless,” Klonick said. “The other easy thing is to create an evil corporate narrative, and that is also not right. I’m not letting them off the hook, but these are mind-bending problems and I think sometimes they don’t get enough credit for how hard these problems are.”

This is why I've been advocating loudly for platforms to move moderation decisions further out to the ends of the network, rather than making them in a centralized fashion. Let end users create their own moderation systems, or adapt ones put together by third parties (a minimal sketch of that idea follows below). But, of course, even that approach has its own problems.
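To make that concrete, here's a rough sketch of what edge-based, user-controlled moderation might look like. Everything in it is hypothetical -- the Post shape, the filter rules, and the blocklist are invented for illustration, not any platform's actual API:

```python
# Hypothetical sketch: the platform delivers everything, and each user's
# client hides content according to rules the user wrote or subscribed to.
from dataclasses import dataclass
from typing import Callable, List, Set

@dataclass
class Post:
    author: str
    text: str

# A filter decides, for this user only, whether a post should be hidden.
Filter = Callable[[Post], bool]

def keyword_filter(blocked_words: List[str]) -> Filter:
    """A rule the user writes themselves: hide posts containing these words."""
    def hide(post: Post) -> bool:
        lowered = post.text.lower()
        return any(word in lowered for word in blocked_words)
    return hide

def blocklist_filter(blocked_authors: Set[str]) -> Filter:
    """A rule adopted from a third party: hide posts from listed authors."""
    return lambda post: post.author in blocked_authors

def my_timeline(posts: List[Post], filters: List[Filter]) -> List[Post]:
    """Apply the user's chosen filters client-side; no central moderator decides."""
    return [p for p in posts if not any(f(p) for f in filters)]

# One home-made rule plus one subscribed third-party blocklist:
filters = [keyword_filter(["miracle cure"]), blocklist_filter({"spambot42"})]
posts = [Post("alice", "Good morning!"), Post("spambot42", "Buy this miracle cure!")]
print(my_timeline(posts, filters))  # only alice's post remains
```

The tradeoff is that this shifts the moderation burden onto users and third parties, which is part of why even this approach is no cure-all.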

No matter what choices are made, there are significant tradeoffs. As the Motherboard article also highlights, what seems like a "simple" rule gets hellishly complex when applied to other situations -- and then you've suddenly increased the "error" rate, people get angry all over again, and the whole mess gets blown out of proportion.

“There's always a balance between: do we add this exception, this nuance, this regional trade-off, and maybe incur lower accuracy and more errors,” Guy Rosen, VP of product management at Facebook, said. "[Or] do we keep it simple, but maybe not quite as nuanced in some of the edge cases? Balance that's really hard to strike at the end of the day.”

As the Oremus piece notes, the "bias" of platforms when it comes to moderation is not "liberal" or "conservative" -- it's capitalist. Having a platform overrun with spam and trolls is bad for business. Hiring enough people who can adequately review content in the correct context is somewhere between insanely cost-prohibitive and impossible. So the platforms muddle through with imperfect review processes. Making moderation mistakes is also bad for business, and the platforms would love to minimize them, but "mistakes" are often in the eye of the beholder, which again reinforces that this is an impossible task. For every person screaming that Alex Jones should be kicked off these platforms, there's another screaming about how awful the platforms that do kick him off are. There is no "right" way to do this, and that's what every platform struggles with.

And, if you think that these platforms are unfairly silencing "conservatives" (the prevailing narrative right now), it's probably because you're not paying enough attention elsewhere. Black Lives Matter and other civil rights groups have complained about "racially biased" moderation in the opposite direction, saying that minority groups are regularly silenced on these platforms. Indeed, it's not hard to find a ton of reports of black activists having content removed from social media platforms. And for all the talk of Infowars being taken off these platforms, how many people noticed that the Facebook page of Telesur, the Venezuelan socialist TV station, was recently taken down as well?

Yes, it's fine to point out that these platforms (mainly Facebook, Twitter and YouTube) are really bad at moderating. But unless you're willing to actually understand the scale at play, recognize how many mistakes are going to be made (and how trolls will go nuts over correct decisions), arguing that any of these platforms are "targeting" anyone just plays into a false narrative. It's not true.

Filed Under: bias, content moderation, errors, filters, mistakes, scale, social media, spam, trolls
Companies: facebook, google, twitter, youtube


Reader Comments


John Smith, 28 Aug 2018 @ 5:06am

    Re: Re: Re: Re:

    Internet whistleblowing can be de-indexed and shadowbanned into oblivion, much like any other message which suffers that fate now.

    I acknowledge the right to censor content while calling it flawed. It can be legal and still wrong, or still not a viable business model. I noted that AOL lost its dominance due to censorship, just as Facebook and Twitter will inevitably lose theirs.

    The 2 Live Crew case involved more than government censorship. A better example might have been "Sun City" by Artists United Against Apartheid -- a song that featured Bruce Springsteen at the height of his popularity, yet got very little airplay or exposure on MTV (all private networks), so its message went unheard by many.

    The proper response to censorship is not to trust advertising on any site which practices it because one is not getting the full story about the advertisers (by definition).

    My primary argument isn't that censorship should be illegal, merely that engaging in it is corporate suicide, as it was for AOL, and as it almost was for Twitter. The internet detects censorship as damage and routes around it.

    Without censorship, many users will attempt to silence someone by force, crossing the lines into harassment, defamation, and physical threats, as often occurred on USENET and still occurs on other platforms. It's irrational, or seems that way, but one has to wonder what is really at stake if someone is willing to risk prison or being sued into financial ruin just to win an internet argument. The idea that it's just "fun" is belied by the risks these speakers take. Most won't risk their survival unless something much bigger is at stake.
