Content Moderation At Scale Is Impossible: That Time Twitter Nazis Got A Reporter Barred From Twitter Over Some Jokes
from the free-speech? dept
Reporter Jon Schwarz, over at The Intercept, has yet another story of content moderation at scale gone wrong, focusing this time on Twitter and his own account. It seems that a bunch of white supremacists on Twitter got mad at him, dug up an old joke, took it out of context, and reported it en masse, and Twitter blocked him over it. Schwarz’s story is worth reviewing in detail, but I think he takes the wrong message away from it. His take is, more or less, that Twitter doesn’t much care about lowly users, and can’t be bothered to understand the context of things (we’ll get to the details of the spat in a moment):
It would be easy to interpret this as active contempt by Twitter for its users. But it’s more likely to be passive indifference. Like any huge corporation, Twitter is focused on the needs of its customers, which are its advertisers. By contrast, Twitter’s users are not its customers. They’re its product. Grocery stores don’t care if a can of soup complains about being taken off the shelf.
Similarly, contrary to speculation by some, I don’t think CEO Jack Dorsey secretly sympathizes with his Nazi user base. He probably just enjoys being a billionaire. As he’s said, “from a simple business perspective … Twitter is incentivized to keep all voices on the platform.” Whatever else you want to say about Nazis, they definitely drive engagement, which in turn lets Twitter charge higher prices for diaper ads.
I even sympathize a little bit with Twitter’s conundrum. They aspired to be a globe-straddling highly profitable monopoly that had no responsibility for what their users did. This was a circle that couldn’t be squared. Procter & Gamble doesn’t want its promoted tweets to appear right above hypothetical user @hh1488 livestreaming himself massacring 17 congregants at an Albuquerque mosque.
I was simply caught in the natural dynamics that flow from this contradiction. The structure of multinational publicly-traded corporations inevitably puts them somewhere politically from the center-right to the hard-right.
While an interesting take, I’d argue that it gets nearly every important point confused. Indeed, I’d argue that Schwarz is making the very same mistake as conservatives who blame Twitter for supposedly anti-conservative bias: looking just at their own situations and the content moderation choices they’re aware of, and imputing to the company some sort of natural political motive. Twitter is neither “liberal” nor is it, as Schwarz says, “center-right to the hard-right.” It’s a company. It doesn’t have political beliefs like that. (Relatedly, in the past, I’ve made it quite clear how misleading and unhelpful the whole “if you’re not paying, you’re the product” line is.)
So, now let’s dig into the specifics of what happened to Schwarz and why. Rather than being some sort of political bias at play, or (as Schwarz hints at in his opening) Twitter bending over backwards to appease white supremacists, this is yet another manifestation of Masnick’s Impossibility Theorem… that it’s impossible to do content moderation well at scale.
What happened here was that first Schwarz made a joke about Fox News host Tucker Carlson, who appeared to be doing some dog whistling:
Tucker, just say Jewish, this is taking forever pic.twitter.com/QnONrYFIEX
— Jon Schwarz (@schwarz) March 12, 2019
As Schwarz notes, he is referencing a joke from the sitcom “30 Rock”:
In fact, I was referring to a famous “30 Rock” joke, which had now assumed human form in Carlson. When NBC executive Jack Donaghy decides that TGS, the TV-show-within-the-show, doesn’t have wide enough appeal, he complains to its head writer Liz Lemon:
JACK: The television audience doesn’t want your elitist, East Coast, alternative, intellectual, left-wing …
LIZ: Jack, just say Jewish, this is taking forever.
That’s not the joke he got blocked over, though. Instead, former KKK leader and all-around awful person David Duke took that joke and paired it with another out-of-context joke from a few years earlier to mock Schwarz. I’m not linking to Duke’s tweet, but this was the joke that he paired with the one above to say “These are not good people, folks.” Which, truly, is some world-class projection.
In case you’re unable to load the image, Schwarz’s 2015 tweet had said:
you know, it actually would make much more sense if jews and muslims joined forces to kill christians.
As Schwarz explained, in context, this is actually the kind of snarky reply that Duke would have historically agreed with, because it was part of a longer thread criticizing Israel (something Duke does frequently, though perhaps with other motivations in mind):
But Duke is such a cretin that it never occurred to him that my 2015 joke was exactly what he adores: criticism of Israel. That’s hopefully clear even out of context. But thanks to Twitter’s advanced search function, you can see that I was talking specifically in the context of two events: the publication of photographs of Gaza taken after Israel’s bombing campaign in Operation Protective Edge, and the murder of three Muslim students in Chapel Hill, North Carolina. That week I also wistfully suggested, “how about nobody kill anybody and then we go from there.”
Either way, lots and lots of Carlson and Duke fans reported that particular tweet to Twitter (and, of course, bombarded Schwarz with the kind of Twitter barrage that seems all too common these days) and Twitter took action over that tweet.
I have no great insight into how this particular decision went down, but having spent a lot of time in the past few years talking with content moderation folks at Twitter (and other social media platforms), here’s what’s a lot more likely than Schwarz’s theory: Twitter has constantly had to tweak its rules over time, and because its “trust and safety” team is decently large, the rules need to be easy to understand and easy to apply consistently. That means weighing context is generally not possible. Instead, there will be some bright-line rules: things like “no references to killing or violence directed at specific protected classes of people.” This is the kind of rule you could easily see in place in just about any set of content moderation rules.
And, when looked at through that lens, Schwarz’s tweet, even in jest, would trip that line. It’s a statement about killing people of a particular religion.
As for why it only caused trouble four years after the tweet, again, the reason is pretty simple. It’s not because Twitter Nazis were reporting it so often, but because anyone reported it at all. Twitter doesn’t review each and every tweet. It reviews tweets that come to its trust & safety team’s attention. And I’ve heard firsthand from people at Twitter that if older tweets come to their attention, even ones that have been up for many years, and those tweets violate current rules, they will be subject to action.
Again, from the position of thinking about how to run a content moderation/trust & safety team at scale, you can totally see how these rules would get put in place, and why they’d actually be quite sensible. I’m guessing just about every internet platform that has any kind of content policy has just such a rule. And it’s easy to sit here and say, “but in context, it’s clear he’s making a joke” or “it’s clear he’s trying to make a very different point and not literally advocating for Jews and Muslims to kill Christians.”
But how do you write those exceptions into the rules such that an entire diverse team of employees on a trust & safety team can understand that?
You can try to put in an “except for jokes” clause, but that would require everyone to be able to recognize a joke. It would also invite gaming the system: people would advocate for such killings… and then claim “just joking!” And it would require a team that is culturally sensitive enough to recognize humor, context, and joking for nearly every cultural group around the globe.
That’s literally impossible.
And that’s why this is just yet another example of why content moderation at scale is impossible to do well. It has nothing to do with politics. It has nothing to do with left-right. It has nothing to do with Twitter appeasing neo-Nazis. It has everything to do with the impossibility of moderating speech at scale.