The Impossibility Of Content Moderation Plays Out, Once Again, On YouTube
from the no-one-will-agree dept
I was traveling a bit this week so didn’t watch the slow motion train wreck that was happening on YouTube in real time. The latest situation began when Vox video producer Carlos Maza posted publicly on Twitter about how Steven Crowder — one of those ranty angry “comedians” — had kept posting “repeated, overt attacks on my sexual orientation and ethnicity.” He noted that Crowder’s fans had taken to harassing and doxxing him and generally being assholes. He “reported” the content to YouTube, saying that he felt the content violated its policies on bullying and harassment. After a few days, YouTube posted via Twitter (oddly) a kinda weird explanation, saying that after reviewing the videos, they didn’t actually violate YouTube’s harassment policies.
Lots of people got angry about that decision, and then YouTube changed its mind (partly), choosing to (maybe temporarily) demonetize Crowder’s channel until he agreed to “address all of the issues with his channel”, specifically “continued egregious actions that have harmed the broader community” whatever that means.
As Robby Soave at Reason notes, this is a solution that pissed off absolutely everyone and satisfied absolutely no one. Though, there is one thing that pretty much everyone agrees on: boy, YouTube sure pointed a pretty large cannon at its own foot in dealing with this one (seriously, don’t they employ people who have some sort of clue about these kinds of communication issues?).
As Soave points out, there are really no good results here. He’s correct that Crowder does seem to be an asshole, and there’s no reason to express any sympathy for Crowder being a jerk and attacking someone for their sexual orientation or ethnicity. Crowder deserves to be called out and mocked for such things. At the same time, it is quite reasonable to sympathize with Maza, as being on the receiving end of such targeted harassment by assholes is horrific. Part of the problem here is the disconnect between what Crowder himself did (just be a general asshole) and what Crowder’s followers and fans did (taking Crowder’s assholish comments and escalating them into harassment). That puts a platform like YouTube (once again) into a really impossible position. Should it be holding Crowder responsible for the actions of his crazy deranged followers (which, it can easily be argued, he winkingly/noddingly encouraged) even if Crowder didn’t do the harassment directly, and was just generally an asshole? It may seem like an easy call, but try to apply that standard to other situations and it gets complicated fast.
Katie Herzog, at The Stranger, posted a thoughtful piece about how this particular standard could boomerang back on the more vulnerable and marginalized people in our society (as is the case with almost any effort towards censorship). Even if Crowder is deeply unfunny and a jerk, this standard creates follow-on effects:
Crowder is a comic, doing exactly what comics do: Mocking a public figure. There’s nothing illegal about that, and if YouTube does reverse its decision and start to ban everyone who mocks people for their sexuality or race, they’re going to have to ban a whole lot of queer people of color who enjoy making fun of straight white dudes next. That’s not a precedent I’d like to see set.
Of course, the usual response to this is to have people claim that we’re making a bogus “slippery slope” argument. What they mean is that since you and I can somehow magically work out which assholes deserve to be shut down and which assholes are doing it in pursuit of the larger good, then clearly a company can put in place a policy that says “stop just the assholes I don’t like.”
And there are reasons to be sympathetic to such a position. It’s just that we have a couple of decades of seeing how this works at scale in actual practice, and it doesn’t work. Ratchet up the ability to silence assholes and there are plenty of “false positives” — people getting kicked off for dopey reasons. Go in the other direction and you end up with too many assholes on the platform. As we’ve discussed for years, there is no right level and there is no way to set the controls in a way that works. No matter what decision is made, it is going to piss off a ton of people. This is why I’ve been pushing for platforms to actually move in a different direction. Rather than taking on all of the policing responsibility themselves, open it up. Let others build services for content moderation, push the power to the ends of the network, and let there actually be competition among the moderation options, rather than having it come from a single, centralized source. There will still be problems with that, but it avoids many of the issues with the mess described here.