Masnick's Impossibility Theorem: Content Moderation At Scale Is Impossible To Do Well
from the a-little-philosophy dept
As some people know, I’ve spent a fair bit of time studying economist Kenneth Arrow, whose work on endogenous growth theory and information economics influenced a lot of my thinking on the economics of innovation in a digital age. However, Arrow is perhaps best known for what’s generally referred to as Arrow’s Impossibility Theorem, which could be described most succinctly (if not entirely accurately) as arguing that there is no perfect voting system to adequately reflect the will of the public. No matter which voting system you choose, it will have some inherent unfairness built into it. The Wikipedia summary (linked above) is not the best, but if you want to explore it in more detail, I’d recommend this short description or this much longer description.
I was thinking about that theorem recently, in relation to the ever-present discussion about content moderation. I’ve argued for years that while many people like to say that content moderation is difficult, that’s misleading. Content moderation at scale is impossible to do well. Importantly, this is not an argument that we should throw up our hands and do nothing. Nor is it an argument that companies can’t do a better job within their own content moderation efforts. But I do think there’s a huge problem in that many people — including many politicians and journalists — seem to expect that these companies not only can, but should, strive for a level of content moderation that is simply impossible to reach.
And thus, throwing humility to the wind, I’d like to propose Masnick’s Impossibility Theorem, as a sort of play on Arrow’s Impossibility Theorem. Content moderation at scale is impossible to do well. More specifically, it will always end up frustrating very large segments of the population and will always fail to accurately represent the “proper” level of moderation of anyone. While I’m not going to go through the process of formalizing the theorem, a la Arrow’s, I’ll just note a few points on why the argument I’m making is inevitably true.
First, the most obvious one: any moderation is likely to end up pissing off those who are moderated. After all, they posted their content in the first place, and thus thought it belonged wherever it was posted — so they will almost certainly disagree with the decision to moderate it. Now, some might argue the obvious response to this is to do no moderation at all, but that fails for the obvious reason that many people would greatly prefer some level of moderation, especially given that any unmoderated area of the internet quickly fills up with spam, not to mention abusive and harassing content. There is the argument (which I regularly advocate) that pushing the moderation out to the ends of the network (i.e., giving more controls to the end users) is better, but that also has complications, in that it puts the burden on end users, who often have neither the time nor the inclination to continually tweak their own settings. No matter what path is chosen, it will end up being not ideal for a large segment of the population.
Second, moderation is, inherently, a subjective practice. Despite some people’s desire to have content moderation be more scientific and objective, that’s impossible. By definition, content moderation is always going to rely on judgment calls, and many of the judgment calls will end up in gray areas where lots of people’s opinions may differ greatly. Indeed, one of the problems of content moderation that we’ve highlighted over the years is that to make good decisions you often need a tremendous amount of context, and there’s simply no way to adequately provide that at scale in a manner that actually works. That is, when doing content moderation at scale, you need to set rules, but rules leave little to no room for understanding context and applying it appropriately. And thus, you get lots of crazy edge cases that end up looking bad.
We’ve seen this directly. Last year, when we turned an entire conference of “content moderation” specialists into content moderators for an hour, we found that there were exactly zero cases where we could get all attendees to agree on what should be done in any of the eight cases we presented.
Third, people truly underestimate the impact that “scale” has on this equation. Getting 99.9% of content moderation decisions to an “acceptable” level probably works fine when you’re dealing with 1,000 moderation decisions per day, but large platforms are dealing with way more than that. If you assume that there are 1 million decisions made every day, even with 99.9% “accuracy” (and, remember, there’s no such thing, given the points above), you’re still going to “miss” 1,000 calls. But 1 million is nothing. On Facebook alone, a recent report noted that there are 350 million photos uploaded every single day. And that’s just photos. At a 99.9% accuracy rate, it’s still going to make “mistakes” on 350,000 images. Every. Single. Day. So, add another 350,000 mistakes the next day. And the next. And the next. And so on.
And, even if you could achieve such high “accuracy,” with so many mistakes it wouldn’t be difficult for, say, a journalist to go searching, find a bunch of those mistakes — and point them out. This will often come attached to a line like “well, if a reporter can find those bad calls, why can’t Facebook?” which leaves out that Facebook DID get the other 99.9% right. Obviously, these numbers are just illustrative, but the point stands that when you’re doing content moderation at scale, the scale part means that even if you’re very, very, very, very good, you will still make a ridiculous number of mistakes in absolute terms every single day.
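The arithmetic here is simple enough to sketch directly. A minimal illustration, using the figures from the text (the function name and numbers are just for demonstration):

```python
def daily_mistakes(decisions_per_day: int, accuracy: float) -> int:
    """Absolute number of wrong moderation calls per day at a given accuracy rate."""
    return round(decisions_per_day * (1 - accuracy))

# 1 million decisions/day at 99.9% accuracy -> 1,000 bad calls per day
print(daily_mistakes(1_000_000, 0.999))

# 350 million photo uploads/day (the Facebook figure cited above)
print(daily_mistakes(350_000_000, 0.999))
```

The second call yields 350,000 — the point being that the error *rate* can look superb while the error *count* remains enormous, and the count is what journalists and users actually encounter.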
So while I’m all for exploring different approaches to content moderation, and see no issue with people calling out failures when they (frequently) occur, it’s important to recognize that there is no perfect solution to content moderation, and any company, no matter how thoughtful and deliberate and careful, is going to make mistakes. Because that’s Masnick’s Impossibility Theorem — and unless you can disprove it, we’re going to assume it’s true.