from the again-and-again dept
I’ve made it clear that I don’t think much of Elizabeth Warren’s big plan to “break up big tech,” which seemed not particularly well thought out and unlikely to accomplish its actual goals. Even so, I certainly cringed upon hearing the news that Facebook had blocked an ad that Warren’s team had taken out to promote the plan. I mean, come on. Here is Warren, talking about how Facebook is too powerful and can potentially influence policy by choosing what it allows and what it doesn’t allow… and Facebook up and hands Warren the most beautiful gift she could ever hope for: blocking her own ad for her policy to break up Facebook. Basically everyone immediately spun the story as Facebook trying to censor a call to break up Facebook itself.
It sure looked bad.
Of course, the reality, again, is a lot more nuanced. And, while everyone will ignore this (and I’m sure some people will make bogus accusations in the comments), the reality is that this isn’t proof of Facebook’s nefarious attempts to censor people it doesn’t like or messages it doesn’t like. It’s proof of the impossibility of content moderation at scale. As Facebook explained, the original ad violated a Facebook policy that had nothing to do with the message it was sending: you’re apparently not allowed to use Facebook’s logo in an ad:
“We removed the ads because they violated our policies against use of our corporate logo,” the spokesperson said. “In the interest of allowing robust debate, we are restoring the ads.”
This is, indeed, true. If you look at Facebook’s ad policies, they show the following:
You are, in fact, not allowed to use a Facebook logo in an ad. Warren’s ad violated that. Of course, in this context, it looks really, really bad. As Buzzfeed’s Ryan Mac noted, this policy — “which was ostensibly put in place for good reason, is interpreted without nuance.”
Yup. Except, here’s the thing: as we discussed on our podcast last year, it is literally impossible to invoke nuance when moderating content at scale. To handle the kind of volume that Facebook and other giant platforms deal with, you need thousands (and maybe tens of thousands) of content moderators, and they need to be trained so that they all apply the same rules consistently (which is already an impossible standard). In such a world, there is literally no room for nuance. A system that allows nuance is one that allows arbitrary decision making… leading to just more complaints of inconsistent content moderation.
And, frankly, for all of Warren’s attempts to frame this as evidence that Facebook has “too much power” and is “dominated by a single censor,” what actually played out suggests why that’s inaccurate. These ads weren’t getting much attention. Indeed, they had almost no money behind them. According to Buzzfeed, these ads weren’t designed to reach a wide audience:
Facebook’s ad archive shows that the four ads had less than $100 in backing each, with three garnering fewer than 1,000 impressions and one garnering between 1,000 and 5,000 impressions.
And then what happened? The ads got taken down, and rather than being “censored,” the story went wildly viral through other sources, almost as if the Warren campaign found some silly rule to violate just to make this kind of thing happen… And, of course, the Streisand Effect then guaranteed that, for basically a tiny ad spend, a ton more people became aware of these ads.
I fully expect that the details and nuance here will be ignored by most — and we’ll keep hearing for months (or, possibly, years) about how this somehow “proves” Facebook either “censors critics” or is too dominant and can stifle a message. And, yet, all of the details show something very, very different. Content moderation at scale is impossible to do well, and when Facebook did take down a message (for reasons entirely unrelated to its content, and after receiving tons of pressure from people like Elizabeth Warren to better police political ads…), it suddenly became headline news across the political and tech news realms.
Again, there are all sorts of reasons to be concerned about Facebook’s market position. And I’d love to see more competition in the market. But, can we at least not jump on the easy narrative when it’s wrong, even if it “feels” good?