The Good Censor Document Shows Google Struggling With The Challenges Of Content Moderation
from the thoughtful-analysis dept
Last week, the extreme Trump-supporting media sites went positively ballistic when Breitbart released a leaked internal presentation entitled “The Good Censor.” According to Breitbart and the other Trumpkin media, this is somehow “proof” that Google is censoring conservatives, giving up on free speech, and planning to silence people like themselves. To put this into a context those sites would understand, this is “fake news.” I finally had the time to read through the 85-page presentation and, uh, it paints a wholly different picture than the one that Breitbart and such sites have been painting.
Instead, it pretty clearly lays out why content moderation is impossible to do well at scale and that it will always result in decisions that upset a lot of people (no matter what they do). It also discusses how “bad actors” have effectively weaponized open platforms to silence people.
It does not, as some sites have suggested, show a Google eager to censor anyone. Indeed, the report repeatedly highlights the difficult choices it faces, and notes how any move towards increased censorship can and will be abused by governments to stamp out dissent. It's also pretty self-critical, highlighting how the tech companies themselves have mismanaged all of this to make things worse (here’s just one example of a much more thorough analysis in the document):
The presentation actually spends quite a lot of time talking about the problems of any censorship regime, while also noting that various governments around the globe are effectively requiring censorship. It’s also quite obviously not recommending a particular path, but explaining why companies have gotten more aggressive in moderating content of late (and, no, it’s not because “Trump won”). It notes how bad behavior has driven away users, how governments have been increasingly using regulatory and other attacks against tech companies, and how advertisers were being pressured to drop platforms for allowing bad behavior.
The final five slides are also absolutely worth reading. It notes that “The answer is not to ‘find the right amount of censorship’ and stick to it…” because that would never work. It acknowledges that there are no right answers, and then sets up nine principles — in four categories — which make an awful lot of sense.
- Don’t take sides
- Police tone instead of content
- Justify global positions
- Enforce standards and policies clearly
- Explain the technology
- Improve communications
- Take problems seriously
- Positive guidelines
- Better signposts
You can quibble about these, but they’re mostly good general principles (though perhaps hard to put into practice). I also think there are some other ideas that are missing — such as moving control out to the ends of the network, rather than continuing to handle everything at the company level, or providing incentives for good behavior — but this is hardly the list of a company saying it’s actively going to censor political speech. Indeed, many of the complaints people have about moderation are about the lack of transparency and consistency in these actions. Though, I’d also argue that this presentation underplays why consistency is so difficult when you rely on thousands of people to make determinations on content with varying shades of gray and no clear answers as to what’s “good” or “bad.”
So while this document is being used to attack Google, it's actually yet another useful tool in showing (1) just how impossible it is to do these things right and (2) how carefully companies are thinking about this issue (rather than just ignoring it, as many insist). I recognize that’s not as “fun” a story as slagging the big bad tech giant for its plan to silence people, but… it’s a more accurate story.