Google Still Says Our Post On Content Moderation Is Dangerous Or Derogatory
from the sure-ok-whatever dept
Back in October, we wrote about how Google had declared — with no details — that an earlier post we had done was “dangerous or derogatory” and that it would no longer allow AdSense ads on that page. The real irony? The original post (which contains nothing dangerous or derogatory) was about the “impossible choices” platforms have to make when moderating speech. So, what better example than Google “moderating” an article about how internet platforms will always be bad at content moderation?
We had requested a “review” of the designation when we first got it, and Google initially rescinded the decision, before reinstating it a few weeks later. We appealed again… and were rejected. That’s when I wrote the article. Soon afterwards, some people from Google reached out to discuss what happened. As I’ve said all along — and as I said directly to people at Google — the company has every right to make these calls however they want. I certainly understand how it’s impossible to craft reasonable rules that can be applied at scale without making “mistakes” (and I still maintain this is a mistake). My one request was that the company be a bit more forthcoming about why we were dinged, so that, at the very least, if there was a real issue, we could make a determination on our own about whether or not we agreed and if there was anything worth changing. I didn’t get a response to that specific request and I can guess why: given how much content needs to be moderated, it would likely add significant overhead that probably isn’t worth it for any “edge cases.”
Either way, we left things alone. If Google doesn’t want to put AdSense on that page, fine. AdSense pays next to nothing anyway. But, what’s weird is that over this past weekend, Google decided to complain to us again about the same damn page. I had simply assumed that, once we left things as is, that page was on some sort of permanent “bad” list. But, for whatever reason, the company decided it was urgent to alert us that the page it already (stupidly) called “dangerous or derogatory” was now being declared “dangerous or derogatory” once again. Because we got a new notification, I clicked the appeal button once again, and on Monday morning the company rejected our appeal. Again, that’s Google’s prerogative, though it looks kinda silly. Why bother telling us that a page you already decided (incorrectly) is a problem is still a problem? We’re not changing anything, so just don’t put ads on it and stop bugging us about it.
One other note on all of this: while the folks at Google (understandably) couldn’t tell us why the story was dinged in the first place, they did note that it might be because of user comments — and pointed me to this post about “managing the risk of user comments.” What struck me as somewhat astounding about that article is that it is Google more or less taking the exact opposite stance it normally takes on intermediary liability. While Google (correctly) fights for intermediary liability protections in government policy around the globe, here it says that if you have any kind of user generated content on your site — such as comments — then you are responsible for that content.
First, understand that as a publisher, you are responsible for ensuring that all comments on your site or app comply with all of our applicable program policies on all of the pages where Google ad code appears. This includes comments that are added to your pages by users, which can sometimes contain hate speech or explicit text.
Knowing this, please read Strategies for managing user-generated content. Make sure you understand how to mitigate risk before you enable comments or other forms of user-generated content. Managing comments on your site pages is your responsibility, so make sure you know what you’re getting into. For example, you’ll need to ensure you review and moderate comments consistently so as to ensure policy compliance so that Google ads can run.
Obviously, there’s some level of difference between being legally liable in court and just having ads taken off your site. But it’s pretty incredible to see Google using this kind of language when talking to smaller sites, telling them that they are responsible and that they have to institute specific moderation schemes, while at the same time fighting vehemently against any effort by governments to impose similar responsibility and content moderation requirements on Google itself. It feels… a bit hypocritical.
So, it is indeed possible that it’s the comments on our page that keep getting us dinged — there is one in particular that uses some “derogatory” words/phrases (though, incredibly, that comment is using that language in an effort to demonstrate a point about content moderation, rather than using them in a derogatory manner). And yet, we get dinged for it. We won’t remove that comment, because there is no reason to.
But, in a way, this all highlights, again, the very mess we were describing in that original post: content moderation at scale is impossible to do well. You have to write rules that can be consistently applied by a large group of folks who have to review pages very quickly. So it’s likely that somewhere in those rules is a prohibition on putting advertising next to certain “derogatory” words. That seems like a clearly drawn line… until there’s a comment that isn’t using those words in a derogatory manner, but rather to demonstrate questions about content moderation. But there’s no exception written into the rules, and there’s no allowance for taking the context into account (which would be impossible in its own right, because no reviewer is going to have the time to understand all the context).
Of course, it would be nice if Google just explained that to us, rather than just telling us that the page has derogatory content with no other details. But, what are you going to do… other than post another post about it?