Google Moderation Team Decides My Piece About The Impossible Nature Of Content Moderation Is 'Dangerous Or Derogatory'

from the thanks-for-proving-my-point dept

Well, well. A few weeks back I had a big post all about the impossibility of moderating large content platforms at scale. It got a fair bit of attention, and has kicked off multiple discussions that continue to this day. However, earlier this week, Google’s ad content moderation team apparently decided to help prove my point about the impossibility of moderating content at scale when it determined that post was somehow “dangerous or derogatory.”

If you can’t read that, it says that Google has restricted serving ads on that page because it has determined that the content is “dangerous or derogatory.” It then lists the possible ways in which content can be “dangerous or derogatory”:

Dangerous or derogatory content

As stated in our program policies, Google ads may not be placed on pages that contain content that:

  • Threatens or advocates for harm on oneself or others;
  • Harasses, intimidates or bullies an individual or groups of individuals;
  • Incites hatred against, promotes discrimination of, or disparages an individual or group on the basis of their race or ethnic origin, religion, disability, age, nationality, veteran status, sexual orientation, gender, gender identity, or other characteristic that is associated with systemic discrimination or marginalization.

Huh. I’ve gone back and read the post again, and I don’t see how it can possibly fall into any of those categories. Now, if I were a conspiracy theory nutcase, I’d perhaps argue that this was somehow Google trying to “silence” me for calling out its awful moderation practices. Of course, the reality is almost certainly a lot more mundane. Just as the post describes, doing this kind of content moderation at scale is impossible to do well. That doesn’t mean they can’t do better — they can (and the post has some suggestions). But, at this kind of scale, tons of mistakes are going to be made. Even if it’s just a fraction of a percent of content that is wrongly “moderated,” at this scale it still adds up to millions of pieces of legitimate content incorrectly flagged. It’s not a conspiracy to silence me (or anyone). It’s just the nature of how impossible this task is.

This is also not the first or second time Google’s weird morality police have dinged us over posts that clearly do not violate any of their policies (at this point, we get these kinds of notices every few months, and we appeal, and the appeal always gets rejected without explanation). I’m just writing about this one because it’s so… fitting.

The fact is these kinds of things happen all the time. Hell, there was a similar story just a week ago, concerning Google refusing to put ads on a trailer for the documentary film The Cleaners… a film all about the impossibility of content moderation at scale. Coincidentally, I had been invited to a screening of The Cleaners a week earlier, and it’s a truly fantastic documentary that does an amazing job not just of highlighting the people who sit in cubicles in the Philippines deciding what content to leave up and what to take down, but also of laying out the impossibility of that task, helping people understand the very subjective nature of these decisions, and showing how much gray area is left in the eye of the beholder (in this case, relatively low-wage contract employees in the Philippines).

So those are two examples of moderators deciding (obviously incorrectly) to moderate content that shows the impossibility of moderating content well. While that does reinforce the point of just how impossible this kind of moderation is, it’s pretty obviously done without intent or political bias. When someone has five seconds to make a decision and has to skim a ton of content without context, they’re going to make mistakes. Lots of them.

Now, let’s see if this post gets moderated too…



Comments on “Google Moderation Team Decides My Piece About The Impossible Nature Of Content Moderation Is 'Dangerous Or Derogatory'”

E. Zachary Knight (profile) says:

Yeah, Google Ads’ automated moderation tools are garbage. Just a few months ago, they blocked all ads on my site because, according to them, we were committing copyright infringement. I tried appealing but was denied instantly every single time. I tried removing a few embedded YouTube videos, one of which had been copyright claimed by Warner Bros, but that was not the issue. Finally, after no help at all from Google, I found out that what they considered copyright infringement was my linking directly to the mp3 of my own podcast. The only way to fix the issue was to remove all links to, again, my own podcast.

Bergman (profile) says:

Re: Re: Re: Re:

Walmart (and others) do this. If you want to print a photo or two (that you took yourself) and it looks ‘too professional’, they’ll refuse to print it because it’s copyrighted.

No amount of arguing or showing that your ID matches the copyright watermark (if you added one) will sway them.

ONLY corporations can produce copyrighted works. Everyone else must be a thief.

Darkness Of Course (profile) says:

Re: Gotcha

Given an intractable problem, how is any programmer going to solve it, regardless of the company objectives given, stated or even written down in a requirements document?

The problem is intractable. And the results are subjective. Trying to solve intractable problems means the problem scales faster than the solution. Granted, Google has a lot of people, machines and tools to harness against some of that.

It is a massive problem. It is quite possibly unsolvable.

Sven Yugonnadudat says:

Well, well. Welcome to undeserved "hiding".

First, wouldn’t surprise me that you ASKED for that piece to be "moderated"! So you’d have a hook to continue the "moderation is tough" assertions, and position self as NOT shilling for Google. Internal evidence is above where you raise possibility of "conspiracy" from out of the blue. Don’t know, but you state that Google "sponsors" you, obviously have influence there. Wouldn’t be first "false flag" ploy ever on the net.

You can, however, reverse my opinion by STOPPING on Techdirt the nearly unique technique, used daily if not every topic, of undeserved "hiding" of comments which are within common law. It’s viewpoint discrimination, unproductive for your stated goals, infuriating, and above all, INEFFECTIVE. But you do it to suppress dissent, while you advertise "free speech". In practice that means "subject to approval by site making code that unknown persons can use to hide other viewpoints". It’s just plain civil fraud, and soon to be actionable: you "liberals" just can’t stop ratcheting up the arbitrary anti-American control to unbearable, and will provoke response of very thing you don’t want: new laws.

So here’s your chance to make the Internet a better, more open, more discussive place. I await your decision while having negative expectation due to fact that you haven’t changed for the five or so years that I’ve complained of undeserved "moderation".


lucidrenegade (profile) says:

Re: Well, well. Welcome to undeserved "hiding".

I await your decision while having negative expectation due to fact that you haven’t changed for the five or so years that I’ve complained of undeserved "moderation".

You get a single vote in the process to decide if your posts are deserving of moderation, just like everyone else. Unfortunately for you, more people than not think your posts are garbage.

That One Guy (profile) says:

Re: Re: None so deaf than those that will not hear

for the five or so years that I’ve complained of undeserved "moderation".

What I find particularly funny (and dishonest) is that they continue to play the victim card after, by their own admission, five years, as though anyone will buy their claims of ‘undeserved moderation.’ Anyone familiar with them knows damn well why they get flagged, something which has been explained to them so many times that they know it full well too.

Sayonara Felicia-San (profile) says:

Re: Well, well. Welcome to undeserved "hiding".

Sven you make a lot of sense! This actually has become quite popular around the Internet, specifically with certain YouTube channels which insiders want promoted ahead of channels they don’t approve of…

This completely artificial circus plays out in the following manner:

– Typically the owner of the channel has an ‘in’ with the YouTube and/or Google staff in order to set this up.
– Then while ‘defending’ the scummy trash at Google/YouTube/Alphabet they are “removed” as violating terms of service.
– Then their gullible users become outraged, and miraculously, a day or two later, Google apologizes and puts the content back.
– The gullible fans of that channel forget all about the larger philosophical issues and/or the initial issue.

Since the coordinated deplatforming of Alex Jones, I’ve seen this play out on three separate YouTube channels, which I won’t name because I refuse to give them the traffic they so desperately want.

Anyone who has ever had a question or issue with a Google service or company knows that this is not how Google works. Google staff will never respond directly to users of its services who have issues. Instead they run a “community”-driven shit show in which other users attempt to explain or help fellow users.

stderric (profile) says:

Now, if I were a conspiracy theory nutcase, I’d perhaps argue that this was somehow Google trying to "silence" me for calling out its awful moderation practices.

I’m just writing about this [case of poor moderation on Google’s part] because it’s so… fitting.

Nah. You’d point out that you’re in league with the Google, teaming up to create false-flag support for your own arguments because Common Law and stuff and things. The government! Potato salad!

Anonymous Coward says:

This one's easy

  • Harasses, intimidates or bullies an individual or groups of individuals;
  • Google’s moderation team is a group of individuals.
  • The cited post calls attention to their imperfection.
  • Being called out for imperfection constitutes harassment and/or intimidation. (Remember, harassment and intimidation are now defined by how the recipient interprets the conduct, not whether the actor intended the conduct to be confrontational.)

Thus, pointing out that they are (necessarily) imperfect at their jobs is harassment and deserves a response.

Stephen T. Stone (profile) says:

Re: Re: Re: Re:

So what? I mean, other than pointing out how Google might have dinged the article for criticizing Google, what happened to Mike is not the same thing as what happens to the troll brigade around here. We can still read their flagged posts if we choose to read them. They can make their posts elsewhere.

If anything, Google’s moderation here is more insidious. It all but tells Techdirt that the site must “clean up its act”—without saying exactly how to clean things up or explaining exactly why the post was moderated—with an implicit “or else” in re: Techdirt’s ability to serve Google ads.

Sayonara Felicia-San (profile) says:

Re: Re: Re:2 Re:

See Mike. This comment is precisely the problem with your readership.

Half your readers appear to be these arrogant millennial college brats, like the above poster, who clearly doesn’t even know what the term ‘schadenfreude’ means.

Ugh….I’m sorry but it’s not my job to intellectualize your readership up to my level and I simply refuse to even engage this guy any further.

Anonymous Coward says:

that's what bias is

“I’ve gone back and read the post again, and I don’t see how it can possibly fall into any of those categories.”

This is bias Mike, you do it to us, just like they did it to you, just like we do it to each other.

It is why people absolutely must be able to have freedom of speech. I know you already support freedom of speech for the most part, but there are far more of those like you that are more than willing to crush others over their speech.

So remember the next time you say “I don’t see how”, the point is THEY SAW HOW and you didn’t.

But it sucks when you get hit by your own kind does it not? OUCH!

But do keep up the good work, bias and all, these things too need to be called out!

Zof (profile) says:

"Google isn't biased!"

Open up an incognito browser window. If you are crazy suspicious, use firefox. If you are even more suspicious, connect to a VPN based in America.

Go to

Now, you tell me how unbiased Google is. Because you are going to see a wall of:

Washington Post
New York Times

And the 50,000 media repeaters that just repeat them constantly.

Extra Credit: create a fake profile with your anonymous VPN IP address. Make a gmail account. Go back to Google news. Leverage their “block news source” option.

Marvel as every Washington Post, CNN, or New York Times article that should have gone *poof* when you blocked those three sources comes back IMMEDIATELY through media repeaters.

Like they WANT TO MAKE DAMN SURE you get those viewpoints in your head. The ones Google wants. The ones their corporate customers pay for.

Anonymous Anonymous Coward (profile) says:

Re: Re: Re: "Google isn't biased!"

WTF do cookies and VPNs have to do with setting up new Gmail accounts? I have tried. I bought a second tablet, intended for international use, with a different username. Couldn’t get Google to give me one because I would not subject anyone I call friend to receiving the SMS message. Why I want a clean identity to cross borders is no one’s business, except to say, look at what happens to people who cross borders with electronics.

Also, looking at your comments below, you don’t actually have an argument, you just want to argue. Go argue with yourself.

Thad (profile) says:

Re: "Google isn't biased!"

I do see a lot of stories from the Washington Post, CNN, and the New York Times on Google News.

I also see a lot of stories from Fox News, the Wall Street Journal, Forbes, the Washington Examiner, the New York Post, the Washington Times, the Weekly Standard, and the Daily Mail.

Almost as if Google is prioritizing news sources based on their popularity, not on whether they’re liberal or conservative.

Zof (profile) says:

Captain Reality Check

I love how the “google bros” attack you instantly with “explanations” for their obvious bias when you point it out.

Again, open up an anonymous browser window in your favorite browser.

Go to

Go to Top Stories (the section Google will not allow you to remove; the section with paid news that makes them money).

LOL, I’m sorry. I mean just go there as a new, non-registered Google user and see the Washington Post, New York Times, CNN, and the literally 50,000 media repeaters pretending to be legitimate news sources that just parrot those three news feeds — feeds which Google, with no sense of irony, forces you to block if you want to get rid of those three news sources completely. I’ve blocked over 5,000 of these fake Google-generated news repeaters and I still get stories from WaPo.

That Anonymous Coward (profile) says:

Somewhere in the fuzzy world where politicians think tech can just tech harder to accomplish what they can’t manage to craft a law to do this makes sense.

There are laws that blame the platform for the actions of others, or for not acting fast enough when someone whines, & they have real penalties.

Big news flash for everyone – Corporations can’t get any better at this. They are fscked in all directions.

Lawmakers love headlines, not facts or reality.

The Butthurt gonna butthurt & scream louder.

Users who get screwed scream & cause drama but they have no power & have to just accept the monolith response of silence. (Consider in saying nothing they have fewer hassles as people try to apply different responses from different pools & have it make sense.)

Perhaps the answer truly lies in demanding others police our feelings & expect 100% no bad feelz.

If they annoy you, don’t follow them.

If they say things you disagree with, debate them, or if it goes nowhere, block them.

Stop demanding that there is a magical tech that can know you so very very well that it can curate the rest of the world to make sure you never have a sad.

Personal responsibility, this is what we need.

We can kick Alex Jones off a platform, & that is dangerous.
Consider his followers like to send death threats and shoot places up over the fantasies he spins… and to make people happy they painted a target on their back.

If you dislike Mr Jones, block his fscking account & move on.

The flip side people are missing is, if the other side gets enough people complaining, you just created a process where enough bitching gets them booted… And the platform is caught in the position of trying to claim it’s all fair & even while trying to find a way to not apply their own rules… and the platform gets more & more screwed up.

Let’s stop asking them to solve the impossible issues & demand that Twitter leave things in fscking chronological order & stop shuffling our feeds!!!!!

Anonymous Anonymous Coward (profile) says:

Re: Re:

Doesn’t that lead directly to Mike’s original premise, that platforms should not be using algorithms or policy to do their moderating, but using users? Like what happens here on Techdirt? The method of moderation by users might take different forms, blocking a user, flagging them (like here) or just mentally ignoring their posts, or whatever was listed in the original article, or whatever might be thought up in the future.

The point of this piece is that Google decided, for unstated reasons (reasons they won’t own up to) to block advertising revenue to that article (at least we think it’s only that article). What is their purpose in doing so? Were their feelings hurt? Did they think that article denigrated them in some way? Did they perceive that it might cause them to spend some money on changing the way things work on their various sites? Hard to tell when they won’t talk to you.

That some crackpots will take some Internet dialog and spill it over into real-world violent actions is not the fault of the Internet, or any platform, or any moderation system or lack thereof, or of banning some particular entity. It is the fault of the crackpots. Would that there were some method of identifying crackpots with 100% certainty before they go shooting up someplace, but there isn’t.

Blaming video games or the Internet isn’t the answer. Better psychology might be, but that is a science that still has a long way to go, and deciding that someone should be ‘closely watched or committed’ is still very subjective. Besides, Reagan got rid of many of our psych wards; what would we do with them? Just putting them in prison won’t help them and might harm them, but then what if they haven’t committed a crime, yet? I don’t like the current answers to my questions, but I haven’t seen any better, yet.

That Anonymous Coward (profile) says:

Re: Re: Re:

They use the algorithms because it is impossible to hire enough humans for the task.

People have figured out they can remove content they dislike by yelling loud enough, so they keep yelling.

We need to train people to stop demanding & use the tools at their disposal.

Google really needs to get a grip or stop playing monolith & explain why. I am sure there is an exec who claims we can’t tell them why it was bad b/c they will adapt & everyone is a bad actor. (My god, it’s the TSA playbook.)

They rely on the machines to tell them what’s bad & there is no reason for them to pay a human to review anything. See also: Content ID.

They might only get 50% of bad actors & keep trying to improve those numbers but the cost is the 45% of innocents caught in their bullshit maze of trying to divine what they did wrong.

We need to stop allowing corporate wants & government pipe dreams to drive things & stop trying to catch it all & start to rely on people to save themselves.

Jones is making you crazy, why are you seeking him out?
Why do we give them a reward for screaming how horrible he is?
No one forced you to follow him, no one forced you to hear about him, why allow them to force their will on everyone else?

Somewhere in all of this protecting the users, they forgot the users aren’t a monolith. That we are rapidly getting to where no one can talk about steak because a baby can’t eat steak, and PETA demands we not talk about meat, and the users get caught up in appeasing the unappeasable.

Wendy Cockcroft (user link) says:

Re: Re: Re: Re:

I get what you’re saying, TAC, but there’s a significant number of absolutely horrible people determined to post CP, beheadings, and all sorts of stuff too nasty to mention here. The moderation teams have to sift through that vile crap at great cost to their own mental health (burnout is common) so that all we get is griefer trolls and butthurt whining from the likes of Blue and Hamilton.

So what do we do? Trust The People to sort it out? They’re the ones either posting it or seeking it out. Remember the Orange County cupcake who literally lost her head while out driving Daddy’s car? The most sickening aspect for me wasn’t even the trolls driving the poor family offline to get away from the avalanche of cruelty, it was the story of one woman who demanded that the photo be made available so she could show it to her son, since he had been asking for it.

Basically, there’s a larger number of absolutely trashy people about than we’d like to admit. Either we cede the internet to them or we keep the worst of the horrible stuff out. It is, as I often say about such things, a demand-side issue: people want it. How in the world do we address that?

Anonymous Coward says:

Re: Re: Re:2 Re:

Trust The People to sort it out?

Yes, because what I consider acceptable will be slightly, or even significantly different to what you find acceptable. So why should some people be appointed to a position to decide what is acceptable for everybody? Add in that algorithms make bad decisions, and at Internet scale a company cannot hire enough people to deal with contested decisions by algorithms, and you end up with posts being removed that should be left up.
