Again, Algorithms Suck At Determining 'Bad' Content, Often To Hilarious Degrees

from the peachy dept

A few weeks back, Mike wrote a post detailing how absolutely shitty algorithms can be at determining what is “bad” or “offensive” or otherwise “undesirable” content. While his post detailed failings in algorithms judging such weighty content as war-crime investigations versus terrorist propaganda, and Nazi hate-speech versus legitimate news reporting, the central thesis in all of this is that relying on platforms to host our speech and content when those platforms employ very, very imperfect algorithms as gatekeepers is a terrible idea. And it leads to undesirable outcomes at levels far below those of Nazis and terrorism.

Take Supper Mario Broth, for instance. SMB is a site dedicated to fun and interesting information about Nintendo and its history. It’s a place that fans go to learn more weird and wonderful information about the gaming company they love. The site also has a Twitter account, which was recently flagged for posting the following tweet.

For the sin of tweeting that image out, the site’s entire account was flagged as “sensitive”, which means anyone visiting the account was greeted with a warning about how filthy it is. What Twitter’s systems thought was offensive about the image, which comes from a video by a costume company that works with Nintendo, is literally anyone’s guess. Nobody seems to be able to figure it out. My working theory is that Princess Peach’s lips too closely resemble a more private part of the female anatomy and, coupled with the flesh-colored face surrounding them, sent Twitter’s algorithm screaming “Aaah! Vagina!”, leading to the flagging of the account. But this is just a guess, because although the “sensitive” flag was eventually removed, SMB never got any response or explanation from Twitter at all.

SMB went as far as to test through dummy accounts whether the image was the entire problem. It was. After posting the image several times from other accounts, each account was flagged within minutes of the posting. It’s an algorithm doing this, in other words, and one which seems ill-suited to its task.

What we have here are two related problems. We have a company designed to let speakers speak employing an algorithm to flag offensive content, and doing it very, very badly. We also have a company whose staff is insufficiently equipped to correct the errors of its incapable algorithm. This would be annoying in any context other than current reality, which sees rising calls for internet sites to automagically block “bad” content and do so with literally inhuman speed.

That means algorithms. But the algorithms can’t do the job. And with sites erring on the side of over-blocking to avoid scrutiny from both the public and governments, that means open communication is the loser in all of this. It’s hard to imagine an outcome more anathema to services like Twitter than that.



Comments on “Again, Algorithms Suck At Determining 'Bad' Content, Often To Hilarious Degrees”

Ninja (profile) says:

Of course one has to question why humanity has such a problem with penises and vaginas.

In any case I think we should stop putting content removal in the hands of specific people or algorithms. I’d like to see a decentralized effort where, if enough people flagged a given piece of content, it would carry a warning that it had been heavily flagged but still be visible, and after a very high threshold it would be queued for review by actual humans and possibly removed. Not a perfect system, but at least better than what we have now. One can dream, no?
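The two-tier scheme described above can be sketched in a few lines. This is a hypothetical illustration only: the function name and both thresholds are invented for the example, not drawn from any real platform.

```python
# Hypothetical sketch of the commenter's decentralized flagging idea:
# flags first trigger a warning label, and only a much higher count
# queues the post for human review. Thresholds are made up.

WARN_THRESHOLD = 50     # flags before a "heavily flagged" label appears
REVIEW_THRESHOLD = 500  # flags before humans are asked to look at it

def moderation_state(flag_count: int) -> str:
    """Return what happens to a post with the given number of flags."""
    if flag_count >= REVIEW_THRESHOLD:
        return "queued-for-human-review"  # only humans may remove it
    if flag_count >= WARN_THRESHOLD:
        return "visible-with-warning"     # still visible, but labeled
    return "visible"

print(moderation_state(10))   # visible
print(moderation_state(80))   # visible-with-warning
print(moderation_state(900))  # queued-for-human-review
```

The key property is that no algorithm ever removes anything on its own: the automated part only labels and escalates, and removal stays a human decision.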

Roger Strong (profile) says:

Re: Re:

Of course one has to question why humanity has such a problem with penises and vaginas.

Sanity self-preservation. If everything about the Stormy Daniels affair and other Donald Trump sexual misconduct allegations were allowed on TV news – from detailed descriptions to infographics to computer animation – this Presidency would resemble Call of Cthulhu.

Anonymous Coward says:

Re: Re: Re: Penises and Vulvas

It’s called stigmatization.

The go-to tactic for any group attacking another is to take a small, not-really-a-problem issue and stigmatize it until it’s a huge throbbing… er, problem!

This was all brought to you by the Catholic Church over the idea that naked people go to hell and clothed people are saved.

As a Christian, I fully recognize that Christian “institutions” have caused more damage to morals and values than secular ones, though the secular ones appear to be catching up so as not to be outdone.

Once a group of humans institutionalizes something, regardless of what that something is, it becomes a desired point of infection for evil people to work toward their own ends.

PaulT (profile) says:

Re: Re:

“Of course one has to question why humanity has such a problem with penises and vaginas.”

Humanity doesn’t. Western culture seems to have some issues, North American culture more than some others, but less than some others.

As for the rest of your comment, the problem is that there’s a push to make these companies directly and even criminally liable for content that manages to appear on their sites even after moderation. A system that would allow questionable material to remain visible while under review would probably fall foul of the laws those people are trying to push through. Similarly, this is why these systems err on the side of caution and flag material that is obviously not objectionable – their hand is being forced into making these things over-zealous rather than letting something through.

Anonymous Coward says:

What was the tweet?

The only pictures I see on the Kotaku page are a highly-pixelated image of Princess Peach (like, 50×30 being shown at 10x that size), and a photo of the author. The Twitter photo link says “Sorry, that page doesn’t exist!”. Was the original photo really that pixelated? Or was that just done when reposting, to bypass the filter?

The Wanderer (profile) says:

Re: Re: Re:3 What was the tweet?

I’m reasonably sure that the people reporting problems with such images not loading (properly) are using NoScript or similar. They’re being passive-aggressive about the fact that websites with no inherent need to be dynamic should work properly (at a basic level) without JavaScript, and are hinting that giving such sites the benefit of a link is something which should perhaps be avoided.

I use NoScript myself, and wouldn’t allow scripting from such a site just for the purpose of loading images (although I might manually edit page source in Firebug, or potentially do it less-manually with Greasemonkey, to enable myself to view a particular image) – but I haven’t followed the links in question, and would have been unlikely to speak up here about the matter anyway. (Although knowing that the linked-to site does have that problem is something that’s useful to me.)

Just Passin' Thru says:

Algorithm problem???

So, Twitter has an automatic algorithm that scans and mistakenly flags inoffensive images. When it is later reviewed by a human and deemed to be non-offensive, why isn’t the image added to a database of known, vetted, inoffensive images that the algorithm checks should it ever encounter that image again?
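The commenter’s suggestion amounts to an allowlist keyed on an image fingerprint: once a human clears an image, remember it and skip the classifier next time. A minimal sketch, assuming a simple exact-match hash (all function names here are invented for illustration; a real system would likely use a perceptual hash so re-encoded copies also match):

```python
# Hypothetical sketch of a "vetted images" database: after human review
# clears an image, its hash is stored so the error-prone classifier is
# never run on that exact file again.

import hashlib

vetted_hashes: set = set()

def image_fingerprint(image_bytes: bytes) -> str:
    # Exact-match fingerprint; byte-identical uploads hash the same.
    return hashlib.sha256(image_bytes).hexdigest()

def mark_vetted(image_bytes: bytes) -> None:
    """Called once a human reviewer deems the image inoffensive."""
    vetted_hashes.add(image_fingerprint(image_bytes))

def should_run_classifier(image_bytes: bytes) -> bool:
    """Skip the automated classifier for known-good images."""
    return image_fingerprint(image_bytes) not in vetted_hashes

peach = b"stand-in bytes for the Princess Peach screenshot"
print(should_run_classifier(peach))  # True: not yet vetted
mark_vetted(peach)
print(should_run_classifier(peach))  # False: skip the classifier now
```

This only prevents repeat false positives on the same file, which is exactly the failure mode in SMB’s dummy-account test: the identical image was re-flagged within minutes on every account that posted it.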

AntiFish03 (profile) says:

Honestly, I have reported more than a few posts that rant about hanging all gun owners, or that gun owners need to be killed and have their firearms taken by force, etc. Facebook refuses to see that as hate speech, but it clearly should be. It’s also potentially inciting violence against a group of people whose majority is law-abiding.
I have also reported posts that are even more inflammatory and discriminatory against a lot of different groups, especially the NRA, but others as well. FB has given me responses each time that it’s not hate speech and that if I don’t want to see it then I should just hide it. Yet if I were to post a rant about killing anti-gun people, it would be pulled nearly instantly for inciting violence, etc. Their algorithm is heavily skewed. I think this more than anything is what Cruz was trying to get at: that the bias of what is allowed leans heavily toward liberal viewpoints but is exceedingly heavy-handed on conservative viewpoints.

PaulT (profile) says:

Re: Re:

Hate speech is generally defined as being against a protected class. Gun fetishist is not a protected class. It’s no more hate speech than people saying that all Pokemon fans should be shot. It might be offensive, even something that authorities should be aware of, but not hate speech according to the usual guidelines.

“That the bias of what is allowed leans heavily to liberal view points but is exceedingly heavy handed on conservative viewpoints.”

Which conservative viewpoints? Honestly held beliefs about honest conservative values, or the racist, misogynistic, homophobic, etc., viewpoints often espoused by those who claim to be conservative? There’s a big difference, and if someone is blocking those viewpoints it’s not anti-conservative bias if all the racist, homophobic woman haters happen to be on that side.

Give me examples of non-hatred being blocked and I might agree. But, if the majority of hatred is on your “team”, you may wish to examine that “team” rather than complaining about censorship. If 90% of objectionable material is in the hands of one “team”, it’s not unfair if they get 90% of the blocks.

Uriel-238 (profile) says:

"Protected Classes"

It’s entirely off topic, though it is a problem that we have:

A large amount of accepted hate speech is directed at classes who aren’t yet (or aren’t quite) protected, whether countercultures, disabled people, obese people, or the biggest one, poor people, even though there are clear trends of discrimination against them. (Trump’s new welfare reform serves as a notable recent example.)

It’s a problem that we classify one form of bigotry over another rather than defining specific actions that are universally regarded as acts of hatred. As a result, society’s level of awareness is always many steps behind.
