Content Moderation Case Study: Facebook's AI Continues To Struggle With Identifying Nudity (2020)
from the ai-is-not-the-answer dept
Summary: Since its inception, Facebook has attempted to be more “family-friendly” than other social media services. Its hardline stance on nudity, however, has often proved problematic, as its AI (and its human moderators) have flagged accounts for harmless images and/or failed to consider context when removing images or locking accounts.
The latest example of Facebook’s AI failing to properly moderate nudity involves garden vegetables. A seed business in Newfoundland, Canada was notified its image of onions had been removed for violating the terms of service. Its picture of onions apparently set off the auto-moderation, which flagged the image for containing “products with overtly sexual positioning.” A follow-up message noted the picture of a handful of onions in a wicker basket was “sexually suggestive.”
Facebook’s nudity policy has been inconsistently enforced from the start. Male breasts are treated differently than female breasts, resulting in some questionable decisions by the platform. The policy has also caused problems for definitively non-sexual content, like photos posted by breastfeeding groups and breast cancer awareness videos. In this case, the round shape and flesh tones of the onions appear to have tricked the AI into classifying garden vegetables as overtly sexual content, showing the AI still has a lot to learn about human anatomy and sexual positioning.
Decisions to be made by Facebook:
- Should more automated nudity/sexual content decisions be backstopped by human moderators?
- Is the possibility of over-blocking worth the reduction in labor costs?
- Is over-blocking preferable to under-blocking when it comes to moderating content?
- Is Facebook large enough to comfortably absorb any damage to its reputation or user goodwill when its moderation decisions affect content that doesn’t actually violate its policies?
- Is it even possible for a platform of Facebook’s size to accurately moderate content and/or provide better options for challenging content removals?
Questions and policy implications to consider:
- Is the handling of nudity in accordance with the United States’ more historically Puritanical views really the best way to moderate content submitted by users all over the world?
- Would it be more useful to users if content were hidden, but not deleted, when it appears to violate Facebook’s terms of service, allowing posters and readers to access the content if they choose after being notified of its potential violation?
- Would a more transparent appeals process allow for quicker reversals of incorrect moderation decisions?
Resolution: The seed company’s ad was reinstated shortly after Facebook moderators were informed of the mistake. A statement from Facebook raised at least one more question, as its spokesperson did not clarify exactly what the AI thought the onions actually were, leaving users to speculate what the spokesperson meant, as well as how the AI would react to future posts it mistook for, “well, you know.”
“We use automated technology to keep nudity off our apps,” wrote Meg Sinclair, Facebook Canada’s head of communications. “But sometimes it doesn’t know a walla walla onion from a, well, you know. We restored the ad and are sorry for the business’ trouble.”
Originally posted at the Trust & Safety Foundation website.
Filed Under: ai, content moderation, nudity
Companies: facebook
Comments on “Content Moderation Case Study: Facebook's AI Continues To Struggle With Identifying Nudity (2020)”
I don't know…
I will say that those onions do turn me on…
*drools like Homer Simpson*
It is as hard as tumblr
I think the AI thought the onions were breasts. The lighting and shadows caused a false positive on the AI.
Techdirt, TheMysterousMrEnter’s Technocracy episode on Tumblr may agree with you on this one. Policing the internet at scale is downright impossible in general, even for big tech.
Re: It is as hard as tumblr
Thanks for that. That youtube vid is pure goodness that would feel right at home at TechDirt.
Well, there is a lot of skin-to-skin contact in the photo.
Re: Re:
There are a lot of layers to that joke.
Those luscious, juicy mounds of flesh look so sweet, it brings a tear to my eye.
Nudity filters- at a time when men in European style swimsuits are seen as too revealing, I sure am glad Facebook is keeping us from seeing advertisements of Canadian onions. Those flesh-colored breast-shaped vegetables from the North are ruining America’s youth.
Thank god somebody is finally doing something about all this nudity everywhere; why, there are literally no more pressing issues at this time. Good to see that our priorities are in order.
Re: Re:
Well, since they cured covid19, ended world hunger, eliminated war, reversed global warming, stamped out racism and sexism, what else was there to do?
😉
Re: Re:
Nice false dilemma fallacy you’ve got there.
Whether to laugh or cry...
I’m normally of the opinion that real stupidity beats artificial "intelligence" any day…
…but as the OP demonstrates, we’re getting there.
I'm an adult, I should see nudity if I want
Here’s an idea: how about if Facebook treats people like adults and has a "nudity" checkbox when they sign in: check yes if you don’t mind seeing nude images, check no to not see them. Then only show nude images to people who checked the box.
Then there’s no need for AI or automated moderation: if someone reports a nude image and they checked "yes", then Facebook rejects the report because the user opted-in.
Just imagine the kinds of groups that could form if they allowed nudity! And more groups mean more users on the site, which means more user engagement, which means higher ads rates, and so on.
Heck, Facebook could even mine people’s data just by seeing which groups with nudity they join (which they probably do already).
And continuing with this argument, how much money is Facebook leaving on the table by not allowing nudity and adult groups?
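Taken at face value, the opt-in scheme this commenter proposes amounts to two simple decision rules: only show nude images to users who checked the box, and reject nudity reports from users who opted in. A hypothetical sketch of that logic (the field names and structure are illustrative, not any real Facebook API):

```python
from dataclasses import dataclass

@dataclass
class User:
    """Toy user record carrying the proposed opt-in preference."""
    allows_nudity: bool

def show_image(viewer: User, image_is_nude: bool) -> bool:
    """Show nude images only to users who opted in; show everything else."""
    return (not image_is_nude) or viewer.allows_nudity

def handle_report(reporter: User, image_is_nude: bool) -> str:
    """Auto-reject nudity reports from users who opted in to seeing nudity."""
    if image_is_nude and reporter.allows_nudity:
        return "rejected: reporter opted in"
    return "queued for review"

print(show_image(User(allows_nudity=False), image_is_nude=True))   # False
print(handle_report(User(allows_nudity=True), image_is_nude=True))
```

Note the sketch still assumes some system correctly labels images as nude in the first place, which is precisely the classification problem the onion incident shows is hard.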
I second this
FB should keep doing its best on this to make it a pleasant social network for users of any age. Right now, it largely leaves it to users to report content before taking action. The current AI’s OCR is actually quite capable and accurate; it really is up to FB to make use of it.
On top of that, FB should also be more stringent about which businesses it allows to buy FB Ads. It’s crazy to see so many ‘scam’ ads on FB looking to pick up victims on the world’s largest network. Perhaps while we wait for FB’s solutions, FB users here, please report nudity or scam-looking ads whenever you see them. Thank you!