Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs involved. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: YouTube Deals With Disturbing Content Disguised As Videos For Kids (2017)

from the finding-the-monsters-among-us dept

Summary: YouTube offers an endless stream of videos that cater to the preferences of users of any age, and it has become a go-to content provider for kids and their parents. The market for kid-oriented videos remains wide open, with new competitors surfacing daily and using repetition, familiarity, and strings of keywords to get their videos in front of kids willing to spend hours clicking on whatever thumbnails pique their interest. YouTube leads this market.

Taking advantage of the low expectations of extremely youthful viewers, YouTube videos for kids are filled with low-effort, low-cost content – videos that use familiar songs, bright colors, and pop culture fixtures to attract and hold the attention of children.

Most of this content is innocuous. But amateur internet sleuths exposed a much darker strain of content, swiftly dubbed “Elsagate” after Elsa, the main character of Disney’s massively popular animated hit, Frozen. At the r/ElsaGate subreddit, redditors tracked down videos aimed at children that contained adult themes, sexual activity, or other non-kid-friendly content.

Among the decidedly not-safe-for-kids subject matter listed by r/ElsaGate were injections, gore, suicide, pregnancy, BDSM, assault, rape, murder, cannibalism, and alcohol use. Most of these acts were performed by animated characters (or actors dressed as the characters), including Elsa herself as well as Spiderman, Peppa Pig, Paw Patrol, and Mickey Mouse. According to parents, users, and members of the r/ElsaGate subreddit, some of this content could be accessed via the YouTube Kids app — a kid-oriented version of YouTube subject to stricter controls and home to curated content meant to steer child users clear of adult subject matter.

Further attention was drawn to the issue by James Bridle’s post on the subject, entitled “Something is Wrong on the Internet.” The post — preceded by numerous content warnings — detailed the considerable amount of disturbing content that was easily finding its way to youthful viewers, mainly thanks to its kid-friendly tags and innocuous thumbnails.

The end result, according to Bridle, was nothing short of horrific:

“To expose children to this content is abuse. We’re not talking about the debatable but undoubtedly real effects of film or videogame violence on teenagers, or the effects of pornography or extreme images on young minds, which were alluded to in my opening description of my own teenage internet use. Those are important debates, but they’re not what is being discussed here. What we’re talking about is very young children, effectively from birth, being deliberately targeted with content which will traumatise and disturb them, via networks which are extremely vulnerable to exactly this form of abuse. It’s not about trolls, but about a kind of violence inherent in the combination of digital systems and capitalist incentives. It’s down to that level of the metal.”  James Bridle

“Elsagate” received more mainstream coverage as well. A New York Times article on the subject wondered what had happened and suggested the videos had eluded YouTube’s algorithms, which were meant to ensure that content making its way to YouTube Kids was actually appropriate for children. When asked for comment, YouTube responded that this content was the “extreme needle in the haystack”: an immeasurably small percentage of the total amount of content available on YouTube Kids. Needless to say, this answer did not make critics happy, and many suggested the online content giant rely less on automated moderation when dealing with content targeting kids.

Company Considerations:

  • How should content review and moderation be different for content targeting younger YouTube users?
  • How could a verification process be deployed to vet users creating content for children?
  • What processes can be used to make it easier to find and remove/restrict content that appears to be kid-friendly but is actually filled with adult content?
  • When content like what was described in the case study does get through the moderation process, what can be done to restore the trust of users, especially those with younger children?

Issue Considerations:

  • Should a product targeting children be siloed off from the main product to ensure the integrity of the content, as well as make it easier to manage moderation issues?
  • Does creating a product specifically for children increase the chance of direct regulation or intervention by government entities? If so, how can a company prepare itself for this inevitability?
  • If creating a “restricted” product for children, should it require all content be fully and thoroughly vetted? If so, would that become prohibitively costly, making it significantly less likely that companies will create products for children? Is there a way to balance those things?

Resolution: Immediately following these reports, YouTube purged content from YouTube Kids that did not meet its standards. It delisted videos and issued new guidelines for contributors. It also added a large number of new human moderators, bringing its total to 10,000. After investigating its content, YouTube removed the extremely popular “Toy Freaks” channel, which users had suggested contained child abuse.

YouTube wasn’t the only entity to act after the worldwide exposure of “Elsagate” videos. Many of these videos originated in China, prompting the Chinese government to block certain search keywords to limit local access to the disturbing content and to shutter at least one company involved in creating these videos.

Originally posted to the Trust & Safety Foundation website.

Companies: youtube


Comments on “Content Moderation Case Study: YouTube Deals With Disturbing Content Disguised As Videos For Kids (2017)”

This comment has been deemed insightful by the community.
Anonymous Coward says:

What always surprised me the most about this stuff was that the IP-maximalist content houses failed to go batshit crazy over the use of their copyrighted and trademarked characters and stuff.

I mean, they’ll send someone after a guy in a Spiderman outfit at a birthday party.

This comment has been flagged by the community.

ECA (profile) says:

Re: Re:

"A common carrier offers its services to the general public under license or authority provided by a regulatory body, which has usually been granted "ministerial authority" by the legislation that created it."

What international license is there?
Every nation has its rules, and being able to deal with all of them and require them of every person on the net??
You wouldn’t have an internet.
YOU would have no rights going into another nation using the internet. Build the same walls we have as nations?

anon says:

Are there no parents left?

When I was a kid I watched Rocky and Bullwinkle, where they’d blow up Boris and Natasha on a regular basis. I also watched the coyote fall off cliffs every day. I also watched Mutual of Omaha’s Wild Kingdom and I swear I’ve seen every type of carnivore eat everything with blood and guts everywhere. The best nature photo I’ve seen recently was of a leopard with its head covered in blood after removing it from the guts of its prey.

Did it turn me into a serial killer? No.
Did it turn me into a recluse, scared of everything? No.
Did it desensitize me to violence? No.

Maybe I was just raised properly.

cattress (profile) says:

Re: Are there no parents left?

Being "raised properly" is a pretty loaded statement.
I’m 40, so I grew up with everything from My Little Ponies, Looney Tunes, TMNT, Ren & Stimpy, and Beavis & Butthead in my tweens. My home was mostly stable, but even if it wasn’t I doubt cartoons would have been a factor in how I turned out.
But I’m also a parent now. And those thumbnails look just like Little Angel/Little World, and even similar to the Cocomelon musical cartoons that my daughter loves. Now I have to pre-approve every video she can see on her Amazon tablet, but not our computers. Sometimes, I take advantage of the opportunity to poop alone. Or finish something, anything that I started without 35 interruptions. And I’m going to be hella pissed if I come back to my 3 yr old watching a drunken Elsa injecting Spidey babies with something to get them high, in some disturbing version of pregnancy, with Spiderman slapping her around and humping her. If that ended up in her imaginative play, though probably harmless on its own, it would be a pretty undesirable situation, and we would still have to address what is wrong in an age-appropriate way. But that’s not my concern.
When I was little, my mom overheard me playing with a little girl, saying we needed to get money to bail dad out of jail. Her home life was a little rough. So was mine, but I didn’t know about the multiple times my dad had to be bailed out because he was a drunk; it was hidden from me. My mom didn’t want me to play with her after that. Later on, when I turned 8 or 9, we watched Beetlejuice at my slumber party. And another little girl’s mom didn’t want her to play with me anymore. And as uptight as parents are nowadays!?! I don’t give a rat’s ass what some parent thinks of me, but I don’t want my kid to lose out on friendships over it.
So where are the parents? Right here, busting our butts, trying not to screw up too badly.

Anonymous Coward says:

injections, gore, suicide, pregnancy, BDSM, assault, rape, murder, cannibalism, and use of alcohol

Alcohol and pregnancy really got a bad deal being placed in that group.

I remember watching a lot of Happy Tree Friends as a kid. I can’t imagine it was particularly good for my brain, but damn watching cartoon violence was fun.

cattress (profile) says:

Took away a favorite!

Lo and behold, not an hour after I read this, my daughter is trying to watch a favorite Little Angel video on her Fire Tablet, and it says that it has been marked private. This is an older video, or collection of them, and the thumbnail looks just like the examples above, except it’s not Elsa.
The song is "Oops! What’s that sound?", and it’s a baby and mom singing out body sounds in a super innocent and cute way. It’s not vulgar; it’s nothing more than an example of the sound and then a very simple explanation. The worst of it is a short little fart sound, which is referred to as a poot, with an explanation that it’s normal and could mean that you need to go potty. I’m an overgrown child so I giggle about it, but it really is very innocent.
The only other song that might be questionable, or picked up by an algorithm, is a song about stinky socks, which is very silly, and very innocent as well. My little one doesn’t understand why we can’t watch it any more and she is really upset.
Now, I had seen some questionable videos similar to those thumbnails, especially when searching for more girl-centered content. And I was also surprised that Little Angel and lookalikes got away with using the likenesses of Disney princesses. But it sorta seems like they should have left known content creators alone. I bet that horrid Little Baby Bum video that has the song about a kid putting a cat down a well hasn’t been yanked (otherwise I like Little Baby Bum). But content moderation at scale is never easy.
