Content Moderation Case Study: YouTube Deals With Disturbing Content Disguised As Videos For Kids (2017)
from the finding-the-monsters-among-us dept
Summary: YouTube offers an endless stream of videos catering to the preferences of users of any age, and it has become a go-to content provider for kids and their parents. The market for kid-oriented videos remains wide open, with new competitors surfacing daily and relying on repetition, familiarity, and strings of keywords to get their videos in front of kids willing to spend hours clicking on whatever thumbnails pique their interest. YouTube leads this market.
Taking advantage of the low expectations of very young viewers, creators fill YouTube's kid-oriented catalog with low-effort, low-cost content: videos that use familiar songs, bright colors, and pop culture fixtures to attract and hold the attention of children.
Most of this content is innocuous. But amateur internet sleuths exposed a much darker strain of content, swiftly dubbed “Elsagate” after Elsa, the main character of Disney’s massively popular animated hit Frozen. At the r/ElsaGate subreddit, redditors tracked down videos aimed at children that contained adult themes, sexual activity, or other non-kid-friendly content.
The decidedly not-safe-for-kids subject matter catalogued by r/ElsaGate included injections, gore, suicide, pregnancy, BDSM, assault, rape, murder, cannibalism, and alcohol use. Most of these acts were performed by animated characters (or actors dressed as the characters), including the titular Elsa as well as Spider-Man, Peppa Pig, Paw Patrol, and Mickey Mouse. According to parents, users, and members of the subreddit, some of this content could be accessed via the YouTube Kids app, a kid-oriented version of YouTube subject to stricter controls and home to curated content meant to steer child users clear of adult subject matter.
Further attention was drawn to the issue by James Bridle’s post on the subject, entitled “Something is Wrong on the Internet.” The post — preceded by numerous content warnings — detailed the considerable amount of disturbing content that was easily finding its way to youthful viewers, mainly thanks to its kid-friendly tags and innocuous thumbnails.
The end result, according to Bridle, was nothing short of horrific:
“To expose children to this content is abuse. We’re not talking about the debatable but undoubtedly real effects of film or videogame violence on teenagers, or the effects of pornography or extreme images on young minds, which were alluded to in my opening description of my own teenage internet use. Those are important debates, but they’re not what is being discussed here. What we’re talking about is very young children, effectively from birth, being deliberately targeted with content which will traumatise and disturb them, via networks which are extremely vulnerable to exactly this form of abuse. It’s not about trolls, but about a kind of violence inherent in the combination of digital systems and capitalist incentives. It’s down to that level of the metal.” — James Bridle
“Elsagate” received more mainstream coverage as well. A New York Times article on the subject asked how this had happened and suggested the videos had slipped past the algorithms meant to ensure that content reaching YouTube Kids was actually appropriate for children. When asked for comment, YouTube responded that such content was the “extreme needle in the haystack”: an immeasurably small percentage of the total amount of content available on YouTube Kids. Needless to say, this answer did not satisfy critics, many of whom suggested the online content giant rely less on automated moderation when dealing with content targeting kids.
Questions to consider:
- How should content review and moderation be different for content targeting younger YouTube users?
- How could a verification process be deployed to vet users creating content for children?
- What processes can be used to make it easier to find and remove or restrict videos that appear kid-friendly but are actually filled with adult content?
- When content like what was described in the case study does get through the moderation process, what can be done to restore the trust of users, especially those with younger children?
- Should a product targeting children be siloed off from the main product to ensure the integrity of the content, as well as make it easier to manage moderation issues?
- Does creating a product specifically for children increase the chance of direct regulation or intervention by government entities? If so, how can a company prepare itself for this possibility?
- If a company creates a “restricted” product for children, should all of its content be fully and thoroughly vetted? If so, would that become prohibitively costly, making it significantly less likely that companies would create products for children at all? Is there a way to balance those concerns?
Resolution: Immediately following these reports, YouTube purged content from YouTube Kids that did not meet its standards. It delisted videos, issued new guidelines for contributors, and hired a large number of new human moderators, bringing its total to 10,000. After investigating its content, YouTube also removed the extremely popular “Toy Freaks” channel, which users had flagged as containing child abuse.
YouTube wasn’t the only entity to act after the worldwide exposure of “Elsagate” videos. Many of these videos originated in China, prompting the Chinese government to block certain search keywords to limit local access to the disturbing content and to shutter at least one company involved in creating these videos.
Originally posted to the Trust & Safety Foundation website.