Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs they result in. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: YouTube Doubles Down On Questionable 'Graphic Content' Enforcement Before Reversing Course (2020)

from the moderation-road-rage dept

Summary:

YouTube creators have frequently complained about the opaque and frustrating nature of the platform’s appeals process for videos that are restricted or removed for violating its Community Guidelines. Beyond simply removing content, these takedowns can be severely damaging to creators, as they can result in “strikes” against a channel. Strikes impose temporary restrictions on the user’s ability to upload content and use other site features, and enough strikes can ultimately lead to permanent channel suspension.

Creators can appeal these strikes, but many complain that the response to appeals is inconsistent, and that rejections are deemed “final” without providing insight into the decision-making process or any further recourse. One such incident in 2020 involving high-profile creators drew widespread attention online and resulted in a rare apology and reversal of course by YouTube.

On August 24, 2020, YouTube creator MoistCr1TiKaL (aka Charlie White, who also uses the handle penguinz0), who at the time had nearly six million subscribers, posted a video in which he reacted to a viral 2014 clip of a supposed “road rage” incident involving people dressed as popular animated characters. The authenticity of the original video is unverified, and many viewers suspect it was staged for comedic purposes, as the supposed “violence” it portrays appears to be fake and the target of the “attack” appears uninjured. Soon after posting his reaction video, White received a strike for “graphic content with intent to shock” and the video was removed. On September 1, White revealed on Twitter that he had appealed the strike, but the appeal was rejected.

White then posted a video expressing his anger at the situation, and pointed out that another high-profile YouTube creator, Markiplier (aka Mark Fischbach), had posted his own reaction to the same viral video nearly four years earlier but had not received a strike. Fischbach agreed with White and asked YouTube to address the inconsistency. To the surprise of both creators, YouTube responded by issuing a strike to Fischbach’s video as well.

The incident resulted in widespread backlash online and the proliferation of the #AnswerUsYouTube hashtag on Twitter, with fans of both creators demanding a reversal of the strikes and/or more clarity on how the platform makes these enforcement decisions.

Company considerations:

  • If erroneous strikes are inevitable given the volume of content being moderated, what are the necessary elements of an appeals process to ensure creators have adequate recourse and receive satisfactory explanations for final decisions?
  • What are the conditions under which off-platform attention to a content moderation decision should result in further manual review and potential reversals outside the normal appeals process?
  • How can similar consideration be afforded to creators who face erroneous strikes and rejected appeals, but do not have large audiences who will put off-platform pressure on the company?

Issue considerations:

  • How can companies balance the desire to directly respond to controversies involving highly popular creators with the desire to employ consistent, equitable processes for all creators?
  • How should platforms harmonize their enforcement decisions when they are alerted to clear contradictions between the decisions on similar pieces of content?

Resolution:

On September 2, a few hours after Fischbach announced his strike and White expressed his shock at that decision, the TeamYouTube Twitter account replied to both creators with an apology, stating that it had restored both videos and reversed both strikes, and calling the initial decision “an over-enforcement of our policies.” Both creators expressed their appreciation for the reversal, while noting that they hoped the company would make changes to prevent similar incidents in the future. Since such reversals by YouTube are quite rare, and apologies even rarer, the story sparked widespread coverage in a variety of outlets.

Originally posted to the Trust and Safety Foundation website.



Comments on “Content Moderation Case Study: YouTube Doubles Down On Questionable 'Graphic Content' Enforcement Before Reversing Course (2020)”

Anonymous Coward says:

Small creators don't have the leverage to put on pressure.

How can similar consideration be afforded to creators who face erroneous strikes and rejected appeals, but do not have large audiences who will put off-platform pressure on the company?

This is the most important thing to consider. If online services hosting user-generated content can’t come up with a solution for this problem, then they should lean toward under-enforcement rather than over-enforcement. The bigger the company is, the more conservative the moderation should be.

Anonymous Coward says:

Re: Small creators don't have the leverage to put on pressure.

How many times have you heard a story where some company was being obstructionist … right up to the point where it became a news story, whereupon all the objections fell away and the company suddenly wanted to do the right thing?

Yeah. That’s how small creators will be able to put on pressure, and no other way. Even big Youtube stars weren’t able to get Youtube to pay attention until they made it a news story. That is, it wasn’t simply "I’ve got 6 million subscribers and I’d like you to reconsider." It was 6 million subscribers DDOSing their phone bank … er, twitter channel.

Anonymous Coward says:

Re: Small creators don't have the leverage to put on pressure.

I agree with you there, but to be fair, if they did that, a politician / media outlet would scream about how "YouTube doesn’t care" about whatever their pet issue of the day is.

They are in an unenviable position where no matter what they do they’ll be the "bad guy".
