Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs those decisions involve. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Suppressing Content To Try To Stop Bullying (2019)

from the not-a-good-solution dept

Summary: TikTok, like many social apps that are mainly used by a younger generation, has long faced issues around how to deal with bullying done via the platform. According to leaked documents revealed by the German site Netzpolitik, one way that the site chose to deal with the problem was through content suppression — but specifically by suppressing the content of those the company felt were more prone to being victims of bullying.

The internal documents showed the different ways in which the short videos TikTok is famous for would be rated for visibility. Content could be chosen to be "featured" (i.e., seen by more people), but it could also be flagged "Auto R" for a form of suppression. Content rated as such was excluded from the "for you" feed on TikTok after reaching a certain number of views. Since the "for you" feed is how most people view TikTok videos, this rating effectively put a cap on views. The end result was that the reach of content categorized as Auto R was significantly limited, and such content was completely prevented from going "viral" and amassing a large audience or following.
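To make the mechanism concrete, here is a minimal sketch of the view-cap rule as described in the leaked documents: videos flagged "Auto R" are dropped from the main recommendation feed once they pass a view threshold. This is an illustration only, not TikTok's actual code; the function names and the threshold value are hypothetical.

```python
# Hypothetical threshold: the leaked documents describe a cap but the
# exact number is not part of the public reporting.
AUTO_R_VIEW_CAP = 10_000


def eligible_for_feed(video: dict) -> bool:
    """Return True if the video may still appear in the 'for you' feed."""
    if video.get("rating") == "Auto R" and video["views"] >= AUTO_R_VIEW_CAP:
        return False  # suppressed: capped out of the main feed
    return True


def build_feed(candidates: list[dict]) -> list[dict]:
    """Filter a candidate pool down to videos allowed in the feed."""
    return [v for v in candidates if eligible_for_feed(v)]
```

Note the asymmetry this creates: an "Auto R" video is never removed outright, so its creator may not realize anything is wrong; it simply stops being recommended, which is what made the policy hard to detect from the outside.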

What was somewhat surprising was that TikTok's policies explicitly suggested putting those who might be bullied in the "Auto R" category — even saying that users who were disabled, autistic, or had Down syndrome should be put in this category to minimize bullying.

According to Netzpolitik, employees at TikTok repeatedly pointed out the problematic nature of this decision: it was itself discriminatory, punishing people not for any bad behavior but because of the belief that their differences might lead to them being bullied. However, those employees claimed they were prevented from changing the policies by TikTok's corporate parent, ByteDance, which dictated the company's content moderation policies.

Decisions to be made by TikTok:

  • What are the best ways to deal with and prevent bullying done on the platform?
  • What are the real world impacts of suppressing the viral reach of any content based on the type of person making the content?
  • Is it appropriate to effectively prevent those you think will be bullied from getting full access to your platform to prevent the possibility of bullying?
  • What data points are being assessed to justify the assumptions being made about "Auto R" being an effective anti-bullying tool?

Questions and policy implications to consider:

  • When policymakers push platforms hard to "stop bullying," will that lead to unintended consequences, such as effectively minimizing potential victims' access to those platforms, rather than dealing with the root causes of bullying?
  • Will efforts to prevent a bad behavior merely sweep that activity under the rug, rather than actually making a platform safer?
  • What is the role of technology intermediaries in preventing bad behavior?

Resolution: TikTok admitted that these rules were a "blunt instrument" that was put in place rapidly to try to minimize bullying on the platform — but said that the company had realized it was the "wrong" approach and had implemented more nuanced policies:

“Early on, in response to an increase in bullying on the app, we implemented a blunt and temporary policy,” a TikTok spokesperson told the BBC.

“This was never designed to be a long-term solution, and while the intention was good, it became clear that the approach was wrong.

“We have long since removed the policy in favour of more nuanced anti-bullying policies.”

However, the Netzpolitik report suggested that this policy had been in place at least until September of 2019, just three months before its reporting came out in December of 2019. It is unclear exactly when the "more nuanced" anti-bullying policies were put in place, but it is possible that they came about due to the public exposure and pressure from the reporting on this issue.

Companies: tiktok


Comments on “Content Moderation Case Study: Suppressing Content To Try To Stop Bullying (2019)”


This comment has been flagged by the community.

Hanoi Jane and COVID19 - "God's gift to Techdirt" says:

Gee, right here, Maz, ya MIGHT try to control your rabid fanboys

who are suppressing viewpoints (with your active help of course via the site coding and the wrongly used "report" button which an Admin then approves of the censoring).

You allow your fanboys to bully all dissent, and then wonder why even GOOGLE has to mark you "dangerous and derogatory"!

Stephen T. Stone (profile) says:


We don’t “bully all dissent”, Blueballs — only the dissent that is clearly in bad faith. You know, like “dissent” that concentrates on a decade-old slight that wasn’t even all that offensive (or funny) because you can’t let go of a grudge out of spite for…well, mainly yourself, at this point…or ignores the article at hand in favor of rattling off more grievances than a Donald Trump twitstorm (and with far less coherence than even his bullshit).

Have you met Koby? I think you two would get along well.

This comment has been flagged by the community.

restless94110 (profile) says:


I can just see Ben Franklin, Madison, Jefferson, and the others going:

We went to a fortune teller and she told us that in 250 years there was going to be this new word invented out of nothing called bullying and it would be used to destroy the 1st Amendment. What should we do?

Ok, we got it. Make it God-given so that it can never be taken away by any government or citizen or business. There that should fix it.

PaulT (profile) says:

Re: 1787

"in 250 years there was going to be this new word invented out of nothing called bullying"

While you are an idiot and you address a fantasy world that has nothing to do with reality, I did wonder if even this tidbit was remotely true. Of course, it’s not, the people you mention would have been familiar with the word.

"Make it God-given"

Also, I’d read the actual document you’re referring to, because it doesn’t say what you’re imagining it does.

That Anonymous Coward (profile) says:

Better solution…
Stop allowing humans to demand corporations do something, corporations are like large intellectually challenged teddy bears who might stumble on a solution after setting everything on fire first.

Once upon a time if someone’s kid was bullying other kids, it was perfectly acceptable for any adult to slap the bully upside the head. It wasn’t discussed; it was how community worked. We expected kids to behave, didn’t force them to like people, but they knew going after someone meant your head would hurt & your parents would know about it… and well you might have problems sitting down for a bit.

Now we have parents who will scream at you for looking at their little angel sideways as they run down the aisle ripping stuff off the shelves onto the floor.
How DARE you tell me how to parent my child!!!
Well maybe if you started parenting them I wouldn’t have to say anything.

My parents would have NEVER walked into the school and screamed at a teacher b/c I got a bad grade. I’d get a talking to (and being the defiant child I’d stay my course) but other adults were not disrespected to gain adoration from their kids.

These kids MIGHT get bullied. News flash, those kids already knew this. Parents have blinders on that this never happens & when someone tries to tell them the truth they stop listening b/c their baby would never do that. In the grand scheme of things maybe just maybe lets stop abdicating the hard things we should be doing but won’t to corporations to "take care of" for us.

Imagine if teaching kids empathy mattered as much as having a sportsball season. We have parents demanding their kids get to play in the middle of a global pandemic, b/c their baby getting their moment in the spotlight matters more than your kid ending up in the ICU or with long term heart/lung damage.
