Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs they result in. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Detecting Sarcasm Is Not Easy (2018)

from the kill-me-now dept

Summary: Content moderation becomes even more difficult when you realize that words or phrases may carry meaning beyond their literal sense. One very clear example is the use of sarcasm, in which a word or phrase is used either to mean the opposite of its literal sense or as a greatly exaggerated way to express humor.

In March of 2018, facing increasing criticism regarding certain content appearing on Twitter, the company did a mass purge of accounts, including many popular accounts accused of simply copying and retweeting jokes and memes that others had created. Part of the accusation against those that were shut down was that they formed a network of accounts (referred to as "Tweetdeckers" for their use of the Twitter application Tweetdeck) who would agree to mass-retweet some of those jokes and memes. Twitter suggested that these retweet brigades were inauthentic and thus banned them from the platform.

In the midst of all of these suspensions, however, another set of accounts and content was suspended, allegedly for talking about "self-harm." Twitter has policies against glorifying self-harm, which it had updated just a few weeks before this new round of bans.

However, in trying to apply that policy, Twitter took down a number of tweets in which people sarcastically used the phrase "kill me," suddenly suspending many accounts even though many of those tweets were from years earlier. It appeared that Twitter may simply have run a search on "kill me" or other similar words and phrases, including "kill myself," "cut myself," "hang myself," "suicide," and "I wanna die."
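To see why that approach misfires, consider a minimal sketch of the kind of naive keyword matching described above. The phrase list comes from the article; the function itself is an illustrative assumption, not Twitter's actual implementation:

```python
# Hypothetical keyword filter of the sort the article suggests Twitter
# may have used. The matching logic here is an illustrative assumption.

SELF_HARM_PHRASES = [
    "kill me", "kill myself", "cut myself",
    "hang myself", "suicide", "i wanna die",
]

def flags_self_harm(tweet: str) -> bool:
    """Return True if the tweet contains any listed phrase, ignoring case."""
    text = tweet.lower()
    return any(phrase in text for phrase in SELF_HARM_PHRASES)

# A sarcastic tweet trips the filter just as easily as a genuine one,
# because substring matching carries no notion of intent or context:
print(flags_self_harm("Monday morning meetings, kill me now"))  # True
print(flags_self_harm("Had a great weekend!"))                  # False
```

Such a filter cannot distinguish exasperated hyperbole from a genuine cry for help, which is precisely the problem the suspensions exposed.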

While some of these may indicate intentions of self-harm, in many other cases they were clearly sarcastic or just people saying odd things. Yet Twitter temporarily suspended many of those accounts and asked the users to delete the tweets. In at least some cases, the messages from Twitter included encouraging words, such as "Please know that there are people out there who care about you, and you are not alone," though that language did not appear in all of the messages. That wording, at least, suggested a response tailored specifically to concerns about self-harm.

Decisions to be made by Twitter:

  • How do you handle situations where users indicate they may engage in self-harm?
  • Should such content be removed or are there other approaches?
  • How do you distinguish between sarcastic phrases and real threats of self-harm?
  • What is the best way to track and monitor claims of self-harm? Does a keyword or key phrase list search help?
  • Does automated tracking of self-harm messages work? Or is it better to rely on user reports?
  • Does it change if the supposed messages regarding self-harm are years old?

Questions and policy implications to consider:

  • Is suspending people for self-harm likely to prevent the harm? Or does it just hide useful information from friends, family, and officials who might help?
  • Detecting sarcasm creates many challenges; should internet platforms be the arbiters of what counts as reasonable sarcasm? Or must they take all content literally?
  • Automated solutions for detecting things like self-harm may cover a wider corpus of material, but they are also more likely to misunderstand context. How should these issues be balanced?

Resolution: This continues to be a challenge for various platforms, including Twitter. The company has continued to tweak its policies regarding self-harm over the years, including partnering with suicide prevention groups in various locations to help users who indicate that they are considering self-harm.

Companies: twitter


Comments on “Content Moderation Case Study: Detecting Sarcasm Is Not Easy (2018)”

GHB (profile) says:

Re: What

I mean, even without the bot, the moderator has to be properly trained. Sadly, though, the internet is a lot harder to police than real life. This video highlights not just tumblr’s issue, but more generally that some methods of policing might even be impossible:

The internet is a good place, and it is also a mess.

Anonymous Coward says:

Is suspending people for self-harm likely to prevent the harm? Or is it just hiding useful information from friends, family, officials, who might help?

The important thing is hiding it from people who might think it a good idea to be loudly upset about it, regardless what actually happens to anyone afflicted with ideation or compulsions to self-harm.

This comment has been deemed insightful by the community.
PaulT (profile) says:

There’s also this type of thing: asshole

A person might intend something seriously, but then claim that they were being sarcastic if they face any pushback for what they said. So, even if you had some kind of system that accurately detected it all the time, that would not lead to clear resolutions.

David says:

So?
Content moderation is not for the sake of the writer, it is for the sake of the reader. The whole point of sarcasm is that it causes a double-take, and not everyone reliably reaches second base.

If a moderation bot or human is dumb as a brick or has the attention span of a squirrel in a nuthouse, that is representative of at least some of the audience some of the time.

There is no point in trying to discern the author’s intent since the author is not the party purportedly protected by moderation.

Thad (profile) says:

Re: So?

If a moderation bot or human is dumb as a brick or has the attention span of a squirrel in a nuthouse, that is representative of at least some of the audience some of the time.

I’m not sure what you’re arguing here. Are you suggesting that if any one poster, even if they’re "dumb as a brick", fails to detect sarcasm, that’s a representative sign that the post should be moderated?

TasMot (profile) says:

Just Fishing

So a filter for words or phrases would suspend my account if I posted a tweet and said "I caught a perch yesterday and cut myself. That filet is going to have a little extra bloody flavor. Band-aid should fix it right up."

Based on a word filter, I am a person at risk and my account should be suspended. In real life, I need somebody nearby with some antibiotic and a band-aid.

If I actually meant to hurt myself, I need somebody to see the tweet and respond that they are there to help or will get help.

On the other hand, Twitter seems to want to follow the "head in sand" method of dealing with it and hiding the problem, not resolving the problem.

Another Kevin (profile) says:

I wish that I knew what context was like

"I’m from Context, but I don’t know what it’s like, because I’m so often taken out of it." – Norton Juster (quote may be inexact, can’t be bothered to look it up)

I don’t tweet, but I checked my blog for the last mention of suicide.

Paraphrasing: Alberto, Catherine and I turned back only about 500 feet below the summit, because the day was getting warmer and the ice was unsound. It was slushy and separating from the rock in spots, and crampons weren’t holding well. To continue in those conditions would be suicide. The mountain will still be there another day.

Now, I don’t know, some people might indeed consider winter mountaineering to be self-harm, but others think it can be done responsibly.
