Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs they result in. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Twitter Suspends Users Who Tweet The Word 'Memphis' (2021)

from the memphis-memphis-memphis-memphis dept

Summary: Twitter users who made the mistake of tweeting out an innocuous word — ‘Memphis’ — found themselves suspended from the service for 12 hours for apparently violating its terms of use.

According to messages sent to suspended users, the use of the Tennessee city name violated prohibitions on posting personal information.

The damage spread quickly across Twitter as users trolled each other, baiting unsuspecting accounts into tweeting the suddenly forbidden word. The apparent flaw in the auto-moderation system went unaddressed for several hours as more and more users found themselves temporarily prevented from using the service. Although some users noticed that certain accounts (mainly verified ones) weren’t being hit with suspensions, the bug affected enough people that the ripple effect was not only noticeable but covered by many mainstream media outlets.

The bans were lifted several hours later with no explanation from Twitter beyond a statement that an unspecified “bug” had resulted in the removal of tweets containing the word “Memphis” and limited features for the affected accounts.

That explanation was not entirely satisfying. Given the “Memphis” bug’s link to alleged violations of Twitter’s policies against posting other people’s personal information, some speculated that the ban on a single city name was the result of an erroneously completed form on the moderation side. Systems security professional SwiftOnSecurity offered a plausible guess at the root cause of this improbable series of moderation events:

What’s possible is a Twitter staffer tried to block a street address, but the postal syntax acted as an escape sequence, or the original was multi-line and they only pasted the city.

If so, every use of the word “Memphis” was treated as a post containing the full address Twitter had targeted for removal under its personal information policy.
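To see how that failure mode could play out, consider the minimal sketch below. It is entirely hypothetical (Twitter has not published its moderation tooling, and every name in it is invented), but it shows how an intake form that treats postal punctuation as a delimiter could store a bare city name as a blocked term, which a naive substring filter would then match in every tweet.

```python
import re

# Hypothetical sketch of the suspected failure mode. Twitter's real
# moderation pipeline is not public; all names here are invented.

BLOCKLIST: set[str] = set()

def add_doxxing_entry(pasted_address: str) -> None:
    """Store a reported address as blocked terms.

    The suspected flaw: postal punctuation (newlines, commas) acts as
    a delimiter, so each fragment of the address becomes its own
    independent blocked term, including the bare city name.
    """
    for fragment in re.split(r"[\n,]", pasted_address):
        fragment = fragment.strip().lower()
        if fragment:
            BLOCKLIST.add(fragment)

def violates_policy(tweet: str) -> bool:
    """Naive substring match: flags any tweet containing a blocked term."""
    text = tweet.lower()
    return any(term in text for term in BLOCKLIST)

# A staffer means to block one full street address...
add_doxxing_entry("123 Beale Street\nMemphis, TN 38103")

# ...but "memphis" is now a standalone blocked term.
print(violates_policy("I love Memphis in the springtime"))  # True
print(violates_policy("Meeting up in Nashville tomorrow"))  # False
```

Under this theory, a filter that matched only the complete pasted address would never have fired on the lone word “Memphis”; it takes the splitting step (or a partial paste) to produce the citywide false positive, which fits Twitter’s description of the incident as a “bug” rather than a policy change.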

Decisions to be made by Twitter:

  • Should moderation of the posting of personal information be handled with automation, given the potential for errors to scale and compound?
  • Should a better stop-gap process be put in place to head off future events like these?
  • Should users affected by moderation bugs be given better explanations when moderation at scale results in their features being limited?

Questions and policy implications to consider:

  • Should a stop-gap measure be put in place to prevent errors like this from becoming multi-hour failures?
  • Have Twitter’s moderation efforts noticeably limited the spread of personal information?
  • Is all publication of personal information considered a violation of policy? Or are exceptions in place for information considered to be of public interest?

Resolution: Twitter restored accounts and tweets targeted by its “Memphis” bug within hours of its emergence. However, the company’s moderation team has yet to explain exactly what went wrong or what Twitter has done to prevent a recurrence.

Originally posted to the Trust & Safety Foundation website.

Companies: twitter


Comments on “Content Moderation Case Study: Twitter Suspends Users Who Tweet The Word 'Memphis' (2021)”

Darkness Of Course (profile) says:

How can you explain ...

What you do not understand?

Twitter Nerds are now ML Nerds. You all know Machine Learning, it’s referred to as its impossible counterpart, Artificial Intelligence – which it ain’t.

Take a bunch of computers. Fill some tables up with numbers, run a zillion tweets through them and release the Kraken on Memphis!

There is no possible explanation. Well, beyond Natural Stupidity.
