Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs those decisions involve. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Twitter's Self-Deleting Tweets Feature Creates New Moderation Problems

from the content-moderation-at-the-fleeting-level dept

Summary: In its 15 years as a micro-blogging service, Twitter has given users more characters per tweet, reaction GIFs, multiple UI options, and the occasional random resorting of their timelines.

The most recent offering was to give users the option to create posts designed to be swept away by the digital sands of time. Early in 2020, Twitter announced it would be rolling out “Fleets” — self-deleting tweets with a lifespan of only 24 hours. This put Twitter on equal footing with Instagram’s “Stories” feature, which allows users to post content with a built-in expiration date.

In the initial, limited rollout of Fleets, Twitter reported that the feature showed advantages over the platform’s standard offering. Twitter Comms tweeted that initial testing looked promising, stating that it was seeing “less abuse with Fleets” with only a “small percentage” of Fleets being reported each day.

Whether this early indicator reflected the limited rollout or users treating self-deleting abuse as a problem that solves itself, the wider rollout proved neither as smooth nor as abuse-free as those early signs suggested. Fleets' full debut arrived in the wake of an incredibly contentious U.S. presidential election, one marred by election interference accusations and a constant barrage of misinformation. The full rollout also came after nearly a year of a worldwide pandemic, which had produced a steady flow of misinformation across social media platforms around the world.

While amplification of misinformation contained in Fleets was somewhat tempered by their innate ephemerality, as well as very limited interaction options, it was unclear how, or how well, Twitter was moderating misinformation spread through the new format. Extremism researcher Marc-Andre Argentino was able to send out a series of “fleets” containing misinformation and banned URLs, noting that Twitter flagged only one: a fleet asserting a link between the virus and cell phone towers.
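
The gap Argentino highlighted is, at bottom, an architectural one: checks that already run on ordinary tweets, such as banned-URL lists, also need to run on every new content surface. The sketch below is a minimal, hypothetical illustration of that idea; the function names, the placeholder blocklist, and the content_type parameter are assumptions made for illustration and do not describe Twitter's actual moderation pipeline.

    # Hypothetical sketch: apply the same link-safety check to ephemeral posts
    # ("fleets") that a pipeline might already apply to permanent tweets.
    # Everything here is illustrative; none of it is Twitter's real code or API.
    from urllib.parse import urlparse

    BANNED_DOMAINS = {"banned-source.test"}  # placeholder blocklist

    def extract_domains(text):
        """Pull hostnames out of any http(s) URLs found in the post text."""
        domains = set()
        for token in text.split():
            if token.startswith(("http://", "https://")):
                host = urlparse(token).hostname
                if host:
                    domains.add(host.lower())
        return domains

    def check_post(text, content_type):
        """Run the same blocklist check whether the post is a 'tweet' or a 'fleet'."""
        if extract_domains(text) & BANNED_DOMAINS:
            return "flagged ({}): contains banned URL".format(content_type)
        return "allowed ({})".format(content_type)

    print(check_post("see https://banned-source.test/article", "fleet"))
    print(check_post("see https://banned-source.test/article", "tweet"))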

Samantha Cole reported other Fleet moderation issues. Writing for Motherboard, Cole noted that apparent glitches were allowing users to see Fleets from people they had blocked, as well as Fleets from people who had blocked them. Failing to honor the block and mute settings users had put in place created more avenues for abuse. Cole also pointed out that users weren't notified when their tweets were added to Fleets, giving abusive users another way to harass targets who remained unaware of the activity.
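
The block and mute glitches Cole described point to a related principle: audience controls have to be enforced wherever content is served, not just on the main timeline. Below is a minimal, hypothetical sketch of that kind of visibility check; the data structures and function names are invented for illustration and do not reflect Twitter's internal systems.

    # Hypothetical sketch: filter an ephemeral-post feed through the viewer's
    # block/mute relationships before returning it, mirroring the rules that
    # already govern regular tweets. All structures here are invented.
    from dataclasses import dataclass, field

    @dataclass
    class User:
        handle: str
        blocked: set = field(default_factory=set)  # handles this user has blocked
        muted: set = field(default_factory=set)    # handles this user has muted

    def visible_fleets(viewer, fleets):
        """Return the text of only those fleets the viewer should see.

        A fleet is hidden if the viewer blocked or muted its author, or if
        the author blocked the viewer.
        """
        visible = []
        for author, text in fleets:
            if author.handle in viewer.blocked or author.handle in viewer.muted:
                continue
            if viewer.handle in author.blocked:
                continue
            visible.append(text)
        return visible

    alice = User("alice", blocked={"spammer"})
    spammer = User("spammer")
    bob = User("bob", blocked={"alice"})

    feed = [(spammer, "limited-time offer!"), (bob, "hello world")]
    print(visible_fleets(alice, feed))  # [] -- both fleets are hidden from alice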

Company Considerations:

  • How can Twitter prevent new features from duplicating existing moderation problems?
  • How can companies test a feature’s initial rollout to better detect possible abuses, reducing the moderation burden of the wider rollout?
  • How does ephemeral content affect moderation efforts and moderation response time? 
  • If issues remain unsolved or poorly-addressed, who has the power to shut down or temporarily disable a new feature? 
  • How much time should moderation teams be given to adjust to new responsibilities and new inputs when a new feature is rolled out? What metrics would be useful to determine whether moderation responses are successfully addressing new abuses and problems?

Issue Considerations:

  • What processes should companies have in place to mitigate damage if a feature doesn’t perform in the expected way and/or creates unforeseen problems?
  • Does “fleeting” content have the potential to cause moderators to view abusive posts as problems that will solve themselves? How can this mindset be discouraged or counteracted?

Resolution: Twitter’s immediate response to the issues during the full rollout was to temporarily slow the deployment of the feature to users. While the issues that impacted moderation never really dissipated, the feature itself did. Twitter noted that Fleets did not see the uptake it expected. Although Fleets was supposed to encourage more engagement from Twitter users who lurked more than they posted, observers noted that the feature appeared to be used mostly by users who were already heavily engaged with the platform.

With Fleets never becoming much more than a novelty for Twitter die-hards, Twitter killed off the feature on August 3, 2021, taking with it the moderation problems the self-deleting posts had created.

Originally posted to the Trust & Safety Foundation website

Companies: twitter


Comments on “Content Moderation Case Study: Twitter's Self-Deleting Tweets Feature Creates New Moderation Problems”


Darkness Of Course (profile) says:

If only we knew how it would work/fail

Okay, the problem is as old as the web/internet. As old as big business having servers.

To solve it you need two things. First, a big test environment that is not connected to the production environment.

The second part is a bit busy: you must give test/beta access to, and only to, this list of testers: engineers; race car drivers/mechanics/engineers; parents of teens, and a separate group for pre-teens (or your youngest perceived audience).

After those people have finished trashing it, then you can let actual teens in as well. They will trash it further.

Why these people? Engineers and racers are always working with too few resources and too many rules (which are limitations). They spend a lot of energy circumventing restrictions.

Parents: do I really have to explain what teenagers and preteens are capable of? Breaking the rules without breaking them completely is their forte.

Of course, a lot of expense could be spared if they would just read "it is impossible to moderate at scale," by Mike Masnick. But companies are driven to make fools of themselves, so the reading will be handed to a subordinate, who will give a two-bullet summary at the next exec meeting.
