Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs they result in. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Moderating An Anonymous Social Network (2015)

from the anonymity-challenge dept

Summary: Between roughly 2013 and 2015, so-called "anonymous" social networks surged in popularity. A few had existed before, but suddenly the market was full of them: Whisper, Secret, and Yik Yak received the most attention. All of them argued that by letting people post content anonymously, they were helping users express their true thoughts rather than repress them. Whisper and Secret both worked by letting people anonymously post short text, which would be shown over a background image.

In practice, many of the apps filled up with harassment, bullying, hateful content and the like. Whisper, as one of the bigger companies in the space, invested heavily in content moderation early on, saying that it had set up an outsourced team (via TaskUs in the Philippines) to handle moderation. However, as questions of scalability became an issue, the company also built software to help with content moderation, called "The Arbiter." In press reports, Whisper employees suggested "The Arbiter" was almost perfect:

On Whisper, "the golden rule is don't be mean, don't be gross, and don't use Whisper to break the law," says the company's chief data officer, Ulas Bardak, who spearheaded development of the Arbiter along with data scientist Nick Stucky-Mack. That's not a philosophy that you can boil down to a simple list of banned words. The Arbiter is smart enough to deal with an array of situations, and even knows when it's not sure if a particular item meets the service's guidelines.

However, even with the Arbiter, the company insisted that it still needed humans, since the Arbiter learned from the human moderators.

In its first few months of operation, the Arbiter has had a huge impact on how Whisper moderates itself. But even though there's plenty of opportunity to fine-tune it over time, Whisper has no plans to eliminate the human touch in moderation altogether. After all, the only reason the Arbiter is effective is because it bases its decisions on those of human moderators. Which is why the company is continuing to shovel data from human-moderated Whispers into the software's knowledge bank.

"There's always going to be a hybrid approach," says Heyward. "The truth is, the way we use people today is very different from the way we used them a year ago or six months ago." With the Arbiter humming along and handling much of the grunt work, the humans can focus more on the material that isn't an easy call. And maybe Whisper will be able to pull off the not-so-easy feat of improving the quality of its content even as its community continues to grow.
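The coverage describes the Arbiter as a system that learns from past human moderation decisions and knows when it is unsure. Whisper never published how the Arbiter actually works, so purely as an illustration of that general pattern, here is a minimal sketch of a text classifier trained on human-labeled decisions that defers low-confidence items back to a human review queue. The training examples, labels, threshold, and function names are all hypothetical, not Whisper's real data or code.

```python
# Hypothetical sketch only: NOT Whisper's actual Arbiter. It illustrates the
# pattern the coverage describes: learn from human moderators' decisions and
# defer to humans when the model is not confident enough.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Labels come from past human moderator decisions (made-up examples).
posts = ["have a great day", "kill yourself loser", "check out this meme", "you are worthless"]
labels = ["allow", "remove", "allow", "remove"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

CONFIDENCE_THRESHOLD = 0.85  # below this, a human moderator makes the call

def moderate(post: str) -> str:
    """Return an automatic decision, or defer to the human queue when unsure."""
    probs = model.predict_proba([post])[0]
    best = probs.argmax()
    if probs[best] < CONFIDENCE_THRESHOLD:
        return "send_to_human_review"
    return model.classes_[best]

print(moderate("you are the worst"))  # likely deferred to humans with so little training data
```

Feeding the decisions made in that human review queue back into the training set is the loop the article describes as shoveling human-moderated Whispers into the software's knowledge bank.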

Another article about Whisper's approach to content moderation detailed how the humans and the software work together:

Moderators look at Whispers surfaced by both machines and people: Users flag inappropriate posts and algorithms analyze text and images for anything that might have slipped through the cracks. That way, the company is less likely to miss cyberbullying, sex, and suicide messages. Moderators delete the bad stuff, shuffle cyberbullies into a "posts-must-be-approved-before-publishing" category, and stamp suicide Whispers with a "watermark": the number for the National Suicide Hotline.

As you might imagine, the manpower and operational systems required for that execution are huge. Whisper's content moderation manual is nearly 30 pages. The standards get into the nitty-gritty, specifying minutiae like why a picture of a shirtless man outdoors is appropriate, but a shirtless selfie indoors is not.

When the TaskUs team comes across physical threats, it escalates the message to Whisper itself. "If someone posts, 'I killed her and buried her in the backyard,' then that's a piece of content the company will report to the authorities," TaskUs CEO Bryce Maddock says. "They're going to pull the UID on your cell phone from Verizon or AT&T and the FBI and local police will show up at your door. It happens quite a bit."
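Taken together, these passages describe four outcomes for a flagged post: outright deletion, routing repeat cyberbullies into a pre-publication approval queue, stamping suicide-related posts with the hotline number, and escalating credible threats to the authorities. As a rough sketch of that routing logic only (the category names, fields, and actions here are assumptions, not Whisper's actual implementation):

```python
# Illustrative routing of a flagged post to the outcomes described above:
# delete, require pre-approval for repeat cyberbullies, watermark suicide posts
# with the hotline number, or escalate physical threats. All fields hypothetical.
HOTLINE = "1-800-273-8255"  # US National Suicide Prevention Lifeline number at the time

def route_flagged_post(post: dict, category: str) -> dict:
    if category == "suicide":
        # Keep the post visible, but stamp it with the hotline "watermark".
        post["watermark"] = f"Need help? Call {HOTLINE}"
    elif category == "cyberbullying":
        # Repeat offenders' future posts must be approved before publishing.
        post["visible"] = False
        post["author_requires_preapproval"] = True
    elif category in ("sex", "nudity", "illegal"):
        # Moderators simply delete the bad stuff.
        post["visible"] = False
    elif category == "threat":
        # e.g. "I killed her and buried her in the backyard": delete and escalate.
        post["visible"] = False
        post["escalate_to_authorities"] = True
    return post
```

A real system would persist these decisions and feed them into review queues; the point here is only the branching the articles describe.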

Even so, there was significant controversy over how Whisper handled bullying and hateful content on its site, as well as over how anonymous it actually kept its users. Reports raised concerns that the app was not truly anonymous and that it tracked its users. Whisper disputed some of these reports and claimed that some of the tracking was done both with permission and for good reasons (such as research on how to decrease suicide rates).

Decisions to be made by Whisper:

  • How do you keep an anonymous social network from becoming abusive?
  • How aggressive should you be in moderating content on an anonymous social network?
  • What are the tradeoffs between tracking users to prevent bad behavior and providing true anonymity?
  • Can an algorithm successfully determine and block detrimental content on a platform like Whisper?

Questions and policy implications to consider:

  • Is an anonymous social media network net positive or net negative?
  • Does anonymity make content moderation more difficult?
  • How do you protect users on an anonymous social network?

Resolution: One interesting aspect of an anonymous social media application is that users might not even realize when their content is restricted. An academic paper from 2014 that explored Whisper's content moderation suggested that the app deleted significantly more content than other social media platforms:

Anonymity facilitates free speech, but also inevitably fosters abusive content and behavior. Like other anonymous communities, Whisper faces the same challenge of dealing with abusive content (e.g., nudity, pornography or obscenity) in their network.

In addition to a crowdsourcing-based user reporting mechanism, Whisper also has dedicated employees to moderate whispers. Our basic measurements… also suggest this has a significant impact on the system, as we observed a large volume of whispers (>1.7 million) has been deleted during the 3 months of our study. The ratio of Whisper's deleted content (18%) is much higher than traditional social networks like Twitter (<4%).

The research dug into what kinds of content were deleted and from which types of users. Part of what it found is that people with deleted content often try to repost it (and frequently get the reposts blocked as well).

Finally, we take a closer look at the authors of deleted whispers to check for signs of suspicious behavior. In total, 263K users (25.4%) out of all users in our dataset have at least one deleted whisper. The distribution of deleted whispers is highly skewed across these users: 24% of users are responsible for 80% of all deleted whispers. The worst offender is a user who had 1230 whispers deleted during the time period of our study, while roughly half of the users only have a single deletion….

We observed anecdotal evidence of duplicate whispers in the set of deleted whispers. We find that frequently reposted duplicate whispers are highly likely to be deleted. Among our 263K users with at least 1 deleted whisper, we find 25K users have posted duplicate whispers….
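The figures quoted from the paper (an overall deletion ratio, a heavily skewed distribution of deletions across users, and frequent deletion of reposted duplicates) are straightforward to compute from a dataset of posts that records an author ID, the post text, and whether the post was later deleted. Below is a rough sketch of those three measurements on a tiny hypothetical dataset; the field names are assumptions, and the actual study worked with millions of whispers.

```python
from collections import Counter

# Made-up records; the real study analyzed millions of whispers over 3 months.
posts = [
    {"author": "u1", "text": "hello world", "deleted": False},
    {"author": "u2", "text": "mean repost", "deleted": True},
    {"author": "u3", "text": "mean repost", "deleted": True},
    {"author": "u2", "text": "another one", "deleted": True},
]

# Overall deletion ratio (the paper reports ~18% for Whisper vs. <4% for Twitter).
deletion_ratio = sum(p["deleted"] for p in posts) / len(posts)

# Skew: how concentrated are deletions among the most-deleted authors?
deletions_per_author = Counter(p["author"] for p in posts if p["deleted"])
worst_offenders = deletions_per_author.most_common()  # heaviest offenders first

# Duplicate reposts among deleted posts (the paper matched duplicate whisper text).
deleted_texts = Counter(p["text"] for p in posts if p["deleted"])
duplicates = {text: n for text, n in deleted_texts.items() if n > 1}

print(deletion_ratio, worst_offenders, duplicates)
```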

As for Whisper itself, the company has gone through many changes and problems. Its biggest competitors, Secret and Yik Yak, both shut down, but Whisper remains in business, though not without problems. Whisper laid off a significant portion of its staff, and all of its large institutional investors left the board in 2017.

In the spring of 2020, security researchers discovered that nearly all of Whisper's content was available for download via an unsecured database, allowing anyone to search through everything posted on the site. While the company insisted that the database only held data that was already publicly available through the app, it conceded that the app itself did not let users run queries against that database. Even years after the app's peak popularity, concerns about its anonymity and privacy remain.

Companies: whisper


Comments on “Content Moderation Case Study: Moderating An Anonymous Social Network (2015)”

PaulT (profile) says:

"In total, 263K users (25.4%) out of all users in our dataset have at least one deleted whisper. The distribution of deleted whispers is highly skewed across these users: 24% of users are responsible for 80% of all deleted whispers. The worst offender is a user who had 1230 whisper deleted during the time period of our study, while roughly half of the users only have a single deletion"

Obviously, I don’t have figures for other networks, but I wouldn’t be surprised if this describes a typical breakdown for other platforms. You have a majority of people rarely, if ever, posting anything objectionable. A minority of users regularly being offensive for whatever reason, be that deliberate trolling or just having behaviour that others find toxic, then one outright asshole trying to ruin it for everyone.

Extrapolated to Facebook or Twitter, this would be in line with what I would assume to be true there – most people are going about their day, but you have deliberate dickheads and the occasional obsessive person trying to derail the entire site. So, it’s better for everyone if the lone dickhead is kicked off and the others who are disrupting the site have their content moderated to improve the functionality for everyone.

"Among our 263K users with at least 1 deleted whisper, we find 25K users have posted duplicate whispers…."

Again, this would indicate a similarity with other platforms, where the people who get moderated aren’t posting their own thoughts but sharing memes or retweeting something dumb. This could be one reason why we find certain groups of people complaining that they are being targeted – when one of them posts something offensive, they all just copy each other to such a degree that they all get moderated even though there was really only one piece of offensive content posted.
