Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs those decisions entail. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Chatroulette Leverages New AI To Combat Unwanted Nudity (2020)

from the as-opposed-to-the-wanted-kind dept

Summary: Chatroulette rose to fame shortly after its creation in late 2009. The platform offered a new take on video chat, pairing users with other random users with each spin of the virtual wheel.

The novelty of the experience soon wore off when it became apparent Chatroulette was host to a large assortment of pranksters and exhibitionists. Users hoping to luck into some scintillating video chat were instead greeted with exposed penises and other body parts. (But mostly penises.)

This especially unsavory aspect of the service was widely assumed to be its legacy, one that would see it consigned to the junkheap of failed social platforms. Chatroulette attempted to handle its content problem by giving users the power to flag other users and by deploying a rudimentary AI to block possibly offensive users.

The site soldiered on, partially supported by a premium service that paired users with other users in their area or who shared the same interests. Then something unexpected happened that drove a whole new set of users to Chatroulette: the COVID-19 pandemic. More people than ever were trapped at home and starved for human interaction. Very few of those were hoping to see an assortment of penises.

Faced with an influx of users and content to moderate, Chatroulette brought in AI moderation specialist Hive, the same company that currently moderates content on Reddit. With Chatroulette experiencing a resurgence, the company is hoping a system capable of processing millions of frames of chat video will keep its channels clear of unwanted content.
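Systems like the one described here typically do not act on any single frame in isolation; a common pattern is to sample frames from the live stream, score each with an image classifier, and end a session only once enough recent frames cross a confidence threshold. The sketch below illustrates that general pattern only. The `should_block` function, the threshold, and the window size are illustrative assumptions for this write-up, not Chatroulette's or Hive's actual parameters or API.

```python
# Hypothetical sketch of frame-based live-stream moderation: score sampled
# frames with a classifier (a real system would call a model such as Hive's;
# here the scores are given directly) and block a session once several
# recent frames look explicit. All names and numbers are illustrative.

from collections import deque

FLAG_THRESHOLD = 0.8   # classifier confidence above which a frame counts as explicit
WINDOW = 5             # number of recent frames considered
MIN_FLAGGED = 3        # flagged frames within the window needed to block

def should_block(scores, threshold=FLAG_THRESHOLD,
                 window=WINDOW, min_flagged=MIN_FLAGGED):
    """Return True once enough recent frame scores exceed the threshold.

    Requiring several flagged frames in a sliding window trades a little
    latency for fewer false positives from a single misclassified frame.
    """
    recent = deque(maxlen=window)
    for score in scores:
        recent.append(score >= threshold)
        if sum(recent) >= min_flagged:
            return True
    return False

# One borderline frame alone does not end the chat...
print(should_block([0.1, 0.9, 0.2, 0.1, 0.3]))   # False
# ...but a sustained run of high-confidence frames does.
print(should_block([0.2, 0.85, 0.9, 0.95, 0.1]))  # True
```

The windowed vote is one answer to the over-moderation question raised below: a stricter threshold or a single-frame trigger would catch violations faster but would also cut off more innocent chats.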

Decisions to be made by Chatroulette:

  • Is it possible to filter live content quickly and accurately enough to prevent a return to the “old” Chatroulette?
  • Is the cost of moderation AI affordable given the site’s seeming inability to attract or sustain a large user base?
  • If user growth continues, will it still be possible to backstop AI moderation with human moderators? 
  • What metrics should Chatroulette consider as measures of success here?

Questions and policy implications to consider:

  • Is over-moderation a foreseeable problem, given the challenges of moderating live video streams?
  • Is it possible to attract a more-dedicated user base while still respecting their apparent desire for anonymity?
  • Is it wise to maintain an unmoderated channel given the historical issues the site has had with unsolicited nudity and its exposure to/of minors?

Resolution: Chatroulette’s new moderation efforts appear to be successfully distancing it from its inauspicious beginnings, even as the site’s operating team wryly acknowledges the reputation for nudity that defined the service for much of the last decade. The site also points out that only a little more than 3% of its millions of monthly interactions contain explicit material. The implication is that Chatroulette sees the way forward in offering something that might have seemed boring ten years ago: a predictable and safe random chat experience.

Originally published on the Trust & Safety Foundation website.

Companies: chatroulette, hive


Comments on “Content Moderation Case Study: Chatroulette Leverages New AI To Combat Unwanted Nudity (2020)”

That Anonymous Coward (profile) says:

Penii are a go-to thing for online ‘pranks’… I mean, they are right at hand.

CheezBurger at one point had chatroulette memes & funny screen shots…
The Eye of Sauron connecting with Frodo

Two jokers showing up in the same virtual teddy bear mask

A response to someone’s sign ("show me your tits & I’ll show you my dick") being met with someone holding up a penis-shaped bong, asking if that will do

Steve Kardynal, who lip-synced several popular songs with dramatic sets & staging…

https://www.youtube.com/watch?v=InYvRyX2Fu4

As with many things people focused only on a few who ran around showing off their penii & none of the other things happening.

As the meme says…

https://imgur.com/gallery/x5UgAMi?nc=1

That Anonymous Coward (profile) says:

Re: Re:

There is no possible way for technology to catch every penis.
They can’t hire half the planet to monitor the other half of the planet to make sure a penis is never seen.

People think technology can do the impossible & refuse to accept it can’t be done.

We spent millions training TSA agents to look at people & spot terrorists; I haven’t heard of any being caught… but we did have a bunch of TSA agents helping smuggle drugs & weapons that went undetected.

The government is ‘accidentally’ collecting all of our data & despite the claims of ‘going dark’ they completely dropped the ball on several high profile attacks including Jan 6, despite claiming they had to have the data. (One would mention here that ‘Thin Thread’ detected the 9/11 hijackers when fed the data that was available before the attack; the giant machinery we have now can’t detect anything.)

If you use Chatroulette you might see a penis/breasts/vagina/butts and 300 other bad things… if you are that upset by the chance of seeing them… don’t use the platform.

Do not expect that a platform can protect you, they can’t.

Know the risks and act accordingly, but instead we demand a completely risk free world where nothing is ever our fault. (See also: Well if they had used enough tech to detect they were in a moving car & blocked texting then my child speeding down the freeway texting would never have killed that family of 5 so we are suing apple.)

Y’all can hit your head in the shower, fall down face down in like 3/4 inch water and die… should we sue cities for providing water, or tub makers, or god for not giving us gills?? We could pass a law requiring everyone have a nonslip mat in all tubs… it’ll work out just as well as the drunk driving laws work.

Scary Devil Monastery (profile) says:

Re: Re: Re:

"There is no possible way for technology to catch every penis.
They can’t hire half the planet to monitor the other half of the planet to make sure a penis is never seen."

There’s a Cellmate(TM) joke in there somewhere. If you don’t want something to be seen, just lock it up and hand the access key to the relevant authorities…
