Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs involved in resolving them. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Roblox Moderators Combat In-Game Reenactments Of Mass Shootings (2021)

from the modern-moderation dept

Online game platform Roblox has gone from a niche offering to a cultural phenomenon over its 15 years of existence. Rivalling Minecraft in its ability to attract young users, Roblox is played by over half of American children and counts 164 million active users.

Roblox also gives players access to a robust set of creation tools, allowing them to build their own experiences as well as enjoy those created by others.

A surge in users during the COVID-19 pandemic created problems that Roblox’s automated moderation systems, as well as its human moderators, are still attempting to solve. Roblox employs 1,600 human moderators who handle not only content flowing through in-game chat features, but also content created and shared with other users via Roblox’s creation tools.

Users embraced the creation tools, some in healthier ways than others. If it happens in the real world, someone will try to approximate it online. Players have used the kid-focused game to create virtual red light districts where they can gather to engage in simulated sex with other players, activity that tends to evade moderation because invitations and direct links are shared on out-of-game chat platforms like Discord.

Perhaps more disturbingly, players are recreating mass shootings, many of them with a racial element, inside the game and inviting others to step into the shoes of mass murderers. Anti-Defamation League researcher Daniel Kelley was easily able to find recreations of the Christchurch mosque shooting that occurred in New Zealand in 2019.

While Roblox proactively polices the platform for “terrorist content,” the continual resurfacing of content like this remains a problem without an immediate solution. As Russell Brandom of The Verge points out, 40 million daily users generate more content than human moderators can manually review. And a keyword blocklist would leave users unable to discuss (or recreate) the New Zealand city itself.
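To see why the blocklist tradeoff is so stubborn, consider a minimal sketch (hypothetical Python, not Roblox’s actual code) of a naive keyword filter: blocking the term “christchurch” to suppress reenactments necessarily blocks harmless references to the city as well.

    # Minimal sketch of the blocklist tradeoff described above. The names
    # and logic are hypothetical illustrations, not Roblox's systems.
    BLOCKLIST = {"christchurch"}  # term associated with the 2019 shooting

    def naive_filter(text: str) -> bool:
        """Return True if simple keyword matching would block the text."""
        lowered = text.lower()
        return any(term in lowered for term in BLOCKLIST)

    # A violent recreation is caught...
    print(naive_filter("Christchurch mosque shooting roleplay"))  # True

    # ...but an innocent geographic reference is blocked too.
    print(naive_filter("Come tour my model of Christchurch, New Zealand!"))  # True

The filter has no way to read intent off a keyword alone, which is why keyword matching by itself fails here.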

Company considerations:

  • How does catering to a younger user base affect moderation efforts?
  • What steps can be taken to limit access to or creation of content when users utilize communication channels the company cannot directly monitor? 
  • What measures can be put in place to limit unintentional interaction with potentially harmful content by younger users? What tools can be used to curate content to provide “safer” areas for younger users to explore and interact with?

Issue considerations:

  • How should companies respond to users who wish to discuss, or otherwise engage with, content involving newsworthy but violent events?
  • How much can a more robust reporting process ease the load on human and AI moderation?
  • Can direct monitoring of users and their interactions create additional legal risks when most users are minors? How can companies whose user bases are mostly children address potential legal risks while still giving users freedom to create and communicate on the platform?

Resolution:

Roblox updated its Community Standards to let users know this sort of content was prohibited. It also said it would engage in “proactive detection,” putting human eyes on content related to terms like “Christchurch,” allowing geographic references but not depictions of the mosque shooting.
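As a rough sketch of how such “proactive detection” could be wired up (hypothetical Python, not Roblox’s actual pipeline), sensitive terms trigger a human review queue rather than an automatic block, which is what allows geographic references to survive while depictions are removed:

    # Hypothetical sketch: keyword hits go to human review, not auto-blocking.
    from dataclasses import dataclass, field

    SENSITIVE_TERMS = {"christchurch"}  # illustrative; real lists are larger

    @dataclass
    class ReviewQueue:
        pending: list = field(default_factory=list)

        def submit(self, content: str) -> None:
            self.pending.append(content)

    def triage(content: str, queue: ReviewQueue) -> str:
        """Publish clean content; hold keyword matches for a human."""
        if any(term in content.lower() for term in SENSITIVE_TERMS):
            queue.submit(content)
            return "held for human review"
        return "published"

    queue = ReviewQueue()
    print(triage("Christchurch city tour experience", queue))  # held for human review
    print(triage("Obstacle course tycoon", queue))             # published
    # A human reviewer then allows the geographic reference but removes
    # any depiction of the shooting.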

Originally posted to the Trust and Safety Foundation website.

Companies: roblox


Comments on “Content Moderation Case Study: Roblox Moderators Combat In-Game Reenactments Of Mass Shootings (2021)”

ECA (profile) says:

Re: REALITY SUCKS.

But it's reality.
Truth hurts? Only if you love the lies and can't deal with what's REAL.
Now, a few restrictions MIGHT be a nice thing, like an 18+ section. But how are you going to monitor/sort that? How can you really tell who is who, and their age? There was a suggestion a while back about listing the AGE of those signed in and restricting access by AGE.

Building things that mimic real life ISN'T BAD. It could show how things might have turned out better if different choices had been made. Simulating things isn't bad. It can show kids what they could have done to SAVE themselves.

Anonymous Coward says:

Not related, but automatic moderation isn't much better. There are stories making the rounds on Reddit about how Nintendo's automatic detection software for Super Smash Bros. Ultimate online is issuing warnings and bans over normal gameplay because its algorithms are incorrectly detecting "unsporting" gameplay:

https://www.reddit.com/r/SmashBrosUltimate/comments/s9b6l1/when_you_buy_the_game_at_xmas_and_have_never/

Users are complaining about receiving bans as a result:

"I got banned for getting a Blade Beam spamming Cloud. I caught the rhythm and just stood there parrying every beam.

I got the message that it’s unsporting behavior to just stand still and do nothing."
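A toy sketch (purely hypothetical, not Nintendo's actual logic) shows how a heuristic that equates a low rate of movement input with refusing to play would flag exactly that parry strategy:

    # Hypothetical naive "unsporting play" detector. To this heuristic, a
    # player standing still to parry is indistinguishable from an AFK player.
    def is_unsporting(inputs: int, frames: int, threshold: float = 0.05) -> bool:
        """Flag players whose input rate falls below a threshold."""
        return (inputs / frames) < threshold

    print(is_unsporting(inputs=0, frames=3600))   # True: AFK, correctly flagged
    print(is_unsporting(inputs=60, frames=3600))  # True: parrying player, false positive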

SomeDude says:

Wrong

A lot of what is in here is very, very wrong indeed. Roblox only employs about 30 moderators total, not 1,600, as I have personally confirmed. A lot of what's reported or submitted for use on Roblox is run through a third-party filter. Roblox staff never oversee anything; they just sit back and let the bots take care of things. This is why all these 'mosque shooting' and related terrorism-centered games are still up on the site; staff simply do not care.
