Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs they result in. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Nintendo Blocks Players From Discussing COVID, Other Subjects (2020)

from the moderation-is-difficult dept

Summary: Nintendo has long striven to be the most family-friendly of game console makers. Its user base tends to skew younger, and its attempts to ensure its offerings are welcoming and non-offensive have produced a long string of moderation decisions that have, to this point, mostly affected game content. Many of these changes were made to make games less offensive to users outside of Nintendo’s native Japan.

Nintendo’s most infamous content moderation decision involved a port of the fighting game Mortal Kombat. While owners of consoles from Sega (Nintendo’s main rival at that point) were treated to the original red blood found in the arcades, Nintendo users had to make do with a gray-colored “sweat,” a moderation move that further cemented Nintendo’s reputation as a console for kids.

Nintendo still has final say on content that can be included in its self-produced products, leading contributors to find their additions stripped out of games if Nintendo’s moderators feel they might be offensive. While Nintendo has backed off from demanding too many alterations from third-party game developers, it still wields a heavy hand when it comes to keeping its own titles clean and family-friendly.

With the shift to online gaming came new moderation challenges for Nintendo to address. Multiple players interacting in shared spaces controlled by the company produced friction between what players wanted to do and what the company would allow. The first challenges arrived nearly a decade ago with the Wii, which featured online spaces where players could interact with each other using text or voice messages. This was all handled by moderators who apparently reviewed content three times before allowing it to reach its destination, something that could result in an “acceptable” thirty-minute delay between a message’s sending and its arrival.

Thirty minutes is no longer an acceptable delay, considering the instantaneous communication offered by other consoles. And there are more players online than ever, thanks to popular titles like Animal Crossing, a game whose social aspects are a large part of its appeal.

While it’s expected that Nintendo would shut down offensive and sexual language, given its perception of its target market’s desires, the company’s effort to steer users clear of controversial subjects extended to a worldwide pandemic and the Black Lives Matter movement in the United States.

Here’s what gaming site Polygon discovered after Nintendo issued a patch for Animal Crossing in September 2020:

According to Nintendo modder and tinkerer OatmealDome, Ver. 10.2.0 expands the number of banned words on the platform, including terms such as KKK, slave, nazi, and ACAB. The ban list also includes terms such as coronavirus and COVID. Polygon tested these words out while making a new user on a Nintendo Switch Lite and found that while they resulted in a warning message, the acronym BLM was allowed by the system. Most of these words seem to be a response to the current political moment in America.

Patricia Hernandez, Polygon

As this report from the Electronic Frontier Foundation notes, Nintendo often steers clear of political issues, even going so far as to ban the use of any of its online games for “political advocacy,” which resulted in the Prime Minister of Japan having to cancel a planned Animal Crossing in-game campaign event.

Company considerations:

  • How does limiting discussion of current/controversial events improve user experience? How does it adversely affect players seeking to interact?
  • How should companies respond to users who find creative ways to circumvent keyword blocking? 
  • How does a company decide which issues/terms should be blocked/muted when it comes to current events?

Issue considerations:

  • How should companies approach controversial issues that are of interest to some players, but may make other players uncomfortable? 
  • How can suppressing speech involving controversial topics adversely affect companies and their user bases?
  • How can Nintendo avoid being enlisted by governments to control speech about local controversies, given its willingness to preemptively moderate speech on issues of great interest to its user base?

Resolution: Nintendo continues its blocking of these terms, apparently hoping to steer clear of controversial issues. While this may be at odds with what players expect to be able to discuss with their online friends, it remains Nintendo’s playground where it gets to set the rules.

But, as the EFF discovered, moderation could be easily avoided by using variations that had yet to end up on Nintendo’s keyword blocklist.
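
As an illustration of why exact-match keyword blocking is so easy to route around, here is a minimal, hypothetical sketch of that general approach. The blocklisted terms below are taken from the Polygon report; the filtering logic itself is an assumption for illustration only, not Nintendo’s actual (non-public) implementation.

```python
# Hypothetical exact-match keyword filter -- an illustrative sketch, not
# Nintendo's actual implementation (which is not public).

BLOCKLIST = {"covid", "coronavirus", "kkk", "nazi", "slave", "acab"}

def contains_blocked_term(text: str) -> bool:
    """Return True if any blocklisted term appears as a whole word in the text."""
    words = (word.strip(".,!?") for word in text.lower().split())
    return any(word in BLOCKLIST for word in words)

# Exact matches trigger the filter...
print(contains_blocked_term("stay safe during covid"))     # True
# ...but trivial variations slip straight through.
print(contains_blocked_term("stay safe during c0vid"))     # False
print(contains_blocked_term("stay safe during covid-19"))  # False
```

Because a filter like this only knows the literal strings on its list, every misspelling, leetspeak substitution, or hyphenated variant has to be added after the fact, which helps explain why such blocklists tend to grow patch by patch while always trailing behind users.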

Originally posted to the Trust and Safety Foundation website.



Comments on “Content Moderation Case Study: Nintendo Blocks Players From Discussing COVID, Other Subjects (2020)”

Anonymous Coward says:

Re: Well...

I met a Take-Two rep at a PAX, long years ago. I mentioned that I really did not like Steam (for reasons). She replied, "well, we have a right to protect our product [through steam DRM], don’t we?"

My reply to her is much the same as my reply to "don’t they have the right to moderate what is on it?"

You sure do. And you "protect" people right out of buying it in the first place.

James Burkhardt (profile) says:

Re: Well...

Awesome, I get to break out this discussion of language again.

Nintendo has the legal right to moderate content on their platform. To say it simply, Nintendo can moderate mentions of BLM or Covid.

The case study doesn’t challenge that. Instead, as a case study, the article presents a situation where the legal obligations are clear and asks whether Nintendo should moderate this way. If you’ve spent any time here you should be familiar with the idea. Does this moderation really serve their goals? How does this moderation choice affect future engagement on the platform?

Limiting the discussion to Nintendo’s legal rights is disingenuous and in bad faith.

Anonymous Coward says:

Re: Re: Well...

Limiting the discussion to Nintendo’s legal rights is disingenuous and in bad faith.

Considering the average response that is given for a commentator’s opinion on this site when it comes to this subject matter, one that chides them for not choosing their words carefully enough, tries to intentionally construe the comment as a statement of law instead of opinion, ignores sarcasm, and in some cases projects bad faith for an opinion they rejected outright, the sarcasm is warranted.

The case study doesn’t challenge that.

Indeed it doesn’t. In fact it’s just about as bad as a mainstream media spin: "Well, company X banned communication of subject A," but voicing every statement as a question and showing off a curled eyebrow: "Is it the best way? What if someone more equal than others is offended? Could some other nefarious actor abuse it?"

the article presents a situation where legal obligations are clear and instead asks whether Nintendo should.

It’s painfully obvious as to what the site’s and its regulars’ opinion is when it comes to "free speech on the internet." No one who’s even remotely seen one of these articles could walk away from one still ignorant. Nor does the comment section ever play out any differently than the previous article’s. Yet, this site constantly drums up more articles like it, asking the same tired questions again and again, as if the site needs someone to reaffirm its confirmation bias.

TL;DR: If you want people to give you their opinion, don’t intentionally construe it as a statement of fact because you disagree with it.

PaulT (profile) says:

Re: Re: Re: Well...

"Considering the average response that is given for a commentator’s opinion on this site"

The average response is one of fact – Nintendo and other companies like it have every right to do things in a particular manner, but people are also free to criticise them and suggest that a different approach would be more constructive and successful in the long term.

I regularly see people whining or pretending that saying that Nintendo do something negative to their own business translates to people saying they don’t have the right to do such things, but I rarely see a valid counterargument. They do something that’s dumb or harmful to their own brand, other people tell them such.

"It’s painfully obvious as to what the site and it’s regulars opinion is when it comes to "free speech on the internet."

It is, but there’s a few people who try to pretend that what’s said is something other than what’s intended.

"Nor does the comment section ever play out any differently than the previous article’s. Yet, this site constantly drums up more articles like it"

The funny thing about fact-based responses is that they rarely change unless different facts are introduced, and only then when the facts are verifiable. If no such facts are introduced, then the reaction to the existing known facts won’t change, as the facts themselves haven’t changed.

"TL;DR: If you want people to give you their opinion, don’t intentionally construe it as a statement of fact because you disagree with it."

If you’re a regular reader then you should know that there’s a fairly dedicated group of individuals who will lie about the very words stated in one of these threads to pretend that something other than the intended opinions are being presented. The OP here seems to be making the false claim that there’s some hypocrisy between the regular pro-230 idea that a platform can moderate its own property and people criticising a platform for making the wrong moderation decision, but that’s not actually true.
