Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs they result in. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Pinterest's Moderation Efforts Still Leave Potentially Illegal Content Where Users Can Find It (July 2020)

from the pin-this dept

Summary: Researchers at OneZero have been following and monitoring Pinterest’s content moderation efforts for several months. The “inspiration board” website hosts millions of images and other content uploaded by users.

Pinterest’s moderation efforts are somewhat unusual. Very little content is actually removed, even when it might violate the site’s guidelines. Instead, as OneZero researchers discovered, Pinterest has chosen to prevent the content from surfacing by blocking certain keywords from generating search results.
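To make the mechanism concrete, here is a minimal sketch of what search-time keyword suppression can look like. This is an illustration only: Pinterest has not published its implementation, and the function names, data structures, and blocklist below are assumptions.

```python
# Hypothetical sketch of search-time keyword suppression (not Pinterest's actual code).
# Suppressed content is never deleted; it is only filtered out of the site's own search results.

BLOCKED_KEYWORDS = {"example-blocked-term"}  # placeholder for whatever terms a site blocks

def search(query: str, all_pins: list) -> list:
    """Return pins matching the query, unless the query contains a blocked keyword."""
    terms = set(query.lower().split())
    if terms & BLOCKED_KEYWORDS:
        return []  # the query is suppressed, but the matching pins remain hosted
    return [pin for pin in all_pins if terms & set(pin["keywords"])]
```

Because the filter lives only in this one code path, anything that reaches the content another way (a direct URL, a user’s own page, or an external search engine crawling the site) bypasses it entirely, which is the gap OneZero describes below.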

The problem, as OneZero noted, is that hiding content and blocking keywords doesn’t actually prevent users from finding questionable content. Some of this content includes images that sexually exploit children.

While normal users may never see this using Pinterest’s built-in search tools, users more familiar with how search functions work can still access content Pinterest feels violates its guidelines, but hasn’t actually removed from its platform. By navigating to a user’s page, logged-out users can perform searches that seem to bypass Pinterest’s keyword-blocking. Using Google to search the site — instead of the site’s own search engine — can also surface content hidden by Pinterest.

Pinterest’s content moderation policy appears to be mostly hands-off. Users can upload nearly anything they want, with the company deleting (and reporting) only clearly illegal content. For everything else that’s questionable (or potentially harmful to other users), Pinterest opts for suppression rather than deletion.

“Generally speaking, we limit the distribution of or remove hateful content and content and accounts that promote hateful activities, false or misleading content that may harm Pinterest users or the public’s well-being, safety or trust, and content and accounts that encourage, praise, promote, or provide aid to dangerous actors or groups and their activities,” Pinterest’s spokesperson said of the company’s guidelines.

Unfortunately, users who manage to bypass keyword filters or otherwise stumble across buried content will likely find themselves directed to other buried content. Pinterest’s algorithms surface content related to whatever users are currently viewing, potentially leading users even deeper into the site’s “hidden” content.
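A similar sketch shows why a related-content recommender can undercut suppression: if the “hidden” flag is consulted only in the search path, the recommendation path will happily surface hidden items related to whatever a user is already viewing. Again, this is a hypothetical illustration, not Pinterest’s actual system.

```python
# Hypothetical related-content lookup that never checks the suppression flag.
def related_pins(current_pin: dict, all_pins: list, limit: int = 10) -> list:
    """Rank other pins by keyword overlap with the pin currently being viewed."""
    scored = []
    for pin in all_pins:
        if pin["id"] == current_pin["id"]:
            continue
        overlap = len(set(pin["keywords"]) & set(current_pin["keywords"]))
        if overlap:
            scored.append((overlap, pin))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    # Nothing here checks pin.get("suppressed"), so once a user lands on one
    # hidden pin, related hidden pins are recommended alongside it.
    return [pin for _, pin in scored[:limit]]
```

One of the decisions listed below is effectively whether this recommendation path should apply the same filter the search path does.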

Decisions to be made by Pinterest:

  • Is hiding content effective in steering users away from subject matter/content Pinterest would rather they didn’t access?
  • Would deletion — rather than hiding — result in affected users leaving the platform?
  • Is questionable content a severe enough problem that the company should rethink its moderation protocols?
  • Should “related content” algorithms be altered to prevent the surfacing of hidden content?

Questions and policy implications to consider:

  • Does hiding — rather than removing — content potentially encourage users to use this invisibility to engage in surreptitious distribution of questionable or illegal content?
  • Does the possibility of hidden content resurfacing steer ad buyers away from the platform?
  • Will this approach to moderation — hidden vs. deletion — remain feasible as pressure for sites to aggressively police misinformation and “fake news” continues to mount?

Resolution: Pinterest’s content moderation strategy remains mostly unchanged. As the site’s spokesperson indicated, the company appears to feel that hiding content addresses most of the concerns raised, even if it allows more determined users to locate content the site would rather they never saw.



Comments on “Content Moderation Case Study: Pinterest's Moderation Efforts Still Leave Potentially Illegal Content Where Users Can Find It (July 2020)”

Tanner Andrews (profile) says:

secret code words

Evidently they have secret code words for otherwise objectionable content. So, you search using "5g" for unreliable health information, or "we go all" to learn about the undesirability of darker-complected persons. And there is a different secret keyword, not mentioned in the article, for potentially illegal dirty pictures.

It seems a fair trade off to me. If you want that content, you can have it. And if not, well.

The problem comes up when you get suppressed content because it is related to your actual search. You go looking for information about car phones, and that leads to "5g", which leads to the unreliable health information, or, worse, information about the over-hyping and under-delivery of phone service.

Perhaps a bug fix is in order. Unless you expressly ask for the suppressable information, normal exploring does not bring it up. Getting this “right” is what computer scientists call an NP-hairy problem.

Bruce C. says:

Re: secret code words

The underlying problem is the "illegal" part. That means either a) geo-blocking (with its own limitations at least as bad as moderation), b) conforming to the requirements of the most restrictive governments (which probably conflict) or c) having a take-down policy.

On the other hand, for "merely objectionable" content, Pinterest’s model will probably hold up for most scenarios short of active trolling. Things could easily fall apart if some genius decides to start using politicians’ names as code words for extreme adult content.

Anonymous Coward says:

Re: Re: secret code words

Sounds like the old April Fools’ Day joke about setting the "evil bit" for malware and spam.

A cousin "naughty bit" could potentially work merely for "poor taste/advertiser unfriendly content" that lets you post say dead baby jokes without it being included being included in the general searches nit for people who know what they are looking for but such coexisting tends to work poorly with "moralists" who feel such content shouldn’t be there and disagreement with who falls on what line would be controversial being other. We already saw that with "gay or lesbian" being considered adult content just from porn search term collisions and the rough consensus is it offensive to call families not family friendly for having two moms or dads.

I wonder if, in a silly political ass-covering move, Pinterest would be better off allowing curation-algorithm weight codes that are as explicit and transparent as possible, based on an account’s training, along with the ability to copy one, revert to the default, or paste one from another user. That way, if people start passing around their own curator that amounts to some unholy misinformation bubble, Pinterest could wash its hands of it as a user-generated algorithm and bubble, not theirs.

Empowering users this way, to try to head off any potential claims of responsibility, certainly wouldn’t be appreciated by detractors, even if it defused the shallow "their algorithm is deliberately causing people to do bad things/they have too much power!" talking point.

