Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs those decisions involve. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Facebook Targets Misinformation Spread By The Philippines Government (2020)

from the misinformation-challenges dept

Summary: Philippines president Rodrigo Duterte’s rise to power was greatly aided by Facebook and its overwhelming popularity within the country. An estimated 97% of Filipinos have Facebook accounts and the company itself co-sponsored a Q&A session with local journalists that was broadcast on 200 radio and television stations and livestreamed on the platform. Questions were crowdsourced from Facebook users, helping propel the mayor of Davao to the highest office in the country.

Duterte’s run for office was also directly assisted by Facebook, which flew a team of reps in to help the candidate’s campaign staff maximize the platform’s potential. As his campaign gathered critical mass, he and his team began weaponizing the tools handed to him by Facebook, spreading misinformation about other candidates and directly targeting opponents and their supporters with harassment and threats of violence.

Not much has changed since Duterte took office in 2016. Facebook continues to be his preferred social media outlet. But Facebook’s latest attempt to tackle the spread of misinformation on its platform may prompt Duterte to find another outlet to weaponize. In September 2020, Facebook’s moderation team announced they had removed a “network” linked to the Philippines government for violating its rules against “coordinated inauthentic behavior.”

We also removed 64 Facebook accounts, 32 Pages and 33 Instagram accounts for violating our policy against foreign or government interference which is coordinated inauthentic behavior on behalf of a foreign or government entity. This network originated in the Philippines and focused on domestic audiences. (Updated on October 12, 2020 at 6:35PM PT to reflect the latest enforcement numbers.)

Facebook’s removal of this content prompted an immediate response from President Duterte, who reminded the company that he has the power to shut the platform down in his country if he believes he’s being treated unfairly.

“I allow you to operate here,” Mr. Duterte said. “You cannot bar or prevent me from espousing the objectives of government. Is there life after Facebook? I don’t know. But we need to talk.”

Questions and policy implications to consider:

  • Does targeting official government accounts increase the risk of the platform being banned or blocked in targeted countries?
  • Does the possible loss of market share affect moderation decisions targeting governments?
  • Should Facebook be directly involved in setting up social media campaigns for political figures/government entities?
  • Does Facebook have any contingency plans in place to mitigate collateral damage to citizens in countries where the platform has been subjected to retaliatory actions by governments whose content/accounts have been removed?

Resolution: Facebook’s Philippines office offered no comment in response to Duterte’s threats, and the targeted accounts remain deactivated. Facebook is still operating in the country and, although Duterte warned that harsher regulation may be in the works, nothing has surfaced to date.

Originally posted to the Trust & Safety Foundation website.



Comments on “Content Moderation Case Study: Facebook Targets Misinformation Spread By The Philippines Government (2020)”

Anonymous Coward says:

"Summary: Philippines president Rodrigo Duterte’s rise to power was greatly aided by Facebook and its overwhelming popularity within the country. An estimated 97% of Filipinos have Facebook accounts and the company itself co-sponsored a Q&A session with local journalists that was broadcast on 200 radio and television stations and livestreamed on the platform. Questions were crowdsourced from Facebook users, helping propel the mayor of Davao to the highest office in the country. "

Should be a hint that the site isn’t exactly as private as people want to make it out to be.

Anonymous Coward says:

Re: Re:

But in all seriousness, while it’s good they’re removing troll accounts, I don’t think anyone should trust Facebook to be the arbiter of misinformation, either. The fact that 97% of the country has a Facebook account shows that it’s a fixture in their electoral system, whether or not people want to waffle on whether it’s a private company that can do whatever it wants.

I’m sure this will be the norm in coming elections.

Anonymous Coward says:

Re: Re: Re:

Anyone who trusts a single outlet for news or anything else is being foolish.

The statistic comparing Facebook accounts to population says nothing about whether those account holders use FB exclusively as their news source.

Forcing FB to regulate content in accordance with your opinions is not going to produce the desired effect(s).

Stephen T. Stone (profile) says:

Re: Re:

The fact that 97% of the country has a Facebook account shows that it’s a fixture in their electoral system, whether or not people want to waffle on whether it’s a private company that can do whatever it wants.

Your point is…what, exactly? Unless Facebook is receiving money from the government of the Philippines to act as an arm of the government, it isn’t a government actor. Wanting the opposite conclusion to be a fact doesn’t make it a fact.

Yes, the societal power wielded by Facebook is at least worrisome. That said: The people who use Facebook for their top (and possibly only) news source have to take responsibility for that choice. Facebook isn’t forcing people to do that.

Anonymous Coward says:

Re: Re: Re: Re:

To the people saying Facebook is responsible for the Rohingya genocide and that it is worrisome that it can exercise this much control: would you please make up your minds already!

Is the problem that they exercise too much control or not enough of it? Is the issue that they are stronger than governments or that they cave to authoritarian regimes? There can be fertile debates about which stance is better, but only madness results from asking for two contradictory actions at once.
