Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs that come with them. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Facebook Knew About Deceptive Advertising Practices By A Group That Was Later Banned For Operating A Troll Farm (2018-2020)

from the election-advertising-moderation dept

Summary:

In the lead-up to the 2018 midterm elections in the United States, progressive voters in seven competitive races in the Midwest were targeted with a series of Facebook ads urging them to vote for Green Party candidates. The ads, which came from a group called America Progress Now, included images of and quotes from prominent progressive Democrats, including Bernie Sanders and Alexandria Ocasio-Cortez, with the implication that these politicians supported voting for third parties.

The campaign raised eyebrows for a variety of reasons: two of the featured candidates stated that they had not approved the ads, nor had they said or written the supposed quotes that ran alongside their photos, and six of the candidates stated that they had no connection with the group. The office of Senator Sanders asked Facebook to remove the campaign, calling it “clearly a malicious attempt to deceive voters.” Most notably, an investigation by ProPublica and VICE News revealed that America Progress Now was not registered with the Federal Election Commission, nor was any such organization present at the address listed on its Facebook page.

In response to Senator Sanders’ office, and in a further statement to ProPublica and VICE, Facebook stated that it had investigated the group and found no violation of its advertising policies or community standards.

Two years later, during the lead-up to the 2020 presidential election, an investigation by The Washington Post revealed a “troll farm”-type operation directed by Rally Forge, a digital marketing firm with connections to Turning Point Action (an affiliate of the conservative youth group Turning Point USA), in which teenagers were recruited and directed to post pro-Trump comments under false identities on both Facebook and Twitter. Following this revelation, both companies removed multiple accounts, and Facebook permanently banned Rally Forge.

As it turned out, these two apparently separate incidents were in fact closely connected: an investigation by The Guardian in June of 2021, aided in part by Facebook whistleblower Sophie Zhang, discovered that Rally Forge had been behind the America Progress Now ads in 2018. Moreover, Facebook had been aware of the source of the ads and their deceptive nature, and of Rally Forge’s connection to Turning Point, when it determined that the ads did not violate its policies. The company did not disclose these findings at the time. Internal Facebook documents, seen by The Guardian, recorded concerns raised by a member of Facebook’s civic integrity team, noting that the ads were “very inauthentic” and “very sketchy.” In the Guardian article, Zhang asserted that “the fact that Rally Forge later went on to conduct coordinated inauthentic behavior with troll farms reminiscent of Russia should be taken as an indication that Facebook’s leniency led to more risk-taking behavior.”

Company considerations:

  • What is the best way to address political ads that are known to be intentionally deceptive but do not violate specific advertising policies?
  • What disclosure policies should be in place for internal investigations that reveal the questionable provenance of apparently deceptive political ad campaigns?
  • When a group is known to have engaged in deceptive practices that do not violate policy, what additional measures should be taken to monitor the group in case future actions involve escalations of deceptive and manipulative tactics?

Issue considerations:

  • How important should the source and intent of political ads be when determining whether or not they should be allowed to remain on a platform, as compared to the content of the ads themselves?
  • At what point should apparent connections between a group that violates platform policies and a group that did not directly engage in the prohibited activity result in enforcement actions against the latter group?

Resolution:

A Facebook spokesperson told The Guardian that the company had “strengthened our policies related to election interference and political ad transparency” in the time since its 2018 investigation, which had found no violations by America Progress Now. The company also introduced a new policy aimed at increasing transparency about the operators of networks of Facebook Pages.

Rally Forge and one of its page administrators remain permanently banned from Facebook following the 2020 troll farm investigation. Turning Point USA and Turning Point Action deny any involvement in the specifics of either campaign, and Facebook has taken no direct enforcement action against either group.

Originally posted to the Trust and Safety Foundation website.

Companies: facebook


Comments on “Content Moderation Case Study: Facebook Knew About Deceptive Advertising Practices By A Group That Was Later Banned For Operating A Troll Farm (2018-2020)”

That One Guy (profile) says:

'We need to be HEAVILY regulated.' -Mark Zuckerberg

I can’t help but suspect that if the group had been using pics of Facebook execs and attributing dishonest quotes to them in favor of some political position the company didn’t agree with, that would have been enough for them to find something to nail them on. But since it was just a bunch of liars trying to convince people not to vote Democrat, it was seen as no big deal.
