How Facebook's Racial Segmentation Is Helping Trump Campaign Try To Suppress African American Voting
from the where-are-the-adults dept
In her remarkable opinion, Judge Motz strongly suggests that North Carolina’s law was indeed racist. The day following the release of Shelby County, she noted, a GOP leader in the state legislature announced his intention to write a law that the feds would have no authority to vet before it went into effect. Like laws in other Republican states, the North Carolina bill imposed a tough new photo-ID requirement. But it did much more: the law eliminated same-day voter registration and pre-registration for high-school students about to turn 18, curtailed early voting by one week and banned out-of-precinct voting.

That, alone, was pretty stunning, but they still tried to pretend in public that the law wasn't about suppressing the vote. However, when speaking with a Bloomberg reporter, the Trump campaign flat out bragged about its efforts to suppress the vote among African Americans. And they're using extreme targeting on Facebook to do so:
Each of these new rules disproportionately impacted black voters seeking to exercise the franchise, as legislators in North Carolina were well aware. “[P]rior to enactment” of the law, the Fourth Circuit explained, “the legislature requested and received racial data as to usage of the practices changed by the proposed law.” Released from the obligation to clear their law with the Justice department and “with race data in hand, the legislature amended the bill to exclude many of the alternative photo IDs used by African Americans.” Photo IDs used more often by black voters, including public assistance IDs, were removed from the list of acceptable identification, while IDs issued by the Department of Motor Vehicles—which blacks are less likely to have—were retained. Cutting the first week of early voting came in reaction to data showing that the first seven days were used by large numbers of black voters, nixing one Sunday on which churches would bus “souls-to-the-polls”. Banning same-day registration, too, had an outsize effect on blacks, as did the prohibition on out-of-precinct voting: both changes made voting harder for people who had recently moved, and blacks are more itinerant than whites.
Instead of expanding the electorate, Bannon and his team are trying to shrink it. “We have three major voter suppression operations under way,” says a senior official. They’re aimed at three groups Clinton needs to win overwhelmingly: idealistic white liberals, young women, and African Americans. Trump’s invocation at the debate of Clinton’s WikiLeaks e-mails and support for the Trans-Pacific Partnership was designed to turn off Sanders supporters. The parade of women who say they were sexually assaulted by Bill Clinton and harassed or threatened by Hillary is meant to undermine her appeal to young women. And her 1996 suggestion that some African American males are “super predators” is the basis of a below-the-radar effort to discourage infrequent black voters from showing up at the polls—particularly in Florida.

Now that's... interesting (and ridiculous, but we'll leave that aside for the moment). Of course, every election cycle involves a ton of targeted "negative advertising" that is designed to suppress overall interest in a candidate. But the two newsworthy things here are (1) the fact that the Trump campaign is directly admitting to the intention behind that strategy, rather than hiding it, and (2) the ability to use Facebook to target these kinds of campaigns at a level previously not available.
On Oct. 24, Trump’s team began placing spots on select African American radio stations. In San Antonio, a young staffer showed off a South Park-style animation he’d created of Clinton delivering the “super predator” line (using audio from her original 1996 sound bite), as cartoon text popped up around her: “Hillary Thinks African Americans are Super Predators.” The animation will be delivered to certain African American voters through Facebook “dark posts”—nonpublic posts whose viewership the campaign controls so that, as Parscale puts it, “only the people we want to see it, see it.” The aim is to depress Clinton’s vote total. “We know because we’ve modeled this,” says the official. “It will dramatically affect her ability to turn these people out.”
Facebook, somewhat famously, allows extraordinarily targeted advertising. We've played around with it ourselves, and it's really quite incredible how granular you can go in trying to target your ads. Basically any trait or interest or demographic group that you can think of, you can put into an ad target group. At times, as you dig through the options, it almost feels like Facebook showing off just how much data and insight it has into its users. It's a data nerd's dream, where you can slice and dice billions of people by basically anything.
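To make the "slice and dice" point concrete, here's a rough sketch in Python of how an advertiser might compose a narrowly targeted audience. All the field and segment names here are made up for illustration -- this is loosely modeled on the general shape of an ad-targeting spec, not Facebook's actual API:

```python
# Hypothetical sketch of a granular ad-targeting spec.
# Field and segment names are illustrative, not Facebook's real API.

def build_audience(include=None, exclude=None, **demographics):
    """Compose a targeting spec from inclusion/exclusion segment lists
    plus arbitrary demographic filters."""
    spec = {"demographics": demographics}
    if include:
        spec["include_segments"] = list(include)
    if exclude:
        spec["exclude_segments"] = list(exclude)
    return spec

# An advertiser can stack traits almost arbitrarily -- and note that
# excluding a group is just another list alongside everything else:
audience = build_audience(
    include=["interest:model_trains", "behavior:frequent_traveler"],
    exclude=["affinity:some_group"],
    age_min=25, age_max=40, region="US:FL",
)
```

The point of the sketch is how little distinguishes a benign filter from a troubling one: exclusion by affinity group sits in the same data structure as targeting model-train enthusiasts.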
Of course, it's somewhat ironic that the Trump campaign is using Facebook to suppress the vote at the same time that Facebook is patting itself on the back for helping to get out the vote with its voter registration campaign, and, in the past, has directly experimented with changing newsfeeds to encourage more voter turnout. Platforms like Facebook can be used for both good and evil.
Either way, sometimes the data nerds (and the advertising folks) have to be reminded of the law. ProPublica has a pretty damning report out today about the fact that Facebook's slicing and dicing of targeted advertising also means that you can exclude people by race. They don't discuss the recent revelations about the Trump campaign's targeting, but it's pretty clear that this is how the suppression campaign described above is being done. It also presents a potentially serious legal problem in areas where it is illegal to discriminate based on race, such as hiring or housing. And yet, Facebook's current setup allows advertisers to do just that -- for example, running a housing ad that excludes people based on race.
The ProPublica article quotes a civil rights lawyer who is reasonably horrified by this. But there are some big legal questions. From the data geek side of things, you can easily see how Facebook reached this point, continually slicing up data in more and more ways, without necessarily considering the consequences. But does that make Facebook legally liable for, say, violating the Fair Housing Act? That's... a much tougher question.
Facebook argues (1) that its policies say that advertisers cannot discriminate in illegal ways, and anyone caught doing so will face punishment. (2) Facebook is likely protected by Section 230 of the CDA on this. I say "likely" instead of "definitely" because one of the few cases that cut through the CDA 230 protections is the famous Roommates.com case, which was explicitly about racial discrimination in housing, based on Roommates.com ads that violated the Fair Housing Act. However, Facebook has a much stronger argument than Roommates did, because part of the issue in that case was that Roommates directly asked users for a racial preference, making it content the site had designed, rather than content the user created. Facebook can (reasonably) argue that it was just offering up millions of ways to slice and dice the data, rather than explicitly calling out racial preference. (3) Facebook says the rules are not based on "race" but "racial affinity." This is a dumb argument and Facebook should not say it ever again, and possibly apologize for even bringing up such a lame argument in the first place.
Separately, Facebook argues -- correctly -- that there are lots of cases where advertisers have perfectly legitimate reasons for targeting based on race.
Satterfield said it’s important for advertisers to have the ability to both include and exclude groups as they test how their marketing performs. For instance, he said, an advertiser “might run one campaign in English that excludes the Hispanic affinity group to see how well the campaign performs against running that ad campaign in Spanish. This is a common practice in the industry.”

That said, there's simply no reason that Facebook couldn't put in a system to recognize ads in a protected category where discrimination may be an issue, and either block such usage or at least show the advertiser a strong warning (and alert the Facebook team to review the ad more carefully -- since all ads are reviewed before going live). It's not clear that there's a legal mandate to do so, but it just seems like good practice in general. I've seen lots of people commenting on this story who are rightfully horrified about the potential abuse of such a tool, and they're quick to blame Facebook's "negligence." It seems more like carelessness than negligence, in that you can see how the company got here, as it continued to allow greater and greater targeting attributes, which advertisers really appreciate.
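The kind of guardrail suggested above wouldn't need to be sophisticated. Here's a minimal sketch in Python -- with a toy rule and hypothetical category and segment names, not anything Facebook actually runs -- of flagging any ad in a legally protected category (housing, employment, credit) that also excludes an affinity-based segment, so it gets held for closer human review:

```python
# Toy guardrail: flag ads in legally protected categories that exclude
# audiences by affinity segment. Category and segment names are hypothetical.

PROTECTED_CATEGORIES = {"housing", "employment", "credit"}

def needs_review(ad_category, excluded_segments):
    """Return True if the ad should be held for closer human review."""
    if ad_category not in PROTECTED_CATEGORIES:
        return False
    return any(seg.startswith("affinity:") for seg in excluded_segments)

# A housing ad excluding an affinity group gets flagged...
assert needs_review("housing", ["affinity:group_a"]) is True
# ...while the English/Spanish A/B test on a retail ad does not.
assert needs_review("retail", ["affinity:group_a"]) is False
```

A real system would need a far better way of classifying ads into protected categories than trusting an advertiser-supplied label, but even this crude check would catch the obvious case ProPublica describes while leaving the legitimate A/B testing use case alone.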