from the it's-not-that-it's-difficult,-it's-impossible dept
We’ve been saying for ages now that content moderation at scale is literally impossible to do well. It’s not “difficult.” It’s impossible. That does not mean that companies shouldn’t try to get better at it. They should, and they are. But every choice involves real tradeoffs, and those tradeoffs can be significant and will upset some contingent with legitimate complaints. Too many people think that content moderation is so easy that just having a single person dedicated to reviewing content can solve the problem. That’s not at all how it works.
Professor Kate Klonick, who has done much of the seminal research into content moderation on large tech platforms, was given the opportunity to go behind the scenes and look at how Facebook dealt with the Christchurch shooting — an event the company was widely criticized over, with many arguing that it took too long to react and let too many copies of the video slip through. As we wrote in our own analysis, it actually looked like Facebook did a pretty impressive job given the challenges involved.
Klonick, however, got to find out much more from the people actually involved, and has written up an incredible behind-the-scenes look at how Facebook dealt with the video for the New Yorker. The entire thing is worth reading, but I did want to highlight a few key points. The article details how Facebook has teams of people around the globe who are ready to respond to any such “crisis,” but that doesn’t make the decisions they have to make any easier. One interesting thing is that Facebook has a policy that moderators should gather as much information as possible before making a call — because sometimes what you see at first may not tell the whole story:
The moderators have a three-step crisis-management protocol; in the first phase, “understand,” they spend as much as an hour gathering information before making any decisions. Jay learned that the shooter seemed to be trying to make the massacre go viral: he had posted links to a seventy-three-page manifesto, in which he espoused white-supremacist beliefs, and live-streamed one of the shootings on Facebook, in a video that lasted seventeen minutes and then remained on his profile. Jay forced himself to watch the video, and then to watch it again. “It’s not something I would ask others to do without having to watch it myself,” he said.
If you think it’s crazy that it might take up to an hour (I should note, this doesn’t mean they always wait an hour — just that gathering the necessary info may take that long), Klonick demonstrates how the same basic fact pattern may present very different situations when understood in context. For example, you might think that a Facebook Live video of one man shooting and killing another probably shouldn’t be shown. But context matters. A lot.
Understanding context is one of the most difficult aspects of content moderation. Sometimes, a post seems clearly destructive. In April, 2017, Steve William Stephens, a vocational specialist, shot and killed Robert Godwin, Sr., an elderly black man who was walking on the sidewalk near his home in Cleveland. Stephens said, bafflingly, that he had decided to kill someone because he was mad at his ex-girlfriend, and posted a video of the killing on Facebook, where it remained for two hours before the company removed it. People were horrified by how long it stayed up….
The fact pattern there is straightforward. A black man was shot on Facebook live. Facebook should take it down, right? But…
But disturbing videos may not always be damaging. In July, 2016, Philando Castile, a black school-nutrition supervisor, was shot seven times by a police officer during a traffic stop in Minnesota. Castile’s girlfriend, Diamond Reynolds, live-streamed the aftermath, as Castile bled from his wounds and died after twenty minutes. The footage arrived amid a series of videos depicting police violence against black men but was striking because it was streamed live, which exempted it from claims that it had been edited by activists or the police department before it was released.
If the “rules” say no live video of a shooting, you block the first one… but you also block the second. Indeed, for a time, Facebook did block the Castile video, but that resulted in a lot of (reasonable) complaints, and Facebook changed its mind — even though the basic fact patterns are the same.
Facebook initially removed the video, but then reinstated it with a content warning. To moderators looking at both, the videos might look similar — a grisly shooting of a black man in America — but the company eventually determined that the intentions behind the videos gave them distinct meaning: keeping up Reynolds’s video brought awareness to the systemic racism of the criminal-justice system, while taking down Stephens’s video silenced a murderer’s deranged homage to his ex-girlfriend.
In short: context matters a ton, and you don’t always get the context right away. Indeed, sometimes it’s very difficult to get the context. And, the same video in different contexts can be quite different. Indeed, this turned out to be some of the problem with the Christchurch video. Klonick details how just removing all copies of the video raised some questions about why some people were posting it:
This created an ethical tangle. While obvious bad actors were pushing the video on the site to spread extremist content or to thumb their noses at authority, many more posted it to condemn the attacks, to express sympathy for the victims, or because of the video’s newsworthiness. For consistency, and in deference to a request from the New Zealand government, the team deleted even these posts. The situation was a no-win for Facebook. Politicians were quick to condemn the company for the spread of extremism, and users who had posted the video in good faith felt unreasonably censored.
In other words, there are tradeoffs, and it’s a no-win situation. No matter which choice you make, some people are going to be (perhaps totally reasonably) upset about that decision.
And, of course, there were technical difficulties involved as well, though Facebook did move to minimize those:
By the time the handling of the Christchurch video switched to teams in the United States, some twelve hours after the shooting, moderators discovered a problem that they hadn’t encountered before at such a scale. When they tried to create a hash databank for the shooter’s video, users began purposefully or accidentally manipulating the video, creating slightly blurred or cropped versions that obscured the hash and could make it past Facebook’s firewall. Ahmed decided to try a new kind of hash technology that took a fingerprint from a vector of the video — its audio — which was likely to remain the same across different versions. This technique, combined with others, worked: in the first twenty-four hours, one and a half million copies of the video were removed from the site, with 1.2 million of those removed at the point of upload.
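The core idea in that excerpt (match content by a robust fingerprint rather than an exact file hash) can be illustrated with a toy sketch. To be clear, this is not Facebook's actual system: `audio_fingerprint` below is a deliberately simplified stand-in for real perceptual-hashing algorithms, and it operates on a made-up list of audio samples. The point is only that a byte-level cryptographic hash breaks on any re-encoding, while a coarse fingerprint of the audio survives small perturbations:

```python
import hashlib

def exact_hash(data: bytes) -> str:
    """Exact cryptographic hash: any re-encoding or crop changes it."""
    return hashlib.sha256(data).hexdigest()

def audio_fingerprint(samples, frame_size=4):
    """Toy perceptual fingerprint: one bit per frame transition, set
    when average frame energy rises relative to the previous frame."""
    frames = [samples[i:i + frame_size]
              for i in range(0, len(samples), frame_size)]
    energies = [sum(abs(s) for s in f) / len(f) for f in frames]
    bits = 0
    for prev, cur in zip(energies, energies[1:]):
        bits = (bits << 1) | (1 if cur > prev else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def matches(fp_a: int, fp_b: int, threshold: int = 2) -> bool:
    """Fuzzy match: fingerprints within a small bit-distance."""
    return hamming(fp_a, fp_b) <= threshold

# A hypothetical original upload and a slightly perturbed re-upload
# (e.g. re-encoded: the bytes change, the audio is nearly identical).
original = [0, 0, 5, 5, 9, 9, 2, 2, 7, 7, 1, 1, 8, 8, 3, 3]
reupload = [0, 1, 5, 6, 9, 8, 2, 3, 7, 6, 1, 2, 8, 7, 3, 2]

# A byte-level blocklist misses the re-upload entirely...
print(exact_hash(bytes(original)) == exact_hash(bytes(reupload)))  # False

# ...but the perceptual fingerprints still fall within the threshold.
print(matches(audio_fingerprint(original), audio_fingerprint(reupload)))  # True
```

Real systems, such as Facebook's open-sourced PDQ/TMK hashing, derive far more sophisticated features, but the matching logic is the same: compare fingerprints within a distance threshold instead of requiring exact equality, so that blurred, cropped, or re-encoded copies still get caught.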
In short, there are lots of good reasons to complain about Facebook and to hate on the company. And it often does a bad job with its moderation efforts (though it has gotten much better). But part of the problem is that when you’re doing moderation at that scale, mistakes are going to be made — some of those mistakes are going to be a big deal, and some will happen because of a lack of context.
Assuming that there’s some magic wand that can be waved (as Australia, the UK, and the EU have suggested in recent days — not to mention some US politicians) imagines a world that does not exist. It is not helpful to demand that companies magically solve a problem that is impossible to solve — and that is driven, at root, by the fact that human beings aren’t always good people. A more serious look at the issue of people doing bad stuff online should start with the bad people and what they’re doing, not with blaming social media for being used as a tool to broadcast the bad things.
Filed Under: christchurch, content moderation, context
Companies: facebook