Behind The Scenes Look At How Facebook Dealt With Christchurch Shooting Demonstrates The Impossible Task Of Content Moderation

from the it's-not-that-it's-difficult,-it's-impossible dept

We’ve been saying for ages now that content moderation at scale is literally impossible to do well. It’s not “difficult.” It’s impossible. That does not mean that companies shouldn’t try to get better at it. They should and they are. But every choice involves real tradeoffs, and those tradeoffs can be significant and will upset some contingent who will have legitimate complaints. Too many people think that content moderation is so easy that just having a single person dedicated to reviewing content can solve the problem. That’s not at all how it works.

Professor Kate Klonick, who has done much of the seminal research into content moderation on large tech platforms, was given the opportunity to go behind the scenes and look at how Facebook dealt with the Christchurch shooting — an event the company was widely criticized over, with many arguing that they took too long to react, and let too many copies of the video slip through. As we wrote in our own analysis, it actually looked like Facebook did a pretty impressive job given the challenges involved.

Klonick, however, got to find out much more from the people actually involved, and has written up an incredible behind the scenes look at how Facebook dealt with the video for the New Yorker. The entire thing is worth reading, but I did want to highlight a few key points. The article details how Facebook has teams of people around the globe who are ready to respond and deal with any such “crisis,” but that doesn’t make the decisions they have to make any easier. One thing that’s interesting is that Facebook does have a policy that they should gather as much information as possible before making a call — because sometimes what you see at first may not tell the whole story:

The moderators have a three-step crisis-management protocol; in the first phase, “understand,” they spend as much as an hour gathering information before making any decisions. Jay learned that the shooter seemed to be trying to make the massacre go viral: he had posted links to a seventy-three-page manifesto, in which he espoused white-supremacist beliefs, and live-streamed one of the shootings on Facebook, in a video that lasted seventeen minutes and then remained on his profile. Jay forced himself to watch the video, and then to watch it again. “It’s not something I would ask others to do without having to watch it myself,” he said.

If you think it’s crazy that it might take up to an hour (I should note, this doesn’t mean they always wait an hour — just that it may take that long to gather the necessary info), Klonick demonstrates how the same basic fact pattern can present very different situations when understood in context. For example, you might think that a Facebook Live video of one man shooting and killing another probably shouldn’t be shown. But context matters. A lot.

Understanding context is one of the most difficult aspects of content moderation. Sometimes, a post seems clearly destructive. In April, 2017, Steve William Stephens, a vocational specialist, shot and killed Robert Godwin, Sr., an elderly black man who was walking on the sidewalk near his home in Cleveland. Stephens said, bafflingly, that he had decided to kill someone because he was mad at his ex-girlfriend, and posted a video of the killing on Facebook, where it remained for two hours before the company removed it. People were horrified by how long it stayed up….

The fact pattern there is straightforward. A black man was shot on Facebook live. Facebook should take it down, right? But…

But disturbing videos may not always be damaging. In July, 2016, Philando Castile, a black school-nutrition supervisor, was shot seven times by a police officer during a traffic stop in Minnesota. Castile’s girlfriend, Diamond Reynolds, live-streamed the aftermath, as Castile bled from his wounds and died after twenty minutes. The footage arrived amid a series of videos depicting police violence against black men but was striking because it was streamed live, which exempted it from claims that it had been edited by activists or the police department before it was released.

If the “rules” say no live video of a shooting, you block the first one… but also the second. Indeed, for a time, Facebook did block the second, but that resulted in a lot of (reasonable) complaints, and Facebook changed its mind, even though the basic fact patterns are the same.

Facebook initially removed the video, but then reinstated it with a content warning. To moderators looking at both, the videos might look similar—a grisly shooting of a black man in America—but the company eventually determined that the intentions behind the videos gave them distinct meaning: keeping up Reynolds’s video brought awareness to the systemic racism of the criminal-justice system, while taking down Stephens’s video silenced a murderer’s deranged homage to his ex-girlfriend.

In short: context matters a ton, and you don’t always get the context right away. Sometimes it’s very difficult to get at all. And the same video, in different contexts, can mean something quite different. This turned out to be part of the problem with the Christchurch video. Klonick details how simply removing all copies of the video raised questions about why some people were posting it:

This created an ethical tangle. While obvious bad actors were pushing the video on the site to spread extremist content or to thumb their noses at authority, many more posted it to condemn the attacks, to express sympathy for the victims, or because of the video?s newsworthiness. For consistency, and in deference to a request from the New Zealand government, the team deleted even these posts. The situation was a no-win for Facebook. Politicians were quick to condemn the company for the spread of extremism, and users who had posted the video in good faith felt unreasonably censored.

In other words, there are tradeoffs, and it’s a no-win situation. No matter which choice you make, some people are going to be (perhaps totally reasonably) upset about that decision.

And, of course, there were technical difficulties involved as well, though Facebook did move to try to minimize those:

By the time the handling of the Christchurch video switched to teams in the United States, some twelve hours after the shooting, moderators discovered a problem that they hadn’t encountered before at such a scale. When they tried to create a hash databank for the shooter’s video, users began purposefully or accidentally manipulating the video, creating slightly blurred or cropped versions that obscured the hash and could make it past Facebook’s firewall. Ahmed decided to try a new kind of hash technology that took a fingerprint from a vector of the video—its audio—which was likely to remain the same across different versions. This technique, combined with others, worked: in the first twenty-four hours, one and a half million copies of the video were removed from the site, with 1.2 million of those removed at the point of upload.
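(A brief technical aside: Facebook hasn’t published the details of that audio-based matching system, but the general approach is well known from the perceptual-hashing literature. Below is a minimal sketch, in Python, of one classic published technique, a Haitsma-Kalker-style audio fingerprint. Everything in it, from the band count to the toy signals, is an illustrative assumption rather than Facebook’s actual implementation.)

```python
# Minimal sketch of audio fingerprint matching, NOT Facebook's actual
# (unpublished) system. Idea: turn the audio track into a compact bit
# string derived from *relative* changes in band energy, then flag any
# upload whose bits sit within a small Hamming distance of a known-bad
# reference. Visual edits (blurring, cropping, re-encoding the picture)
# leave the audio, and therefore the fingerprint, largely unchanged.
import numpy as np

def audio_fingerprint(samples: np.ndarray, frame_size: int = 2048,
                      hop: int = 1024, n_bands: int = 17) -> np.ndarray:
    """Haitsma-Kalker-style fingerprint: one bit per adjacent band pair
    per frame step, from signs of energy differences over time/frequency."""
    window = np.hanning(frame_size)
    energies = []
    for start in range(0, len(samples) - frame_size, hop):
        spectrum = np.abs(np.fft.rfft(samples[start:start + frame_size] * window))
        bands = np.array_split(spectrum ** 2, n_bands)  # n_bands frequency bands
        energies.append([float(b.sum()) for b in bands])
    energies = np.asarray(energies)
    # Bit(t, m) = 1 iff the energy gap between bands m and m+1 grew between
    # frame t-1 and frame t. Signs survive volume and codec changes well.
    band_diff = np.diff(energies, axis=1)    # across frequency
    time_delta = np.diff(band_diff, axis=0)  # across time
    return (time_delta > 0).astype(np.uint8).ravel()

def hamming_fraction(a: np.ndarray, b: np.ndarray) -> float:
    """Fraction of differing bits: ~0.0 for matching audio, ~0.5 for unrelated."""
    n = min(len(a), len(b))
    return np.count_nonzero(a[:n] != b[:n]) / n

# Toy check: a fixed-seed noise burst stands in for a real soundtrack.
def make(seed: int, scale: float = 1.0) -> np.ndarray:
    return scale * np.random.default_rng(seed).normal(size=5 * 22050)

reference = make(1)                   # the "known-bad" original audio
reupload = reference + make(2, 0.05)  # same audio after a lossy re-upload
unrelated = make(3)                   # some other video's audio

ref_fp = audio_fingerprint(reference)
print(hamming_fraction(ref_fp, audio_fingerprint(reupload)))   # small: a match
print(hamming_fraction(ref_fp, audio_fingerprint(unrelated)))  # ~0.5: no match
```

At Facebook’s scale, the linear bit comparison above would be replaced by indexed lookups over a databank of billions of fingerprints, but the matching principle is the same: near-identical bits mean near-identical audio, no matter how blurred or cropped the picture is.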

In short, there are lots of good reasons to complain about Facebook and to hate on the company. And it often does a bad job with its moderation efforts (though they have gotten much better). But part of the problem is that when you’re doing moderation at that scale, mistakes are going to be made — and some of those mistakes are going to be a big deal — and some may be because of a lack of context.

Assuming that there’s some magic wand that can be waved (as Australia, the UK, and the EU have suggested in recent days — not to mention some US politicians) suggests a world that does not exist. It is not helpful to demand that companies magically do something that is impossible, especially when the underlying problem is driven by the fact that human beings aren’t always good people. A more serious look at the issue of people doing bad stuff online should start with the bad people and what they’re doing, not with blaming social media for being used as a tool to broadcast the bad things.

Companies: facebook


Comments on “Behind The Scenes Look At How Facebook Dealt With Christchurch Shooting Demonstrates The Impossible Task Of Content Moderation”

72 Comments
Mason Wheeler (profile) says:

from the things-i-never-thought-i-would-ever-say dept.

One thing that’s interesting is that Facebook does have a policy that they should gather as much information as possible before making a call — because sometimes what you see at first may not tell the whole story

Which, surprisingly, makes Facebook significantly better at handling controversial videos than the Washington Post.

PaulT (profile) says:

Re: Re:

"Twitter can’t even distinguish white supremacists from elected GOP officials"

Erm, we might need some names, because in some cases there is no difference: they are white supremacists.

Sure, there are probably some who aren’t, who just "accidentally" happen to belong to some KKK-style groups and/or share their material, but there’s a non-zero number of GOP officials who are exactly that themselves.

Anonymous Coward says:

Re: Re: Re:

That was rather the point being made. Something along the lines of "If we moderated white supremacist content the same way we moderate ISIS content, then we’d have to take down the content of a lot of mainstream Republican politicians in the USA (for example), and certainly a lot of activity from their supporters, because white supremacy is mainstream US politics." And they can’t do that because the Republicans will shout and scream about their tweets being taken down.

It’s an interesting problem…

Woody Bosk says:

Re: Re: Re:2 No, the left should be asking why they're Communists.

Whereas, they probably should be asking "why am I politically aligned with neo-Nazis?"

Lood Gord, that’s a crude and feeble smear.

You (the left) set yourself up as possessed of all moral virtue and divide society into two classes, but of course since it’s YOU and your Establishment indoctrination, you can’t see that you’re no different from Brown Shirts except in small details.

Oh, and you’re one of those odious "out of the blue" elitists who PAY to put your notions above others.

Anonymous Coward says:

Re: Re: Re:3 No, the left should be asking why they're Communists.

So you’re saying it’s virtuous to align oneself with the Neo-Nazis? If "the left" believes otherwise and they’re not the virtuous camp, then by extension the white supremacist politicians are virtuous. You are, in effect, saying that it’s ok to be a white supremacist and spread racist hate. If you really believe that and that anyone who disagrees with you is wrong, well… Humanity would like to have a word with you.

And you don’t have to be a "leftist" to think <insert color here> supremacism is wrong.

Stephen T. Stone (profile) says:

Re: Re: Re:3

Last time I checked, actual White supremacists endorse Republicans far more than they endorse Democrats; in return, Republicans allow racists and xenophobes such as Steve King, Stephen Miller, and Steve Bannon (wow that’s a lot of Steves) to enter the party and influence its policies. Democrats are not angels by any stretch of the imagination, but they are also not linked to White supremacists on a regular basis, nor do their policies endear them to White supremacists.

Also:

you’re one of those odious "out of the blue" elitists who PAY to put your notions above others

You can keep trying this smear, Blue Balls, but it will never take. We all know what “out of the blue” refers to, and it is neither the First/Last Word nor the Debbie Gibson song.

Anonymous Coward says:

Re: Re: Re:5 Maybe take a break from being a cop holster and read a book?

Something he called “the greatest mistake I ever made,” adding, “I know now I was wrong. Intolerance had no place in America. I apologized a thousand times … and I don’t mind apologizing over and over again.”

So maybe you might want to take that bit back and read a bit of history, bro.

PaulT (profile) says:

Re: Re: Re:3 No, the left should be asking why they're Communists.

Actually, I’d be mocking the left-wingers who whine because their groups espousing Pol Pot were taken down as well. But I’m not seeing those, only the people complaining that literal Nazis being taken down is an affront against them.

But, hey, when you’re dumb enough to believe there are only 2 points on the political spectrum, I can forgive your comments reflecting that stupidity.

Anonymous Coward says:

Re: Re: Re:3 No, the left should be asking why they're Communists.

"You (the left) set yourself up as possessed of all moral virtue and divide society into two classes, but of course since it’s YOU and your Establishment indoctrination, can’t see that you’re no different from Brown Shirts except in small details."

"Lood Gord, that’s crude and feeble smear."

Funny how you accurately replied to your own comment.

Project much?

Stephen T. Stone (profile) says:

Re: Re: Re:

To quote Splinter News:

Putting aside Twitter’s obvious ass-covering, let’s get to the real problem here: There is a critical mass of GOP politicians who are spouting content virtually indistinguishable from what’s coming from white supremacists. … Put more simply: Twitter might have a white supremacist problem, but so does American politics.

That One Guy (profile) says:

Re: Or we could not do that

Congrats, you just created hundreds of millions of small, non-connected (because what’s allowed on one may not be allowed on others, hence you need to silo) platforms, where maybe a few dozen people can post in each one, making them about as useful as pre-internet social gatherings.

Good luck finding the funding to pay for all those platforms, not to mention the hundreds of millions of moderators willing to deal with them.

That One Guy (profile) says:

'If you REALLY tried/cared, you could make 2+2 equal 27'

Assuming that there’s some magic wand that can be waved (as Australia, the UK, and the EU have suggested in recent days — not to mention some US politicians) suggests a world that does not exist

As I’ve noted in the past, ‘Everything is easy when you don’t have to do it’.

It costs the politicians nothing to claim that it can be done, because they know that they aren’t the ones who will actually have to put their (impossible) demands into practice, and when the attempts fail they can simply double-down on blaming the companies for not trying hard enough, as all the while they get to preen and brag to the gullible about how they’re Doing Something.

Anonymous Coward says:

do you think that any politician is actually interested in how impossible it is to moderate content, regardless of what that content is? once the ‘Grandstanding’ starts, all every politician is after is the number of points they can gain. it’s even worse when the real intention is to take control of the Internet, stopping the spread of ‘news’ about what they and their mega rich friends are up to while ensuring that they know everything about the ordinary people, no matter which country they are in. what is happening, it seems to me, is what was wanted in WWII, i.e., dominate the Planet without firing a shot!!

Anonymous Coward says:

Of course it’s possible to moderate content. It’s impossible to do a perfect job of it, but they can do a better job than they are now.

For example, they do a pretty good job of keeping child porn off of Facebook. You can definitely find examples that slip through occasionally, but by and large, you don’t encounter it regularly.

Woody Bosk says:

The original video forgotten, now to defend the corporation.

1) Don’t need "context": there’s no fine points and no loss if taken down before seen all the way through. Facebook employees dithering over this is ridiculous. — IF ever found to be worthwhile news, it could be restored. The premise that must be allowed "live" is just stupid. Facebook thereby created much problem they now congratulate themselves for fixing.

2) We have only the story of highly-paid Facebook employees long after events, as doubtless coached by highly-paid PR and lawyer fiends, made to a friendly (no doubt somehow rewarded too) academic clearly intending to put out favorable PR. Not a single bit of the story can be trusted. It’s just PR.

3) Facebook may have done this clean-up at the urging of the NZ gov’t, which wanted it VERY MUCH, as proved by outlawing the video with a ten-year sentence. So Facebook is due no accolades.

4) Again Masnick is saying that video of terrorist murders should stay up, but Alex Jones is too dangerous and should be "deplatformed" everywhere and every way. Complete contradiction: political speech within common law versus meaningless actual gore that’s always been prohibited.

Gary (profile) says:

Re: Nonsense speech

Complete contradiction: political speech within common law versus meaningless actual gore that’s always been prohibited.

Good morning Blue-Balls!! And thanks for your silly words of Common Law.

Since I know you stay up late reading every post I make, let me take this opportunity to point out that you are unable to cite any free speech sites that meet your criteria for your Command Law.

Also, your love of Corporations is obvious because you get a hardon every time a copyright case comes up and corporate censorship wins.

So cite or suck it up. You use the word "proof" in almost every post, but you still don’t seem to understand what that means either.

Woody Bosk says:

You're saying "Nerd Harder" can make the bad idea work.

I premise that:

A) First, sheerly mechanical: it’s a bad idea for every yahoo and "terrorist" to be able to quickly gain world-wide publicity. It’s fundamental. You cannot fix it with review, context or not, only with the "prior restraint" that society has long used to prohibit and regulate every prior medium.

B) There are of course underlying and over-arching problems in that moral restraints are eroded everywhere. This too cannot be fixed after the fact but requires the "prior restraint" of fairly unified societal opinion. The best way to control all excesses is to start with The Rich: keep constraints on their income (especially to limit where they profit from destroying civil society) and their ability to influence legislators, else the present mess is what happens.

Masnick has conflicting premises:

1) That social media and more communication are inherently good, without drawbacks.

2) Being an Extreme Libertarian (at the least) he actually assumes that there are no bad actors anywhere in economics or society. No amount of actual experience can sway him from this, even though here he directly states that there are such, because to him premise 1 is overwhelming. — His stating that here is only to take the heat off social media.

3) For of course his key constant: MONEY! Sex and violence bring highest audience to advertising for highest income, therefore must be no restraints on corporations.

Clearly social media is now proven a bad idea. It required from the start new law such as CDA Section 230 to create immunity that print publishers never had.

(Again I wonder how "Communications Decency Act" became permission to host EVERY type of previously prohibited content? Evidently it was intended that the good parts be found Un-Constitutional while the corporate-empowering part was kept. At the very least, that’s the bad result and why it needs changing.)

Stephen T. Stone (profile) says:

Re:

Engaging in some form of prior restraint would ruin the usability of a service such as Twitter. No one would bother posting anything on Twitter if it had to be reviewed before publication, even if the post is something innocuous and otherwise unremarkable like “going to see [movie title]”. Implementing prior restraint of any kind would also turn the platform into an honest-to-God arbiter of speech like you claim they already are; most tweets are punished after being posted, but prior restraint would punish them before they are posted. And while “moral restraints” may be “eroded” (which is not a universal fact that applies to all peoples), prior restraint of speech deemed “unpopular” or “immoral” would do us no favors — after all, defending a gay person’s right to exist and participate in society is still “unpopular” and “immoral” among a not-zero number of people.

That social media and more communication are inherently good, without drawbacks.

The fundamental premise of social interaction networks, regardless of form, is neither good nor evil in general terms. SINs can be used to spread positivity or negativity; they can be designed for optimal user experiences, capitalist bullshit, or anything in between. The key is how people use/design a SIN — which makes people, not the SIN or its underlying technology, the real “problem”.

47 U.S.C. § 230 was implemented because lawmakers foresaw how people could, would, and eventually did find ways to abuse Internet communications. Publishers did not need (and still do not need) 230 protections because they did not offer services such as Twitter or Facebook. They publish words after holding them for “prior restraint” (i.e., fact-checking and editing), and they do not open up their publications in a way where anyone could contribute anything. SINs act as a form of “instant” communication; to engage in prior restraint with thousands — millions! — of posts every day is both an exercise in futility and a surefire way to piss off large swaths of the userbase at large.

Oh, and 230 applies to corporations, but it also applies to regular jackoffs like me and you. If you host an Internet forum and a third party posts a death threat against a celebrity or politician, 230 gives you the right to delete that threat, ban the user, and report them to the authorities without fear of legal liability for both the threat and the moderation thereof. That corporations make more use of 230 because they are bigger targets with bigger legal teams and bigger bank accounts is irrelevant.

230 does not need changing and SINs do not need to implement prior restraint. If you can explain exactly why 230 is an overall bad law that needs repealing and how a SIN could implement prior restraint without destroying itself in the process, you would be the first.

That One Guy (profile) says:

Re: Re: Re:

What’s particularly funny about them in particular supporting the idea of pre-screening content is that they’ve regularly lost their minds when their comments get caught in the spam filter and/or flagged by the community, making clear how much of a hypocrite they are in demanding that everyone else deal with what they themselves aren’t willing to accept.

Cdaragorn (profile) says:

Re: You're saying "Nerd Harder" can make the bad idea work.

So you’ve based your entire argument on 3 premises that are easily proven false by this very article, to say nothing of everything else Masnick has written on this site. There’s literally no point in even trying to talk with you about any conclusion you’ve made based on those.

Mike Masnick (profile) says:

Re: You're saying "Nerd Harder" can make the bad idea work.

Masnick has conflicting premises:

Oh boy, oh boy, oh boy. Can’t wait to find out what these are.

1) That social media and more communication are inherently good, without drawbacks.

Have never argued this, nor do I believe it. I believe — as I’ve said many times — that social media has many drawbacks, but also many benefits. There are significant tradeoffs.

2) Being an Extreme Libertarian (at the least) he actually assumes that there are no bad actors anywhere in economics or society.

Wait. Don’t you keep calling me a leftist? Now you’re saying I’m an extreme libertarian? Which is it?

And, uh, no, I believe that there are many bad actors, and that’s everywhere: on social media, in economics and society.

3) For of course his key constant: MONEY! Sex and violence bring highest audience to advertising for highest income, therefore must be no restraints on corporations.

Man. Strike three. You didn’t get a single one right. I think that money as the sole driving force of corporate success is a huge problem (and at some point I was planning a post on the evils of "fiduciary duty to investors"). And I’ve never said there should be no restraints on corporations — though I’m wary of restraints that lead to worse outcomes, ones that harm the public or competition.

You misrepresent nearly everything I actually believe in. Maybe stop responding to the strawman in your head of me and deal with what I actually believe.

Bamboo Harvester (profile) says:

Intent?

Ridiculous.

Ban posting of videos showing actual killings and there’s no "ethical tangle".

And the whole "some may have posted it in protest over the killing" argument is ludicrous as well. Chop the audio and any "commentary" from the vid and it’s the same damned thing – someone dying.

Doesn’t matter if it’s a black guy being shot by a cop or a white child being raped to death by black muslims.

It’s a killing. "Intent" of the poster has zero bearing.

Cdaragorn (profile) says:

Re: Intent?

So it’s never of any value whatsoever to see proof of someone committing a terrible act. That’s an incredibly stupid and narrow-minded view of anything, but especially of speech (which video of something most definitely is).

The fact that you don’t agree with seeing it does not magically make the value others see in having it visible wrong. In fact you haven’t even bothered to explain why you think it’s wrong for it to be there. You seem to have jumped to the conclusion that everyone should "obviously" agree with you.

Your assumption was not correct.

Bamboo Harvester (profile) says:

Re: Re: Intent?

Your words, not mine.

But if "intent" is to be the basis, if the shooter streamed it to his imam and the imam posted it to Facebook "in protest" because "not all muslims are like this", it’d be ok by you?

Helluva way for a family to be notified their kid or parent was killed – live streamed on Facebook…

Bamboo Harvester (profile) says:

Re: Re: Intent?

Nice diversion.

Facebook considered "ethical reasons" and "intent of posting" as part of their "should we delete it" policy.

So if I post a video of a person being shot in the head without comment or audio, is that ok?

If I leave the audio on and the victim is screaming racial epithets at the shooter, is that ok?

If I leave the audio on and the shooter is screaming racial epithets at the victim, is that ok?

Oh, sorry. YOU don’t get to make that decision. Facebook makes it FOR you.

And now whines that "it’s so hard to determine reason or intent…."

Anonymous Coward says:

Ahmed decided to try a new kind of hash technology that took a fingerprint from a vector of the video—its audio—which was likely to remain the same across different versions. This technique, combined with others, worked: in the first twenty-four hours, one and a half million copies of the video were removed from the site, with 1.2 million of those removed at the point of upload.

Oh look, exactly what I suggested doing the morning after.

Anyway yeah, it’s impossible to automatically and proactively prevent content like this from being streamed, but audible gunshots and screaming must be worth a yellow flag. If Facebook (and the other social media giants) demonstrated even the most basic of their countermeasures to the public, they would probably get less flak about this.
