Content Moderation Case Study: Facebook Responds To A Live-streamed Mass Shooting (March 2019)
from the live-content-moderation dept
Summary: On March 15, 2019, the unimaginable happened. A Facebook user — utilizing the platform’s live-streaming option — filmed himself shooting mosque attendees in Christchurch, New Zealand.
By the end of the attacks, the shooter had killed 51 people and injured 49. Only the first attack was live-streamed, but Facebook was unable to end the stream before it had been viewed by a few hundred users and shared by a few thousand more.
The stream was removed by Facebook almost an hour after it appeared, thanks to user reports. The moderation team began working immediately to find and delete re-uploads by other users. Violent content is generally a clear violation of Facebook’s terms of service, but context does matter. Not every video of violent content merits removal, but Facebook felt this one did.
The delay in response was partly due to limitations in Facebook’s automated moderation efforts. As Facebook admitted roughly a month after the shooting, the shooter’s use of a head-mounted camera made it much more difficult for its AI to make a judgment call on the content of the footage.
Facebook’s efforts to keep this footage off the platform continue to this day. The footage has migrated to other platforms and file-sharing sites — an inevitability in the digital age. Even with moderators knowing exactly what they’re looking for, platform users are still finding ways to post the shooter’s video to Facebook. Some of this is due to the sheer number of uploads moderators are dealing with. The Verge reported the video was re-uploaded 1.5 million times in the 48 hours following the shooting, with 1.2 million of those automatically blocked by moderation AI.
Decisions to be made by Facebook:
- Should the moderation of live-streamed content involve more humans if algorithms aren’t up to the task?
- When live-streamed content is reported by users, are automated steps in place to reduce visibility or sharing until a determination can be made on deletion?
- Will making AI moderation of livestreams more aggressive result in over-blocking and unhappy users?
- Do the risks of allowing content that can’t be moderated prior to posting outweigh the benefits Facebook gains from giving users this option?
- Is it realistic to “draft” Facebook users into the moderation effort by giving certain users additional moderation powers to deploy against marginal content?
Questions and policy implications to consider:
- Given the number of local laws Facebook attempts to abide by, is allowing questionable content to stay “live” still an option?
- Does newsworthiness outweigh local legal demands (laws, takedown requests) when making judgment calls on deletion?
- Does the identity of the perpetrator of violent acts change the moderation calculus (for instance, a police officer shooting a citizen, rather than a member of the public shooting other people)?
- Can Facebook realistically speed up moderation efforts without sacrificing the ability to make nuanced calls on content?
Resolution: Facebook reacted quickly to user reports, terminating the livestream and the user’s account. It then began the never-ending work of taking down uploads of the recording by other users. It also changed its rules governing livestreams in hopes of deterring future incidents: the new guidelines provide for temporary and permanent bans of users who livestream content that violates Facebook’s terms of service, and prevent those accounts from buying ads. The company also continues to invest in improving its automated moderation efforts in hopes of preventing streams like this from appearing on users’ timelines.
Filed Under: case study, christchurch, content moderation, live streaming, new zealand, shooting
Companies: facebook
Comments on “Content Moderation Case Study: Facebook Responds To A Live-streamed Mass Shooting (March 2019)”
Unimaginable because it’s in New Zealand, or what? Mass shootings happen several times a year in the USA, and it doesn’t take a lot of imagination to think of streaming them.
It’s terrible, but I’m sure I’ve seen several fictional movies with plots like this, not to mention the real-life antecedents. The Wikipedia "Live streaming crime" page shows 5 incidents in 2017 (including murder, suicide, and gang rape).
They’re psychopaths, not idiots. They don’t respect rules, and I don’t imagine they have many plans for the future.
This seems like the 1970s snuff-film moral panic all over again.
Re: Re:
"Unimaginable because it’s in New Zealand, or what?"
Yes. Gun crime is very rare in some countries, even if it’s background noise where you are. Most New Zealanders would never have dreamed of it happening there.
"They’re psychopaths, not idiots."
They’re referring to the idiots who restream the acts, not the people perpetrating them.
"This seems like the 1970s snuff-film moral panic all over again."
No, unlike snuff films these actually exist, and there’s fairly good evidence that the 8chan types who do these things are in part encouraged by the extra exposure they get from such activity.
Authoritarian Apologist?
ALL these posts about the futility of Content Moderation are tiresome, and REEK of extremist apologia. Is the poster a member of a group planning murderous violence? It seems so. Or has the poster been banned from Twitter too many times to count? Content Moderation is REQUIRED in a civil society, and those who oppose all forms of it are dangerous.
Re: Authoritarian Apologist?
If you had followed this site for any length of time, you would know that the majority here support moderation; it is mainly those on the extreme right, frustrated at being unable to push their racist viewpoints, who object to any moderation.
Besides which, just how do you stop things like the livestream under discussion from appearing without eliminating live streaming and requiring that all content be pre-moderated? Doing both would silence the majority of the Internet and destroy useful services like Zoom.
In other words, just how do you propose to successfully moderate all the conversation of the human race?
Re: Re: Authoritarian Apologist?
Without a universally accepted set of societal norms, this is an impossible task. The best one can accomplish is the moderation of individual social groups.
Beyond that, there are moral and ethical concerns dealing with demanding the silencing of others not directly involved in the conversation. One such concern is the limitation of human progress by forbidding certain modes of thought. Another is the risk of a grievance becoming a criminal act against society due to society’s unwillingness to listen. Of course the one concern people are most familiar with is the destruction of political discourse and the detrimental effect it has on a society "of the people" when such discourse is limited to only approved talking points.
Different societies have different norms and what’s acceptable discourse to some is unheard of and offensive to others. Trying to apply pervasive moderation to the entire species, when that species hasn’t yet agreed on a set of norms, will very likely prohibit that species from ever doing so. Even if a species does have universally accepted norms, applying pervasive moderation may very well lead that species to ruin.
Re: Re: Re: Authoritarian Apologist?
"Without a universally accepted set of societal norms, this is an impossible task. The best one can accomplish is the moderation of individual social groups"
This is exactly the point. When people say "well, just hire more people," the people hired would have to include, by definition, representatives of every religious, socioeconomic, and cultural background. Letting them individually come up with moderation criteria would never produce any level of consistency, so you have to come up with some centralised, neutral criteria. That would never be acceptable to everyone.
OK, so automate it. Then you have the problem that algorithms can never understand subjective information, which makes up the majority of what they are moderating. So you double your problems: not only do you have the central "neutral" criteria, but you have a moderator incapable of understanding why, say, a gory shot from Evil Dead 2 or Monty Python is funny while a real-life dismemberment is unacceptable, and there are far subtler disagreements than those.
There’s no easy answer, but I fear that people as dense as the AC above believe that it is, so people who understand reality will long be in conflict with people who believe in magic.
Re: Authoritarian Apologist?
"ALL these posts about the futility of Content Moderation are tiresome"
Not anywhere near as tiresome as the idiots who think that there’s a magic wand that will perfectly moderate content without collateral damage.
"Content Moderation is REQUIRED in a civil society, and those who oppose all forms of it are dangerous."
Good thing nobody here opposes it, then. It’s just noted that it’s impossible to do perfectly at scale, especially with something like streamed live video.
Re: Authoritarian Apologist?
Pardon?
These case studies are designed to be extremely neutral. We are outlining what happened. Companies face these decisions every day, and they are often challenging, raise complex questions, trigger unforeseen side effects, or just don’t go well. We’re documenting these kinds of incidents to help understand the challenges of content moderation and highlight the difficult tradeoffs, so it can be done better – not to make the case that it’s "futile".
Content moderation is never going to be easy or simple. These case studies aim to help people navigate it.
Re: Authoritarian Apologist?
ALL these posts about the futility of Content Moderation are tiresome, and REEK of extremist apologia.
Huh?
Is the poster a member of a group planing murderous violence? It seems so.
What?!?
Or has the poster been banned from Twitter too many times to count?
Nope.
Content Moderation is REQUIRED in a civil society, and those who oppose all forms of it are dangerous.
Neutrally written case studies on how different content moderation challenges were handled, highlighting some of the tradeoffs and key issues… written to help people better understand content moderation… makes you think that we’re arguing AGAINST content moderation?
Also, have you ever read this site?
Your reading comprehension filters are in need of a reboot, buddy.
Re: Authoritarian Apologist?
Content Moderation is REQUIRED in a civil society, and those who oppose all forms of it are dangerous.
Wait, who’s the authoritarian here?
Re: Re: Authoritarian Apologist?
Plus, that statement is literally self-contradictory and impossible, thus nonsensical. Quick! Wear a hat fully on your head and don’t have a hat touching your head at the same time!