Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs they involve. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Facebook Responds To A Live-streamed Mass Shooting (March 2019)

from the live-content-moderation dept

Summary: On March 15, 2019, the unimaginable happened. A Facebook user — utilizing the platform’s live-streaming option — filmed himself shooting mosque attendees in Christchurch, New Zealand.

By the end of the shooting, the shooter had killed 51 people and injured 49. Only the first shooting was live-streamed, but Facebook was unable to end the stream before it had been viewed by a few hundred users and shared by a few thousand more.

The stream was removed by Facebook almost an hour after it appeared, thanks to user reports. The moderation team began working immediately to find and delete re-uploads by other users. Violent content is generally a clear violation of Facebook’s terms of service, but context does matter. Not every video of violent content merits removal, but Facebook felt this one did.

The delay in response was partly due to limitations in Facebook’s automated moderation efforts. As Facebook admitted roughly a month after the shooting, the shooter’s use of a head-mounted camera made it much more difficult for its AI to make a judgment call on the content of the footage.

Facebook’s efforts to keep this footage off the platform continue to this day. The footage has migrated to other platforms and file-sharing sites — an inevitability in the digital age. Even with moderators knowing exactly what they’re looking for, platform users are still finding ways to post the shooter’s video to Facebook. Some of this is due to the sheer number of uploads moderators are dealing with. The Verge reported the video was re-uploaded 1.5 million times in the 24 hours following the shooting, with more than 1.2 million of those blocked at upload by moderation AI.
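
The scale of that automated filtering hints at how re-upload detection typically works: platforms fingerprint the known footage and compare new uploads against those fingerprints. What follows is a minimal sketch of one common technique, perceptual hashing of video frames against a blocklist. The function names and matching threshold are illustrative assumptions; Facebook's actual matching systems are proprietary and far more sophisticated.

```python
# Minimal sketch of blocklist matching via perceptual hashing (dHash).
# All names and the threshold here are illustrative, not Facebook's system.
from PIL import Image

def dhash(image: Image.Image, hash_size: int = 8) -> int:
    """Difference hash: shrink, grayscale, compare horizontally adjacent pixels."""
    img = image.convert("L").resize((hash_size + 1, hash_size), Image.LANCZOS)
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Count of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def frame_matches_blocklist(frame: Image.Image, blocklist: set[int],
                            threshold: int = 10) -> bool:
    """Flag a frame whose hash lands within `threshold` bits of a known-bad hash.
    Re-encodes, crops, and filters perturb some bits, so exact equality is too strict."""
    h = dhash(frame)
    return any(hamming(h, bad) <= threshold for bad in blocklist)
```

In a pipeline along these lines, sampled frames from each new upload are hashed and checked against the blocklist, with a match routing the upload to automatic blocking or human review. The fuzzy Hamming-distance comparison is the point: it is what catches the re-encoded and lightly edited copies that exact matching would miss.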

Decisions to be made by Facebook:

  • Should the moderation of live-streamed content involve more humans if algorithms aren’t up to the task?
  • When live-streamed content is reported by users, are automated steps in place to reduce visibility or sharing until a determination can be made on deletion?
  • Will making AI moderation of livestreams more aggressive result in over-blocking and unhappy users?
  • Do the risks of allowing content that can’t be moderated prior to posting outweigh the benefits Facebook gains from giving users this option?
  • Is it realistic to “draft” Facebook users into the moderation effort by giving certain users additional moderation powers to deploy against marginal content?

Questions and policy implications to consider:

  • Given the number of local laws Facebook attempts to abide by, is allowing questionable content to stay “live” still an option?
  • Does newsworthiness outweigh local legal demands (laws, takedown requests) when making judgment calls on deletion?
  • Does the identity of the perpetrator of violent acts change the moderation calculus (for instance, a police officer shooting a citizen, rather than a member of the public shooting other people)?
  • Can Facebook realistically speed up moderation efforts without sacrificing the ability to make nuanced calls on content?

Resolution: Facebook reacted quickly to user reports and terminated the livestream and the user’s account. It then began the never-ending work of taking down uploads of the recording by other users. It also changed its rules governing livestreams in hopes of deterring future incidents. The new guidelines provide for temporary and permanent bans of users who livestream content that violates Facebook’s terms of service, as well as preventing those accounts from buying ads. The company also continues to invest in improving its automated moderation efforts in hopes of preventing streams like this from appearing on users’ timelines.
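
To make those new enforcement rules concrete, the sketch below models them as a per-account strike policy. It is an illustration only: the class name, the 30-day restriction length, and the two-strike threshold for a permanent ban are assumptions, since Facebook has not published exact numbers.

```python
# A hypothetical model of the strike policy described above.
# Thresholds and ban lengths are assumptions, not Facebook's published rules.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class LiveStrikePolicy:
    temp_ban_days: int = 30     # assumed length of a temporary restriction
    permanent_after: int = 2    # assumed strike count that triggers a permanent ban
    strikes: dict[str, list[datetime]] = field(default_factory=dict)

    def record_violation(self, user_id: str, when: datetime) -> str:
        """Record a TOS-violating livestream and return the enforcement action."""
        history = self.strikes.setdefault(user_id, [])
        history.append(when)
        if len(history) >= self.permanent_after:
            # Permanent bans also cut off ad purchases, per the new guidelines.
            return "permanent ban; ad purchases disabled"
        until = when + timedelta(days=self.temp_ban_days)
        return f"temporarily restricted from Live until {until:%Y-%m-%d}"

# A first offense draws a temporary restriction; a second, a permanent ban.
policy = LiveStrikePolicy()
print(policy.record_violation("user123", datetime(2019, 5, 1)))
print(policy.record_violation("user123", datetime(2019, 7, 1)))
```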



Comments on “Content Moderation Case Study: Facebook Responds To A Live-streamed Mass Shooting (March 2019)”


This comment has been flagged by the community.

Anonymous Coward says:

On March 15, 2019, the unimaginable happened. A Facebook user — utilizing the platform’s live-streaming option — filmed himself shooting mosque attendees in Christchurch, New Zealand.

Unimaginable because it’s in New Zealand, or what? Mass shootings happen several times a year in the USA, and it doesn’t take a lot of imagination to think of streaming them.

It’s terrible, but I’m sure I’ve seen several fictional movies with plots like this, not to mention the real-life antecedents. The Wikipedia "Live streaming crime" page shows 5 incidents in 2017 (including murder, suicide, and gang rape).

It also changed its rules governing livestreams in hopes of deterring future incidents. The new guidelines provide for temporary and permanent bans of users who livestream content that violates Facebook’s terms of service

They’re psychopaths, not idiots. They don’t respect rules, and I don’t imagine they have many plans for the future.

This seems like the 1970s snuff-film moral panic all over again.

PaulT (profile) says:

Re: Re:

"Unimaginable because it’s in New Zealand, or what?"

Yes. Gun crime is very rare in some countries, even if it’s background noise where you are. Most New Zealanders would never have dreamed of it happening there.

"They’re psychopaths, not idiots."

They’re referring to the idiots who restream the acts, not the people perpetrating them.

"This seems like the 1970s snuff-film moral panic all over again."

No, unlike snuff films these actually exist, and there’s fairly good evidence that the 8chan types who do these things are in part encouraged by the extra exposure they get from such activity.

This comment has been flagged by the community.

ObserverInPA says:

Authoritarian Apologist?

ALL these posts about the futility of Content Moderation are tiresome, and REEK of extremist apologia. Is the poster a member of a group planning murderous violence? It seems so. Or has the poster been banned from Twitter too many times to count? Content Moderation is REQUIRED in a civil society, and those who oppose all forms of it are dangerous.

Anonymous Coward says:

Re: Authoritarian Apologist?

If you had followed this site for any length of time, you would know that the majority here support moderation; it is mainly the extreme right, who have problems pushing their racist viewpoints, who object to any moderation.

Besides which, just how do you stop things like the live stream under discussion from appearing without eliminating live streaming and requiring that all content be pre-moderated? Doing both would silence the majority on the Internet and destroy useful services like Zoom.

In other words, just how do you propose to successfully moderate all the conversation of the human race?

Anonymous Coward says:

Re: Re: Authoritarian Apologist?

In other words, just how do you propose to successfully moderate all the conversation of the human race?

Without a universally accepted set of societal norms, this is an impossible task. The best one can accomplish is the moderation of individual social groups.

Beyond that, there are moral and ethical concerns about demanding the silencing of others not directly involved in the conversation. One such concern is the limitation of human progress by forbidding certain modes of thought. Another is the risk of a grievance becoming a criminal act against society due to society’s unwillingness to listen. Of course, the concern people are most familiar with is the destruction of political discourse, and the detrimental effect that has on a society "of the people" when such discourse is limited to only approved talking points.

Different societies have different norms and what’s acceptable discourse to some is unheard of and offensive to others. Trying to apply pervasive moderation to the entire species, when that species hasn’t yet agreed on a set of norms, will very likely prohibit that species from ever doing so. Even if a species does have universally accepted norms, applying pervasive moderation may very well lead that species to ruin.

PaulT (profile) says:

Re: Re: Re: Authoritarian Apologist?

"Without a universally accepted set of societal norms, this is an impossible task. The best one can accomplish is the moderation of individual social groups"

This is exactly the point. When people say "well, just hire more people", the group hired would have to include, by definition, people from every religious, socioeconomic, and cultural background. Letting them individually come up with moderation criteria would never produce any level of consistency, so you have to come up with some centralised neutral criteria. That would never be acceptable to everyone.

OK, so automate it. Then you have the problem that algorithms can never understand subjective information, which is the majority of what they are moderating. So you double your problems: not only do you have the central "neutral" criteria, but you have a moderator incapable of understanding why, say, a gory shot from Evil Dead 2 or Monty Python is funny but a real-life dismemberment is unacceptable, and there are far subtler disagreements than those.

There’s no easy answer, but I fear that people as dense as the AC above believe there is, so people who understand reality will long be in conflict with people who believe in magic.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Authoritarian Apologist?

"ALL these posts about the futility of Content Moderation are tiresome"

Not anywhere near as tiresome as the idiots who think that there’s a magic wand that will perfectly moderate content without collateral damage.

"Content Moderation is REQUIRED in a civil society, and those who oppose all forms of it are dangerous."

Good thing nobody here opposes it, then. It’s just noted that it’s impossible to do perfectly at scale, especially with something like streamed live video.

This comment has been deemed insightful by the community.
Leigh Beadon (profile) says:

Re: Authoritarian Apologist?

Pardon?

These case studies are designed to be extremely neutral. We are outlining what happened. Companies face these decisions every day, and they are often challenging, raise complex questions, trigger unforeseen side effects, or just don’t go well. We’re documenting these kinds of incidents to help understand the challenges of content moderation and highlight the difficult tradeoffs, so it can be done better – not to make the case that it’s "futile".

Content moderation is never going to be easy or simple. These case studies aim to help people navigate it.

Mike Masnick (profile) says:

Re: Authoritarian Apologist?

ALL these posts about the futility of Content Moderation are tiresome, and REEK of extremist apologia.

Huh?

Is the poster a member of a group planning murderous violence? It seems so.

What?!?

Or has the poster been banned from Twitter too many times to count?

Nope.

Content Moderation is REQUIRED in a civil society, and those who oppose all forms of it are dangerous.

Neutrally written case studies on how different content moderation challenges were handled, highlighting some of the tradeoffs and key issues… written to help people better understand content moderation… makes you think that we’re arguing AGAINST content moderation?

Also, have you ever read this site?

Your reading comprehension filters are in need of a reboot, buddy.

