Moral Panics: Don't Blame Facebook Because Some Guy Posted His Murder Video There
from the get-a-grip dept
As you've probably heard by now, on Sunday a horrific act of violence happened when a clearly disturbed individual apparently decided to (1) randomly murder an elderly man walking down the street, (2) film the entire process, from searching for the guy, to approaching him, talking to him and then shooting him, and (3) upload it to Facebook for people to see. The police initially reported that he streamed the murder live, but it was later clarified that, while he had streamed some other commentary live earlier in the day, the murder was filmed separately and then uploaded. Still, as happens all too often in these situations, people are immediately jumping to the moral panic stage and asking, as Wired did soon after, what kind of responsibility Facebook should take. The title of the article says that Facebook "must now face itself" for streaming the murder -- but then seems to have trouble explaining just what it needs to face (perhaps because... there isn't anything for it to face).
And when the manhunt is over, and the grieving begins, so too will Facebook’s soul-searching.
Facebook is not the first media company to struggle with the prospect of unwittingly broadcasting violence. When news anchor Christine Chubbuck killed herself on live TV in 1974, the station was unable to stop the event from airing, but never showed the footage again. The number of viewers who actually saw the event was minimal. Facebook has taken similar steps, pulling Stephens’ video shortly after it was posted. “This is a horrific crime and we do not allow this kind of content on Facebook,” the company said in a statement. “We work hard to keep a safe environment on Facebook, and are in touch with law enforcement in emergencies when there are direct threats to physical safety.”
Uh, right. So... what else does anyone expect Facebook to do? It's not like it can magically stop murders. Or stop people from initially uploading or streaming a murder video. Yes, it can (and does) take those down, and it can (and does) block re-uploading. But to pin this on Facebook seems... really, really weird. It's almost as if whenever there's a murder people want to find someone or something else to blame other than the person doing the killing.
The article kind of admits, later on, that it's impossible for Facebook to do anything here... but that just raises the question of why write a whole article asking what Facebook should do if the answer is "uh, it can't and shouldn't do anything."
Facebook, of course, is a decentralized system, with millions of freelance “reporters” with unfettered access to the public. By the time the company removed the video, thousands had already watched it, and it lives on in other corners of the internet. Meanwhile, the company has resisted calls to use its algorithms to censor videos like this before they are ever posted–not just because it does not want to be accused of violating speech rights, but also because training computers to identify real-time or recent murder is hard. Facebook has long relied on an army of humans to scour videos uploaded to its site. With videos, and especially Live videos, that job goes from hard to impossible—not even Facebook employees can watch a video before it posts.
Currently, Facebook relies on other Facebook users to flag videos that need to be taken down. But that means that someone has to watch the horror before others can be spared it. The onus falls to the viewers, not the company, to determine what is appropriate, what should be shared, and what should be flagged for removal. Traditional media companies have finely-wrought guidelines and policies to help them make these decisions, but Facebook depends on us to do it.
But even after basically admitting that this is an impossibility, the article still then says:
And now it might very well be time for the company to roll up its own sleeves and get to work.
And get to work doing what exactly? Again, Facebook isn't going to stop a murder. And I don't care how good the AI gets, it's unlikely to be able to say any time soon "hey, that video is some person killing another person, don't stream that." There is no sleeve rolling to do on the Facebook side of the equation, and even exploring this question seems silly. Yes, senseless murders and violence lead people to go searching for answers, but sometimes there are no answers. And demanding answers from a random tool that was peripherally connected to the senseless violence doesn't seem helpful at all.