The Internet Giant's Dilemma: Preventing Suicide Is Good; Invading People's Private Lives... Not So Much

from the you-make-the-call dept

We've talked a lot in the past about the impossibility of doing content moderation well at scale, but it's sometimes difficult for people to fathom just what we mean by "impossible" -- many assume, incorrectly, that we're just saying it's difficult to do well. But it goes way beyond that. The point is that no matter what choices are made, they will lead to some seriously negative outcomes. And that includes doing no moderation at all. In short, there are serious trade-offs to every single choice.

Probably without meaning to, the NY Times recently published a pretty good article exploring this issue by looking at what Facebook is doing to try to prevent suicides. We had actually touched on this subject a year ago, when there were reports that Facebook might stop trying to prevent suicides, as doing so had the potential to violate the GDPR.

However, as the NY Times article makes clear, Facebook really is in a damned if you do, damned if you don't position on this. As the Times points out, Facebook "ramped up" its efforts to prevent suicides after a few people streamed their suicides live on Facebook. Of course, what that significantly underplays is how much crap Facebook got because these suicides were appearing on its platform. Tabloids, like the Sun in the UK, ran entire lists of people who died while streaming on Facebook and demanded to know "what Mark Zuckerberg will do" in response. When the NY Post wrote about one man who streamed his suicide online... it also asked Facebook for comment (I'm curious if reporters ask Ford for a comment when someone commits suicide by leaving their car engine running in a garage). Then there were the various studies, which the press used to suggest social media leads to suicides (even if that's not what the studies actually said). Or there were the articles that merely "asked the question" of whether or not social media "is to blame" for suicides. If every new study leads to reports asking if social media is to blame for suicides, and every story about a suicide streamed online demands comment from Facebook, the company is clearly put under pressure to "do something."

And that "do something" has been to hire a ton of people and point its AI chops at trying to spot people who are potentially suicidal, and then trying to do something about it. But, of course, as the NY Times piece notes, that decision is also fraught with all sorts of huge challenges:

But other mental health experts said Facebook’s calls to the police could also cause harm — such as unintentionally precipitating suicide, compelling nonsuicidal people to undergo psychiatric evaluations, or prompting arrests or shootings.

And, they said, it is unclear whether the company’s approach is accurate, effective or safe. Facebook said that, for privacy reasons, it did not track the outcomes of its calls to the police. And it has not disclosed exactly how its reviewers decide whether to call emergency responders. Facebook, critics said, has assumed the authority of a public health agency while protecting its process as if it were a corporate secret.

And... that's also true and also problematic. As with so many things, context is key. We've seen cases where police respond to calls about possible suicidal ideation by showing up with guns drawn, or even by helping the process along. And yet, how is Facebook supposed to know -- even if someone is suicidal -- whether or not it's appropriate to call the police in that particular circumstance? (This would be helped a lot if the police didn't respond to so many things by shooting people, but... that's a tangent.)

The concerns in the NY Times piece are perfectly on point. We should be concerned when a large company is suddenly thrust into the role of being a public health agency. But, at the same time, we should recognize that this is exactly what tons of people were demanding when they were blaming Facebook for any suicides that were announced/streamed on its platform. And, at the same time, if Facebook actually can help prevent a suicide, hopefully most people recognize that's a good thing.

The end result here is that there aren't any easy answers -- and there are massive (life-altering) trade-offs involved in each of these decisions or non-decisions. Facebook could continue to do nothing, and then lots of people (and reporters and politicians) would certainly scream about how it's enabling suicides and not caring about the lives of people at risk. Or, it can do what it is doing -- try to spot suicidal ideation on its platform and reach out to officials to get help to the right place -- and receive criticism for taking on a public health role as a private company.

“While our efforts are not perfect, we have decided to err on the side of providing people who need help with resources as soon as possible,” Emily Cain, a Facebook spokeswoman, said in a statement.

The article also has details of a bunch of attempts by Facebook to alert police to suicide attempts streaming on its platform with fairly mixed results. Sometimes the police were able to prevent it, and in other cases, they arrived too late. Oh, and for what it's worth, the article does note in an aside that Facebook does not provide this service in the EU... thanks to the GDPR.

In the end, this really does demonstrate one aspect of the damned if you do, damned if you don't situation that Facebook and other platforms are put in on a wide range of issues. If users do something bad via your platform, people immediately want to blame the platform for it and demand "action." But deciding what kind of "action" to take then leads to all sorts of other questions and huge trade-offs, leading to more criticism (sometimes from the same people). This is why expecting any platform to magically "stop all bad stuff" is a fool's errand that will only create more problems. We should recognize that these are nearly impossible challenges. Yes, everyone should work to improve the overall results, but expecting perfection is silly, because there is no perfection, and every choice will have some negative consequences. Understanding what those consequences actually are, and being able to discuss them openly without being shouted down, would be helpful.

Filed Under: content moderation, dilemma, privacy, public health, social media, suicide, suicide prevention
Companies: facebook

Reader Comments

  1. This comment has been flagged by the community.
    Anonymous Coward, 4 Jan 2019 @ 9:47pm

    Re: Re: Re: Re: Re: Re: Re:

    He and many others appear to believe that any AC comments criticizing Mike and others for the stances they've taken in defense of large tech companies are actually from the same person: a prolific troll who's been bothering the site for years about copyright, calling the users and article authors pirates, and what-not.

    I agree with what you've said. Mike and others continue to give Facebook a chance despite the fact that the company has placed profit above privacy at every turn. There seems to be a stark refusal on their part to admit that the utter size and scale of these social media companies -- combined with their ad- and engagement-driven revenue models, as well as the fact that their platforms are often the only Internet access people can get in some places -- has done real harm to the ability of truth, facts, and democracy to win out and produce positive societal outcomes. Facebook doesn't care that fascists were out in the street cheering that Facebook and Whatsapp's complete and utter dominance allowed them to wage a successful disinformation campaign to elect their choice of dictator, because it got paid either way.

    In Hideo Kojima's 2001 video game 'Metal Gear Solid 2', there's a segment where an AI discusses why it was created and what its main purpose is. Many of the AI's observations about that fictional world's Internet and "digital society" are horrifyingly accurate to the Internet as we know it today. Facebook and social media played a key role in making Kojima's predictions come true. Facebook and many other social media companies intentionally designed their platforms to be psychologically addictive in order to drive engagement, and thus the harvesting of more data, enabling the selling of more lucrative targeted ads, and so on. Such a system, as the AI Colonel puts it, "furthers human flaws and rewards the development of convenient half-truths." They intentionally created systems that value endless spews of trivial garbage and falsities over quality content and facts.

    So yes, I agree with you. Facebook acts maliciously in the pursuit of profit. There are no good people at Facebook. Anybody who works there is a collaborator. The only good people are those who have the courage to walk out and, hopefully, move on to other companies that care about basic principles of democracy and truth. Techdirt hits the nail on the head in countless other areas, like net neutrality and police abusing their power, but the never-ending benefit of the doubt it gives to social media companies is completely and utterly disgusting.
