The Internet Giant's Dilemma: Preventing Suicide Is Good; Invading People's Private Lives… Not So Much

from the you-make-the-call dept

We’ve talked a lot in the past about the impossibility of doing content moderation well at scale, but it’s sometimes difficult for people to fathom just what we mean by “impossible,” with them often assuming — incorrectly — that we’re just saying it’s difficult to do well. But it goes way beyond that. The point is that no matter what choices are made, it will lead to some seriously negative outcomes. And that includes doing no moderation at all. In short, there are serious trade-offs to every single choice.

Probably without meaning to, the NY Times recently had a pretty good article somewhat exploring this issue by looking at what Facebook is trying to do to prevent suicides. We had actually touched on this subject a year ago, when there were reports that Facebook might stop trying to prevent suicides, as it had the potential to violate the GDPR.

However, as the NY Times article makes clear, Facebook really is in a damned if you do, damned if you don’t position on this. As the Times points out, Facebook “ramped up” its efforts to prevent suicides after a few people streamed their suicides live on Facebook. Of course, what that underplays significantly is how much crap Facebook got because these suicides were appearing on its platform. Tabloids, like the Sun in the UK, had entire lists of people who died while streaming on Facebook and demanded to know “what Mark Zuckerberg will do” to respond. When the NY Post wrote about one man whose suicide was streamed online… it also asked for a comment from Facebook (I’m curious if reporters ask Ford for a comment when someone commits suicide by leaving their car engine on in a garage?). Then there were the various studies, which the press used to suggest social media leads to suicides (even if that’s not what the studies actually said). Or there were the articles that merely “asked the question” of whether or not social media “is to blame” for suicides. If every new study leads to reports asking if social media is to blame for suicides, and every story about suicides streamed online demands comments from Facebook, the company is clearly put under pressure to “do something.”

And that “do something” has been to hire a ton of people and point its AI chops at trying to spot people who are potentially suicidal, and then trying to do something about it. But, of course, as the NY Times piece notes, that decision is also fraught with all sorts of huge challenges:

But other mental health experts said Facebook’s calls to the police could also cause harm, such as unintentionally precipitating suicide, compelling nonsuicidal people to undergo psychiatric evaluations, or prompting arrests or shootings.

And, they said, it is unclear whether the company’s approach is accurate, effective or safe. Facebook said that, for privacy reasons, it did not track the outcomes of its calls to the police. And it has not disclosed exactly how its reviewers decide whether to call emergency responders. Facebook, critics said, has assumed the authority of a public health agency while protecting its process as if it were a corporate secret.

And… that’s also true and also problematic. As with so many things, context is key. We’ve seen how in some cases, police respond to calls of possible suicidal ideation by showing up with guns drawn, or even helping the process along. And yet, how is Facebook supposed to know — even if someone is suicidal — whether or not it’s appropriate to call the police in that particular circumstance? (This would be helped a lot if the police didn’t respond to so many things by shooting people, but… that’s a tangent.)

The concerns in the NY Times piece are perfectly on point. We should be concerned when a large company is suddenly thrust into the role of being a public health agency. But, at the same time, we should recognize that this is exactly what tons of people were demanding when they were blaming Facebook for any suicides that were announced/streamed on its platform. And, at the same time, if Facebook actually can help prevent a suicide, hopefully most people recognize that’s a good thing.

The end result here is that there aren’t any easy answers — and there are massive (life-altering) trade-offs involved in each of these decisions or non-decisions. Facebook could continue to do nothing, and then lots of people (and reporters and politicians) would certainly scream about how it’s enabling suicides and not caring about the lives of people at risk. Or, it can do what it is doing and try to spot suicidal ideation on its platform, and reach out to officials to try to get help to the right place… and receive criticism for taking on a public health role as a private company.

“While our efforts are not perfect, we have decided to err on the side of providing people who need help with resources as soon as possible,” Emily Cain, a Facebook spokeswoman, said in a statement.

The article also has details of a bunch of attempts by Facebook to alert police to suicide attempts streamed on its platform, with fairly mixed results. Sometimes the police were able to prevent it, and in other cases, they arrived too late. Oh, and for what it’s worth, the article does note in an aside that Facebook does not provide this service in the EU… thanks to the GDPR.

In the end, this really does demonstrate one aspect of the damned if you do, damned if you don’t situation that Facebook and other platforms are put into on a wide range of issues. If users do something bad via your platform, people immediately want to blame the platform for it and demand “action.” But deciding what kind of “action” to take then leads to all sorts of other questions and huge trade-offs, leading to more criticism (sometimes from the same people). This is why expecting any platform to magically “stop all bad stuff” is a fool’s errand that will only create more problems. We should recognize that these are nearly impossible challenges. Yes, everyone should work to improve the overall results, but expecting perfection is silly because there is no perfection and every choice will have some negative consequences. Understanding what the trade-offs actually are and being able to discuss them openly without being shouted down would be helpful.

Companies: facebook


Comments on “The Internet Giant's Dilemma: Preventing Suicide Is Good; Invading People's Private Lives… Not So Much”

65 Comments
Henry F Choke says:

Well, they're firm on silencing political opponents.

You’re as ever focused on anomalies, not the everyday problems of mega-corporation "platforms" illegally silencing people who are entirely within common law terms for expressing political views.

This is another of your "gee it’s tough to do right" arguments intended to support globalist corporations in their drive for controlling all speech, by exampling a very minor part of it that raises troubling emotions. It’s sheer ploy.

Facebook really is in a damned if you do, damned if you don’t position

I agree that Facebook should be damned. And broken up too. There’s no "must" to allowing corporations have such overwhelming effect, amplifying suicides to the whole world, and we’ll all better off if cut it down to size with anti-trust and steeply progressive income tax rates. — And not allow tax havens as Google is today in the news for.

Nathan F (profile) says:

Re: Well, they're firm on silencing political opponents.

You’re as ever focused on anomalies, not the everyday problems of mega-corporation "platforms" illegally silencing people who are entirely within common law terms for expressing political views.

If this was GovernmentBook, owned and operated by the US Government, then yes. It would be illegal to silence people for expressing their political views. Facebook however is a privately owned company and you may use their product only in a manner that they have written rules for. If you violate their rules they are perfectly within their rights to revoke your access, even if that rule has something to do with political views.

Please remember that the First Amendment says Congress shall make no law, as in the government. It says nothing about a private corporation making up rules regarding it.

Anonymous Coward says:

Re: Re: Well, they're firm on silencing political opponents.

…but then the telephone companies can pull your service based on what you want to talk about since they are also a private corporation.

The Left doesn’t care about corporate rights any more than they care about free speech. It’s all instrumental for them. Whatever hurts people on their enemies list is what their disposable principles are.

Anonymous Coward says:

Re: Well, they're firm on silencing political opponents.

cut it down to size with anti-trust and steeply progressive income tax rates

There are smart people who make this argument clearly and convincingly, mostly with regards to corporations much larger and more immediately dangerous than Facebook or Google.

Please stop making their job harder.

Matthew Cline (profile) says:

Re: Well, they're firm on silencing political opponents.

I agree that Facebook should be damned. And broken up too.

Broken up how? Create some smaller companies, and randomly distribute the users amongst them?

… amplifying suicides to the whole world, …

Are you implying that anti-trust would have the effect of reducing the size of the audience of any individual user, and also that this would be a good thing rather than an unfortunate side effect?

Mason Wheeler (profile) says:

Re: Re: Well, they're firm on silencing political opponents.

Are you implying that anti-trust would have the effect of reducing the size of the audience of any individual user,

That should be obvious, yes.

and also that this would be a good thing rather than an unfortunate side effect?

In most cases, yes. In the case of suicides, definitely. (Especially in the cases where not having an audience causes the person to not end up killing themselves in the first place!)

Matthew Cline (profile) says:

Re: Re: Re: Well, they're firm on silencing political opponents.

So, what, make laws that limit the membership size of social sites? If a person wants to gain an Internet audience larger than that, they’d have to create their own private site and grow its audience on their own?

And if you put some limit on the size of social sites, would that apply to sites like Wikipedia?

Mike Masnick (profile) says:

Re: Well, they're firm on silencing political opponents.

Well, it’s a new year, and so I’ll try something different. Despite all evidence to the contrary, let’s assume you’re seriously this confused and I’ll respond to your points.

You’re as ever focused on anomalies

Can you explain to me what is an "anomaly" in a program that is regularly reporting possible suicide risks and is hiring thousands of people to monitor such? Doesn’t sound like an anomaly.

not the everyday problems of mega-corporation "platforms" illegally silencing people who are entirely within common law terms for expressing political views.

This is not what "common law" means. Common law is the law as determined by the courts — case law is another way of putting it. And case law… says the exact opposite of what you do (as does written law). It is not illegal for a platform to deny access to anyone (unless based on a very narrow set of protected classes — and "expressing political views" is not one of them). If you have a cite to an actual ruling, we could discuss the specifics, but wild, confused generalizations claiming it is illegal to remove content from a platform are not the law — neither in regulations nor in "common law."

Of course this has been explained to you dozens of times and you have yet to respond to the fact that your analysis is literally wrong.

This is another of your "gee it’s tough to do right" arguments

No. As stated in the post that you clearly did not actually read, my argument is that idiots who say I’m saying "gee it’s tough to do right" are the ones misrepresenting things. I’m not saying it’s tough. I’m saying it’s literally impossible.

intended to support globalist corporations in their drive for controlling all speech

If you don’t want "globalist corporations to control all speech" you support CDA 230. Without it, those platforms would be responsible for policing all that speech. And yet, you don’t seem to support CDA 230. As with your analysis of "common law," your legal analysis is not just faulty, it’s backwards.

I agree that Facebook should be damned. And broken up too.

You are not alone in that viewpoint. But what no one who supports that position has done is presented a credible, reasonable plan for what that means. Personally, I’d like to see Facebook (and Google and anyone else) flop through competition from more distributed services. But a general claim of "break them up" is meaningless without a plan. Do you break them up by saying only certain people can connect with others? That takes away the network effects that people value. Do you break them up by separating out other parts of their business (Instagram/WhatsApp)? That might work, but… wouldn’t solve any of the concerns people are raising.

So if you have a serious plan that is anything but "BAH, FACEBOOK BAD, WE SMASH FACEBOOK" it is difficult to take you seriously.

we’ll all better off if cut it down to size with anti-trust and steeply progressive income tax rates

That’s one approach, though it’s bizarre given that it comes from you, who regularly spouts Donald Trump talking points. He’s, uh, not a supporter of steeply progressive income tax rates.

And not allow tax havens as Google is today in the news for.

Sure. I’m all for that as well. Won’t have much of an impact though on the issue at hand. So, you have a good suggestion for a minor fix to a side problem that has nothing to do with the issue we’re talking about in the post, and the rest of your comment is mostly filled with nonsense and bullshit.

It’s no wonder people keep telling you to stop trolling.

Uriel-238 (profile) says:

The problem with invading private lives...

…is that it’s too tempting to use the information so gained in unethical ways, from telling companies that a private individual might like their products to distributing their nudes (and affairs) through internet gossip channels.

The original idea behind Google was to create a reservoir of private data that would never be looked at directly, but could be used for statistical analysis. Sadly, between state and market forces, they were tempted to break their own rules.

If it were possible to create a system in which data invasion could be handled ethically, there are plenty of medical, social and state interests that would be facilitated by such information.

The problem is that it may be as much of a moral trap as appointing someone dictator-for-life. It’s a level of power very hard not to abuse.

Mason Wheeler (profile) says:

Two thoughts come to mind amid all this:

  1. People are streaming suicides on Facebook because of its immense reach. If Facebook wasn’t so enormous, capable of broadcasting to so many people, they almost certainly wouldn’t do it. (After all, you never heard of people broadcasting suicides on the Internet before Facebook, now did you?) It’s being done by people who want to do something shocking to get attention. (The fact that it’s a suicide doesn’t contradict this point, however irrational it may seem, as it’s generally agreed upon that people don’t take their own lives while in their right minds.)
  2. The very existence of this article proves that this tactic is working. They’re getting lots of attention over it!

In light of this, consider the start of the article:

We’ve talked a lot in the past about the impossibility of doing content moderation well at scale, but it’s sometimes difficult for people to fathom just what we mean by "impossible," with them often assuming — incorrectly — that we’re just saying it’s difficult to do well. But it goes way beyond that. The point is that no matter what choices are made, it will lead to some seriously negative outcomes. And that includes doing no moderation at all. In short, there are serious trade-offs to every single choice.

When the cause of the problem is Facebook’s enormous scale, and the reason it’s impossible for them to deal with the problem effectively is that very same scale, then the conclusion is obvious.

At this point we’ve seen enough serious scale-related problems that it’s worth taking a serious look at the notion that "too big to succeed is too big to exist."

Gary (profile) says:

Re: Re:

I can’t see this as a compelling argument to shut down Facebook.

Around 40,000 people in the US died in cars last year. There are too many cars on the road to make them completely safe. But the cars are a direct cause of the deaths. If those cars weren’t there, every single one of them would have lived.

Facebook is not the immediate cause of death in suicides. Perhaps some of them wouldn’t have killed themselves if they couldn’t broadcast it live. But the vast majority of suicides are done privately. Why is the existence of Facebook a problem here? Should they disable cams?

live.me has been used to livestream deaths. Should they be shut down as well? They certainly aren’t a big company. Should they have to monitor their users’ streams to prevent suicide?

Are you saying that any service that is too big to monitor all live events should be banned? (And conversely, that all live events should be pre-monitored?)

Mason Wheeler (profile) says:

Re: Re: Re:

But the cars are a direct cause of the deaths.

No, generally speaking cars are not a direct cause of the deaths. Virtually every car on the road today is ridiculously safe; we’re not living in the age of the Pinto anymore. In almost every case, the direct cause of the death was a human being doing something stupid, either driving recklessly, driving while intoxicated, or (in some rare cases) someone who was not driving who carelessly stepped out into the path of a moving vehicle that was close enough that the driver didn’t have time to react.

Are you saying that any service that is too big to [strawman strawman strawman]?

No, I’m not recommending any specific policies. I’m saying that this is a principle that is worthy of serious consideration in light of past and current experience.

Ninja (profile) says:

Re: Re: Re: Re:

“In almost every case, the direct cause of the death was a human being doing something stupid,”

In all cases of suicide streamed on FB, it’s a human with a psychological and/or psychiatric problem. You are contradicting yourself. Instead of blaming FB, why don’t we look at how well mental health care is faring?

And it’s amusing how worried you are about Facebook when it’s already showing clear signs of going Orkut.

I do agree that we could make it easier for new entrants (i.e., lower taxes on them and regulating data relocation) and incentivize decentralized services. But regulate how big a service may get? That’s a no-no.

Mason Wheeler (profile) says:

Re: Re: Re:2 Re:

In all cases of suicide streamed on FB, it’s a human with a psychological and/or psychiatric problem.

Yes. I acknowledged this. I also pointed out that in at least some of the cases, it’s attention-seeking behavior that would not happen if there was not a way to get an audience through a giant social network.

You are contradicting yourself.

I’m not contradicting myself at all; you’re not reading what I’m actually saying.

Instead of blaming FB, why don’t we look at how well mental health care is faring?

Because for various societal reasons which are beyond the scope of this discussion, we’ve made it very easy for someone with severe psychological problems to not get treatment, so how much good would that actually do?

Leigh Beadon (profile) says:

Re: Re: Re:3 Re:

in at least some of the cases, it’s attention-seeking behavior that would not happen if there was not a way to get an audience through a giant social network

This is also true of some cases of suicide by jumping off a roof, which wouldn’t happen if we didn’t allow such tall buildings in such visible public places.

Wendy Cockcroft (profile) says:

Re: Re: Re:4 Re:

People who seek attention while committing suicide are usually registering a protest. At heart, then, they don’t want to die, they want the thing that makes them not want to be alive any more to go away. You may find that the attention-seeking starts well before the self-destruction. Early intervention would be the way forward because it’s usually possible to intervene before it gets to the “You really don’t care if I die right in front of you” stage.

btr1701 (profile) says:

Re: Re:

When the cause of the problem is Facebook’s enormous scale, and the reason it’s impossible for them to deal with the problem effectively is that very same scale, then the conclusion is obvious.

At this point we’ve seen enough serious scale-related problems that it’s worth taking a serious look at the notion that “too big to succeed is too big to exist.”

So what’s your solution? To say that private citizens lose their right to free expression the moment their voice becomes so loud everyone can hear it?

Or, conversely, that your right to free speech only exists so long as your voice is so weak no one of consequence will hear it and it will affect nothing?

Anonymous Coward says:

Re: Re:

At this point we’ve seen enough serious scale-related problems that it’s worth taking a serious look at the notion that "too big to succeed is too big to exist."

Have you ever stopped to think that the good side of the big social media sites is that they enable a strong unifying force across humanity, and have started the bumpy ride to a truly peaceful world?

It is easy to see the minority of abusive uses made by a minority of people using these sites, as that is newsworthy, while ignoring the strong international communities that are built up around their common interests, as those are not newsworthy. Making decisions based on what makes the news is usually a bad idea, as it is trying to control the majority because of the actions of a minority.

Mason Wheeler (profile) says:

Re: Re: Re:

Have you ever stopped to think that the good side of the big social media sites is that they enable a strong unifying force across humanity, and have started the bumpy ride to a truly peaceful world?

I’ve thought about it. Then I’ve looked at the real world and seen that this is simply not the case. Every forum beyond a certain number of regular users (I’m not sure, but I suspect this number is somewhere around 150; look up the concept of the "monkeysphere" if you want to know why) seems to inevitably degenerate into a wretched hive of scum and trollery within a decade, despite the best intentions of any number of stakeholders to try to prevent it from happening.

Mason Wheeler (profile) says:

Re: Re: Re:2 Re:

1) I didn’t say "members", I said "active users." The two figures are almost certainly very different from one another, probably by at least two orders of magnitude in this case.
2) How long have they been around?

If you’re going to try to refute what I said, please try to refute what I actually said instead of some strawman that sounds vaguely similar to it.

Anonymous Coward says:

Re: Re: Re:3 Re:

Well, two orders of magnitude would still mean over 1000 active users, so about an order of magnitude above your arbitrary size, and I suspect a larger active membership than that, because it is a self-help group for machinists.

I also follow various YouTube channels, where videos can gain hundreds of comments, from tens to hundreds of subscribers, and still remain civil. Indeed, one of the “complaints” as a channel grows is that they cannot keep up with the comments, and not that the comments section has become a cesspit full of trolls.

Christenson says:

Re: Too Big....

Mike:
I love how the article expands on, and gives a different example of, a problem I presented earlier.

We have sufficient power and information that choices and tradeoffs have to be made, and there will be negative consequences for any choice. This is true with basically any collective choice on a large scale, including the physical environment, where global warming might just kill us all.

Mason:
Too Big to succeed is too big to exist.
Too big to succeed: It happens when the scale overwhelms the context.

Too big to exist? It’s the point of anti-monopoly law — the prevention of the concentration of power into too few hands. It seems to be what Blue and all the Trolls are actually getting at with their “common law” complaints, and we use such concepts as common carriers to try to prevent such inevitably-abused concentrations. Facebook is powerful enough that it ought to be at least a common carrier, just as the network infrastructure ought to be a common carrier, aka Network Neutrality. Not that breaking up huge internet companies would be sufficient, mind you.

Anonymous Coward says:

GDPR is easy

Oh, and for what it’s worth, the article does note in an aside that Facebook does not provide this service in the EU… thanks to the GDPR.

Facebook could easily offer this service in Europe. All it takes is to inform people about it and ask them to check the box when they are interested. Just like the other checkboxes for, say, advertisement tracking, shadow profile tracking, emotional manipulation, voting suggestions.

Anonymous Coward says:

Re: Easy fix

One could just find them a state actor to achieve the same goal.

Would a nationalized AOL from 1997, Yahoo from 2001, or Myspace from 2005 still be in that position?

Every time someone censors people, someone else is there to grab the audience. There’s also USENET for those who want unfettered free speech.

ECA (profile) says:

Demanding perfection… is impossible.

“Don’t jump or I will shoot” just doesn’t work.
Understand that the internet social environment is like 1 million pennies dropped into a group of people all trying to catch them… how many will hit the floor?

This is as evil as seeing all the server break-ins and wondering why they can’t be protected. Then you remember that having access on the internet is like connecting to 6 billion people, all at once.

A person once asked me how long it would take to write, by hand, to 1 million, and I said 3-5 years. He didn’t quite believe me. I didn’t see him for about a year; when he came back, he said he’d quit counting.

Anonymous Coward says:

Opting out

NYT wrote "There is no way of opting out [of Facebook’s suicide risk scoring system], short of not posting on, or deleting, your Facebook account."

Do we know that either method would be effective? They could scan posts about you, even if you don’t post anything or have an account. The only opt-out confirmed by FB is to be in the EU.

btr1701 (profile) says:

Re: Opting out

“There is no way of opting out [of Facebook’s suicide risk scoring system], short of not posting on, or deleting, your Facebook account.”

I wonder how many people have trolled it just to fuck with the system, making it think they’re a suicide risk just so they can put on a show of challenging the cops and asserting their rights when they show up?

Kinda like what those “Photography is not a Crime” trolls do.

Gwiz (profile) says:

Re: Re: Opting out

Kinda like what those "Photography is not a Crime" trolls do.

I find it interesting, knowing that you are law enforcement and have good working knowledge of Constitutional law, that you refer to people exercising their rights as "trolls". Perhaps that mindset by LEOs is the actual cause of the friction and not so much because of the lawful actions of citizens.

 

Most First Amendment auditors use the same tactics that law enforcement uses in auto theft stings. They set up a situation and wait for a LEO to CHOOSE to violate their rights. Just like in a bait car sting, the perpetrator must choose to violate the law or it’s considered entrapment. Is it really all that different just because it’s a citizen catching a LEO doing something illegal?

Anonymous Coward says:

Hey Mike,

Do you know if your partners at Facebook ever looked into how many of the people who were enrolled in this psychological manipulation study without consent ended up killing themselves as a result? Across that sample size, the number is bound to be >0.

https://www.forbes.com/sites/gregorymcneal/2014/06/30/controversy-over-facebook-emotional-manipulation-study-grows-as-timeline-becomes-more-clear/

Mike Masnick (profile) says:

Re: Re:

Do you know if your partners at Facebook

I’m confused by your reference to Facebook as a "partner." What do you mean by that? We have no relationship with Facebook and never have. I know that you regularly accuse us of being a shill for Facebook, but like all such accusations, it is based on figments of your imagination.

As for the rest of your comment, huh?

Anonymous Coward says:

Re: Re: Re:

I don’t think you’re a shill for Facebook, but you definitely do have a serious blind spot when it comes to them. From your repeated refusal years ago to admit that they did anything wrong with their fraudulent accounting practices in their IPO, to more modern posts where you insist against all reason and evidence to the contrary that they aren’t actually malicious, but simply "basically good people who ended up in over their heads in problems that got too big too quickly," you have always shown a clear bias towards the best possible interpretation of Facebook’s behavior, no matter how little justification there has been for that interpretation.

Anonymous Coward says:

Re: Re: Re:3 Re:

Malibu Media’s defender of copyright. The one who helped them harass old ladies for downloading illegally filmed pornography and topped the records for most copyright suits filed in 2018. Who the company later fired. Him, right? The shining knight of copyright? Who Colette Pelissier said could do no wrong? Fantastic representative of copyright enforcement, isn’t he?

Anonymous Coward says:

Re: Re: Re:5 Re:

He and many others appear to believe that any AC comments that criticize Mike and others for the stances they’ve taken in defense of large tech companies are actually from the same person: a prolific troll who’s been bothering the site for years about copyright, calling the users and article authors pirates, and what-not.

I agree with what you’ve said. Mike and others continue to give Facebook a chance despite the fact that the company has placed profit above privacy at every turn. There seems to be a stark refusal on their part to admit that the utter size and scale of these social media companies, combined with their ad- and engagement-driven revenue models, as well as the fact that their platforms are often the only Internet access that people can get in some places, has created real harm to the ability for truth, facts, and democracy to win and produce positive societal outcomes. Facebook doesn’t care that fascists were out in the street cheering that Facebook and Whatsapp’s complete and utter dominance allowed them to wage a successful disinformation campaign to elect their choice of dictator, because they got paid either way.

In Hideo Kojima’s 2001 video game ‘Metal Gear Solid 2’, there’s a segment where an AI discusses why it was created and what its main purpose is. Many of the AI’s observations of this fictional world’s Internet and "digital society" are horrifyingly accurate to the Internet as we know it today. Facebook and social media played a key role in making Kojima’s predictions come true. Facebook and many other social media companies intentionally designed their platforms to be psychologically addictive in order to drive engagement, and thus the harvesting of more data, enabling the selling of more lucrative targeted ads, and so on. Such a system, as the AI Colonel puts it, "furthers human flaws and rewards the development of convenient half-truths." They intentionally created systems that value endless spews of trivial garbage and falsities over quality content and facts.

So yes, I agree with you. Facebook acts in a malicious manner in the pursuit of profit. There are no good people at Facebook. Anybody who works there is a collaborator. The only good people are those who have the courage to walk out and hopefully move on to other companies that care about basic principles of democracy and truth. Techdirt hits the nail on the head in countless other places like Net Neutrality, police abusing their power, etc. but the neverending benefit of the doubt they give to social media companies is completely and utterly disgusting.

Mike Masnick (profile) says:

Re: Re: Re: Re:

From your repeated refusal years ago to admit that they did anything wrong with their fraudulent accounting practices in their IPO, to more modern posts where you insist against all reason and evidence to the contrary that they aren’t actually malicious, but simply "basically good people who ended up in over their heads in problems that got too big too quickly," you have always shown a clear bias towards the best possible interpretation of Facebook’s behavior, no matter how little justification there has been for that interpretation.

If you go around believing that people who work at companies are simply out to get you in the most malicious way… um… you might be the one who has an issue, not me.

The incentives at Facebook are screwed up — we agreed. The management at Facebook is screwed up and bad. But to argue that they are malicious without any evidence is utter nonsense and it is simply not true.

They’re not. They’re just bad at their jobs and in way over their heads. It’s a real problem and something to be concerned about, but the solutions to the problem "this thing is too big and they’re making bad decisions" are very, very different than the solution to "those are evil assholes out to get everyone." You’re going to fuck up a lot of important shit if you insist the latter is true when it is not.

You won’t be happy with what you’re pushing for. It’ll be much worse for everyone in the end.

Anonymous Coward says:

Re: Re: Re:

As for the rest of your comment, huh?

Facebook conducted a psychological experiment on ~700,000 people without consent. The goal of the experiment was to see if it was possible to manipulate the emotional states of those users by intentionally modifying their feeds to show more positive/negative posts.

Across that number of people, chances of at least a few being in a state of “teetering on the edge of suicide” are high. Facebook most likely knows whether or not they pushed some of those over the edge with their experiment.

Mike Masnick (profile) says:

Re: Re: Re: Re:

I’m aware of that situation.

I am not at all aware of what point you think you’re making with it in regards to this story. It appears you think there’s some sort of "gotcha" in that story, but it seems entirely unrelated to the point being made here (and also was discussed to death at the time).

So unless you care to share something relevant, I will assume you are trolling and move on.

Anonymous Coward says:

In order for Facebook to develop a reliable algorithm for predicting suicide, it would need a historical model training dataset. The dataset would need to contain all available predictors of suicide (possibly things like mentions of “suicide” and associated terms, indicators of prior mental health problems, indicators of social isolation, social stigma, emotional distress, etc.) as well as a flag for whether each person did in fact go on to do it. If Facebook is not collecting information on user suicides, then it cannot even begin to develop a demonstrably effective predictive model. Even people assessing user content and making judgements can’t be evaluated for accuracy with no outcome data.
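To make that concrete, here is a toy sketch (with entirely hypothetical features; nothing here reflects Facebook’s actual system) of why the outcome flag is the bottleneck: every step below that touches the labels is impossible for anyone who is not tracking outcomes.

```python
# Toy sketch only: a supervised model needs labeled outcomes both to
# train and to evaluate. Feature names are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical predictors: term frequency, isolation score, distress score.
X = rng.random((n, 3))
# The outcome flag -- exactly the data Facebook says it does not track.
y = rng.integers(0, 2, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)                # impossible without y_tr
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])  # impossible without y_te
print(f"held-out AUC: {auc:.2f}")  # ~0.5 here, since these labels are random noise
```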

For Facebook to do this in a serious way it would need to collaborate with governments to get hold of actual health records and suicide outcome data. It would also need to collaborate with experts, epidemiologists etc, and coordinate with governments in setting up and evaluating the “intervention” strategies.

But which governments would trust Facebook with this information? And which governments would be interested in such a project anyway? Health care is not even a citizen right in many countries.

Moreover, it is not really Facebook’s business to be doing this. The fact that it is considered possible that they could develop a reliable, automated suicide prediction system is symptomatic of the fact that they are already collecting, and free to use however they want, far more information than they should be. But that is another discussion…

Enforcing anti-bullying and harassment rules, and illegal content rules, maybe, but psychographic profiling of users to solve social problems, no. How could that possibly work?

Christenson says:

Re: How could Enforcing anything possibly work?

In the fevered imagination of Facebook, it works thus:
Indicators of impending suicide are well known and documented, and a subject of scientific study.

Accounts with indicators can be detected and flagged.

In reality:
Computers are really bad at context. Suppose I take someone’s post with true indications they are about to kill themselves, and re-post it, along with saying: “Watch out for this!” or “Danger Will Robinson!” or “One more example”. Now how does the computer figure out that it isn’t me who has the problem?
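A toy sketch of that failure mode (made-up keywords; this is not any real classifier):

```python
# Toy illustration of the context problem: naive keyword matching
# cannot tell the at-risk author from someone quoting or warning
# about the post, so the re-poster gets flagged too.
KEYWORDS = ("kill myself", "suicide")  # hypothetical trigger terms

def flagged(post: str) -> bool:
    text = post.lower()
    return any(k in text for k in KEYWORDS)

original = "I can't take it anymore. I'm going to kill myself tonight."
repost = 'Watch out for this! Someone just posted: "' + original + '"'

print(flagged(original))  # True -- the post that should be flagged
print(flagged(repost))    # True -- false positive on the person raising the alarm
```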

And what if I joke sarcastically as follows: “time to kill myself; I’ve lost all the good comment contests at Techdirt!”. I have seen plenty of broken human sarcasm and joke meters in Techdirt’s comments, some involving famous regulars like Stephen T Stone.

And how many links to and quotes from bad behavior such as harassment have we seen here on Techdirt? Same problem: context.

Wendy Cockcroft (profile) says:

Power and responsibility

I’m a big believer that with great power comes great responsibility and I believe we all agree that there have been many examples of power being exercised without any responsibility being taken, with horrible results.

(I’m curious if reporters ask Ford for a comment when someone commits suicide by leaving their car engine on in a garage?)

How hard is it to build a sensor into the car’s dashboard that monitors the level of carbon monoxide in the car’s interior while the engine is running, and triggers the engine to switch off when a safety threshold is exceeded? If you can fit a satnav, you can fit a carbon monoxide sensor.
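As a rough sketch of the idea (made-up threshold and stubbed interfaces, not any real vehicle system):

```python
# Minimal sketch of the proposal; the threshold and the Engine class
# are hypothetical stand-ins for whatever a real ECU would expose.
CO_LIMIT_PPM = 100  # hypothetical cabin CO safety threshold

class Engine:
    def __init__(self) -> None:
        self.running = True
    def shut_off(self) -> None:
        self.running = False

def co_watchdog(engine: Engine, cabin_co_ppm: float) -> None:
    """Cut the engine when cabin CO exceeds the safety limit."""
    if engine.running and cabin_co_ppm > CO_LIMIT_PPM:
        engine.shut_off()

engine = Engine()
co_watchdog(engine, cabin_co_ppm=250.0)  # simulated dangerous reading
print(engine.running)  # False -- engine cut before CO builds further
```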

RE: Facebook suicides

I’d recommend a pop-up article triggered by the keywords "suicide" and "kill myself" (and any others that might fit the bill) that provides professional advice on how to distract or delay a suicide attempt, along with information on support services in the suicidal person’s area. This would appear on the screens of everyone viewing the feed. Viewers would also be able to alert the local police by pressing a call-to-action button: "Alert the police?" The pop-up could be minimised if it’s not necessary. Of course, this relies on viewers caring enough to want to stop the suicide, but it’s better than nothing. Thoughts?
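Roughly, the flow might look like this (hypothetical trigger terms and overlay text, purely illustrative):

```python
# Toy sketch of the suggested flow: trigger terms attach a minimisable
# support overlay (resources plus an opt-in alert button) to the post
# as every viewer sees it. Nothing here is any real platform's logic.
TRIGGER_TERMS = ("suicide", "kill myself")

def render_post(post: str) -> str:
    """Prepend a support overlay when a trigger term appears."""
    if any(term in post.lower() for term in TRIGGER_TERMS):
        overlay = ("[Support overlay: how to delay/distract an attempt, "
                   "local support services, 'Alert the police?' button]")
        return overlay + "\n" + post
    return post

print(render_post("i just want to kill myself"))  # overlay shown to all viewers
print(render_post("lovely day for a walk"))       # normal post, untouched
```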

nae such says:

Re: Power and responsibility

regarding car companies including a sensor, i would want to know the numbers of suicides by the car in the garage and compare cost benefit to the public. i wouldn’t expect the company to care on this. corporate responsibility is not well known. i suspect the only way to get anything done would be through public action. then we would have to trust our regulators to make a law that was actually worth something. the cost and compare part doesn’t sound difficult(^^).

i don’t think adding useful info to a feed would be bad. it could possibly depress and shame the original poster, though. the alert-the-police button, i think, would find some happy troll abusers. as the article states, in the states that could be lethal. even in other, saner parts of the globe, if it were abused enough then it would likely start to be ignored.

i suspect corporate responsibility is the key. certainly without incentives many companies could care less. in many instances it appears that the penalty for incompetence, negligence, or outright intentional lapses are not great enough to have an effect. fines that are cheaper to pay than the original offense. oversight that has no teeth. of course that leads down the rabbit hole into political sins.
