Facebook Ups Surveillance Of Users To Keep Tabs On People Who Don't Like Facebook
from the threat-level:-oversharing dept
Tech companies are becoming far more than useful repositories of third-party records. They're becoming active participants in surveillance, pivoting from platform providers to private sector Big Brothers and weaponizing their data collection capabilities to keep tabs on customers and users.
Facebook has decided to start scanning its platform for threats. Not threats against the many nations it serves or threats targeting other users, but rather threats against Facebook itself.
One of the tools Facebook uses to monitor threats is a “be on lookout” or “BOLO” list, which is updated approximately once a week. The list was created in 2008, an early employee in Facebook’s physical security group told CNBC. It now contains hundreds of people, according to four former Facebook security employees who have left the company since 2016.
Facebook notifies its security professionals anytime a new person is added to the BOLO list, sending out a report that includes information about the person, such as their name, photo, their general location and a short description of why they were added.
Users who publicly threaten the company, its offices or employees — including posting threatening comments in response to posts from executives like CEO Mark Zuckerberg and COO Sheryl Sandberg — are often added to the list. These users are typically described as making “improper communication” or “threatening communication,” according to former employees.
It's not that Facebook shouldn't be on the lookout for credible threats. It's that it's turned its platform into a surveillance tool for its in-house knockoff law enforcement agency. It's not clear whether the company is turning over its internal BOLO list to actual law enforcement, but if it is, that raises even more concerns. Certainly the company should be concerned about legitimate threats. The problem is that it's flagging people simply for expressing their displeasure with Facebook in general.
While some users end up on the list after repeated appearances on company property or long email threats, others might find themselves on the BOLO list for saying something as simple as “F— you, Mark,” “F— Facebook” or “I’m gonna go kick your a–,” according to a former employee who worked with the executive protection team.
This undercuts Facebook's official statements about "rigorous reviews" of detected threats. So does the former employees' claim that fired employees are automatically added to the BOLO list, despite the fact that nearly all fired workers, in any line of work, pose no threat to their former employers.
And it goes further than simply flagging people (and, apparently, displaying their photos on monitors in the threat detection center). Facebook also tracks listed individuals using their smartphones, thanks to permissions granted to the Facebook app. The app comes pre-installed on most smartphones and most users are unaware how much data Facebook is gathering even when the app isn’t in use.
Presumably, if some “F— you, Mark” person gets too close to the Facebook campus, actual law enforcement is alerted. This sort of situation can only lead to positive outcomes. A person mildly displeased with Facebook’s endless fuckery will be greeted by armed officers under the impression a credible threat has been made against the company. Good times.
More good times await. Facebook is also promising to “help” the suicidal by sending the cops after them.
Since 2011, Facebook has allowed users to flag potential suicidal content; reports prompted emails from Facebook urging the poster to call the National Suicide Prevention Lifeline. But starting in 2017, Facebook introduced bots to search out and report potential suicidal content. The bots report suspected cries for help to human moderators, who may then “work with first responders, such as police departments to send help,” says CNN.
That’s right: Facebook might call the cops on you because a bot thought you seemed sad. Facebook executives think that if a user exhibits signs of depression, it’s up to Facebook—not the user’s friends, family, or community—to intervene.
Rather than trying to track down friends or family, Facebook is turning this over to "first responders." In most cases, the first responder on the scene is going to be the local PD. Given how often police officers have helped talk people out of suicide by killing them, this effort by Facebook is likely to result in more dead people than simply doing nothing would.
I'm not saying Facebook should do nothing about threats against the company, or that it shouldn't try to help users having suicidal thoughts. But these efforts aren't going to make anything better, and they're a misuse of Facebook's vast data collections and moderation apparatus. There's an abuse of trust happening here, and Facebook's efforts are so scattershot and half-assed they're going to cause a lot of collateral damage.