Facebook, Twitter Consistently Fail At Distinguishing Abuse From Calling Out Abuse

from the the-wrong-approach dept

Time and time again, we see that everyone who doesn’t work in the field of trust and safety for an internet platform seems to think that it’s somehow “easy” to filter out “bad” content and leave up “good” content. It’s not. This doesn’t mean that platforms shouldn’t try to deal with the issue. They have perfectly good business reasons to want to limit people using their systems to abuse and harass and threaten other users. But when you demand that they be legally responsible — as Germany (and then Russia) recently did — bad things happen, and quite frequently those bad things happen to the victims of abuse or harassment or threats.

We just wrote about Twitter’s big failure in suspending Popehat’s account temporarily, after he posted a screenshot of a threat he’d received from a lawyer who’s been acting like an internet tough guy for a few years now. In that case, the person who reviewed the tweet keyed in on the fact that Ken White had failed to redact the contact information from the guy threatening him — which at the very least raises the question of whether or not Twitter considers threats of destroying someone’s life to be less of an issue than revealing that guy’s contact information, which was already publicly available via a variety of sources.

But, it’s important to note that this is not an isolated case. In just the past few days, we’ve seen two other major examples of social media platforms banning or punishing the victims of harassment and abuse for posting about it, rather than the perpetrators. The first is the story of Francie Latour, as told in a recent Washington Post article, where she explains how she went on Facebook to vent about a man in a Boston grocery store loudly using the n-word to describe her and her two children, and Facebook’s response was to ban her from Facebook.

But within 20 minutes, Facebook deleted her post, sending Latour a cursory message that her content had violated company standards. Only two friends had gotten the chance to voice their disbelief and outrage.

The second story comes from Ijeoma Oluo, who posted to Medium about a strikingly similar situation. In this case, she made what seems to me to be a perfectly innocuous joke about feeling nervous for her safety as a black woman in a place with many white people. But a bunch of rabid, angry people online got mad at her about it and started sending all sorts of abusive tweets and hateful messages to her on Facebook. She actually says that Twitter was pretty good at responding to reports of abusive content. But, as in the Latour story, Facebook responded by banning Oluo for talking about the harassment she was receiving.

And finally, facebook decided to take action. What did they do? Did they suspend any of the people who threatened me? No. Did they take down Twitchy’s post that was sending hundreds of hate-filled commenters my way? No.

They suspended me for three days for posting screenshots of the abuse they have refused to do anything about.

That, of course, is a ridiculous response by Facebook. And Oluo is right to call them out on it, just as Latour and White were right to point out the absurdity of their situations.

But, unfortunately, the response of many people to this kind of thing is just “do better Facebook” or “do better Twitter.” Or, in some cases, they even go so far as to argue that these companies should be legally mandated to take down some of the content. But this will backfire for the exact same reason that these ridiculous situations happened in the first place. When you run a platform and you need to make thousands or hundreds of thousands or millions of these kinds of decisions a day, you’re going to make mistakes. And that’s not because they’re “bad” at this, it’s just the nature of the beast. With that many decisions — many of which involve people demanding immediate action — there’s no easy way to have someone drop in and figure out all of the context in the short period of time they have to make a decision.

On top of that, because this has to be done at scale, you can’t have a team in which everyone is skilled in understanding context and nuance and culture. Nor can you have people who can spend the necessary time to dig deeper to figure out and understand the context. Instead, you end up with a ruleset, and it has to be standardized so that non-experts are able to make judgments on this stuff in a relatively quick timeframe. That’s why, about a month ago, there was a kerfuffle when Facebook’s “hate speech rule book” was leaked, and it showed how the rules could lead to situations where “white men” were going to be protected.
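To make concrete how a standardized ruleset produces that outcome, here is a minimal sketch (my own illustration, not Facebook’s actual code, and the category names are assumptions) of the intersection logic the leaked materials reportedly described: an attacked group counts as protected only if every attribute defining it is itself a protected category.

```python
# Illustrative only: a simplified version of the intersection logic the
# leaked moderation materials reportedly used. Category names are
# assumptions for this sketch, not Facebook's actual rule set.

PROTECTED = {"race", "sex", "religion", "national_origin", "sexual_orientation"}

def group_is_protected(attributes):
    """A group is protected only if ALL of its defining attributes are
    protected categories; one unprotected attribute removes protection."""
    return set(attributes) <= PROTECTED

print(group_is_protected({"race", "sex"}))        # True:  "white men"
print(group_is_protected({"sex", "occupation"}))  # False: "female drivers"
print(group_is_protected({"race", "age"}))        # False: "black children"
```

A rule like that can be applied by a non-expert in seconds, which is exactly what scale demands, and it mechanically yields the counterintuitive result that “white men” are protected while “black children” are not.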

And when you throw into this equation the potential of legal liability, a la Germany (and what a large group of people are pushing for in the US), things will get much, much worse. That’s because when there’s legal liability on the line, companies will be much faster to delete/suspend/ban, just to avoid the liability. And many people calling for such things will be impacted themselves. None of the people in the stories above could have reasonably expected to get banned by these platforms. But, when people demand that platforms “take responsibility” that’s what’s going to happen.

Again, this is not in any way to suggest that online platforms should be a free-for-all. That would be ridiculous and counterproductive. It would lead to everything being overrun by spam, in addition to abusive/harassing behavior. Instead, I think the real answer is that we need to stop putting the burden on platforms to make all the decisions, and figure out alternative ways. I’ve suggested in the past that one possible solution is turning the tools around. Give end users much more granular control over how they can ban or block or silence content they don’t want to see, rather than leaving it up to a crew of people who have to make snap decisions on who’s at fault when people get angry online.
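As a rough sketch of what “turning the tools around” might look like (all names here are hypothetical, not any platform’s real API), the filtering rules would live with each user rather than with a central review team:

```python
# Hypothetical sketch: per-user filtering rules applied to that user's own
# feed, so the platform never has to adjudicate who was "right" in a
# dispute. All names are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    author_account_age_days: int
    text: str

@dataclass
class FilterPrefs:
    muted_words: set = field(default_factory=set)    # hide posts containing these
    blocked_users: set = field(default_factory=set)  # hide everything from these
    min_account_age_days: int = 0                    # hide brand-new accounts

def visible(prefs: FilterPrefs, post: Post) -> bool:
    """Each user's own rules decide what that user sees; nothing is
    deleted globally and no moderator has to guess at context."""
    if post.author in prefs.blocked_users:
        return False
    if post.author_account_age_days < prefs.min_account_age_days:
        return False
    text = post.text.lower()
    return not any(word in text for word in prefs.muted_words)

# One user's settings filter one user's feed:
prefs = FilterPrefs(muted_words={"ugly slur"}, min_account_age_days=30)
feed = [Post("friend", 2000, "Nice piece!"),
        Post("egg4982", 1, "you ugly slur ...")]
print([p.text for p in feed if visible(prefs, p)])  # ['Nice piece!']
```

The design choice in this sketch is that nothing gets deleted globally: a mistaken rule only affects the user who wrote it, which is far less damaging than a mistaken platform-wide ban.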

Of course, there are problems with my suggestion as well — it could certainly accelerate the issues of self-contained bubbles of thought. And it could also result in plenty of incorrect blocking. But the larger point is that this isn’t easy, and every single magic bullet solution has serious consequences, and often those consequences fall on the people who are facing the most abuse and harassment, rather than on those doing the abuse and harassment. So, yes, platforms need to do better. The three stories above are all ridiculous, and ended up harming people who were highlighting harassing behavior. But continuing to rely on platforms and teams of people to weed out content someone deems “bad” is not a workable solution, and it’s one that will only lead to more of these kinds of stories.

And, worst of all, the abusers and harassers know and thrive on this. The guy who got Ken White’s account banned gloated about it on Twitter. I’m sure the same was true of the folks who went after Oluo and likely “reported” her to Facebook. Any time you rely on the platform to be the arbiter, remember that the people who want to harass others quickly learn that they can use that as a tool for further harassment themselves.

Companies: facebook, twitter


Comments on “Facebook, Twitter Consistently Fail At Distinguishing Abuse From Calling Out Abuse”

41 Comments
DanK (profile) says:

It's an incredibly complex problem

I was discussing the Oluo situation with friends, and it is incredibly difficult to figure out how Facebook could have handled this properly. Put yourself in the shoes of a person tasked with reviewing content, and given only a few seconds per item to review. You are spending hours per day, deciding “racist”, “sex”, “offensive”, “acceptable”… Then Oluo’s posts come up.

You see the blatant racism. You see the death threats. You see the rape threats. Of course you’d mark it as offensive! The reviewers don’t have the time to get the context that Oluo is posting to shame the original posters (exactly the same as the PopeHat situation). They just see the hateful messages and mark them bad.

Bergman (profile) says:

Re: It's an incredibly complex problem

There’s a flaw in your reasoning, Dan.

All of the posts that Oluo’s account was banned for are posts that were reported to Facebook as being abusive…and Facebook declined to take action because they didn’t violate any rules.

So Oluo posted the abusive comments that, according to Facebook, didn’t break any rules. And was banned for breaking the rules against threatening and abusing people.

Do you see the problem yet?

Anonymous Coward says:

What you understand doubleplusungood is that popehat is guilty of wrongthink and is thus an unprotected person. Facebook and twitter protections only apply to people, i.e. those who have goodthink. When your think is wrongthink, you’re an unprotected, and if it’s doublepluswrongthink, you can even be an unperson. Trust Big Brother.

Michael Chermside (profile) says:

Legal System

The legal systems of the world have evolved slowly over literally thousands of years — with a great deal of cultural inertia but also managing to borrow ideas from each other and improve over time. Nearly all of them (from the Catholic Church’s Canon law to the legal system in the US) incorporate a basic approach that works roughly like this:

(1) both sides present their case

(2) someone decides

(3) there is an “appeals” process where one (or more) layers can review the decisions for fairness

Maybe those companies (Facebook, Twitter, etc) trying to set up a review system should take inspiration from this deep historical source.
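In code, that three-step structure might look something like this minimal sketch (purely illustrative; all names invented):

```python
# Purely illustrative: the three-step structure described above, with
# invented names. (1) both sides present their case, (2) a reviewer
# decides, (3) a separate layer can review the decision on appeal.

from dataclasses import dataclass

@dataclass
class Case:
    reporter_statement: str       # (1) the complainant's side
    poster_statement: str         # (1) the accused poster's side
    decision: str = "pending"     # (2) first-line reviewer's call
    appeal_outcome: str = "none"  # (3) result of any appeal

def decide(case: Case, remove: bool) -> None:
    case.decision = "removed" if remove else "kept"

def appeal(case: Case, overturn: bool) -> None:
    # A different reviewer re-reads both statements and the decision.
    case.appeal_outcome = "overturned" if overturn else "upheld"
    if overturn:
        case.decision = "kept" if case.decision == "removed" else "removed"

case = Case("This post harasses me.", "I was quoting abuse sent to me.")
decide(case, remove=True)    # snap judgment, missing context
appeal(case, overturn=True)  # second look restores the post
print(case.decision, case.appeal_outcome)  # kept overturned
```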

Anonymous Coward says:

Re: Re: Legal System

Pay a filing fee…

Is everyone here still really, firmly opposed to “business method” patents? Yeah, I kinda vaguely comprehend that there’s arguably a millennium and more of soi-disant prior art here… But that’s just arguable… right?

Look, even after eBay, business method patents are still the law.

And this one is ON A COMPUTER. WITH A SOCIAL NETWORK. FOR DISPUTE RESOLUTION.

Anonymous Coward says:

"social media platforms banning or punishing the victims of harassment and abuse for posting about it, rather than the perpetrators."

HEY, MA SNICK: anyone gets buckets of crap for complaining about the harassment by fanboys here!

**Just read through TODAY’S comments a couple pieces ago!**
https://www.techdirt.com/articles/20170808/15450037961/techdirt-now-with-more-free-speech-reporting.shtml

**I say ME and the others who complained there EXACTLY fit the topic of this piece.** You have taken no action in the 8 years I’ve been complaining here, that those who use words like THIS are the problem, NOT those of us on-topic and civil:

“There are white people, and then there are ignorant motherfuckers like you….”

http://www.techdirt.com/articles/20110621/16071614792/misconceptions-free-abound-why-do-brains-stop-zero.shtml#c1869

But of course YOU, Michael Ma snick, HIRE that person to re-write here! Explain that in light of this piece.

So, Techdirt: the “community standard” that I always exceed is to NOT make completely unprovoked, racist-tinged, vile, insulting, vulgar, off-topic one-liners. — Oh, and Geigner never apologized, but instead tried to dodge with classic abuser tactic of making a deal: he’ll stop if I don’t raise the topic again. Just read a couple after that link, then try to tell ME I’m a “troll”. Phooey on you kids. You’re uncivil, indecent, and liars.

It’s NOT how it’s said, it’s WHAT is said. YOU are banning viewpoints.


13th attempt starting from 11 Pacific! This topic seems locked down with each comment approved, another hidden censorship tactic here.

Anonymous Coward says:

Re: "social media platforms banning or punishing the victims of harassment and abuse for posting about it, rather than the perpetrators."

Let in on stale topic!

Made me notice this: “remember that the people who want to harass others quickly learn that they can use that as a tool for further harassment themselves.” — They can ALSO use a “report” or “flag” to harass. There’s only ONE reason that’s done here, and it’s to reduce the impact of some comments. When a site continually colludes with a faction and never punishes comments such as the one I link to, it’s not due any favorable regard, to say the least.

Anonymous Coward says:

Re: Re: "social media platforms banning or punishing the victims of harassment and abuse for posting about it, rather than the perpetrators."

It is. He’s discarded his old pseudonym and branded it as a sort of martyr label after spending years trolling its reputation away.

And now he thinks no one knows who he is despite the same troll tactics.

Toom1275 (profile) says:

Re: "social media platforms banning or punishing the victims of harassment and abuse for posting about it, rather than the perpetrators."

Try resubmitting after an hour, not after less than a minute. Worked for me.

Repeatedly posting the same comment over again is the hallmark of either a spambot, or somebody with the patience of one.

Anonymous Coward says:

“That’s why about a month ago, there was a kerfuffle when Facebook’s “hate speech rule book” was leaked, and it showed how it could lead to situations where “white men” were going to be protected.”

White men *are* protected by virtue of the laws preventing discrimination against anyone for their sex or race, amongst other attributes. Fuck anyone who thinks discrimination is hateful but it’s ok so long as the victim is white and male.

James Burkhardt (profile) says:

Re: Response to: White men are protected by Anonymous Coward on Aug 9th, 2017 @ 12:47pm

Interesting standpoint, but it misses the context. Facebook prioritized (and may still prioritize) protecting white males over any other gender or ethnic group. If there was a question of who was in the wrong, the white guy was right. Always. That’s the problem. They chose a discriminatory policy that prioritized white males in an effort to speed up the process. Not that white men were protected, but that protecting white males was prioritized above protecting other groups.

Bergman (profile) says:

Re: Re: Response to: White men are protected by Anonymous Coward on Aug 9th, 2017 @ 12:47pm

No, the problem is that you don’t get it.

Facebook didn’t prioritize protecting white men. They prioritized protecting those who were targeted for two or more categories that Facebook was watching for.

Black men get equal protection with white men or asian women under that system. White drivers get no protection, the same way black drivers don’t and women drivers don’t — but black women drivers do get protected.

White men got protected because gender and race are two protected categories under their system. Quit being so racist, it’s not about white people.

Anonymous Coward says:

Re: My Hypocritical Solution: Verified Non-Anonymous Accounts Only

I don’t see why. We haven’t solved that problem irl either, and it’s pretty much impossible to use a pseudonym there. Honestly, there are much more likely to be consequences doing it in person than doing it with your real name online, but that hasn’t stopped anyone irl.

Oh, a few might get tracked down, but anyone worried about that can just get an account under their real name just to post dumb stuff on and not include location information on it. After all, there are likely thousands of people with your name so it’s not like the average racist (sexist, etc.) git could actually be tracked down by anyone.

Anonymous Troll says:

Re: My Hypocritical Solution: Verified Non-Anonymous Accounts Only

This is a great idea – as a prolific troll, finding people’s personal information online when I want to ruin their life can be a lot of work. By forcing everyone to make that information public already, you’re dramatically reducing my workload and massively increasing not just the number of people I can harass, but the ease with which I can do it.

Mike Masnick (profile) says:

Re: My Hypocritical Solution: Verified Non-Anonymous Accounts Only

“Because I write under a pseudonym, my proposed solution is entirely hypocritical — but I believe that non-anonymous accounts, which are verified to use the correct name of the real person, would solve 99% of this problem.”

We’ve actually discussed this before, and it’s not true for a variety of reasons. First, Facebook already requires real names and there’s a ton of abuse there. Second, multiple studies on the topic have shown that "abuse" levels between anonymous and real-name accounts are really no different. Third, being anonymous has tremendous benefits that shouldn’t be tossed out just because some people abuse it.

PaulT (profile) says:

Re: Re: My Hypocritical Solution: Verified Non-Anonymous Accounts Only

“First, Facebook already requires real names and there’s a ton of abuse there”

Well, they say they do, but I don’t think it’s ever been enforced except in cases where they’re using it as a reason to kick people off after abuse has happened. Unless something’s changed recently, I don’t believe they’ve ever pre-vetted anyone.

PaulT (profile) says:

Re: Re: Re:2 My Hypocritical Solution: Verified Non-Anonymous Accounts Only

Well, that’s kind of my point.

In my Facebook feed I have as “friends” 3 dogs, one building, a number of completely made up people (test accounts from when I used to work for a company that produced Facebook games) and a couple of businesses (from before Facebook introduced pages). There’s also a few people I know using very obvious pseudonyms, and they’ve never had any issues despite being regular users. In fact, all of these accounts are still active despite them obviously not relating to a real name.

So, since Facebook really don’t do any active vetting of whether people are using their real names, it doesn’t seem right to say that Facebook are already forcing people not to be anonymous when using it.

Bergman (profile) says:

Re: Protection

For thousands of years, pretty much all of human history, there have been categories of people who it is okay to discriminate against. It’s socially acceptable to hate them and even seen as right-thinking and moral to do so.

Pick any race you care to name, any religion, any wealth level, any gender, and you will find that they have been the target of bigotry, etc. Jews, blacks, asians, native Americans, none of them are unique in this.

White people have never been exempt, you can find quite a few places around the world where being white gets you abused and discriminated against.

And in our Western societies, the Social Justice Warriors have decided that being white makes you exempt from having human rights, or deserving to be treated fairly. If any other race is proud of their heritage, it’s good and pure — but heaven help the white kid who is proud of his heritage, because he will be told that being proud of his heritage makes him evil.

PaulT (profile) says:

Re: Re: Protection

I was with you until your last paragraph, largely because it’s full of crap. I’ve never felt discriminated against, but I sure as hell share a race and gender with some ranting assholes who can’t accept that they can’t treat everyone else as inferior any more and have to hold back on their abuse of them. Nobody’s ever removed a human right from me because I’m white, no matter what you claim.

“he will be told that being proud of his heritage makes him evil.”

Define “proud of his heritage”. There might be something in the definition which gives you a clue. Generally speaking, there’s nothing wrong with being proud of your heritage, but there does seem to be a correlation between a certain type of “pride” and white nationalism – that correlation might be something you’re inadvertently referencing.

For example – being an Englishman, there’s generally nothing wrong with people being proud to be English. However, the white nationalists have tended to throw around the St George flag as a symbol of their violent racial hatred, and this has led to it being tarnished somewhat as a symbol. I’ve never seen anyone being told that they can’t be proud to be English/British, but it does tend to send a certain type of message if a person chooses the St George flag instead of the Union Flag to broadcast that.

It’s a shame, but the reason it’s objectionable to some is not because people are being told they can’t be “proud of their heritage”. It’s because people flying that flag have beaten and murdered people in its name.
