Rights Groups Demand Facebook Set Up Real Due Process Around Content Moderation

from the seems-like-a-good-idea dept

For quite some time now, when discussing how the various giant platforms should manage the nearly impossible challenges of content moderation, one argument I’ve fallen back on again and again is that they need to provide real due process. This is because, while there are all sorts of concerns about content moderation, the number of false positives that lead to “good” content being taken down is staggering. Lots of people like to point and laugh at these, but any serious understanding of content moderation at scale has to recognize that when you need to process many, many thousands of requests per day, often involving complex or nuanced issues, many, many mistakes are going to be made. And thus, you need a clear and transparent process that enables review.

A bunch of public interest groups (including EFF) have now sent an open letter to Mark Zuckerberg, requesting that Facebook significantly change its content removal appeal process, to be much clearer and much more accountable. The request first covers how clear the notice should be concerning what content caused the restriction and why:

Notice: Clearly explain to users why their content has been restricted.

  • Notifications should include the specific clause from the Community Standards that the content was found to violate.
  • Notice should be sufficiently detailed to allow the user to identify the specific content that was restricted, and should include information about how the content was detected, evaluated, and removed.
  • Individuals must have clear information about how to appeal the decision.

And then it goes into many more details on how an appeal should work, involving actual transparency, more detailed explanations, and knowledge that an appeal actually goes to someone who didn’t make the initial decision:

Appeals: Provide users with a chance to appeal content moderation decisions.

  • The appeals mechanism should be easily accessible and easy to use.
  • Appeals should be subject to review by a person or panel of persons not involved in the initial decision.
  • Users must have the right to propose new evidence or material to be considered in the review.
  • Appeals should result in a prompt determination and reply to the user.
  • Any exceptions to the principle of universal appeals should be clearly disclosed and compatible with international human rights principles.
  • Facebook should collaborate with other stakeholders to develop new independent self-regulatory mechanisms for social media that will provide greater accountability.
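
To make those requirements a bit more concrete, here is a purely hypothetical sketch of the kind of structured record a platform could attach to each removal and appeal. This is a sketch under assumptions, not anything Facebook actually exposes; every field name below is invented for illustration.

    // Hypothetical sketch: illustrative field names only, not a real Facebook API.
    type DetectionMethod = "automated" | "user-report" | "human-review";

    interface ModerationNotice {
      contentId: string;          // identifies the specific content that was restricted
      policyClause: string;       // the specific Community Standards clause violated
      detection: DetectionMethod; // how the content was detected and evaluated
      explanation: string;        // why the content was found to violate that clause
      appealInstructions: string; // clear information about how to appeal
    }

    interface Appeal {
      noticeId: string;           // ties the appeal back to the original notice
      newEvidence: string[];      // new evidence or material supplied by the user
      reviewerIds: string[];      // should exclude whoever made the initial decision
      decision?: "upheld" | "overturned";
      decidedAt?: string;         // ISO timestamp; determinations should be prompt
    }

    // Example of what a user-facing notice could carry:
    const exampleNotice: ModerationNotice = {
      contentId: "post:12345",
      policyClause: "Community Standards, Hate Speech section",
      detection: "automated",
      explanation: "A classifier flagged a slur in the post text.",
      appealInstructions: "Use the Request Review link within 30 days.",
    };

The point of the sketch is simply that everything on the groups' list is, presumably, information the platform already has at the moment it takes content down; the ask is to surface it to the user in a consistent form.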

Frankly, I think this is a great list, and am dismayed that the large platforms haven’t implemented something like this already. For example, we recently wrote about Google deeming our blog post on the difficulty of content moderation to be “dangerous or derogatory.” In that case, we initially got no further information other than that claim. And the appeals process was totally opaque. The first time we appealed, the ruling was overturned (again with no explanation), and a month later, when that article got dinged again, the appeal was rejected.

After we published that article, an employee from the AdSense team eventually reached out to us to explain that it was “likely” that some of the comments on that article were what triggered the problems. After we pointed out that there were well over 300 comments on the article, we were eventually pointed to one particular comment that used some slurs, though the comment used them to demonstrate the ridiculousness of automated filters, rather than as derogatory epithets.

However, as I noted in my response, my main complaint was not Google’s silly setup, but the fact that it provided no actual guidance. We were not told that it was a comment that was to blame until after our published article resulted in someone higher up on the AdSense team reaching out. I pointed out that it seemed only reasonable that Google should share with us specifically what term it felt we had violated and which content was the problem so that we could then make an informed decision. Similarly, the appeals process was entirely opaque.

While the reasons that Google and Facebook have not yet created this kind of due process are obvious (it would be kinda costly, for one), it does seem like such a system will be increasingly important, and it’s good to see these groups pushing Facebook on this in particular.

Of course, earlier this year, Zuckerberg had floated an idea of an independent (i.e. outside of Facebook) third party board that could handle these kinds of content moderation appeals, and… a bunch of people freaked out, falsely claiming that Zuckerberg wanted to create a special Facebook Supreme Court (even as he was actually advocating for having a body outside of Facebook reviewing Facebook’s decisions).

No matter what, it would be good for the large platforms to start taking these issues seriously, not only for reasons of basic fairness and transparency, but also because it would make the public more comfortable with how this process works. When it is, as currently constructed, a giant black box, that leads to a lot more anger and conspiracy thinking about how content moderation actually works.

Update: It appears that shortly after this post went out, Zuckerberg told reporters that Facebook is now going ahead with creating an independent body to handle appeals. We’ll have more on this once some details are available.

Companies: aclu, eff, facebook


Comments on “Rights Groups Demand Facebook Set Up Real Due Process Around Content Moderation”

34 Comments
Band N Boston says:

Contradicting own statement Section 230 gives arbitrary power.

"And, I think it’s fairly important to state that these platforms have their own First Amendment rights, which allow them to deny service to anyone."

https://www.techdirt.com/articles/20170825/01300738081/nazis-internet-policing-content-free-speech.shtml

So where do you find these new user’s "rights" in the FLAT unqualified statement you made there?

Dan (profile) says:

Re: Contradicting own statement Section 230 gives arbitrary power.

  • Facebook (or Google, or Twitter, or any other platform) has the absolute legal right to moderate content in any way, on any basis, and with any (or no) degree of transparency they wish.
  • It is foolish (and perhaps even morally wrong) for them to moderate in an arbitrary and opaque manner.

These two statements are entirely consistent with each other. But for some reason you seem to believe they contradict each other.

Anonymous Coward says:

Re: Contradicting own statement Section 230 gives arbitrary power.

Saying that you “can” do something is not the same as saying that you “will” do something, or that you “should” do something. Nothing here suggests that Facebook “can’t” be arbitrary and unhelpful in their content moderation decisions, just that they “shouldn’t” be.

Nor did he ever use the term “user rights.” Because these are not “rights” that the user has which independently provide power over facebook, but terms of a contract that facebook might (or might not) voluntarily agree to.

James Burkhardt (profile) says:

Re: Contradicting own statement Section 230 gives arbitrary power.

As others point out, you are conflating Facebook’s legal rights with moral and ethical arguments about how Facebook should exercise its rights. They are two separate but linked issues.

Your failure to address this, combined with your combative tone, is why you get a flag.

Band N Boston says:

Let's have some details on your own alleged "voting system".

When it is, as currently construed, a giant black box, that leads to a lot more anger and conspiracy thinking over how content moderation actually works.

The KEY one of course is whether an Administrator okays the censoring with added editorial warning that you euphemize as "hiding".

You call for others of vastly larger scale to be transparent but to say the least, don’t lead by example.

Anonymous Coward says:

Re: Let's have some details on your own alleged "voting system".

How’s this for transparent:

I flagged you because you’re a belligerent, often incomprehensible fool with massive holes in your understanding of everything and yet insult others for what you perceive (wrongly) to be holes in their understanding.

There ya go. Now piss off.

Anonymous Coward says:

Re: Let's have some details on your own alleged "voting system".

Since you asked:

I flagged you because you never have anything useful to contribute, and your lack of substance has ceased to be amusing and moved into the realm of the tiresome.

Put simply, I’m telling you to shut up and go away. You won’t listen, of course, which is why the flag is there.

Gary (profile) says:

Re: Techdirt has a great voting system

Blue complains about getting downvoted to oblivion. Everyone else cheers.
Blue demands “Transparency” and everyone laughs.

Because you can’t email a transparency report to a nameless troll. How would Mike track the stats for the cowards and let them see the info without a logon and a working email?

Blue lies, and doesn’t know what Common Law is.

Bamboo Harvester (profile) says:

The point being missed here is...

…Facebook’s sales product is… YOU. Users. They mine, farm, and sell, sell, sell all that user data.

They don’t want to boot ANYONE off for any reason – that’s an inventory loss each time they do so.

BIG markets out there for every possible “group”, even the most radical hate groups.

Facebook is a *company*. Companies exist to make money.

It’s not that difficult to figure out.

Bamboo Harvester (profile) says:

Re: Re: The point being missed here is...

They’re purging the least profitable inventory.

Facebook isn’t going to kick off high profit users/groups, no matter if they start a Nuke the Gay Whales organization.

If Neo-Nazis suddenly stop buying tons of “memorabilia” and such crap, they’ll get parsed out as well. If gays suddenly stop buying from Facebook ads, they’ll get weeded out as well.

Facebook is a BUSINESS.

James Burkhardt (profile) says:

Re: Re: Re: The point being missed here is...

So first off, we are largely talking about content moderation, not user account moderation, which suggests a misunderstanding of the base premises.

Your train of thought is a reason for why they should do moderation, pruning off content that would drive away advertisers. But ‘should we do moderation’ is not in debate. The debate is around how that moderation occurs, because of clear and obvious variance. Your assertion seems to be that the variance happens because they are this nebulous business entity that is just pumping short term metrics, I guess? EFF, Techdirt, and people who understand that a business is a collection of people, believe this variance is occurring due to the need for individuals to make snap value judgments, often without the context to understand the content at issue. This forces personal biases to the forefront. The EFF is proposing a system that requires transparency so that appeals can occur, or corrective action taken.

Anonymous Coward says:

Re: Re: Re: The point being missed here is...

"If Neo-Nazis suddenly stop buying tons of "memorabilia" and such crap, they’ll get parsed out as well. If gays suddenly stop buying from Facebook ads, they’ll get weeded out as well.

Facebook is a BUSINESS."

A business that seeks to prosper long term will always have to consider issues other than immediate cash flow.

For instance, YouTube must have lost plenty of money after implementing the decision a year or two ago to ban paid advertising from firearm manufacturers, which specifically targeted viewers of Youtube’s many gun channels. Youtube might not have realized that this policy change would have the ultimate effect of turning most gun reviews into paid promotions when manufacturers switched from running YouTube ads to paying video makers directly. Youtube then cracked down a second time by banning links on YouTube pages going to external gun sites. One thing that Youtube has never taken any interest in is whether a product reviewer is actually a paid shill, an issue that cuts right to the core of ethical conduct, and one type of Youtube’s moderation efforts that viewers would actually welcome.

Through a continuous game of cat and mouse, it’s rather obvious that Youtube is determined to kill off a highly profitable community of consumers rather than profiting from their interests, presumably to show the world (as well as placate its leftist activist employees) that Youtube is on the "correct" side of a highly divisive political issue.

Christenson says:

Re: Re: Re:2 The point being missed here is...

I think you are mistaken about youtube and gun control, in that “gun nuts” are a relatively small part of their user population, and a relatively small part of their advertising revenue.

Youtube has the same problem as any large newspaper or TV station did in the past: they can’t afford to piss off the vast majority of their viewers or advertisers.

I think youtube, as a corporation, is large enough to have become fairly amoral, and is mostly interested in not angering the majority.

The same can be said for a “fair” moderation process: unfairness, or the perception thereof, can drive away business.

Anonymous Coward says:

I find it interesting how the views of some commentators, including Mr. Masnick here, have evolved over time from “should not tell people why they were censored or else trolls will game it” to “people must be informed of why they were censored and must be able to appeal the decision”.

I’m not criticizing. Just amused.

Christenson says:

Re: Omitting user comments from search

That might work for Techdirt, but it will omit the comments from such things as search results. Gets complicated if I want to, say, find the last complaint about “blue” or something, or if I want to find a suggestion from the comments that I half-remember.

And by the way, there’s a decent chance I wrote the comment involved, using an example to point out that whether a given sentence would be acceptable or not depended heavily on context, which computers are bad at. Just consider your favorite hate speech, then consider someone complaining, quite correctly, about me saying it and quoting it. Quotes of bad stuff are part of a journalist’s stock in trade.

Darkness Of Course (profile) says:

So, nobody at FB/Google is a software dev

Well, let me clarify that: A good software dev.

The search flags are set across the wide swath of the web in Google’s case, and across the subset of the web that is FB. If something is flagged, the reason for the flag is KNOWN at that time. Proper error returns, or error logging, must show what (and hopefully where) the error was, as well as the type of error.

TechDirt article X, Hate speech flag, comments section.

Based on Google’s use of Go and their penchant for ML driven solutions one would be surprised if they ever managed to do it correctly.

A detailed error might be difficult for their solution; Which is still fn-ing wrong.

Anonymous Coward says:

Re: So, nobody at FB/Google is a software dev

It is not necessarily valid to place the blame for whatever at the feet of software developers when management directs what they work on, what the requirements are and what equipment they can use. Developers many times indicate their unwillingness to build substandard product and get down voted by the scrummy Agile twits who suck up to management.

Guess what happens when the customer does not like the product … who gets yelled at – never gets old does it?

Christenson says:

Re: Re: So, nobody at FB/Google is a software dev

Agile has a place and a point…you want to find out as soon as you can whether you have the right idea or not, so start small and cover the essentials.

But I suspect that the end user is often left out of these setups as a stakeholder; remember, users don’t write checks to Google.

John Smith says:

I’ve already said that I believe AOL had greater censorship power in the 1990s, relative to the size of the internet, than any company in history, and that censorship problems in general are self-correcting. Even Gab found a new host and benefitted from the Streisand Effect, as attempts to silence it only increased its media exposure.

Gab itself was a byproduct of Twitter censorship. There is, of course, always USENET for those who truly want free speech. USENET’s sharp decline in the past decade or two shows that this is simply not a big priority for a public that seems to want to be spoonfed its information, “fake news” or not.

Is there any American who could be trusted with absolute censorship power?
