Facebook Promises To Distinguish Takedowns From Governments; Whether They're For Illegal Content, Or Merely Site Rules Violations

from the that's-a-start dept

There’s an interesting discussion that happens in content moderation circles with regard to government requests for takedowns: are those requests about content that violates local laws, or about content that violates website policies? Many people lump these two things together, but they’re actually pretty different in practice. Obviously, if a government comes across content that violates the law, it seems reasonable for it to alert the platform and expect that the content will be removed (though there may be some questions about jurisdiction and such). However, when it’s just content that may violate site policy, some pretty big questions are raised. This gets to the “jawboning” question we’ve been discussing a lot lately, and to where the line is between a politician persuading a website to take something down and compelling it to do so.

There is one argument that suggests that a government pointing out content that might violate site policies is simply helping out. The website doesn’t want content that violates its standards, and the government reporting it is just like any other user reporting it (though, backed up by whatever credibility the government may or may not have). But, the flip side of that is, if the content is perfectly legal, this often starts to feel like a loophole through which government actors can engage in wink-wink-nudge-nudge censorship — just send a notification to the site that this particular content may not break the law, but hey, doesn’t it violate your policies?

A recent decision by the Oversight Board, and a corresponding statement from Facebook, suggests that Facebook is going to be clearer about when this situation happens. The case the Oversight Board reviewed was an interesting one. Here’s how it summarized the situation:

In January 2021, an Instagram user in the United States posted a picture of Abdullah Öcalan, one of the founding members of the Kurdistan Workers’ Party (PKK). The picture included the words “y’all ready for this conversation.” Underneath the picture the user wrote that it was time to talk about ending Öcalan’s isolation in prison on Imrali Island. They encouraged readers to engage in conversation about his imprisonment and the inhumane nature of solitary confinement.

Facebook removed the content for violating Instagram’s Community Guidelines after the post was automatically flagged for review (at this stage, the Board does not know if the content was removed by an automated system or through human review). These Guidelines, under the heading “follow the law,” set out that “Instagram is not a place to support or praise terrorism, organized crime, or hate groups.” The Guidelines link to Facebook’s Community Standard on Dangerous Individuals and Organizations. These rules clarify that Facebook also prohibits any support or praise for groups, leaders, or individuals involved in terrorist activity or other serious crimes committed by these groups. The PKK has been designated a terrorist organization by multiple countries, including Turkey, the United States, and the EU.

The user states in their appeal that Öcalan has been a political prisoner for decades and that banning any reference to him prevents discussions that could advance the position of the Kurdish people. They argue that Öcalan’s philosophy is peaceful and that his writings are widely available in bookshops and online. The user compares Öcalan’s imprisonment to that of former South African President Nelson Mandela, noting that discussion of Öcalan’s imprisonment should be allowed and encouraged.

In July, the Oversight Board overturned Facebook’s initial decision (though Facebook, once the Board took the case, had already admitted it removed the content in error, before the Board even ruled). A key part of the discussion was Facebook’s claim that it had “misplaced” an “internal policy exception” to the “Dangerous Individuals and Organizations” policy; that exception allowed discussion of human rights issues, even concerning those designated as “dangerous.” As with many other Oversight Board rulings, there were a bunch of other recommendations as well.

In early August, Facebook responded to the recommendations, and, as Evelyn Douek noted, buried deep within them (recommendation 11) was an admission that Facebook would start revealing information and data on requests from governments based on “Community Standards” violations, rather than legal violations. Here’s what the Oversight Board asked for:

Include information on the number of requests Facebook receives for content removals from governments that are based on Community Standards violations (as opposed to violations of national law), and the outcome of those requests.

While this may have been a minor part of this particular situation, it’s important information that has previously not been available. And, as Douek also notes, human rights activists have been begging Facebook for this data for ages, with no luck. But the Oversight Board seems to have made it happen. Here’s what Facebook had to say about it:

Our commitment: We are actively working to provide additional transparency when we remove content under our Community Standards following a formal report by a government, including the total number of requests we receive.

Considerations: As noted in our response to Recommendation 9 above, when we receive a formal government report about content that may violate local law, we first review it against our global Community Standards, just as we would review a report from any other source. If the content violates our Community Standards, we will remove it and count it in our Community Standards Enforcement Report. These reports are reviewed under a standardized process in the same way and against the same policies as reports from any other source. As a result, we are not currently able to report when we remove content based on a report by a government or from a Facebook user. In addition, we may receive reports of a piece of content that may violate our policies from multiple sources at the same time: for example, from a government and from user reports on Facebook. Such situations create additional challenges in determining whether content should be considered as removed in response to a government report.

We have been exploring ways to increase the level of transparency we provide to users and the public about requests we receive from governments, in line with best practices laid out by civil society efforts like the Santa Clara Principles and the Ranking Digital Rights project. We are prioritizing that work in response to this recommendation.

Next steps: We are planning work that will enable us to include information on content removed for violating our Community Standards following a formal report by a government, including the number of requests we receive, as a distinct category in our Transparency Center.

Again, as Douek points out, there is some wiggle room here in that Facebook committed to counting “formal” requests from government, which means that totally informal requests and suggestions — pure “jawboning” situations — might not be counted. But, still, this is a step forward in getting more transparency about how often governments push Facebook and Instagram to take down content that is legal, even if it may violate site policies.

Companies: facebook, oversight board


Comments on “Facebook Promises To Distinguish Takedowns From Governments; Whether They're For Illegal Content, Or Merely Site Rules Violations”


This comment has been flagged by the community.

Chozen says:

Its Crime

Anytime an agent of the state reports any legal speech to Facebook or any other social media all that social media should do is report the agent to the DOJ civil rights division for violation of 18 U.S. Code § 241 – Conspiracy against rights. To even humor the request and fail to report the crime is to involve itself in the illegal conspiracy.

This comment has been deemed insightful by the community.
Rocky says:

Re: Its Crime

Title 18, U.S.C., Section 241 – Conspiracy Against Rights
This statute makes it unlawful for two or more persons to conspire to injure, oppress, threaten, or intimidate any person of any state, territory or district in the free exercise or enjoyment of any right or privilege secured to him/her by the Constitution or the laws of the United States, (or because of his/her having exercised the same).

It further makes it unlawful for two or more persons to go in disguise on the highway or on the premises of another with the intent to prevent or hinder his/her free exercise or enjoyment of any rights so secured.

I would be interested in hearing your legal theory how this applies. It should be an entertaining read in stupidity considering the other legal theories you have mentioned.

This comment has been flagged by the community.

Rocky says:

Re: Re: Re: Its Crime

You really need to learn what words mean. If anyone, regardless of their governmental status, reports speech to Facebook et al, it’s not a conspiracy.

If someone, regardless of their governmental status, conspires with Facebook et al to stop someone from speaking, 18 USC 241 may apply after examining the mens rea requirements and defining the predicate right.

Vic says:

And don’t forget that the government could very easily be hiding. I am still waiting when FB will realize that it takes down too many posts and "digitally jails" for weeks and months the voices of opposition to Russian government exactly because the troll farms’ jobs are now – flooding the FB complaint department (or however it’s called). MZ has really become Putin’s best friend now!
