Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs they involve. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Twitter Removes Account For Pointing Users To Leaked Documents Obtained By A Hacking Collective (June 2020)

from the reporting-on-hacking dept

Summary: Late in June 2020, a leak-focused group known as “Distributed Denial of Secrets” (a.k.a., “DDoSecrets”) published a large collection of law enforcement documents apparently obtained by the hacking collective Anonymous.

The DDoSecrets data dump was timely, released as protests over the killing of a Black man by a white police officer continued around the nation and neared their second consecutive month. Links to the files hosted at DDoSecrets’ website spread quickly across Twitter, identified by the hashtag #BlueLeaks.

The 269-gigabyte trove of law enforcement data, emails, and other documents was taken from Netsential, which confirmed a security breach had led to the exfiltration of these files. The exfiltration was further acknowledged by the National Fusion Center Association, which told affected government agencies the stash included personally identifiable information. While this trove of data proved useful to activists and others seeking uncensored information about police activities, some expressed concern the personal info could be used to identify undercover officers or jeopardize ongoing investigations.

The first response from Twitter was to mark links to the DDoSecret files as potentially harmful to users. Users clicking on links to the data were told it might be unsafe to continue. The warning suggested the site might steal passwords, install malicious software, or harvest personal data. The final item on the list in the warning was a more accurate representation of the link destination: it said the link led to content that violated Twitter’s terms of service.

Twitter’s terms of service forbid users from “distributing” hacked content. This ban includes links to other sites hosting hacked content, as well as screenshots of forbidden content residing elsewhere on the web.

Shortly after the initial publication of the document trove, Twitter went further. It permanently banned DDoSecrets’ Twitter account over its tweets about the hacked data. It also began removing tweets from other accounts that linked to the site.

Decisions to be made by Twitter:

  • Should the policy against the posting of hacked material be as strictly enforced when the hacked content is potentially of public interest?
  • Should Twitter have different rules for “journalists” or “journalism organizations” with regard to the distribution of information?
  • How should Twitter distinguish “hacked” information from “leaked” information?
  • Should all hacked content be treated as a violation of site terms, even if it does not contain personal info and/or trade secrets?
  • How should Twitter handle mirrors of such content?
  • How should Twitter deal with the scenario in which someone links to the materials because of their newsworthiness, without even knowing the material was hacked?

Questions and policy implications to consider:

  • Does a strict policy against “distributing” hacked content negatively affect Twitter’s value as a source of breaking news?
  • Does the mirroring of hacked content significantly increase the difficulty and cost of moderation efforts?

Resolution: While DDoSecrets’ site remains up and running, its Twitter account does not. The permanent suspension of the account and additional moderation efforts have limited the spread of URLs linking to the apparently illicitly-obtained documents.



Comments on “Content Moderation Case Study: Twitter Removes Account For Pointing Users To Leaked Documents Obtained By A Hacking Collective (June 2020)”

5 Comments
Anonymous Coward says:

Can't link to hacked information?

The same can be said about the Panama Papers, and even the Pentagon Papers.

Does Twitter bar links to those papers, or reporting about those papers? If not, why not?

The Pentagon Papers pose a particular problem in this regard: they were just as much a government secret as the DDoSecrets trove. Prior to 2011, they were every bit as classified. Twitter had five years during which it theoretically should have applied the same policy to that earlier leak.

So sure, they don’t have to wrestle with policy about them today. But 10 years ago?

PaulT (profile) says:

Re: Can't link to hacked information?

"Does Twitter bar links to those papers, or reporting about those papers? If not, why not?"

I can’t speak to the actual policy, but I’d assume that at some point they get classed as historical data rather than current hacks.

"Twitter had 5 years there where they theoretically should have enacted the same policy on the earlier leak."

Is the policy the same, or has that changed in the intervening years?

That One Guy (profile) says:

Re: Re: Can't link to hacked information?

I can’t speak to the actual policy, but I’d assume that at some point they get classed as historical data rather than current hacks.

Which would completely gut the usefulness of them. ‘People had evidence of at-the-time ongoing corruption and violations of laws and rights, but we didn’t allow links to that evidence to be posted. Now that several years have passed and it’s all a moot point we will allow those links, assuming anyone cares at this point, just in case they want to see what was going on back then.’

That One Guy (profile) says:

Shooting your trustworthiness, and users, in the back

The first response from Twitter was to mark links to the DDoSecret files as potentially harmful to users. Users clicking on links to the data were told it might be unsafe to continue. The warning suggested the site might steal passwords, install malicious software, or harvest personal data. The final item on the list in the warning was a more accurate representation of the link destination: it said the link led to content that violated Twitter’s terms of service.

There is a huge difference between ‘following this link might compromise your computer and all that entails’ and ‘this link leads to content that violates our TOS’, and mixing those two up is a great way to screw Twitter’s users over by making them less likely to take any such warnings seriously.

If the content is a TOS violation, then mark it as such, or pull it if you really don’t want to be associated with it; if it’s malicious, then don’t allow the link in the first place. Conflating the two is a terrible practice that they should have known better than to adopt.
