from the this-isn't-easy dept
And apparently Twitter is listening. As Sarah Jeong notes in a great article over at The Verge, it appears that Twitter recently deployed a feature to block some very abusive tweets, going further than its past tactics of pulling abusive tweets and killing accounts. In this case, it blocked certain tweets from being sent altogether. It appeared to be a filter that combined certain keywords with an @-mention of a person receiving a lot of abuse. Of course, as with many such things, the abusers just sought ways around the filter.
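Twitter hasn't disclosed how the filter matched tweets, but the cat-and-mouse dynamic is easy to illustrate: a naive whole-word keyword check misses a blocked term once dashes are inserted between its letters, while a filter that strips separators first still catches it. A minimal sketch (hypothetical code, not Twitter's):

```python
import re

def naive_match(text: str, term: str) -> bool:
    # Whole-word keyword check, as a simple filter might do.
    return re.search(rf"\b{re.escape(term)}\b", text.lower()) is not None

def normalized_match(text: str, term: str) -> bool:
    # Collapse dashes, dots, and whitespace first, so "r-a-t" becomes "rat".
    collapsed = re.sub(r"[-._*\s]+", "", text.lower())
    return term in collapsed

print(naive_match("you r-a-t", "rat"))        # False: dashes evade the filter
print(normalized_match("you r-a-t", "rat"))   # True: normalization catches it
```

The normalized version has its own failure mode: substring matching over collapsed text flags innocent words that happen to contain a blocked term (the classic Scunthorpe problem), which is one reason keyword filtering at scale is hard to get right.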
For a while, at least, Berger didn’t receive any tweets containing anti-Semitic slurs, including relatively innocuous words like "rat." If an account attempted to @-mention her in a tweet containing certain slurs, it would receive an error message, and the tweet would not be allowed to send. Frustrated by their inability to tweet at Berger, the harassers began to find novel ways to defeat the filter, like using dashes between the letters of slurs, or pictures to evade the text filters. One white supremacist site documented various ways to evade Twitter’s censorship, urging others to "keep this rolling, no matter what."

The assumption, from those with at least some understanding of what happened, is that it was done via Twitter's spam filter:
Trollish and abusive behavior is definitely a concern. And making people feel welcome on services like Twitter, without fear of harassment, seems like a worthy goal. But there are also some pretty serious concerns about how this is all happening in a non-transparent manner, leaving it open to abuse in its own way:
A source familiar with the incident told us, "Things were used that were definitely abnormal."
A former engineer at Twitter, speaking on the condition of anonymity, agreed, saying, "There’s no system expressly designed to censor communication between individuals. … It’s not normal, what they’re doing."
He and another former Twitter employee speculated that the censorship might have been repurposed from anti-spam tools—in particular, BotMaker, which is described here in an engineering blog post by Twitter. BotMaker can, according to Twitter "deny any Tweets" that match certain conditions. A tweet that runs afoul of BotMaker will simply be prevented from being sent out—an error message will pop up instead. The system is, according to a source, "really open-ended" and is frequently edited by contractors under wide-ranging conditions in order to effectively fight spam.
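Twitter's engineering post does not publish BotMaker's rule syntax, so the following is only a guess at the shape of such a deny condition: reject a tweet when it @-mentions a protected account and contains a blocklisted term. All names here are hypothetical:

```python
import re

# Hypothetical deny rule in the spirit of what the article describes:
# block tweets that @-mention a protected account AND contain a
# blocklisted term. Twitter has not published BotMaker's actual format.
BLOCKED_TERMS = {"rat"}                 # example term from the article
PROTECTED_HANDLES = {"targetaccount"}   # hypothetical protected user

def deny(tweet_text: str) -> bool:
    """Return True if the tweet should be rejected before it is sent."""
    text = tweet_text.lower()
    mentions = set(re.findall(r"@(\w+)", text))
    if not mentions & PROTECTED_HANDLES:
        return False  # rule only applies to tweets at the protected user
    words = set(re.findall(r"[a-z]+", text))
    return bool(words & BLOCKED_TERMS)

print(deny("@targetaccount you rat"))   # True: the sender would see an error
print(deny("@someoneelse you rat"))     # False: the rule is user-specific
```

Because a rule this narrow can be edited quickly and applies only to a single account, it is effective against a specific harassment campaign but invisible to everyone else, which is exactly the transparency concern raised above.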
What’s worrisome to free speech advocacy groups like the EFF about this incident is how quietly it happened. Others may see the bigger problem being the fact that it appears to have been done for the benefit of a single, high-profile user, rather than to fix Twitter’s larger harassment issues. The selective censorship doesn’t seem to reflect a change in Twitter abuse policies or how they handle abuse directed at the average user; aside from a vague public statement by Twitter that elides the specific details of the unprecedented move, and a few, mostly-unread complaints by white supremacists, the entire thing could have gone unnoticed.

It opens up some pretty serious questions, and you run into the same slippery slope that questions around legislating against harassing speech, or even things like revenge porn, tend to raise. It's not that anyone wants to support those activities; the harassment here is absolutely deplorable. But there is a serious concern about where it leads when an intermediary suddenly takes it upon itself to determine what is and is not acceptable speech. Was it used for good reasons in this case? Probably. But will it always be done in that manner? That's where it gets a lot trickier.
Eva Galperin thinks incidents like these could be kept in check by transparency reports documenting the application of the terms of service, similar to how Twitter already puts out transparency reports for government requests and DMCA notices. But while a transparency report might offer users better information as to how and why their tweets are removed, some still worry about the free-speech ramifications of what transpired. One source familiar with the matter said that the tools Twitter is testing "are extremely aggressive and could be preventing political speech down the road." He added, "Are these systems going to be used whenever politicians are upset about something?"