Be Cautious About Big Internet Platforms Bearing Plans For Global Censorship
from the let's-not-get-carried-away-here dept
In the wake of the Christchurch shooting massacre in New Zealand, there has been a somewhat odd focus on the internet platforms — mainly those that ended up hosting copies of the killer’s livestream of the attack. As we previously discussed, this is literally blaming the messenger, and taking away focus from the much deeper issues that led up to the attack. Still, in response, Microsoft’s Brad Smith decided to step forward with a plan to coordinate among big internet companies a system for blocking and taking down such content.
Ultimately, we need to develop an industrywide approach that will be principled, comprehensive and effective. The best way to pursue this is to take new and concrete steps quickly in ways that build upon what already exists.
Smith points to an earlier agreement between YouTube, Facebook, Twitter and Microsoft to form GIFCT, the Global Internet Forum to Counter Terrorism, by which the various big platforms share hashes of content deemed “terrorist content” so they can all spot it across their platforms. Here, Smith suggests expanding that effort:
We need to take new steps to stop perpetrators from posting and sharing acts of violence against innocent people. New and more powerful technology tools can contribute even more than they have already. We must work across the industry to continue advancing existing technologies, like PhotoDNA, that identify and apply digital hashes (a kind of digital identifier) to known violent content. We must also continue to improve upon newer, AI-based technologies that can detect whether brand-new content may contain violence. These technologies can enable us more granularly to improve the ability to remove violent video content. For example, while robust hashing technologies allow automated tools to detect additional copies already flagged as violent, we need to further advance technology to better identify and catch edited versions of the same video.
We should also pursue new steps beyond the posting of content. For example, we should explore browser-based solutions — building on ideas like safe search — to block the accessing of such content at the point when people attempt to view and download it.
We should pursue all these steps with a community spirit that will share our learning and technology across the industry through open source and other collaborative mechanisms. This is the only way for the tech sector as a whole to do what will be required to be more effective.
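The gap Smith describes between exact hashing and "robust" hashing is worth making concrete. The sketch below is purely illustrative (PhotoDNA's actual algorithm is proprietary and far more sophisticated; the byte sequences and the toy "difference hash" here are invented for demonstration): an exact cryptographic hash changes completely on the slightest edit, which is why catching edited copies requires perceptual hashes that tolerate small changes.

```python
# Illustrative only -- not PhotoDNA. Shows why exact hashes miss edited
# copies, and how a toy perceptual hash can tolerate small changes.
import hashlib

original = bytes([10, 12, 40, 41, 200, 201, 90, 91])   # stand-in for pixel data
edited   = bytes([11, 12, 40, 42, 199, 201, 90, 92])   # lightly "re-encoded" copy

# 1) Exact hashing: any change at all yields a completely different digest,
#    so a hash-sharing database only catches byte-identical copies.
assert hashlib.sha256(original).hexdigest() != hashlib.sha256(edited).hexdigest()

# 2) Toy perceptual hash: one bit per adjacent-byte comparison (a crude
#    "difference hash"). Small value changes rarely flip any bit.
def dhash(data: bytes) -> int:
    bits = 0
    for a, b in zip(data, data[1:]):
        bits = (bits << 1) | (1 if a < b else 0)
    return bits

def hamming(x: int, y: int) -> int:
    # Number of differing bits between two hashes.
    return bin(x ^ y).count("1")

# The perceptual hashes of the original and the edited copy stay close,
# so a distance threshold can flag the edited version as a match.
assert hamming(dhash(original), dhash(edited)) <= 2
```

The flip side, relevant to Llanso's concerns below, is that any such similarity threshold also controls how aggressively near-matches get swept in — a tuning decision made entirely inside the consortium.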
Some of this may be reasonable, but we should be careful. As Emma Llanso neatly lays out in a series of tweets, before we expand the power and role of GIFCT, we should take care of many of the existing concerns with the program. Here’s a (lightly edited) transcription of Llanso’s concerns:
In Brad Smith’s post on Microsoft’s response to the New Zealand attacks, we see another example of a company promoting an expanded role for the GIFCT without addressing any of the long-standing transparency and accountability issues with the consortium. Smith makes several proposals to further centralize and coordinate content-blocking by major tech companies and fails to include any real discussion of transparency, external accountability to users, or safeguards against censorship.
The closest he gets is describing a “joint virtual command center” of tech companies to coordinate during major events, which would enable tech companies to ensure they “avoid restricting communications that [tech companies unilaterally] decide are in the public interest”. Public interest must be part of the analysis, but media orgs & nations have come to different conclusions about how to cover the NZ attacks. It’s naive to suggest that a consensus view of “public interest” could, much less ought to, be set by a consortium of US-born tech companies.
There’s also a chilling call to “explore browser-based solutions” to block people’s ability to view or download content, with no recognition of how dangerous it is to push censorship deeper into infrastructure. “Safe-search” is user-controlled; would MSFT’s terror-block be as well?
Smith is calling for discussion about how tech can/should be involved in responding to terrorism, which is reasonable. But any discussion that fails to include transparency and safeguards against censorship, from the very beginning, is irresponsible. I know that many people’s instincts right now are focused on how to take more content down faster, but as Smith notes, “the public rightly expects companies to apply a higher standard.” Takedown policies without safeguards are incomplete and are not “solutions”.
Llanso makes a number of good points here, but a key one stands out to me: while coordination and agreement to act together may sound like a good way to approach global-scale issues that can move from platform to platform, it also suggests that there is only one solution to such content (an outright ban across all platforms). That takes more creative or alternative approaches out of the equation. It also strips away context. As we’ve discussed before, in some cases someone’s “terrorist content” is actually evidence of war crimes that it might be useful for someone to have.
Yes, lots of people are rightly concerned that videos and manifestos related to attacks may inspire copycat (or worse) attacks. But trying to stuff the entire thing down the memory hole in a single coordinated plan — where the big internet platforms are the final arbiters of everything — hardly seems like the right solution either. Indeed, taking such a position actually makes it that much harder for different platforms to experiment with different, and possibly more effective, ways of dealing with this kind of content.