The Codification Of Web DRM As A Censorship Tool
from the exceptions-that-create-a-rule dept
The ongoing fight at the W3C over Encrypted Media Extensions -- the HTML5 DRM scheme that several companies want ensconced in web standards -- took two worrying turns recently. First, Google slipped an important change into the latest Chrome update that removed the ability to disable its implementation of EME, further neutering the weak argument from supporters that the DRM is optional. But the other development is even more interesting -- and concerning:
Dozens of W3C members -- and hundreds of security professionals -- have asked the W3C to amend its policies so that its members can't use EME to silence security researchers and whistleblowers who want to warn web users that they are in danger from security vulnerabilities in browsers.
So far, the W3C has stonewalled on this. This weekend, the W3C executive announced that it would not make such an agreement part of the EME work, and endorsed the idea that the W3C should participate in creating new legal rights for companies to decide which true facts about browser defects can be disclosed and under what circumstances.
One of the major objections to EME has been the fact that, due to the anti-circumvention copyright laws of several countries, it would quickly become a tool for companies to censor or punish security researchers who find vulnerabilities in their software. The director of the standards body called for a new consensus solution to this problem but, unsurprisingly, "the team was unable to find such a resolution." So the new approach will be a forced compromise of sorts: instead of attempting to carve out clear and broad protections for security research, the W3C will work to establish narrower protections only for those who follow a set of best practices for reporting vulnerabilities. In the words of one supporter of the plan, it "won't make the world perfect, but we believe it is an achievable and worthwhile goal."
But this is not a real compromise. Rather, it's a tacit endorsement of the use of DRM for censoring security researchers. The argument is not about the degree to which such use is acceptable, but about whether it is appropriate at all. It's not -- but this legitimizes the idea that it is.
Remember: it's only illegal to circumvent DRM because of copyright law, which is not supposed to have anything to do with the act of exploring and researching software and publishing findings about how it functions. On paper, that's a side effect (though obviously a happy and intentional side effect for many DRM proponents). The argument at the W3C did not start because of an official plan to give software vendors a way to censor security research, but because that would be the ultimate effect of EME in many places thanks to copyright law. Codifying a set of practices for permissible security disclosures might be "better" than having no exception at all in that narrow practical sense, but it's also worse in that it effectively declares this an acceptable application of DRM technology in the first place. It could even make things worse overall, arming companies with a classic "they should have used the proper channels" argument.
In other words, this is a pure example of the often-misunderstood idea of an exception that proves a rule -- in this case, the rule that DRM is a way to control security researchers.
Of course, security research isn't the only thing at stake. Cory Doctorow was active on the mailing list in response to the announcement, pointing out the significant concerns raised by people who need special accessibility tools for various impairments, and the lack of substantial response:
The document with accessibility use-cases is quite specific, while all the dismissals of it have been very vague, and made appeals to authority ("technical experts who are passionate advocates for accessibility who have carefully assessed the technology over years have declared that there isn't a problem") rather than addressing those issues.
How, for example, would the 1 in 4000 people with photosensitive epilepsy be able to do lookaheads in videos to ensure that upcoming sequences passed the Harding Test without being able to decrypt the stream and post-process it through their own safety software? How would someone who was colorblind use Dankam to make realtime adjustments to the gamut of videos to accommodate them to the idiosyncrasies of their vision and neurology?
I would welcome substantive discussion on these issues -- rather than perfunctory dismissals. The fact that W3C members who specialize in providing adaptive technology to people with visual impairments on three continents have asked the Director to ensure that EME doesn't interfere with their work warrants a substantive reply.
For the moment, it doesn't look like any clear resolution to this debate is on the horizon inside the W3C. But these latest moves raise the concern that the pro-DRM faction will quietly move forward with making EME the norm (Doctorow also questioned the schedule for this stuff, and whether these "best practices" for security research will lag behind the publication of the standard). Of course, the best solution would be to reform copyright and get rid of the anti-circumvention laws that make this an issue in the first place.