The Codification Of Web DRM As A Censorship Tool

from the exceptions-that-create-a-rule dept

The ongoing fight at the W3C over Encrypted Media Extensions — the HTML5 DRM scheme that several companies want ensconced in web standards — took two worrying turns recently. Firstly, Google slipped an important change into the latest Chrome update that removed the ability to disable its implementation of EME, further neutering the weak argument of supporters that the DRM is optional. But the other development is even more interesting — and concerning:

Dozens of W3C members — and hundreds of security professionals — have asked the W3C to amend its policies so that its members can’t use EME to silence security researchers and whistleblowers who want to warn web users that they are in danger from security vulnerabilities in browsers.

So far, the W3C has stonewalled on this. This weekend, the W3C executive announced that it would not make such an agreement part of the EME work, and endorsed the idea that the W3C should participate in creating new legal rights for companies to decide which true facts about browser defects can be disclosed and under what circumstances.

One of the major objections to EME has been the fact that, due to the anti-circumvention copyright laws of several countries, it would quickly become a tool for companies to censor or punish security researchers who find vulnerabilities in their software. The director of the standards body called for a new consensus solution to this problem but, unsurprisingly, “the team was unable to find such a resolution.” So the new approach will be a forced compromise of sorts in which, instead of attempting to carve out clear and broad protections for security research, they will work to establish narrower protections only for those who follow a set of best practices for reporting vulnerabilities. In the words of one supporter of the plan, it “won’t make the world perfect, but we believe it is an achievable and worthwhile goal.”

But this is not a real compromise. Rather, it's a tacit endorsement of the use of DRM for censoring security researchers: the argument is not about the degree to which such use is acceptable, but about whether such use is appropriate at all. It's not, but this legitimizes the idea that it is.

Remember: it’s only illegal to circumvent DRM due to copyright law, which is not supposed to have anything to do with the act of exploring and researching software and publishing findings about how it functions. On paper, that’s a side effect (though obviously a happy and intentional side effect for many DRM proponents). The argument at the W3C did not start because of an official plan to give software vendors a way to censor security research, but because that would be the ultimate effect of EME in many places thanks to copyright law. Codifying a set of practices for permissible security disclosures might be “better” than having no exception at all in that narrow practical sense, but it’s also worse for effectively declaring that to be an acceptable application of DRM technology in the first place. It could even make things worse overall, arming companies with a classic “they should have used the proper channels” argument.

In other words, this is a pure example of the often-misunderstood idea of an exception that proves a rule — in this case, the rule that DRM is a way to control security researchers.

Of course, security research isn’t the only thing at stake. Cory Doctorow was active on the mailing list in response to the announcement, pointing out the significant concerns raised by people who need special accessibility tools for various impairments, and the lack of substantial response:

The document with accessibility use-cases is quite specific, while all the dismissals of it have been very vague, and made appeals to authority (“technical experts who are passionate advocates for accessibility who have carefully assessed the technology over years have declared that there isn’t a problem”) rather than addressing those issues.

How, for example, would the 1 in 4000 people with photosensitive epilepsy be able to do lookaheads in videos to ensure that upcoming sequences passed the Harding Test without being able to decrypt the stream and post-process it through their own safety software? How would someone who was colorblind use Dankam to make realtime adjustments to the gamut of videos to accommodate them to the idiosyncrasies of their vision and neurology?

I would welcome substantive discussion on these issues — rather than perfunctory dismissals. The fact that W3C members who specialize in providing adaptive technology to people with visual impairments on three continents have asked the Director to ensure that EME doesn’t interfere with their work warrants a substantive reply.
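
To make that concrete, here is a minimal sketch (ours, not Doctorow's, with a made-up placeholder filter) of the kind of per-frame, client-side post-processing such accessibility tools rely on. The point is not the filter itself but that this approach needs access to raw decrypted pixels, which is exactly what an EME-protected stream is designed to withhold from the page:

```typescript
// Crude per-frame post-processing sketch. With an EME-protected stream this
// fails: decryption happens inside the closed CDM, so drawImage() never
// yields usable pixels for the page to adjust.
const video = document.querySelector("video")!;
const canvas = document.querySelector("canvas")!;
const ctx = canvas.getContext("2d")!;

function processFrame(): void {
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  const frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
  const px = frame.data;
  for (let i = 0; i < px.length; i += 4) {
    // Placeholder red/green shift standing in for a real colorblindness filter.
    px[i] = Math.min(255, px[i] * 1.2);         // red channel
    px[i + 1] = Math.min(255, px[i + 1] * 0.8); // green channel
  }
  ctx.putImageData(frame, 0, 0);
  requestAnimationFrame(processFrame);
}

video.addEventListener("play", () => requestAnimationFrame(processFrame));
```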

For the moment, it doesn’t look like any clear resolution to this debate is on the horizon inside the W3C. But these latest moves raise the concern that the pro-DRM faction will quietly move forward with making EME the norm (Doctorow also questioned the schedule for this stuff, and whether these “best practices” for security research will lag behind the publication of the standard). Of course, the best solution would be to reform copyright and get rid of the anti-circumvention laws that make this an issue in the first place.

Companies: google, w3c

Comments on “The Codification Of Web DRM As A Censorship Tool”

30 Comments
Ninja (profile) says:

Even current DRM schemes already fail. There are tools to safely extract the stream from Netflix and save it locally. The only real thing HTML5 DRM will achieve is to make users less safe by making the platform more vulnerable.

Of course, we can always go without. I've learned to, and it saves tons of money that can instead go toward more satisfying goals. If the service uses EME, just go without and let them fade into irrelevance. (Of course, right now there's no critical mass to make this stand meaningful, but it has to start somewhere.)

Machin Shin (profile) says:

Re: Re:

I have seen some great videos of Doctorow giving presentations where he points out how stupid DRM is.

You can use encryption to keep communication between two people secure from a third person. DRM, though, is encryption where you're sending the data to someone and giving them the key to open it, while hoping they don't find the key.

Encryption does not work if you give the attacker the key. As a result DRM does not and cannot work.
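
A toy illustration of that point (using Node's crypto module; the key, IV, and content are made up): the "protection" is real encryption, but the player has to carry the very key that undoes it.

```typescript
// Toy DRM: the content is genuinely encrypted, but the player ships with the key.
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

const key = randomBytes(16); // the vendor's "secret"...
const iv = randomBytes(16);

// "Protect" the content.
const cipher = createCipheriv("aes-128-cbc", key, iv);
const locked = Buffer.concat([cipher.update("the movie", "utf8"), cipher.final()]);

// ...but the player must hold the same key to play it back, so anyone who digs
// the key out of the player (memory dump, disassembly) can simply do this:
const decipher = createDecipheriv("aes-128-cbc", key, iv);
const unlocked = Buffer.concat([decipher.update(locked), decipher.final()]);
console.log(unlocked.toString("utf8")); // "the movie"
```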

Thad (user link) says:

Re: Re: Re:

Exactly. Instead, you get ridiculous situations like Blu-Ray: the keys have long since been found, and absolutely anyone can rip a Blu-Ray disc, but nobody on Linux (or, last I checked, MacOS) can actually play a Blu-Ray disc. Great job, MPAA; you clearly thought this one through.

The one exception I can think of is streaming interactive content: if you’re playing a game, using an app, etc., and your computer is just a dumb terminal sending inputs and receiving outputs; if the program is run entirely on the server side and its code is never stored in the client’s memory — that’s a case where DRM is uncrackable (short of breaking into the server and pulling data from it). But that’s not exactly DRM in the way we usually think of it.

Anything passive — books, movies, music, whatever — if there's a way to play it through your speakers, show it on your screen, etc., then there's a way to record it and do it again later. Anything that can be copied into volatile memory can also be copied into persistent storage. Ones and zeroes don't know the difference.

Machin Shin (profile) says:

Re: Re: Re: Re:

Blu-Ray DRM is actually what made me totally lose all respect for these systems. Years ago I figured I was going 100% legal.

I jumped over and started using Linux and mostly open software, following all the licensing and everything. Then I popped my brand new Blu-Ray into the disc drive to watch my movie and it wouldn't play.

Ok, I figured it was just me missing some software or codec. So off I went in search of how to play my movie. After a good bit of digging I found that there was no way to play the stupid thing off the disc. In fact, the only way to play it was to RIP THE MOVIE.

So in the end, the stupid DRM that is supposed to prevent me from making a copy would not let me watch my movie UNLESS I COPIED IT.

It's pretty much been all downhill from there. I tried to play by their stupid rules and they responded by locking me out of my legally purchased movie. Now I just enjoy content on my own terms.

Anonymous Coward says:

Re: Re: Re:2 Re:

I buy the DRM-hostage DVD movies and TV shows, and then I go online and download a pirated copy so I can watch on any device I choose. If the worthless douchebags ever sue me, I can point to my receipts from Target and show them the DVDs that I bought and paid for.

Another ridiculous DRM trend of the past few years is e-textbooks. So I paid over $150 for an e-textbook rental, and then I found out that the stupid fuckers set it up so that you can’t print out even a single page. For an advanced math course, is it completely unreasonable for a student to expect to be able to print out at least some of the pages? Of course, I found a workaround hack online to unlock the printing feature, but that really shouldn’t be necessary.

That One Guy (profile) says:

Re: Re: Re:3 Re:

I buy the DRM-hostage DVD movies and TV shows, and then I go online and download a pirated copy so I can watch on any device I choose. If the worthless douchebags ever sue me, I can point to my receipts from Target and show them the DVDs that I bought and paid for.

Which, and here's the kicker, wouldn't actually do anything to save you from a lawsuit large enough to buy one or more houses with. The delight that is copyright law means that having already bought a broken copy doesn't entitle you to download a copy that works; doing so breaks the law in exactly the same manner as if you had never bought the original copy in the first place.

That One Guy (profile) says:

"No really, trust us."

The document with accessibility use-cases is quite specific, while all the dismissals of it have been very vague, and made appeals to authority ("technical experts who are passionate advocates for accessibility who have carefully assessed the technology over years have declared that there isn’t a problem") rather than addressing those issues.

I would welcome substantive discussion on these issues — rather than perfunctory dismissals.

The default assumption when someone gives vague responses to specific concerns like that is that the concerns are accurate and the ones being asked are simply not honest enough to admit it. If the concerns were really baseless then they could easily address them directly and confirm that; dancing around and giving vague responses just makes it clear that the issues and worries are valid and, if anything, understate how bad things are.

That One Guy (profile) says:

"If you make confidential disclosure impossible, you make public disclosure inevitable."

(So much wrong here I missed another big point)

So far, the W3C has stonewalled on this. This weekend, the W3C executive announced that it would not make such an agreement part of the EME work, and endorsed the idea that the W3C should participate in creating new legal rights for companies to decide which true facts about browser defects can be disclosed and under what circumstances.

Someone has not been paying attention. I can certainly understand why companies might want to set up a 'proper channels' system like the one the government currently enjoys/abuses. But if white-hats and security researchers can't bring vulnerabilities to light in a way they believe will actually get the problem fixed, while still giving companies time to fix it before the vulnerability goes public, they're likely to either ignore those vulnerabilities entirely, opening the door for black-hats to find and exploit them, or to disclose them publicly with no warning.

Smart companies benefit when those that find problems in their programs feel safe telling the company directly, as it makes it more likely that the company will be in a position to deal with the problem before it’s widely known and exploited.

Unfortunately there are plenty of stupid companies out there that lash out and sue, or threaten to sue, anyone who finds flaws in their software, meaning that if the flaw is to be fixed, the one who finds it is much safer anonymously making it public so the company can't ignore it. Make it so that white-hats/security researchers must go through the companies, with the companies deciding even then what gets made public, and the 'go public from the get-go' option becomes all the more attractive.

Anonymous Coward says:

Re: Re:

When the people stop buying works from the creators who become mesmerized by the stacks of cash the publishers WAVE around in front of their faces. Sure, they don't really get near the cash they COULD get, but they sure do get dreamy over the crumbs from the tables!

When you sell your soul to the devil, welp… good luck with that contract!

Anonymous Coward says:

Can someone explain this passage?

“The argument at the W3C did not start because of an official plan to give software vendors a way to censor security research, but because that would be the ultimate effect of EME in many places thanks to copyright law.”

In what way would EME have the effect of giving software vendors a way to censor security research? I understand that DRM in general gives this capability, but how would EME grant it, and to whom? As I understand it, for EME to give this capability to someone, it has to bring something that wasn't already protected by anti-circumvention laws under those laws' purview—it has to turn something that didn't have DRM into something that does have DRM. How would it do this?

Anonymous Coward says:

Re: Can someone explain this passage?

If I understood it correctly, it placed the W3C squarely in the middle of proposing DRM as a sort of standard (or part of one), where previously it wasn't.
No doubt companies would/could still use DRM where they wanted it, but it would be "an extra". This way the standard allows for it (like the title says, it codifies it) without any safeguards for security research against all the wonderful copyright abuses for censoring it that we've come to know and love. A promise that all will be well if it goes through "proper channels" demonstrably means less than nothing nowadays.

Leigh Beadon (profile) says:

Re: Can someone explain this passage?

it has to turn something that didn’t have DRM into something that does have DRM

By ensconcing EME in the standards for the web, it turns every browser into something that has DRM. (Except those that take a principled stand by violating the standards).

Plus, sometimes the vulnerability is in the EME implementation itself: https://boingboing.net/2016/06/24/googles-version-of-the-w3c.html
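
For the curious, here's a rough sketch of the calls a page makes to use EME once it's baked into every browser; the key system string and license server URL below are illustrative, not from any particular service:

```typescript
// Rough EME flow: the page brokers a license exchange, but decryption happens
// inside a closed, vendor-supplied CDM that neither the page nor the user controls.
const video = document.querySelector("video")!;

video.addEventListener("encrypted", async (event: MediaEncryptedEvent) => {
  if (!event.initData) return;

  const access = await navigator.requestMediaKeySystemAccess("com.widevine.alpha", [{
    initDataTypes: ["cenc"],
    videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }],
  }]);
  const mediaKeys = await access.createMediaKeys();
  await video.setMediaKeys(mediaKeys);

  const session = mediaKeys.createSession();
  session.addEventListener("message", async (msg: MediaKeyMessageEvent) => {
    // License request/response pass through the page; the keys themselves
    // stay locked inside the CDM.
    const license = await fetch("https://license.example.com", { // illustrative URL
      method: "POST",
      body: msg.message,
    }).then((r) => r.arrayBuffer());
    await session.update(license);
  });
  await session.generateRequest(event.initDataType, event.initData);
});
```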

Christenson says:

Really Bad Ideas.

This might not be the bad thing that no one supports but that happens anyway because everyone is scrambling right now over the immigration ban… that's more likely Betsy DeVos, of "no education vampire left behind" fame, or the Dakota Access Pipeline.

However, dear W3C executive, I assure you that you are working very hard to burn the reputation of the W3C to the ground and destroy it, just as surely as if you had taken someone to court over an anti-disparagement clause in a license agreement, or gone copyright trolling.

There's also the ADA to consider, and the fact that it wouldn't be unreasonable for you to be dragged into court for promulgating a standard that does not include "reasonable accommodations" for the visually handicapped. And, by the way, since I now need glasses, I also need some accommodation.

trump vader says:

ummmm

“if the program is run entirely on the server side and its code is never stored in the client’s memory — that’s a case where DRM is uncrackable (short of breaking into the server and pulling data from it). But that’s not exactly DRM in the way we usually think of it.”

the rub here is it has no choice but to run in some fashion in the local RAM or even virtual RAM, and thus can basically be sniffed out or copied and cracked…

all one has to do …nvm not gonna tell you..oh and yea i also dropped chrome browser…and stopped using google search…..the revolution is on and google is losing

Thad (user link) says:

Re: ummmm

the rub here is it has no choice but to run in some fashion in the local RAM or even virtual RAM, and thus can basically be sniffed out or copied and cracked…

Not if all of the operations are being performed server-side. I’m talking about a program where the client records inputs and sends them to a server, and the server returns video of the program responding to the input. Think of game streaming, or the ChromeOS implementation of Photoshop.

In a case like that, no portion of the main program code is ever loaded into the client’s memory. All the client is running is an input reader and a video streamer.
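
In sketch form, the client in that model does nothing but forward inputs and paint whatever frames come back; the endpoint below is made up:

```typescript
// "Dumb terminal" sketch: no application code runs locally, so there is nothing
// on the client to rip. It only sends inputs and draws the frames it receives.
const socket = new WebSocket("wss://stream.example.com/session"); // made-up endpoint
socket.binaryType = "arraybuffer";

const canvas = document.querySelector("canvas")!;
const ctx = canvas.getContext("2d")!;

// Forward raw inputs to the server, where the real program runs.
document.addEventListener("keydown", (e) => {
  if (socket.readyState === WebSocket.OPEN) {
    socket.send(JSON.stringify({ type: "key", code: e.code }));
  }
});

// Paint whatever frame the server sends back.
socket.addEventListener("message", async (msg) => {
  const frame = await createImageBitmap(new Blob([msg.data], { type: "image/jpeg" }));
  ctx.drawImage(frame, 0, 0, canvas.width, canvas.height);
});
```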
