Last week, we kicked off our Error 402 series on the history (and hopefully future) of web monetization by laying out the framing for what the series will cover. I started it out by noting that it has been 30 years since I first got online in 1993. That also happened to be right about the point at which the ability to exchange money online became a thing.
While the predecessor of the Internet, the ARPANET, goes back to the late 1960s, it was gradually replaced by what became the Internet with the adoption of protocols like TCP/IP and the National Science Foundation (NSF)’s establishment of NSFNET, which led to the ARPANET being officially phased out in 1990. This is right about the time that Tim Berners-Lee was creating the concept of the World Wide Web, with the first web server showing up at the end of 1990.
Around this same time, the government started waking up to the potential of such a network. While he is often mocked as taking credit for “creating the Internet” (or “inventing” it, though he never said that), Al Gore was the author of the High Performance Computing Act of 1991, which basically supercharged the internet. Among other things, it funded the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign (UIUC), where a young Marc Andreessen created Mosaic, the first web browser with integrated graphics, which did the same for the World Wide Web.
While some of the early commercialization was around companies setting up their own access ramps to the Internet in the form of Internet Service Providers, plenty of people were planning out other ways to make money online. In the summer of 1994, the NY Times wrote an article proclaiming the first secure credit card transaction online for a music CD:
From his work station in Philadelphia, Mr. Brandenburger logged onto the computer in Nashua, and used a secret code to send his Visa credit card number to pay $12.48, plus shipping costs, for the compact disk “Ten Summoners’ Tales” by the rock musician Sting.
Much of the article focuses on the wonders of encryption that allowed this transaction to occur securely, but notably towards the end of the article, it’s admitted that many people had actually been making purchases less securely prior to this, though it quotes PGP creator Phil Zimmermann talking about his hope that encrypted transactions will open the floodgates for online commerce:
Although Net Market has been selling various products like CD’s, flowers and books for several months on behalf of various merchants, yesterday was the first time they had offered digitally secure transactions.
“I think it’s an important step in pioneering this work, but later on we’ll probably see more exciting things in the way of digital cash,” said Philip R. Zimmermann, a computer security consultant in Boulder, Colo., who created the PGP program.
Digital cash, Mr. Zimmermann explained, is “a combination of cryptographic protocols that behave the way real dollars behave but are untraceable.”
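Zimmermann’s description maps closely onto David Chaum’s blind-signature construction, the cryptographic core of the digital cash schemes of that era: a bank signs a digital coin without ever seeing its serial number, so spent coins can’t be linked back to the withdrawal. Here’s a toy sketch using textbook RSA (tiny, deliberately insecure parameters, purely for illustration):

```python
# Toy demonstration of Chaum-style blind signatures, the trick behind
# "untraceable digital cash": the bank signs a coin it never sees.
# Textbook RSA with tiny primes -- illustration only, not remotely secure.

p, q = 61, 53
n = p * q                      # public modulus
e = 17                         # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)            # bank's private signing key

m = 1234                       # the coin's serial number (hidden from the bank)
r = 99                         # user's random blinding factor, coprime to n

blinded = (m * pow(r, e, n)) % n         # user blinds the coin
blind_sig = pow(blinded, d, n)           # bank signs the blinded value
sig = (blind_sig * pow(r, -1, n)) % n    # user strips the blinding factor

assert sig == pow(m, d, n)               # identical to signing m directly
print("valid coin:", pow(sig, e, n) == m)
```

The bank’s signature on the coin verifies normally, even though the bank only ever saw the blinded value, which is what makes the resulting cash untraceable.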
And while the NY Times article declared the (new) Internet “open for business,” it still took a while before we figured out exactly what that would look like. And those early days certainly were not about monetizing content online, but rather selling goods, which is what we’ll cover in next week’s article.
Today has been declared the 50th anniversary of the internet, as on October 29th, 1969, a team at UCLA, led by Leonard Kleinrock, sent a message to a team at the Stanford Research Institute (SRI), representing the very first transmission over what was then the ARPANET, which later became the internet. This seems like a good moment to think about all that the internet has enabled — but also just how far we may have strayed from its early promise and how far we might still be able to go. On the historical side, Kleinrock himself has posts at both ICANN and the Internet Society, and both are worth reading. The ICANN post is all about that first message transmission:
The ARPANET’s first host-to-host message was sent at 10:30 p.m. on October 29, 1969 when one of my programmers, Charley Kline, proceeded to “login” to the SRI host from the UCLA host.
The procedure was to type in “log,” and the system at SRI was clever enough to fill out the rest of the command, adding “in,” thus creating the word “login.”
Charley at our end and Bill Duvall at the SRI end each had a telephone headset so they could communicate by voice as the message was being transmitted. Note the irony that here we were using the telephone network to launch the new technology of packet switching which would destroy the telephone network!
At the UCLA end, Charley typed in the “l” and asked SRI “did you get the l?” “Got the l” came the voice reply. He typed in the “o,” “Did you get the o?” and received “Got the o.” UCLA then typed in the “g,” asked “Did you get the g?” at which point the system crashed! This was quite a beginning.
So, the very first message on the Internet was the prescient word “lo” (as in, “lo and behold!”). We hadn’t prepared a special message (as did, for example, Samuel Morse with “What Hath God Wrought”) but our “lo” could not have been a more succinct, a more powerful or a more prophetic message. Heck, we didn’t have a camera or even a voice recorder. The only record of this event is an entry in our IMP log recording.
The ARPANET and its successor, the Internet, had now been launched.
There’s a lot more in that post about what happened prior to that to bring the ARPANET about in the first place, and I recommend reading the whole thing. Kleinrock’s piece for the Internet Society, on the other hand, looks forward to what the internet might still become — in particular, how the internet should become “invisible.” It, too, is well worth reading. Here’s a snippet.
Such an invisible Internet will provide intelligent spaces. When I enter such a space, it should know I entered and it should present to me an experience that matches my privileges, profile, and preferences. These spaces can be any location on earth, i.e., my room, my desk, my automobile, my fingernails, my body, my favorite shopping mall, London, or even the Dead Sea. Moreover, I should be able to interact with that space using human friendly interfaces such as speech, gestures, haptics and, eventually, brain-to-Internet interfaces. Indeed, what I am talking about is characterized by a pervasive global nervous system across this planet. The Internet will be everywhere and it will be invisible.
Vint Cerf, one of the architects of the original internet, has a nice post detailing some of the key milestones of the internet. For a variety of reasons, I appreciate the second milestone:
1971: Networked electronic mail was created using file transfers as a mechanism to distribute messages to users on the Arpanet.
You don’t say? Cerf, like Kleinrock, is also interested in what comes next. His final point gets to that as well:
2019-2069 (the next 50 years): In the next five decades I believe that computer communications will become completely natural. Like using electricity, you won’t think about it anymore. Access will be totally improved — think thousands of low Earth orbit satellites — and speeds will be higher, with 5G and optical fiber, and billions of networked devices with increased interactive capabilities in voice, gesture, and artificially intelligent systems. I also imagine an expansion of the Interplanetary Internet. But who knows, after everything that has been accomplished in the past 50 years, the only thing we can be certain about is that the possibilities are endless.
Note the similarity of Kleinrock’s concept of an “invisible” internet to Cerf’s idea that “you won’t think about it anymore.”
Meanwhile, Sir Tim Berners-Lee, who did not help architect the original internet infrastructure, but did make it usable by the average human being with his invention of the World Wide Web (made publicly available in 1991), is also thinking about the future: not just how the internet will become invisible, but how we can bring it back to some of its original underpinnings as “a force for good.”
“It’s astonishing to think the internet is already half a century old. But its birthday is not altogether a happy one. The internet — and the World Wide Web it enabled — have changed our lives for the better and have the power to transform millions more in the future. But increasingly we’re seeing that power for good being subverted, whether by scammers, people spreading hatred or vested interests threatening democracy.
“A year ago, I called for a new Contract for the Web, bringing together governments, companies and citizen groups to come up with a clear plan of action to protect the web as a force for good. In a month’s time that plan will be ready. This birthday must mark the moment we take on the fight for the web we want.”
I think these perspectives are important. With so much attention being paid these days to the problems brought about by the internet, we shouldn’t lose sight of two key things: (1) the internet has brought about many, many wonderful things as well and (2) it’s still the early days. Many of the discussions about today’s internet treat it as a static thing, set in stone, whose problems need to be dealt with via heavy-handed regulations, rather than by allowing technology, social pressure, and the market to work. Perhaps, in the long run, those critics will be proven correct, but given that even those who were around in the earliest days are still thinking about how to improve the technology, I have much more faith in letting these things play out.
The internet we have today is a different one than what was initially envisioned. In some ways it’s better than the early expectations, and in some ways it’s much, much worse. But the promise and opportunity remains, and many of us are focused on using that promise as a guiding star towards where the internet needs to be pushed. Over the last 50 years, amazing things have been accomplished, but the promise of the internet is only partially visible today. We need to work to bring back that promise and, as Tim Berners-Lee notes, make sure the internet remains a force for good.
Here we go. For years I’ve been talking about how we really need to move the web to a world of protocols instead of platforms. The key concept is that so much of the web has been taken over by internet giants who have built data silos. There are all sorts of problems with this. For one, when those platforms are where the majority of people get their information, it makes them into the arbiters of truth, which should make us quite uncomfortable. Second, it creates a privacy nightmare where hugely valuable data stores are single points of failure for all your data (even when those platforms have strong security, just having so much data held by one source is dangerous). Finally, it really takes us far, far away from the true promise of cloud computing, which was supposed to be a situation where we separated out the data and the application layers and could point multiple applications at the same data. Instead, we got silos where you’re relying on a single provider to host both the data and the application (which also raises privacy concerns).
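To make that data/application separation concrete, here’s a minimal sketch (every class and app name here is hypothetical, not any real platform’s API) of the model described above: one user-owned data store with per-app permissions, which multiple independent applications can point at:

```python
# Sketch of "separate the data layer from the application layer":
# the user owns one store, grants or revokes access per app, and many
# apps read the same data instead of each keeping its own silo.
# All names are illustrative, not a real API.

class DataStore:
    """A user-controlled store: data lives here, not inside any one app."""
    def __init__(self):
        self._records = {}
        self._grants = set()          # app names the user has authorized

    def grant(self, app):
        self._grants.add(app)

    def revoke(self, app):
        self._grants.discard(app)

    def write(self, app, key, value):
        if app not in self._grants:
            raise PermissionError(f"{app} is not authorized")
        self._records[key] = value

    def read(self, app, key):
        if app not in self._grants:
            raise PermissionError(f"{app} is not authorized")
        return self._records[key]

store = DataStore()
store.grant("photo_app")
store.grant("print_app")
store.write("photo_app", "vacation.jpg", b"...")   # dummy payload
# A second, independent app reads the same user-owned data:
print(store.read("print_app", "vacation.jpg"))
store.revoke("print_app")   # the user can cut an app off at any time
```

The point of the design is that the application is replaceable while the data stays put, which is roughly the inverse of how today’s platform silos work.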
Despite some people raising these issues for quite some time, there hasn’t been much public discussion of them until just recently (in large part, I believe, driven by the growing worries about how the big platforms have become so powerful). A few companies here or there have been trying to move us towards a world of protocols instead of platforms, and one key project to watch is coming from the inventor of the web himself, Tim Berners-Lee. He had announced his project Solid a while back: an attempt to separate out the data layer, allowing end users to control that data and have much more control over what applications could access it. I’ve been excited about the project, but just last week I commented to someone that it wasn’t clear how much progress had actually been made.
Then, last Friday, Berners-Lee announced that he’s doubling down on the project, to the point that he’s taken a sabbatical from MIT and reduced his involvement with the W3C to focus on a new company to be built around Solid called inrupt. inrupt’s new CEO also has a blog post about this, which admittedly comes off as a bit odd. It seems to suggest that the reason to form inrupt was not necessarily that Solid has made a lot of forward progress, but rather that it needs money, and the only way to get some is to set up a company:
Solid as an open-source project had been facing the normal challenges: vying for attention and lacking the necessary resources to realize its true potential. The solution was to establish a company that could bring resources, process and appropriate skills to make the promise of Solid a reality. There are plenty of examples of a commercial entity serving as the catalyst for an open-source project, to bolster the community with the energy and infrastructure of a commercial venture.
And so we started planning inrupt – a company to do just that. Inrupt’s mission is to ensure that Solid becomes widely adopted by developers, businesses, and eventually — everyone; that it becomes part of the fabric of the web. Tim, as our CTO, has committed his time and talent to the company, and I am delighted to be its chief executive. We also have an exceptional investor as part of the team.
I’m certainly hopeful that something significant comes of this, as it truly is an opportunity to move the internet into that kind of more distributed, less centralized/silo’d world that shows off the true power of the web. I have heard some grousing among some people that this is just Tim Berners-Lee rebranding the concept of the Semantic Web that he started pushing nearly two decades ago, without any real traction. And, of course, there have been plenty of other attempts over the decades to build these kinds of systems. As it stands right now, there are a few other projects that are getting some traction, including the more distributed social platform Mastodon or some of the ideas that have come out of IndieWeb.
That said, we may finally be entering an era where both users and companies alike are recognizing the benefits of a more distributed web and the downsides of a more centralized one. So it really does feel like there’s an opportunity to embrace these concepts, and it’s good to see the founder of the world wide web ramping up his efforts on this. If it produces real, workable solutions, that would obviously be fantastic, but at the very least if it gets more people just thinking about these concepts, that would also be useful. So, this should be seen as big news for anyone concerned about the powers of the largest internet companies (especially if you’re skeptical about government trying to step in to deal with those companies when they don’t know what they’re doing). While the details and implementation will matter quite a bit, it’s exciting to see more movement towards a world in which the data layer is not just separated out, but where end users will be able to fully control that layer themselves, and potentially choose which apps can access what (and for how long). It certainly opens up a real opportunity to bring back the early promise of a truly decentralized web… and that would be a web built on protocols rather than centralized, silo’d platforms.
By now the FCC has made it clear it has absolutely no intention of actually listening to the public or to experts when it comes to its plan to repeal popular net neutrality rules later this week.
It doesn’t really matter to the FCC’s myopic majority that the vast majority of the record 22 million public comments on its plan think it’s a stupid idea. It apparently doesn’t matter that over 800 startups have warned the FCC that its attack on the rules undermines innovation, competition, and the health of the internet. And it certainly doesn’t appear to matter that over 190 academics, engineers, and tech-policy experts have told the agency that its repeal will dramatically harm the internet — or that the FCC’s justifications for the reversal make no technical or engineering sense.
If the current FCC was actually capable of hearing these dissenting expert voices, they’d probably find this new letter from 21 of them worth a look. You might recognize some of the authors. They include Internet Protocol co-inventor Vint Cerf, Apple co-founder Steve Wozniak, several designers of the Domain Name System (DNS), World Wide Web inventor Tim Berners-Lee, public-key cryptography inventors Whitfield Diffie and Martin Hellman, and more.
In their letter, they effectively argue that the FCC’s entire rationale for dismantling net neutrality protections rests on a fundamentally flawed understanding of how the internet actually operates. And worse, that the FCC has made absolutely no attempt to correct its flawed logic as this week’s rule-killing vote approached:
“It is important to understand that the FCC’s proposed Order is based on a flawed and factually inaccurate understanding of Internet technology. These flaws and inaccuracies were documented in detail in a 43-page-long joint comment signed by over 200 of the most prominent Internet pioneers and engineers and submitted to the FCC on July 17, 2017.
Despite this comment, the FCC did not correct its misunderstandings, but instead premised the proposed Order on the very technical flaws the comment explained. The technically-incorrect proposed Order dismantles 15 years of targeted oversight from both Republican and Democratic FCC chairs, who understood the threats that Internet access providers could pose to open markets on the Internet.”
Their previous, ignored warnings highlighted how the FCC’s Notice of Proposed Rulemaking (NPRM) includes incorrect assessments, conflates the differences between ISPs and edge providers (Netflix, content companies), and makes incorrect claims about how the transition from IPv4 to IPv6 functions, how firewalls work, and more. Instead of consulting people who actually know how the internet works in public hearings, the FCC blindly doubled down on flawed reasoning and technical inaccuracies. Why? Because ISP-driven ideological rhetoric, not facts, is driving the repeal.
The letter notes how experts aren’t the only ones the FCC is ignoring. It’s also blatantly ignoring the will of the public, as well as turning a blind eye to efforts to undermine the public’s only opportunity to make its voice heard during the open comment period of the proceeding:
“The experts’ comment was not the only one the FCC ignored. Over 23 million comments have been submitted by a public that is clearly passionate about protecting the Internet. The FCC could not possibly have considered these adequately. Indeed, breaking with established practice, the FCC has not held a single open public meeting to hear from citizens and experts about the proposed Order.
Furthermore, the FCC’s online comment system has been plagued by major problems that the FCC has not had time to investigate. These include bot-generated comments that impersonated Americans, including dead people, and an unexplained outage of the FCC’s on-line comment system that occurred at the very moment TV host John Oliver was encouraging Americans to submit comments to the system.”
And again, while the FCC may be eager to ignore objective experts and the will of the public as it rushes to give VerizoCasT&T a sloppy kiss, the fact that it did so will play a starring role in the lawsuits filed against the agency in the new year. In court, the FCC will have to prove that the broadband market changed dramatically enough in two years to warrant a wholesale reversal in net neutrality policy. But critics will have plenty of ammunition in their attempts to prove the FCC engaged in “arbitrary and capricious” policymaking based predominantly on fluff and nonsense, not hard data or engineering expertise.
This is not a huge surprise, but it’s still disappointing to find out that the W3C has officially approved putting DRM into HTML 5 in the form of Encrypted Media Extensions (EME). Some will insist that EME is not technically DRM, but it is the standardizing of how DRM will work in HTML going forward. As we’ve covered for years, there was significant concern over this plan, but when it was made clear that the MPAA (a relatively new W3C member) required DRM in HTML, and Netflix backed it up strongly, the W3C made it fairly clear that there was no real debate to be had on the issue. Recognizing that DRM was unavoidable, the EFF proposed a fairly straightforward covenant: those participating would agree not to use the anti-circumvention provisions of the DMCA (DMCA 1201) to go after security researchers who cracked DRM in EME. The W3C already has similar covenants regarding patents, so this didn’t seem like a heavy lift. Unfortunately, this proposal was more or less dismissed by the pro-DRM crowd as an attempt to relitigate the question of DRM itself (which was not true).
Earlier this year, Tim Berners-Lee, who had the final say on things, officially put his stamp of approval on EME without a covenant, leading the EFF to appeal the decision. That appeal has now failed. Unfortunately, the votes on this were kept entirely secret:
So much for transparency.
In Bryan Lunduke’s article about this at Network World, he notes that despite the W3C saying that it had asked members if they wanted their votes to be public, with all declining, Cory Doctorow (representing EFF) says that actually EFF was slapped on the wrist for asking W3C members if they would record their votes publicly:
“The W3C did not, to my knowledge as [Advisory Committee] rep, ask members whether they would be OK with having their votes disclosed in this latest poll, and if they had, EFF would certainly have been happy to have its vote in the public record. We feel that this is a minimal step towards transparency in the standards-setting that affects billions of users and will redound for decades to come.”
“By default, all W3C Advisory Committee votes are ‘member-confidential.’ Previously, EFF has secured permission from members to disclose their votes. We have also been censured by the W3C leadership for disclosing even a vague sense of a vote (for example, approximate proportions).”
It was eventually revealed that out of 185 members participating in the vote, 108 voted for DRM, 57 voted against, and 20 abstained.
And while the W3C insisted it couldn’t reveal who voted for or against the proposal… it had no problem posting “testimonials” from the MPAA, the RIAA, NBCUniversal, Netflix, Microsoft and a few others talking about just how awesome DRM in HTML will be. Incredibly, Netflix even forgot the bullshit talking point that “EME is not DRM” and directly emphasized how “integration of DRM into web browsers delivers improved performance, battery life, reliability, security and privacy.” Right, but during this debate we kept getting yelled at by people who said EME is not DRM. So nice of you to admit that was all a lie.
The W3C is a body that ostensibly operates on consensus. Nevertheless, as the coalition in support of a DRM compromise grew and grew — and the large corporate members continued to reject any meaningful compromise — the W3C leadership persisted in treating EME as a topic that could be decided by one side of the debate. In essence, a core of EME proponents was able to impose its will on the Consortium, over the wishes of a sizeable group of objectors — and every person who uses the web. The Director decided to personally override every single objection raised by the members, articulating several benefits that EME offered over the DRM that HTML5 had made impossible.
But those very benefits (such as improvements to accessibility and privacy) depend on the public being able to exercise rights they lose under DRM law — which meant that without the compromise the Director was overriding, none of those benefits could be realized, either. That rejection prompted the first appeal against the Director in W3C history.
In our campaigning on this issue, we have spoken to many, many members’ representatives who privately confided their belief that the EME was a terrible idea (generally they used stronger language) and their sincere desire that their employer wasn’t on the wrong side of this issue. This is unsurprising. You have to search long and hard to find an independent technologist who believes that DRM is possible, let alone a good idea. Yet, somewhere along the way, the business values of those outside the web got important enough, and the values of technologists who built it got disposable enough, that even the wise elders who make our standards voted for something they know to be a fool’s errand.
We believe they will regret that choice. Today, the W3C bequeaths a legally unauditable attack-surface to browsers used by billions of people. They give media companies the power to sue or intimidate away those who might re-purpose video for people with disabilities. They side against the archivists who are scrambling to preserve the public record of our era. The W3C process has been abused by companies that made their fortunes by upsetting the established order, and now, thanks to EME, they’ll be able to ensure no one ever subjects them to the same innovative pressures.
This is a disappointing day for the web, and a black mark on Tim Berners-Lee’s reputation and legacy of stewardship over it.
Last week, we wrote about the unfortunate and short-sighted decision by Tim Berners-Lee to move forward with DRM in HTML. To be more exact, the move forward is on Encrypted Media Extensions in HTML, which will allow third party DRM to integrate simply into the web. It’s been a foregone conclusion that EME was going to get approved, but there was a smaller fight about whether or not W3C would back a covenant not to sue security and privacy researchers who would be investigating (and sometimes breaking) that encryption. Due to massive pushback from the likes of the MPAA and (unfortunately) Netflix, Tim Berners-Lee rejected this covenant proposal.
In response, W3C member EFF has now filed a notice of appeal on the decision. The crux of the appeal is the claimed benefits of EME that Berners-Lee put forth won’t actually be benefits without the freedom of security researchers to audit the technology — and that the wider W3C membership should have been able to vote on the issue. This appeals process has never been used before at the W3C, even though it’s officially part of its charter — so no one’s entirely sure what happens next.
The appeal is worth reading so we’re reposting a big chunk of it here:
1. The enhanced privacy protection of a sandbox is only as good as the sandbox, so we need to be able to audit the sandbox.
The privacy-protecting constraints the sandbox imposes on code only work if the constraints can’t be bypassed by malicious or defective software. Because security is a process, not a product, and because there is no security through obscurity, the claimed benefits of EME’s sandbox require continuous, independent verification in the form of adversarial peer review by outside parties who do not face liability when they reveal defects in members’ products.
This is the norm with every W3C recommendation: that security researchers are empowered to tell the truth about defects in implementations of our standards. EME is unique among all W3C standards past and present in that DRM laws confer upon W3C members the power to silence security researchers.
EME is said to be respecting of user privacy on the basis of the integrity of its sandboxes. A covenant is absolutely essential to ensuring that integrity.
2. The accessibility considerations of EME omit any consideration of the automated generation of accessibility metadata, and without this, EME’s accessibility benefits are constrained to the detriment of people with disabilities.
It’s true that EME goes further than other DRM systems in making space available for the addition of metadata that helps people with disabilities use video. However, as EME is intended to restrict the usage and playback of video at web-scale, we must also ask ourselves how metadata that fills that available space will be generated.
For example, EME’s metadata channels could be used to embed warnings about upcoming strobe effects in video, which may trigger photosensitive epileptic seizures. Applying such a filter to (say) the entire corpus of videos available to Netflix subscribers who rely on EME to watch their movies would safeguard people with epilepsy from risks ranging from discomfort to severe physical harm.
There is no practical way in which a group of people concerned for those with photosensitive epilepsy could screen all those Netflix videos and annotate them with strobe warnings, or generate them on the fly as video is streamed. By contrast, such a feat could be accomplished with a trivial amount of code. For this code to act on EME-locked videos, EME’s restrictions would have to be bypassed.
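As a rough illustration of how trivial that code could be, here’s a hedged sketch of an automated strobe screen (the function name and thresholds are my own inventions; real tools such as the Trace Center’s PEAT model relative luminance and screen area far more carefully): given each frame’s average brightness, it flags any one-second window containing more than three large swings, echoing the common “three flashes per second” guideline:

```python
# Hedged sketch of automated strobe screening: flag any one-second window
# of video with more than three large frame-to-frame brightness swings.
# Thresholds and naming are illustrative, not a standards-compliant checker.

def flash_warnings(luma, fps, swing=0.8, max_flashes=3):
    """Return start times (in seconds) of windows with too many flashes.
    luma: per-frame mean luminance in [0, 1]; fps: frames per second."""
    # A "flash" here is a frame-to-frame luminance change bigger than `swing`.
    swings = [abs(b - a) > swing for a, b in zip(luma, luma[1:])]
    warnings = []
    for start in range(0, max(1, len(swings) - fps + 1)):
        if sum(swings[start:start + fps]) > max_flashes:
            warnings.append(start / fps)
    return warnings

# Two seconds of video at 10 fps: the first second strobes, the second is steady.
strobing = [1.0 if i % 2 else 0.0 for i in range(10)] + [0.5] * 10
print(flash_warnings(strobing, fps=10))
```

A version of this that runs over an entire streaming catalog is a straightforward loop over decoded frames — which is exactly the kind of automated analysis that DRM restrictions would forbid without a covenant.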
It is legal to perform this kind of automated accessibility analysis on all the other media and transports that the W3C has ever standardized. Thus the traditional scope of accessibility compliance in a W3C standard — “is there somewhere to put the accessibility data when you have it?” — is insufficient here. We must also ask, “Has W3C taken steps to ensure that the generation of accessibility data is not imperiled by its standard?”
There are many kinds of accessibility metadata that could be applied to EME-restricted videos: subtitles, descriptive tracks, translations. The demand for, and utility of, such data far outstrips our whole species’ ability to generate it by hand. Even if we all labored for all our days to annotate the videos EME restricts, we would but scratch the surface.
However, in the presence of a covenant, software can do this repetitive work for us, without much expense or effort.
3. The benefits of interoperability can only be realized if implementers are shielded from liability for legitimate activities.
EME only works to render video with the addition of a nonstandard, proprietary component called a Content Decryption Module (CDM). CDM licenses are only available to those who promise not to engage in lawful conduct that incumbents in the market dislike.
For a new market entrant to be competitive, it generally has to offer a new kind of product or service, a novel offering that overcomes the natural disadvantages that come from being an unknown upstart. For example, Apple was able to enter the music industry by engaging in lawful activity that other members of the industry had foresworn. Likewise Netflix still routinely engages in conduct (mailing out DVDs) that DRM advocates deplore, but are powerless to stop, because it is lawful. The entire cable industry — including Comcast — owes its existence to the willingness of new market entrants to break with the existing boundaries of “polite behavior.”
EME’s existence turns on the assertion that premium video playback is essential to the success of any web player. It follows that new players will need premium video playback to succeed — but new players have never successfully entered a market by advertising a product that is “just like the ones everyone else has, but from someone you’ve never heard of.”
The W3C should not make standards that empower participants to break interoperability. By doing so, EME violates the norm set by every other W3C standard, past and present.
It’s unclear to me why Tim Berners-Lee has been so difficult on this issue — as he’s been so good for so long on so many other issues. I understand that even people you generally agree with won’t agree with you on all things, but this seems like a very weird hill to die on.
For years now, we’ve discussed the various problems with the push (led by the MPAA, but with some help from Netflix) to officially add DRM to the HTML 5 standard. Now, some will quibble with even that description, as supporters of this proposal insist that it’s not actually adding DRM, and that this “Encrypted Media Extensions” (EME) is merely a system by which DRM might be implemented, but that’s a bunch of semantic hogwash. EME is bringing DRM directly into HTML and killing the dream of a truly open internet. Instead, we get a functionally broken internet. Despite widespread protests and concerns about this, W3C boss (and inventor of the Web) Tim Berners-Lee has signed off on the proposal. Of course, given the years of criticism over this, that signoff has come with a long and detailed defense of the decision… along with a tiny opening to stop it.
There are many issues underlying this decision, but there are two key ones that we want to discuss here: whether EME is necessary at all, and whether the W3C should have included a special protection for security researchers.
First, the question of whether or not EME even needs to be in HTML at all. Many — even those who dislike DRM — have argued that it was kind of necessary. The underlying argument here is that certain content producers would effectively abandon the web without EME being in HTML5. However, this argument rests on the assumption that the web needs those content producers more than those content producers need the web — and I’m not convinced that’s an accurate portrayal of reality. It is fair to note that, especially with the rise of smart devices from phones to tablets to TVs, you could envision a world in which the big content producers “abandoned” the web and only put their content in proprietary DRM’d apps. And maybe that does happen. But my response to that is… so what? Let them make that decision and perhaps the web itself is a better place. And plenty of other, smarter, more innovative content producers can jump in and fill the gaps, providing all sorts of cool content that doesn’t require DRM, until those with outdated views realize they’re missing out. Separately, I tend to agree with Cory Doctorow’s long-held view that DRM is an attack on basic computing principles — one that sets up the user as a threat, rather than the person who owns the computer in question. That twisted setup leads to bad outcomes that create harm. That view, however, is clearly not in the majority, and many people admitted it was a foregone conclusion that some form of EME would move forward.
The second issue is much more problematic. A bunch of W3C members had made a clear proposal that, if EME is included, there should be a covenant that W3C members will not sue security researchers under Section 1201 of the DMCA should they crack any DRM. There is no reason not to support this. Security researchers should be encouraged to search for vulnerabilities in DRM and encryption in order to better protect us all. And yet, for reasons that no one can quite understand, the W3C has rejected multiple versions of this proposal, often with little discussion or explanation. The final decision from Tim Berners-Lee on this is basically “sure, a covenant not to sue would have been nice, and we think companies shouldn’t sue, but… since this wasn’t raised at the very beginning, we’re not supporting it”:
We recommend organizations involved in DRM and EME implementations ensure proper security and privacy protection of their users. We also recommend that such organizations not use the anti-circumvention provisions of the Digital Millennium Copyright Act (DMCA) and similar laws around the world to prevent security and privacy research on the specification or on implementations. We invite them to adopt the proposed best practices for security guidelines (or some variation), intended to protect security and privacy researchers. Others might advocate for protection in public policy fora, an area that is outside the scope of W3C, which is a technical standards organization. In addition, the prohibition on “circumvention” of technical measures to protect copyright is broader than copyright law’s protections against infringement, and it is not our intent to provide a technical hook for those paracopyright provisions.
Given that there was strong support to initially charter this work (without any mention of a covenant) and continued support to successfully provide a specification that meets the technical requirements that were presented, the Director did not feel it appropriate that the request for a covenant from a minority of Members should block the work the Working Group did to develop the specification that they were chartered to develop. Accordingly the Director overruled these objections.
This is unfortunate. What’s bizarre is that the supporters of DRM basically refuse to discuss any of this. Even just a few days ago, the Center for Democracy and Technology proposed a last-ditch “very narrow” compromise to protect a limited set of security and privacy researchers (just those examining implementations of W3C specifications for privacy and security flaws). Netflix flat-out rejected this compromise, saying that it’s “similar to the proposal” that was made a year ago. Except it’s not: it was more narrowly focused and designed to respond to whatever concerns Netflix and others had.
The problem here seemed to be that Netflix and the MPAA realized that they had enough power to push this through without needing to protect security researchers, and just decided “we can do it, so fuck it, let’s do it.” And Tim Berners-Lee — who had the ability to block it — caved in and let it happen. The whole thing is a travesty.
Cory Doctorow has a thorough and detailed response to the W3C’s decision that pushes back on many of the claims that the W3C and Berners-Lee have made in support of this decision. Here’s just part of it:
We’re dismayed to see the W3C literally overrule the concerns of its public interest members, security experts, accessibility members and innovative startup members, putting the institution’s thumb on the scales for the large incumbents that dominate the web, ensuring that dominance lasts forever.
This will break people, companies, and projects, and it will be technologists and their lawyers, including the EFF, who will be the ones who’ll have to pick up the pieces. We’ve seen what happens when people and small startups face the wrath of giant corporations whose ire they’ve aroused. We’ve seen those people bankrupted, jailed, and personally destroyed.
This was a bad decision done badly, and Tim Berners-Lee, the MPAA and Netflix should be ashamed. The MPAA breaking the open internet I can understand. It’s what that organization has wanted to do for over a decade. But Netflix should be a supporter of the open internet, rather than an out and out detractor.
As Cory notes in his post, there is an appeals process, but it’s never been used before. The EFF and others are exploring it now, but it’s a hail mary process at this point. What a shame.
For the last four years, the Web has had to live with a festering wound: the threat of DRM being added to the HTML 5 standard in the form of Encrypted Media Extensions (EME). Here on Techdirt, we’ve written numerous posts explaining why this is a really stupid idea, as have many, many other people. Despite the clear evidence that EME will be harmful to just about everyone — except the copyright companies, of course — the inventor of the Web, and director of the W3C (World Wide Web Consortium), Sir Tim Berners-Lee, has just given his blessing to the idea:
The question which has been debated around the net is whether W3C should endorse the Encrypted Media Extensions (EME) standard which allows a web page to include encrypted content, by connecting an existing underlying Digital Rights Management (DRM) system in the underlying platform. Some people have protested “no”, but in fact I decided the actual logical answer is “yes”. As many people have been so fervent in their demonstrations, I feel I owe it to them to explain the logic.
He does so in a long, rather rambling post that signally fails to convince. Its main argument is defeatism: DRM exists, the DMCA exists, copyright exists, so we’ll just have to go along with them:
could W3C make a stand and just because DRM is a bad thing for users, could just refuse to work on DRM and push back wherever they could on it? Well, that would again not have any effect, because the W3C is not a court or an enforcement agency. W3C is a place for people to talk, and forge consensus over great new technology for the web. Yes, there is an argument made that in any case, W3C should just stand up against DRM, but we, like Canute, understand our power is limited.
But there’s a world of difference between recognizing that DRM exists, and giving it W3C’s endorsement. Refusing to incorporate DRM in HTML5 would send a strong signal that it has no place in an open Internet, which would help other efforts to get rid of it completely. That’s a realistic aim, for reasons that Berners-Lee himself mentions:
we have seen [the music] industry move consciously from a DRM-based model to an unencrypted model, where often the buyer’s email address may be put in a watermark, but there is no DRM.
In other words, an industry that hitherto claimed that DRM was indispensable, has now moved to another approach that does not require it. The video industry could do exactly the same, and refusing to include EME in HTML5 would be a great way of encouraging them to do so. Instead, by making DRM an official part of the Web, Berners-Lee has almost guaranteed that companies will stick with it.
Aside from a fatalistic acceptance of DRM’s inevitability, Berners-Lee’s main argument seems to be that EME allows the user’s privacy to be protected better than other approaches. That’s a noble aim, but his reasoning doesn’t stand up to scrutiny. He says:
If [they] put it on the web using EME, they will get to record that the user unlocked the movie. The browser though, in the EME system, can limit the amount of access the DRM code has, and can prevent it “phoning home” with more details. (The web page may also monitor and report on the user, but that can be detected and monitored as that code is not part of the “DRM blob”)
In fact there are various ways that a Web page can identify and track a user. And if the content is being streamed, the company will inevitably know exactly what is being watched and when, so Berners-Lee’s argument that EME is better than a closed-source app, which could be used to profile a user, doesn’t hold up. Moreover, harping on about the disadvantages of closed-source systems is disingenuous, since the DRM modules used with EME are all closed source.
Also deeply disappointing is Berners-Lee’s failure to recognize the seriousness of the threat that EME represents to security researchers. The problem is that once DRM enters the equation, the DMCA comes into play, with heavy penalties for those who dare to reveal flaws, as the EFF explained two years ago. The EFF came up with a simple solution that would at least have limited the damage the DMCA inflicts here:
a binding promise that W3C members would have to sign as a condition of continuing the DRM work at the W3C, and once they do, they not be able to use the DMCA or laws like it to threaten security researchers.
Berners-Lee’s support for this idea is feeble:
There is currently (2017-02) a related effort at W3C to encourage companies to set up “bug bounty” programs to the extent that at least they guarantee immunity from prosecution to security researchers who find and report bugs in their systems. While W3C can encourage this, it can only provide guidelines, and cannot change the law. I encourage those who think this is important to help find a common set of best practice guidelines which companies will agree to.
One of the biggest problems with the defense of his position is that Berners-Lee acknowledges only in passing one of the most serious threats that DRM in HTML5 represents to the open Web. Talking about concerns that DRM for videos could spread to text, he writes:
For books, yes this could be a problem, because there have been a large number of closed non-web devices which people are used to, and for which the publishers are used to using DRM. For many the physical devices have been replaced by apps, including DRM, on general purpose devices like closed phones or open computers. We can hope that the industry, in moving to a web model, will also give up DRM, but it isn’t clear.
So he admits that EME may well be used for locking down e-book texts online. But there is no difference between an e-book text and a Web page, so Berners-Lee is tacitly admitting that DRM could be applied to basic Web pages. An EFF post spelt out what that would mean in practice:
It’s also totally different from the Web that Berners-Lee invented in 1989, and then generously gave away for the world to enjoy and develop. It’s truly sad to see him acquiescing in a move that could destroy the very thing that made the Web such a wonderfully rich and universal medium — its openness.
We’ve been explaining this since it was first proposed two years ago: the IANA transition away from the Commerce Dept. is a good thing on a variety of important levels. Earlier this year, we did a more thorough explanation of why it was a good thing, and then a further post earlier this month explained why Ted Cruz, who was leading the charge in blocking the transition, was basically wrong on every point about it. And not just wrong, dangerously so. Cruz keeps claiming that the transition makes it easier for Russia, China and the UN to “take control” over internet governance. The exact opposite is true. But we’ll get there.
“Donald J. Trump is committed to preserving Internet freedom for the American people and citizens all over the world. The U.S. should not turn control of the Internet over to the United Nations and the international community. President Obama intends to do so on his own authority, just 10 days from now, on October 1st, unless Congress acts quickly to stop him. The Republicans in Congress are admirably leading a fight to save the Internet this week, and need all the help the American people can give them to be successful. Hillary Clinton’s Democrats are refusing to protect the American people by not protecting the Internet.
The U.S. created, developed and expanded the Internet across the globe. U.S. oversight has kept the Internet free and open without government censorship, a fundamental American value rooted in our Constitution’s Free Speech clause. Internet freedom is now at risk with the President’s intent to cede control to international interests, including countries like China and Russia, which have a long track record of trying to impose online censorship. Congress needs to act, or Internet freedom will be lost for good, since there will be no way to make it great again once it is lost.” – Stephen Miller, National Policy Director
First of all, here’s Trump going on and on about “internet freedom” and “free speech.” And yet… this is the very same candidate who, just a few months ago, talked about “shutting down parts of the internet” and mocked those who would say “oh, freedom of speech,” claiming anyone who fell back on that argument was one of the “foolish people.”
So, apparently it’s okay to shut down parts of the internet, and those talking about free speech are “foolish people,” but a symbolic effort over who controls the domain name system must be stopped because internet freedom and free speech are too important.
More importantly, almost everything the Trump campaign says in those two short paragraphs about the transition is wrong. And it’s a really, really stupid and dangerous position to take for the internet. First off, as we’ve explained, the current link between the Commerce Department and ICANN and its IANA functions is more theoretical than real anyway. The US government really doesn’t have any official control here. It’s symbolic and that symbolism is doing a hell of a lot more to hurt the internet than to help it. Yes, Russia and China have, in the past, tried to take more control over internet governance via the UN/ITU, but that was stopped. But — and this is the important part — a big part of their rationale for trying to do so was the US’s “control” over IANA via the Commerce Dept. That is, keeping this small bit of internet governance loosely connected to the US government adds fuel to the fire for authoritarian governments to seek more control over the internet. And that doesn’t even get into the backlash that it will create if we go back on our word and refuse to complete the transfer of IANA away from the Commerce Dept (again, a largely symbolic move anyway).
But, don’t trust me. Trust basically anyone and everyone with any actual knowledge on the situation. Here’s Tim Berners-Lee, the guy who invented the web itself, explaining why the transition must go forward and why Cruz (and, by extension now, Trump) are totally wrong:
The global consensus at the heart of the Internet exists by virtue of trust built up over decades with people from all over the world collaborating on the technical design and operation of the network and the web. ICANN is a critical part of this global consensus. But if the United States were to reverse plans to allow the global Internet community to operate ICANN independently, as Sen. Cruz is now proposing, we risk undermining the global consensus that has enabled the Internet to function and flourish over the last 25 years.
Contrary to the senator’s view, ICANN is no “mini-United Nations.” ICANN is a vital part of the voluntary, global network of private organizations that provides Internet stability and the ability to innovate free from government interventions around the world.
Berners-Lee makes it clear that going back on the transfer will put the US gov’t in the same kind of dangerous category that Cruz (and Trump) put Russia and China in:
But by forcibly undermining the global Internet community’s ability to make decisions about ICANN, the United States would stoop to the level of Russia, China and other authoritarian regimes that believe in the use of force to limit freedom online.
If not them, how about Kathryn Brown, who runs the Internet Society? She also argues that delaying the transition is what helps the case for Russia and China, rather than the other way around:
Some warn that if the plan to transition authority on Oct. 1 is delayed, countries like Russia and China could try to shift domain name responsibilities to the United Nations, giving those nations more influence over global internet policy.
“Any delay would add a degree of instability and make the prospect of government control of the internet more likely, not less,” said Kathryn Brown, president of the Internet Society, a nonprofit organization that advocates open internet policies.
It vaguely suggests that the transition might create “an opportunity for an enhanced role for authoritarian nation-states in Internet governance,” but provides no evidence as to how or why it does. In fact, if the U.S. is forced to abort the transition now it would play right into the hands of authoritarian states. Killing ICANN’s reforms through impulsive and arbitrary American action would fatally undermine the global Internet governance model rooted in nonstate actors. It would strengthen the case for national sovereignty-based Internet models favored by authoritarian states. “Look,” they will say, “the U.S. wants to control the Internet, why can’t we?” ICANN’s independence from unilateral U.S. government control is a logically and politically necessary consequence of its independence from all governments. By getting in the way of that, it is the Congressmen, not the Commerce Department, who are creating an opportunity for authoritarian states to enhance their influence in Internet governance.
The Congressmen suggest that “this irreversible decision could result in a less transparent and accountable Internet governance regime.” But how? No reference is made to the actual reform plans. In fact, the transition brings with it major corporate governance changes that would significantly improve ICANN’s accountability and transparency. The transition brings with it a new set of bylaws that gives the public enhanced rights to inspect ICANN’s books, the right to remove board members, and the power to prevent the board from unilaterally modifying its bylaws. Under U.S. government supervision for the past 18 years, ICANN has been almost completely unaccountable, yet this is the status quo they want to retain. By opposing the transition, the Congressmen are getting in the way of reforms that address the very things ICANN critics have been complaining about.
The congressmen claim that “Questions have been raised about ICANN’s antitrust status.” Well, what questions, and what are their implications for the future of Internet governance? No answer. This is a phony issue. ICANN is not, and never has been, exempt from antitrust liability.
And so forth and so on. Part of the attempt to throw a wrench into the transition was Cruz claiming that Congress needs to approve the transition, as it has the power to determine if the government can “dispose of… property.” But the Government Accountability Office (GAO) just released a report basically saying that doesn’t apply here and the Commerce Dept is free to move ahead with the transition. Specifically, the GAO finds it to be ridiculous that the entire domain name system should be considered “property of the US government” because it’s not.
It is unlikely that either the authoritative root zone file (the public “address book” for the top level of the Internet domain name system) or the Internet domain name system as a whole, is U.S. Government property under Article IV. We did not identify any Government-held copyrights, patents, licenses, or other traditional intellectual property interests in either the root zone file or the domain name system. It also is doubtful that either would be considered property under common law principles, because no entity appears to have a right to their exclusive possession or use.
In short, there’s a legitimate concern that Russia and China would like more control over the internet. But that’s the only point that Trump and Cruz get right. What’s astounding is that their preferred course of action (delaying or even blocking the IANA transition away from the Commerce Dept.) actually supports Russia and China in their efforts to gain control over the internet. So if you care about the future of the internet and how it is governed, could someone please educate Cruz and Trump that they’re doing exactly the kind of damage they claim to be trying to stop?
Europe only has a few days left to ensure that its member countries are actually protected by real net neutrality rules. As we’ve been discussing, back in October the European Union passed net neutrality rules, but they were so packed with loopholes as to be not only useless, but actively harmful, in that they effectively legalize net neutrality violations by large telecom operators. The rules carve out tractor-trailer-sized loopholes for “specialized services” and “class-based discrimination,” as well as giving the green light to zero rating, letting European ISPs trample net neutrality — just so long as they’re clever enough about it.
In short, the EU’s net neutrality rules are in many ways worse than no rules at all. But there’s still a chance to make things right.
While the rules technically took effect April 30 (after much self-congratulatory back-patting), the European Union’s Body of European Regulators of Electronic Communications (BEREC) has been cooking up new guidelines to help European countries interpret and adopt the new rules, potentially providing them with significantly more teeth than they have now. With four days left for the public to comment (as of the writing of this post), Europe’s net neutrality advocates have banded together to urge EU citizens to contact their representatives and demand they close these ISP-lobbyist-crafted loopholes.
Hoping to galvanize public support, Sir Tim Berners-Lee, Barbara van Schewick, and Larry Lessig have penned a collective letter urging European citizens to pressure their representatives. The letter mirrors previous concerns that the rules won’t be worth much unless they’re changed to prohibit exceptions allowing “fast lanes,” discrimination against specific classes of traffic (like BitTorrent), and the potential paid prioritization of select “specialized” services. These loopholes let ISPs give preferential treatment to select types of content or services, provided they offer a rotating crop of faux-technical justifications that sound convincing.
The letter also urges the EU to follow India, Chile, The Netherlands, and Japan in banning “zero rating,” or the exemption of select content from usage caps:
“Like fast lanes, zero-rating lets carriers pick winners and losers by making certain apps more attractive than others. And like fast lanes, zero-rating hurts users, innovation, competition, and creative expression. In advanced economies like those in the European Union, there is no argument for zero-rating as a potential onramp to the Internet for first-time users.
The draft guidelines acknowledge that zero-rating can be harmful, but they leave it to national regulators to evaluate zero-rating plans on a case-by-case basis. Letting national regulators address zero-rating case-by-case disadvantages Internet users, start-ups, and small businesses that do not have the time or resources to defend themselves against discriminatory zero-rating before 28 different regulators.”
Here in the States, the FCC decided not to ban zero rating, instead opting for this same “case by case” enforcement, which so far has resulted in no serious enforcement whatsoever, opening the door ever wider to the kind of lopsided, pay-to-play business arrangements net neutrality rules are supposed to prevent. Of course, European ISPs have been busy too, last week falling back on the old, bunk industry argument that if regulators actually do their job and protect consumers and small businesses from entrenched telecom monopolies, wireless carriers won’t be able to invest in next-generation networks.