Back in December, when the Sony emails first leaked, we wrote a detailed post about the MPAA's bizarre views on site blocking: it was absolutely obsessed with putting site blocking in place while admitting it didn't understand the technical issues. That post was based on reporting by journalists who had seen only a few of the emails. Now that Wikileaks has released the entire trove, we can discover more details, like the fact that part of the MPAA's plan was to figure out how to create pro-censorship propaganda. It really is incredible, but that's a bulletpoint in an email from the MPAA's top lawyer, Steven Fabrizio, about part of the strategy at a "site blocking confab" the major studios held last fall:
Outreach to academics, think tanks and other third parties to foster the publication of research papers, white papers and other articles that tell the positive story of site blocking: e.g., it is commonplace around the world and working smoothly; it has not broken the internet; it is not incompatible with DNSSEC; it is effective; legitimate sites/content have not been blocked; etc.
Think about that for a second. The MPAA, which likes to declare itself one of the foremost defenders of free speech, was literally conspiring on how to create propaganda in favor of censorship, pointing to countries that already censor the web as "good examples" to follow. You'd think it would have learned what a ridiculous idea this is from the time Bono tried to hold up China's censorship as a model for dealing with copyright infringement.
For a while now, Techdirt has been tracking the continuing efforts of the Russian government to rein in the Internet, at the cost of squeezing much of the life out of it. As an article on Global Voices reports, this has now reached ridiculous levels:
Russian censors have determined that one of the most popular forms of Internet meme is illegal. According to Roskomnadzor, the Kremlin's media watchdog, it's now against the law to use celebrities' photographs in a meme, "when the image has nothing to do with the celebrity's personality."
Roskomnadzor's statement is the result of a decision by a court in Moscow, which decided that a particular photo meme violated the privacy of Russian singer Valeri Syutkin -- the Global Voices post has the fascinating details. Although no new law is involved, Roskomnadzor's power is such that it is able to make these kinds of rule changes -- and enforce them. Along with a ban on the use of celebrities' photographs in what are termed "image macros," the new ruling also forbids the creation of parody accounts or sites (original in Russian). The key problem with the image macro part is the following:
Roskomnadzor's vague new policy threatens to do more than crack down on potentially defamatory juxtaposition, however. By saying it is illegal to add celebrities' images to memes that "have nothing to do with the celebrity's personality," the Kremlin could be opening the door to banning a whole genre of absurdist online humor.
Even if the policy is not rigorously enforced, it could have a chilling effect on the Russian online space, already under pressure because of previous censorship moves. And that's probably precisely what the authorities are seeking to achieve here. After all, when it comes to Russian celebrities' photographs with witty captions, what name springs to mind?
Last week, a Turkish court ordered an access ban on a single post in the vast sea of more than 60 million individual blogs on WordPress. But for many users, that meant their Internet service providers blocked WordPress entirely.
A lawyer and Turkish Pirate Party member tracked down the root of the sudden ban on all of WordPress: a court order seeking to block a single blog post written by a professor accusing another professor of plagiarism. This post apparently led to several defamation lawsuits, and the lawsuits led to a court order basically saying that if blocking the single post proved too difficult, fuck it, block the entire domain.
It is the second sentence in the order, however, that caused the complete ban of WordPress in the country. “If the access to the single page cannot be possible due to technical reasons,” it reads, “block access to wordpress.com.”
According to the Daily Dot's Efe Kerem Sozeri, this tactic has often been deployed by Turkish government censors when outsmarted by the internet. If the targeted URL proves difficult to block, court orders demand ISPs block entire domains as Plan B.
This is Turkey's inelegant "solution" to a problem it shouldn't be trying to solve. Sozeri points out that its domain blocking efforts have, somewhat oddly, made oft-affected US tech companies much more responsive to its censorious demands.
The reason that Turkey had to request Google, Facebook, and Twitter to remove content is because they use SSL certificates, which secure users’ communication with their servers. (SSL certificates are what allow the implementation of HTTPS.) They’re technically quite difficult to intercept, but these companies still bow down to Turkey’s requests. Why? Because Erdoğan has completely banned access to their domains time and again when they failed to comply.
In order to continue doing business in Turkey, these companies have acquiesced to multiple censorship requests. Sozeri also has more bad news: Turkey's ISPs are commonly blocking domains at the DNS level, providing for more complete censorship while bypassing the targeted entity's participation in government censorship.
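The DNS-level blocking Sozeri describes can be sketched in a few lines. This is a toy model, not any ISP's actual system, and the domains, IP addresses and blocklist below are all hypothetical: a censoring resolver answers queries for a blocked domain with a "sinkhole" address, which takes out every blog and page under that domain at once, with no participation from the targeted site.

```python
# Toy sketch of DNS-level blocking (all data below is hypothetical).
# The censor's resolver answers queries for blocked domains with a
# sinkhole IP instead of the real one, so every site under that
# domain disappears at once -- no cooperation from the target needed.

BLOCKED_DOMAINS = {"wordpress.com"}           # hypothetical blocklist
SINKHOLE_IP = "10.0.0.1"                      # censor's landing page
REAL_DNS = {"example.org": "93.184.216.34"}   # hypothetical records

def censored_resolve(hostname: str) -> str:
    """Resolve like a censoring ISP resolver: sinkhole blocked domains."""
    # The resolver only ever sees hostnames, never URLs, so blocking
    # a domain blocks every path and subdomain under it.
    base = ".".join(hostname.split(".")[-2:])
    if base in BLOCKED_DOMAINS:
        return SINKHOLE_IP
    return REAL_DNS.get(hostname, "0.0.0.0")

print(censored_resolve("myblog.wordpress.com"))  # sinkholed: 10.0.0.1
print(censored_resolve("example.org"))           # 93.184.216.34
```

The coarseness is the point: a resolver that only sees hostnames cannot distinguish one offending blog post from the 60 million innocent blogs sharing the domain.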
This incident also serves as an example of why targeted censorship so often fails. In most cases, those asking for content to be blocked are largely unconcerned about collateral damage -- whether it's rights holders trying to "protect" their intellectual property or governments seeking to control the content accessible to their citizens. And because they don't care who else is harmed, they'll push for the most "effective" form of censorship: deploying a nuclear weapon to kill a single person, to mix metaphors. And because the only good censorship target is a dead censorship target, they're not above using everything from overly-broad blocking orders to man-in-the-middle attacks to achieve their aims.
As you know, last year the Supreme Court made a very important ruling in the Alice v. CLS Bank case, in which it basically said that merely doing something on a general purpose computer didn't automatically make it patentable. Based on that guidance, many courts have since rejected patents, and the USPTO has become less willing to issue them. The USPTO also sought to push out new "guidance" to its examiners taking the ruling into account. Soon after the Alice ruling, it issued some "Preliminary Examination Instructions." It then issued the so-called 2014 Interim Guidance on Subject Matter Eligibility and sought public comment through March 16 of this year.
Plenty of folks did comment, including the EFF. However, the USPTO apparently was offended at parts of the EFF's comment submission, claiming that it was an "improper protest." In response, the EFF refiled the comment, but redacted the part that the USPTO didn't like. Here's what page 5 of the document on the USPTO site looks like:
However, EFF also added the following footnote (footnote 8) on page 6:
On April 2, 2015, the PTO contacted EFF to request that we remove a portion of these comments on the basis that they constituted an improper “protest.” We respectfully disagree that our comments were a protest under 35 U.S.C. § 122(c). Rather, our comments discussed a specific application to illustrate our broader points about the importance of applying Alice. Nevertheless, to ensure these comments are considered by the Office, we have redacted the relevant discussion in this revised version of our comments. Our original comments remain available to the public at: https://www.eff.org/files/2015/03/18/eff_comments_regarding_interim_eligibility_guidance.pdf.
And, of course, if you go to that link, you get the full, unredacted version of the EFF's filing.
As you can see from the full filing, the EFF's submission isn't some sort of improper protest. Rather, it is a clear demonstration of how the USPTO does not appear to be living up to what the courts are saying in the wake of the Alice ruling. It is difficult to see what the USPTO was thinking in trying to silence the EFF's comment. It is beyond ludicrous on multiple levels. First, it suggests a skin so thin at the USPTO that you can see right through it. Second, it suggests that the USPTO doesn't want people to recognize that its guidance is problematic in light of what actual federal courts are saying. And, finally, it suggests (still) a complete lack of understanding of how the internet and freedom of expression work, thereby guaranteeing that the EFF's complete dismantling of the USPTO's guidelines will now get that much more attention...
Has anyone patented a method and system for self-inflicted shaming for being overly sensitive to someone pointing out your flaws?
As you may have heard, yesterday the FBI "uncovered" yet another of its own terrorist plots, the latest in a very long line of "terrorist plots" the FBI has "uncovered" -- in which the details always show that it was an undercover FBI "informant" (often doing this to get off leniently for some other issue) who more or less goads hapless, naive people into a "plot" that had no real chance of ever happening. This appears to be the same sort of thing.
Still, politicians never leave an opportunity like this unexploited, and so in jumps Senator Dianne Feinstein, arguing that the only proper way to deal with this is to, of course... censor the internet:
I am particularly struck that the alleged bombers made use of online bombmaking guides like the Anarchist Cookbook and Inspire Magazine. These documents are not, in my view, protected by the First Amendment and should be removed from the Internet.
For what it's worth, Dianne Feinstein's "view" is wrong. The Anarchist Cookbook is very much protected by the First Amendment. While the book is banned in other countries, which don't have an equivalent of the First Amendment, it's perfectly legal in the US. The FBI/DOJ has extensively investigated the Anarchist Cookbook in particular over the years, and as far back as 1997 directly told Senator Feinstein that she could not ban it. This is from the DOJ back in 1997:
Senator Feinstein introduced legislation during the last Congress in an attempt to fill this gap. The Department of Justice agrees that it would be appropriate and beneficial to adopt further legislation to address this problem directly, if that can be accomplished in a manner that does not impermissibly restrict the wholly legitimate publication and teaching of such information, or otherwise violate the First Amendment.
The First Amendment would impose substantial constraints on any attempt to proscribe indiscriminately the dissemination of bombmaking information. The government generally may not, except in rare circumstances, punish persons either for advocating lawless action or for disseminating truthful information -- including information that would be dangerous if used -- that such persons have obtained lawfully.
And yet, Feinstein's first response to the FBI uncovering yet another of its own plots is to go back to trying to censor the internet, in direct violation of the First Amendment? Yikes.
Oh, and even worse... in keeping with the fact that this plot was actually created by the FBI itself, guess where the two "terrorist wannabes" got the Anarchist Cookbook? From the undercover FBI agent! From the criminal complaint itself [pdf]:
On or about November 2, 2014, the UC [Undercover Officer] met with VELENTZAS and SIDDIQUI. When VELENTZAS was reading a book called "Chemistry: The Central Science," the UC asked how this book was going to benefit them. VELENTZAS stated that they could practice at her house, but could not leave any residue. The UC stated that practicing at the house was not a good idea because the people living in the apartment below VELENTZAS might hear loud noises, referring to noises from explosions. VELENTZAS said she could always tell her neighbors that she dropped some bookshelves. The UC and VELENTZAS then discussed the fact that the UC had downloaded The Anarchist Cookbook. VELENTZAS suggested the UC print out the parts of the book that they would need. During the conversation, the UC stated, "We read chemistry books with breakfast. Like, who does that?" VELENTZAS responded, "People who want to make history."
The complaint also lists many other books, magazines and web pages that the various people read throughout, and later has one of the wannabe terrorists thanking the undercover agent for introducing her to The Anarchist Cookbook.
As for the other document that Feinstein wants to censor, Inspire is Al Qaeda's magazine. And, again, reading through the complaint you see that it was actually the undercover agent who supplied it. Velentzas asks the undercover agent to find a copy of Inspire over and over again throughout the complaint, until eventually the agent complies:
On or about December 24, 2014, the UC visited VELENTZAS and brought the Spring 2014 issue of Inspire magazine, as previously requested by VELENTZAS.
In other words, in neither case did the would-be terrorists get the "bad" material from the internet. In both cases it came from the undercover FBI agent.
Meanwhile, it seems like the only real result of this ridiculous statement will be to drive even more awareness of the old Anarchist Cookbook, so yet another generation of teenagers can discover it and think they've found something totally cool online.
For a while now, Techdirt has been following Canada's moves to stop scientists from speaking out about areas where the facts don't sit well with the Canadian government's dogma-based policies. Sadly, it looks like the UK is taking the same route. The latest move concerns a new code for the country's civil servants, which will also apply to thousands of publicly-funded scientists. As the Guardian reports:
Under the new code, scientists and engineers employed at government expense must get ministerial approval before they can talk to the media about any of their research, whether it involves GM crops, flu vaccines, the impact of pesticides on bees, or the famously obscure Higgs boson.
The fear -- quite naturally -- is that ministers could take days before replying to requests, by which time news outlets will probably have lost interest. As a result of this change, science organizations have sent a letter to the UK government, expressing their "deep concern" about the code. A well-known British neurobiologist, Sir Colin Blakemore, told the Guardian:
"The real losers here are the public and the government. The public lose access to what they consider to be an important source of scientific evidence, and the government loses the trust of the public," Blakemore said.
Not only that, by following Canada's example, the British government also makes it more likely that other countries will do the same, which will weaken science's ability to participate in policy discussions around the world -- just when we need to hear its voice most.
If you pay attention to Github (and you should), you know that late last week the site started experiencing some problems staying online, thanks to a massive and frequently changing DDoS attack. Over the past few days a lot more details have come out, making it pretty clear that the attack is coming via China with what is likely direct support from the Chinese government. While it's messing with all of Github, it's sending traffic to two specific Github pages: https://github.com/greatfire and https://github.com/cn-nytimes. Those both provide tools to help people in China access Greatfire and the NY Times. Notably, Greatfire itself notes that prior to the DDoS on Github, its own site was hit with a very similar DDoS attack.
If you want the technical details, Netresec explains how the DDoS works, noting that it's a "man-on-the-side" attack, injecting certain packets alongside code loaded by Chinese search engine Baidu (including both its ad platform and analytics platform), but is unlikely to be coming directly from Baidu itself.
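Netresec's "man-on-the-side" description boils down to a race: the attacker can't block the legitimate server's reply, but it can watch requests and inject a forged reply of its own, and the client keeps whichever valid response arrives first. The sketch below is a toy model of that race only, with made-up payloads and timings, not the actual injected code:

```python
# Toy model of a man-on-the-side race (hypothetical payloads/timings).
# The injector sits on the path near the client and answers instantly,
# so its forged packet typically beats the real server's response.

def first_response_wins(responses):
    """A client keeps the earliest in-window reply and discards later ones."""
    return min(responses, key=lambda r: r["arrival_ms"])["payload"]

responses = [
    # The real server must receive the request and do real processing.
    {"payload": "legit analytics script", "arrival_ms": 80},
    # The injector merely observes the request and fires a canned reply.
    {"payload": "injected DDoS script",   "arrival_ms": 35},
]
print(first_response_wins(responses))  # injected DDoS script
```

Because the attacker only races the real response rather than suppressing it, this is weaker than a full man-in-the-middle, but it is enough to slip attack code into pages loaded by unsuspecting browsers.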
But the much more interesting part is why China is using a DDoS attack, rather than its standard approach of just blocking access in China, as it has historically done. The key is that, two years ago, China tried to block Github entirely... and Chinese programmers flipped out, pointing out that they couldn't do their jobs without Github. The Chinese censors were forced to back down, leading to a sort of loophole in the Great Firewall. That leads to the next question: why doesn't China just block access to the URLs of the two repositories it doesn't like? The answer: HTTPS. Because all Github traffic is encrypted via HTTPS, China can't block access to those specific URLs, because it can't see which pages are being accessed.
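That HTTPS point can be made concrete with a small sketch (the function and field names here are mine, not any real protocol's): an on-path censor learns the destination hostname from DNS lookups and the TLS SNI field, but the path and query travel inside the encrypted payload, so blocking one repository without blocking the whole domain is impossible.

```python
# Sketch (with made-up field names) of what an on-path censor can and
# cannot see for a given request. The hostname leaks via DNS and the
# TLS SNI field; the path is inside the encrypted HTTP request, so a
# censor can only block the whole domain, not one repository.

def observable_to_censor(url: str) -> dict:
    scheme, rest = url.split("://", 1)
    host, _, path = rest.partition("/")
    if scheme == "https":
        # The path and query travel inside the TLS-encrypted payload.
        return {"host": host, "path": "<encrypted>"}
    # Plain HTTP: the full request line is visible on the wire.
    return {"host": host, "path": "/" + path}

print(observable_to_censor("https://github.com/greatfire"))
# {'host': 'github.com', 'path': '<encrypted>'}
print(observable_to_censor("http://example.com/page"))
# {'host': 'example.com', 'path': '/page'}
```

From the firewall's perspective, a request for github.com/greatfire is indistinguishable from a request for any other repository on github.com, which is exactly why China's only in-country options were blocking all of Github or nothing.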
And thus, we get the decision to turn its firewall around, launching a rather obvious DDoS attack on the two sites it doesn't like, with the rather clear message being sent to Github: if you stop hosting these projects, the DDoS will stop. Of course, so far Github is taking a stand and refusing to take down those projects (which is great and exactly what it should be doing).
However, this does suggest an interesting escalation in questions about the increasing attempts to fragment the internet. You see various countries demanding (or forcing) certain websites get blocked. But those solutions are truly only temporary. Because the overall internet is too important to block, and because some sites are necessary (like Github) there are always holes in the system. Add in a useful dose of encryption (yay!) and the ability to control everything that's read in one particular country becomes increasingly difficult. You might hope the response would be to give up attempts to censor, but China isn't likely to give up just like that. So, instead, it's basically trying to censor the global internet, by launching a high powered attack on the site that is the problem, while basically saying "get rid of these projects and we'll stop the attack."
It seems likely that this sort of escalation is only going to continue -- but in some ways it's actually a good sign. It shows that there are real cracks in China's attempts to censor the internet. We're basically realizing the limits of the Great Firewall of China, and useful services like Github have allowed a way to tunnel through. China is responding by trying to make life difficult for Github, but as long as Github and others can figure out ways to resist, censorship attempts like the Great Firewall will increasingly be useless.
In the early days of the internet, people talked about how it was resistant to censorship. Over the past decade or so, China has challenged that idea, showing that it could basically wall off large parts of the internet, and actually keep things semi-functional. Yes, there were always cracks in the wall, but for the most part, China showed that you could censor large parts of the internet. This latest move suggests that we may be moving back towards a world where the internet really is resistant to censorship -- and China is freaking out about it and responding by trying to increase the censorship globally. It's a battle that is going to be important to follow if you believe in supporting free expression online.
As I noted earlier this week, at the launch of the Copia Institute a couple of weeks ago, we had a bunch of really fascinating discussions. I've already posted the opening video and explained some of the philosophy behind this effort, and today I wanted to share with you the discussion that we had about free expression and the internet, led by three of the best people to talk about this issue: Michelle Paulson from Wikimedia; Sarah Jeong, a well-known lawyer and writer; and Dave Willner, who heads up "Safety, Privacy & Support" at Secret after holding a similar role at Facebook. I strongly recommend watching the full discussion before just jumping into the comments with your assumptions about what was said, because for the most part it's probably not what you think:
Internet platforms and free expression have a strongly symbiotic relationship -- many platforms have helped expand and enable free expression around the globe in many ways. And, at the same time, that expression has fed back into those online platforms making them more valuable and contributing to the innovation that those platforms have enabled. And while it's easy to talk about government attacks on freedom of expression and why that's problematic, things get really tricky and really nuanced when it comes to technology platforms and how they should handle things. At one point in the conversation, Dave Willner made a point that I think is really important to acknowledge:
I think we would be better served as a tech community in acknowledging that we do moderate and control. Everyone moderates and controls user behavior. And even the platforms that are famously held up as examples... Twitter: "the free speech wing of the free speech party." Twitter moderates spam. And it's very easy to say "oh, some spam is malware and that's obviously harmful" but two things: One, you've allowed that "harm" is a legitimate reason to moderate speech and two, there's plenty of spam that's actually just advertising that people find irritating. And once we're in that place, it is the sort of reflexive "no restrictions based on the content of speech" sort of defense that people go to? It fails. And while still believing in free speech ideals, I think we need to acknowledge that that Rubicon has been crossed and that it was crossed in the 90s, if not earlier. And the defense of not overly moderating content for political reasons needs to be articulated in a more sophisticated way that takes into account the fact that these technologies need good moderation to be functional. But that doesn't mean that all moderation is good.
This is an extremely important, but nuanced, point that you don't often hear in these discussions. Just today, over at Index on Censorship, there's an interesting article by Padraig Reidy that makes a somewhat similar point, noting that there are many free speech issues where it is silly to deny that they're free speech issues, but plenty of people do. The argument, then, is that we'd be able to have a much more useful conversation if people would admit:
Don't say "this isn't a free speech issue"; rather, say "this is a free speech issue, and I'm OK with this amount of censorship, for this reason." Then we can talk.
Soon after this, Sarah Jeong makes another, equally important, if equally nuanced, point about the reflexive response by some to behavior that they don't like to automatically call for blocking of speech, when they are often confusing speech with behavior. She discusses how harassment, for example, is an obvious and very real problem with serious and damaging real-world consequences (for everyone, beyond just those being harassed), but that it's wrong to think that we should just immediately look to find ways to shut people up:
Harassment actually exists and is actually a problem -- and actually skews heavily along gender lines and race lines. People are targeted for their sexuality. And it's not just words online. It ends up being a seemingly innocuous, or rather "non-real" manifestation, when in fact it's linked to real world stalking or other kinds of abuse, even amounting to physical assault, death threats, so and so forth. And there's a real cost. You get less participation from people of marginalized communities -- and when you get less participation from marginalized communities, you lead to a serious loss in culture and value for society. For instance, Wikipedia just has fewer articles about women -- and also its editors just happen to skew overwhelmingly male. When you have greater equality on online platforms, you have better social value for the entire world.
That said, there's a huge problem... and it's entering the same policy stage that was prepped and primed by the DMCA, essentially. We're thinking about harassment as content when harassment is behavior. And we're jumping from "there's a problem, we have to solve it" and the only solution we can think of is the one that we've been doling out for copyright infringement since the aughties, and that's just take it down, take it down, take it down. And that means people on the other end take a look at it and take it down. Some people are proposing ContentID, which is not a good solution. And I hope I don't have to spell out why to this room in particular, but essentially people have looked at the regime of copyright enforcement online and said "why can't we do that for harassment" without looking at all the problems that copyright enforcement has run into.
And I think what's really troubling is that copyright is a specific exception to CDA 230 and in order to expand a regime of copyright enforcement for harassment you're going to have to attack CDA 230 and blow a hole in it.
She then noted that this was a major concern because there's a big push among many people who aren't arguing for better free speech protections:
That's a huge viewpoint out right now: it's not that "free speech is great and we need to protect against repressive governments" but that "we need better content removal mechanisms in order to protect women and minorities."
From there the discussion went in a number of different important directions, looking at alternatives for dealing with bad behavior online that get beyond just "take it down, take it down," and also at the importance of platforms being able to make decisions about how to handle these issues without facing legal liability. CDA 230, not surprisingly, was a big topic -- one that people admitted was unlikely to spread to other countries, and whose underlying concepts are actually under attack in many places.
That's why I also think this is a good time to point to a new project from the EFF and others, known as the Manila Principles -- highlighting the importance of protecting intermediaries from liability for the speech of their users. As that project explains:
All communication over the Internet is facilitated by intermediaries such as Internet access providers, social networks, and search engines. The policies governing the legal liability of intermediaries for the content of these communications have an impact on users’ rights, including freedom of expression, freedom of association and the right to privacy.
With the aim of protecting freedom of expression and creating an enabling environment for innovation, which balances the needs of governments and other stakeholders, civil society groups from around the world have come together to propose this framework of baseline safeguards and best practices. These are based on international human rights instruments and other international legal frameworks.
In short, it's important to recognize that these are difficult issues -- but that freedom of expression is extremely important. And we should recognize that while pretty much all platforms contain some form of moderation (even in how they are designed), we need to be wary of reflexive responses to just "take it down, take it down, take it down" in dealing with real problems. Instead, we should be looking for more reasonable approaches to many of these issues -- not in denying that there are issues to be dealt with. And not just saying "anything goes and shut up if you don't like it," but that there are real tradeoffs to the decisions that tech companies (and governments) make concerning how these platforms are run.
The court said such a law hit at the root of liberty and freedom of expression, the two cardinal pillars of democracy. The court said the section has to be erased from the law books as it has gone much beyond the reasonable restrictions put by the Constitution on freedom of speech. The Supreme Court said section 66A was vaguely worded and allowed its misuse by police.
But the judges did not eliminate another controversial power granted by the IT Act:
The court, however, upheld the validity of section 69B and the 2011 guidelines for the implementation of the IT Act that allowed the government to block websites if their content had the potential to create communal disturbance, social disorder or affect India's relationship with other countries.
Those are pretty vague criteria, and it's easy to see them being abused, just as Section 66A was. Nonetheless, this is an important ruling (pdf), not least for the Indian Supreme Court's robust defense of free speech. Let's hope future Indian laws attempting to control online activities take note of its wisdom.
We had been noting, in the wake of the Charlie Hebdo attacks in France, how the country that then held a giant "free speech" rally appeared to be, instead, focusing on cracking down on free speech at every opportunity. And target number one: the internet. Earlier this week, the Interior Minister of France -- with no court review or adversarial process -- ordered five websites to not only be blocked in France, but that anyone who visits any of the sites get redirected to a scary looking government website, saying:
You are being redirected to this official website since your computer was about to connect with a page that provokes terrorist acts or condones terrorism publicly.
It appears that the French government has a very low opinion of the intelligence of the French public -- believing that merely reading something online will suddenly make them rush to join ISIS.
"I do not want to see sites that could lead people to take up arms on the Internet," Interior Minister Bernard Cazeneuve said.
"I make a distinction between freedom of expression and the spread of messages that serve to glorify terrorism. These hate messages are a crime."
Except... it already appears that France is really just censoring websites with messages it doesn't like. In that first batch was a site called "islamic-news.info." The owner of that site notes not only that he was never first contacted to "remove" whatever material was deemed to support terrorism (as required by the law), but that nothing he had posted supported terrorism at all. In a public statement posted on the French news site Numerama, he makes it clear that he's a one-man operation running the site on a 50 euro/month hosting plan, and that he doesn't support ISIS or Al Qaeda at all. His site is opinionated, but mostly just against current Syrian leader Bashar al-Assad. In fact, he notes that he specifically avoided topics that might be misinterpreted as supporting terrorists, and did not share ISIS propaganda or similar content. He even points out how he denounced a Syrian fighter who argued for attacks on Europe, saying that such things would reflect poorly on Muslims in Europe.
But, with no judicial review, no due process at all, the French government declared the site to be a terrorist supporter and now it's gone.
All that talk about France and free speech quickly fades into nothing. As Glenn Greenwald points out at the Intercept in response to all of this, blatant government censorship is far more damaging than terrorist attacks (while also noting that governments around the globe are moving in similar directions):
In sum, far more damage has been inflicted historically by efforts to censor and criminalize political ideas than by the kind of “terrorism” these governments are invoking to justify these censorship powers.
And whatever else may be true, few things are more inimical to, or threatening of, Internet freedom than allowing functionaries inside governments to unilaterally block websites from functioning on the ground that the ideas those sites advocate are objectionable or “dangerous.” That’s every bit as true when the censors are in Paris, London, Ottawa and Washington as when they are in Tehran, Moscow or Beijing.
France's "motto" is supposedly Liberté, égalité, fraternité. I have difficulty seeing how blatantly censoring websites you disagree with, without any sort of due process, fits with any of those three ideals.