The history of anti-piracy activities by the legacy entertainment and software industries always seems to focus on the mistaken idea that if only the public were "more educated," piracy would magically go away. That's never been true. In fact, nearly every education campaign hasn't just failed to work; many have actively backfired and been mocked and parodied. And yet, if you talk to politicians and industry folks, they still seem to think that "more education" will magically work next time. One can only wonder what the hell the geniuses at the Software Alliance (the BSA, which used to be the "Business Software Alliance" but has dropped the "Business" while keeping the "B" in its name) were thinking when they decided to "settle" with a guy who apparently uploaded some Microsoft software in the Czech Republic. The terms of the settlement required him to take part in a "professionally produced" anti-piracy video, and that video needed to get 200,000 views on YouTube or he might face having to pay damages in court.
The BSA is a well-known front for Microsoft, and has a long history of rather ridiculous claims about "piracy," so I guess it's little surprise that it's now engaged in out-and-out propaganda, but done so badly that it's turned the whole thing into a laughingstock. The whole "compelled speech" aspect of the settlement, including the requirement to get so many views, strikes basically everyone as ridiculous and stupid. Press attention has, of course, propelled the video to well over 200,000 views at this point, and many of the YouTube comments are openly mocking the campaign -- and noting that they're watching the video to help the accused be let off the hook. The video is in Czech, but even so it's hilarious. It has the same sort of ominous production values as the old "You wouldn't download a car!" ads that have been mocked for years as well:
It really highlights just how out of touch folks at the BSA are, in that anyone actually thought this kind of thing would help it in any way, rather than making it a continued laughingstock.
from the can-a-whole-country-do-a-streisand-effect dept
Just about a week ago, the NY Times had a giant article comparing Saudi Arabia to ISIS. It was a rather powerful piece, highlighting the similarities and connections between the two while underscoring the incredibly hypocritical attitude of many Western politicians who freely embrace the Saudi government while claiming that ISIS is barbaric.
Then, just a few days later, Saudi Arabia's Justice Ministry announced that it would sue someone for calling Saudi Arabia "ISIS-like." Of course, it's not the NY Times that the Saudi government is going after, but a Twitter user, who compared a Saudi death sentence for a Palestinian poet to the way ISIS carries out its own "justice" system. The Twitter user in question has not been named. It seems like the strategy here is to scare people away from comparing Saudi Arabia to ISIS, but there's a decent chance that it goes in the other direction. Such a plan is so ridiculous that it seems only likely to draw many more comparisons.
And, really, if your goal is to distance yourself from a group of crazy nutjobs who appear to have a somewhat arbitrary sense of justice and thrive on using the death penalty as a weapon, perhaps announcing plans to go after individuals criticizing you on Twitter isn't the best way to further the distinction.
There's just something absolutely nutty when politicians with no technical knowledge whatsoever try to make technology policy, and it often crosses over into out-and-out slapstick when that technology policy involves surveillance. It's why we see things like talk of "golden keys" for encryption that somehow wouldn't be "backdoors" (even though they are). Over in the UK, they're going through something similar with the current "debate" (if you can call it that) over the latest Snooper's Charter bill, officially known as the "Investigatory Powers Bill" or the "IPBill."
A key element in the bill is the demand for "internet connection records." The draft bill has a whole section on these "ICRs" which it defines as:
A kind of communications data, an ICR is a record of the internet services a specific
device has connected to, such as a website or instant messaging application. It is captured
by the company providing access to the internet. Where available, this data may be
acquired from CSPs by law enforcement and the security and intelligence agencies.
An ICR is not a person’s full internet browsing history. It is a record of the services
that they have connected to, which can provide vital investigative leads. It would not reveal
every web page that they visit or anything that they do on that web page.
But, as ISP owner Adrian Kennard points out in a response to the bill, an "internet connection record" isn't actually a thing that exists:

The explanatory notes, and one of the clauses in the bill, make use of the term “Internet
Connection Record”. We are concerned that this creates the impression that an “Internet
Connection Record” is a real thing, like a “Call Data Record” in telephony.
An ICR does not exist - it is not a real thing in the Internet. At best it may be the collection of, or
subset of, communications data that is retained by an operator subject to a retention order which
has determined on a case by case basis what data the operator shall retain. It will not be the same
for all operators and could be very different indeed.
We would like to see the term removed, or at least the vague and nondescript nature of the
term made very clear in the bill and explanatory notes.
From there, it goes even further, pointing out that the justification for needing these non-existent ICRs was a statement from UK Home Secretary Theresa May about how useful such info would be in finding a missing girl:
"Consider the case of a teenage girl going missing. At present we can ask her mobile provider for
call records before she went missing which could be invaluable to finding her. But for Internet
access, all we get is that the Internet was accessed 300 times. What would be useful would be to
know she accessed twitter just before she went missing in the same way as we could see she
make a phone call"
Except, as Kennard points out, that's not how the internet actually works. You don't "connect" to Twitter like that, because you're constantly connected to Twitter:
...in yesterday’s meeting I, and other ISPA members immediately pointed out the huge flaw
in this argument. If the mobile provider was even able to tell that she had used twitter at all (which
is not as easy as it sounds), it would show that the phone had been connected to twitter 24 hours a
day, and probably Facebook as well. This is because the very nature of messaging and social
media applications is that they stay connected so that they can quickly alert you to messages,
calls, or amusing cat videos, without any delay.
It should be noted that it is quite valid for a “connection” of some sort to last a long time. The main
protocol used (TCP) can happily have connections for hours, days, months or even years. Some
protocols such as SCTP, and MOSH are designed to keep a single connection active indefinitely
even with changes to IP addresses at each end and changing the means of connection (mobile,
wifi, etc). Given the increasing use of permanent connections on mobile devices, it is easy to see
how more and more applications will use such protocols to stay connected - making one “internet
connection record” which could even have passed the 12 month time limit by the time it is logged.
Connections are also typically encrypted and have some data passing all the time, so it would not
be practical for an ISP, even using deep packet inspection, to indicate that the girl “accessed
twitter” right before she vanished, or even at all (just that there is a twitter app on the phone and...)
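Kennard's point about long-lived connections is easy to demonstrate. The sketch below, a minimal illustration in Python (the server, port, and messages are all invented for the example, not anything from the bill or Kennard's response), opens a single TCP connection with the OS keepalive option enabled and carries multiple exchanges over it. An observer logging "connections" would see one session, not discrete events like phone calls:

```python
import socket
import threading

# A toy echo server standing in for a messaging service. One client
# connection is accepted and held open for as long as the client likes.
def run_echo_server(server_sock):
    conn, _ = server_sock.accept()
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:
                break
            conn.sendall(data)  # reply over the same long-lived connection

server_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_sock.bind(("127.0.0.1", 0))   # ephemeral port, local only
server_sock.listen(1)
port = server_sock.getsockname()[1]
threading.Thread(target=run_echo_server, args=(server_sock,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# SO_KEEPALIVE tells the OS to probe an idle connection rather than drop it,
# which is how one TCP connection can stay up for hours, days, or longer.
client.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
client.connect(("127.0.0.1", port))

# Many "accesses" ride over this ONE connection: there is no per-message
# connection event for an ISP to record.
replies = []
for msg in [b"ping-1", b"ping-2", b"ping-3"]:
    client.sendall(msg)
    replies.append(client.recv(1024))
client.close()

print(replies)
```

Scale this up to an app that reconnects only when the network changes, and an "internet connection record" for Twitter or Facebook would simply show one entry covering the entire day, which is exactly the flaw the ISPA members flagged.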
This seems like a rather important point: the people who put together the Snooper's Charter for spying on the internet don't seem to understand the first thing about how the internet actually works. And yet we're supposed to give them sweeping powers to spy on it? How does that make any sense?
On Friday, the Wall Street Journal's Stacy Meichtry and Joshua Robinson published an in-depth bit of reporting on the planning and operational setup of the Paris attackers, revealing a bunch of previously unknown details. The key thing, however, isn't just the total lack of anything that looks like sophisticated encryption, but the opposite. The attackers basically did nothing to hide themselves, communicating out in the open, booking houses and cars in their real names, despite some of them being on various terrorist watch lists. It discusses how Brahim Abdeslam booked a house using an online website (Homelidays -- a French service that is similar to Airbnb, though it predates Airbnb by a lot), using his own name. So did his brother, Salah Abdeslam, who booked a hotel for a bunch of the attackers (using his real name) on Booking.com.
The piece mentions, as we noted earlier, that the attackers appeared to communicate via unencrypted SMS. It also mentions how the guy who planned the attacks, Abdelhamid Abaaoud, bragged about his plans in ISIS's English-language glossy magazine months ago. Again, you'd think that this would alert the intelligence community to actually watch the guy, but again it appears he did little to hide his movements or communications.
In fact, the report notes that after Abaaoud shot up a restaurant, he went back to check out the aftermath of the attacks that he had helped put together -- and kept his mobile phone with him the whole time, making it easy to track his whereabouts:
An hour after Mr. Abaaoud finished shooting up restaurants, he emerged from a metro station in the 12th district, according to data police pulled from his cellphone. He headed west toward the sound of sirens, his path zigzagging as he returned to the scene of his crimes.
For two hours after the massacre ended, prosecutors say, Mr. Abaaoud surveyed his handiwork, at one point blending in with panicked crowds and bloodied victims streaming from the Bataclan.
You can read the entire thing and note that nowhere does the word "encryption" appear. There is no suggestion that these guys really had to hide very much at all.
So why is it that law enforcement and the intelligence community (and various politicians) around the globe are using the attacks as a reason to ban or undermine encryption? Again, it seems pretty clear that it's very much about diverting blame for their own failures. Given how out in the open the attackers operated, the law enforcement and intelligence community failed massively in not stopping this. No wonder they're grasping at straws to find something to blame, even if it had nothing to do with the attacks.
The Montana Standard, a newspaper in Butte, Montana, has apparently decided on a new strategy for its online commenters, requiring "real names" to be associated with every comment. We've spent plenty of time arguing why this is kind of stupid: many websites falsely believe that anonymity leads to less friendly comments, and that using "real names" will magically make people nice (in our experience, people with real names can still be insufferable jackasses, while some of our best comments come from anonymous users, but...). Still, that change in policy alone isn't that big of a deal. What is a big deal is that the Standard has decided to do this retroactively. As it stands now, and as it's been in the past, when you sign up to comment, the site directly asks you for both your real name and your "screenname" and states pretty clearly that the latter is the name that will display with your comments:
But on January 1st, all of that changes, and whatever people put in as their "real names" will show up. The Standard is allowing people who are concerned to email them before December 26th to argue for why their comments should be removed before the January 1st switch over, but it seems likely that many won't even realize this is happening. Lots of people have been using the comments on that post itself to criticize this plan, and Paul Alan Levy has written a thorough post explaining why this is so problematic:
The Standard’s retroactive application of its real name policy seems to me highly irresponsible. You can easily imagine a newspaper deciding that is not going to rely on anonymous sources in its news stories – certainly there have been media entities that have claimed to have adopted such policies. But can you imagine a paper doing so retroactively, leaving its stories online that were previously sourced anonymously but replacing such categories as “inside source” with the name of a whistleblower, or replacing “highly placed official” with the name of the conniving government official speaking “candidly” about his internal adversaries under cover of source protection? “I’m sorry, Deep Throat, we have decided to tell Nixon and his henchmen who you really are.” You could have a number of unhappy sources, not to speak of some dead ones where the sources live abroad in a society or culture where dissent is not tolerated. The source’s life could be in danger even if the source lives inside the United States, if the source was talking about the Crips, or MS-13, or some militia group.
The Standard’s editor told Davis that it is publishing notice of its new policy, including the retroactive application, in both its print editions and web site, and that it “is sending emails to prior commenters, when it has valid email addresses.” (Although as of today, when I looked at the page where the site’s users register to be allowed to comment, there was no notice of any impending policy; to the contrary, the site still promises that the screen name “is the name that will be displayed next . . . for comments, blog posts, and more. Choose wisely!”) But depending on how long it has been since the Standard started accepting registrations, it is quite possible that users may have changed their email addresses, or have moved on to a new email address without ever canceling the old one, and hence they might not see the Standard’s notice. And it is also quite possible that some of the commenters may have made comments that place their economic or even physical security at risk from the individuals or companies that they criticized in online comments. Or, their comments might have revealed something about their own experiences or past conduct that they were willing to share with the public anonymously, making a valuable contribution to a discussion, but would never have been willing to provide had they known that their own names would be attached. The Standard could be putting livelihoods and more at risk through its retroactive changes.
Levy further tested the existing commenting system, discovering that it was, in fact, easy to sign up with fake "real names" -- including a test where he signed up using the name of the Standard's editor, David McCumber.
I was able to register with a completely invented name, in which I provided a real email address but no other truthful information in the various boxes on the registration page. The comment I posted is the only one that was posted on November 23, 2015 – it appears with the screen name “notmyrealname.” As a further test, I registered again today, again providing false information throughout the registration process, but this time the “real name” I provided was the name of the Standard’s editor, David McCumber, and the street address that I provided was the Standard’s own address. The comment duly appeared on the paper’s web site a few minutes later – it is there under the screen name “NotReallytheEditor.” So, presumably, this comment will appear on January 1 as having been posted by David McCumber.
Promising to keep people's names hidden, and then retroactively changing that with little notice seems like an incredibly irresponsible thing to do. One hopes that the Standard will reconsider.
As you may have heard, last week actor Charlie Sheen announced that he is HIV positive, which got lots of news coverage. Related to that, In Touch magazine produced the non-disclosure agreement (NDA) that it claims "Charlie Sheen had his sexual partners sign when they came to his house." I guess if you're a celebrity known for sleeping around, this is the kind of thing you have your lawyers cook up for you. But what struck me as interesting was that, beyond the basic NDA language, there was some copyright language concerning any images, videos or sound recordings. You can understand why Sheen (and his lawyers) don't want anyone taking pictures of him or even talking about the relationship to book or magazine writers, so they include some bizarre copyright transfer language for the partner to agree to:
It's a little difficult to read, so here are the relevant sections:
1.3 No Participation in Books or Articles. Without Your advance express written consent, I will not give or participate in any interviews, write or be a source for, any articles, books, programs, or stories about You or the Related Parties, whether truthful, fictionalized, on the record, or "off the record." If I breach these promises, My copyright in any such unauthorized material shall be automatically and immediately transferred by Me to You as of its creation and in perpetuity, and this Agreement shall constitute a valid transfer of copyright.
1.4 Images and Recordings. Without Your advance express written consent, I will not create any photographs, movies, videos, sound or image recordings or otherwise capture any depictions or likenesses of You, Your family, friends, associates or employees ("Images and Recordings"). If I breach these promises any images and Recordings I create shall be considered Confidential Information, and My copyright in them shall be deemed automatically and immediately transferred by Me to You as of its creation and in perpetuity, and this Agreement shall constitute a valid transfer of copyright. If you expressly direct Me to create any Images and Recordings, they will be Confidential Information in which I have no legal rights or interest whatsoever, including any copyright, trademark, "moral rights," patent, or other similar rights, and I convey, transfer and assign to You all of My right, title and interest (if any) of whatever kind or nature in all Images and Recordings as of their creation and in perpetuity, and this Agreement shall constitute a valid transfer of copyrights.
Of course, the "in perpetuity" is not really accurate, as you can't give up your termination rights, even with a contractual agreement, to take back your copyrights after 35 years, but, really, that's beside the point. I do wonder how valid Section 1.3 is at all. If the partner is interviewed for a book or a magazine article, there likely isn't any copyright for Sheen's partner to transfer in the first place, as nothing is "fixed" by that partner. Furthermore, in most cases, the book or magazine author/publisher would likely have a strong fair use claim if Sheen tried to have those quotes deleted via copyright. If anything, this just seems like a way to make it sound scary to go out and talk to a magazine or book author.
The transfer of copyright in the photos and videos at least seems a bit more legit, if still sketchy. Of course, once again, though, this shows where copyright is being used directly for censorship purposes, entirely divorced from its supposed purpose of providing incentives to create.
World Intellectual Property Review (WIPR) is reporting that the European Patent Office, EPO, has threatened Roy Schestowitz with a defamation lawsuit over a blog post he did. Schestowitz writes the Techrights blog, which I personally think can go overboard with some of its stories at times. However, to argue that his stories are defamation, especially by a government agency, is crazy. Back in October, Schestowitz had a story claiming that the EPO was prioritizing patent applications from large companies like Microsoft to "foster a better esprit de service." I actually don't think the program described by the EPO sounds that crazy, and the EPO's response isn't that crazy either -- it's just about more efficiently handling certain patent applications to keep the office from getting swamped. Indeed, it does seem like Schestowitz may have overreacted with his interpretation of the memo. But, misinterpreting something is hardly defamation.
In fact, threatening Schestowitz with a defamation claim is far crazier and more dangerous than anything in Schestowitz's own interpretation of the EPO's memo. If you're working for a government agency, such as the EPO, you have to be willing to accept some amount of criticism, even if you disagree with it. To claim it's defamation and to threaten a lawsuit is really, really screwed up. Frankly, this calls into question what the EPO is focused on much more than any claims of favoring large companies do. Also bizarre is the fact that WIPR edited its own story to remove any mention of what Schestowitz's original blog post was about in the first place. It had originally included a sentence briefly describing the Techrights post that got the EPO upset, but then deleted that part.
The EPO has been coming under a fair bit of criticism lately, and the entire organization appears to be astoundingly thin-skinned. A few months ago, the office apparently blocked access to Techrights altogether from within its network. That seems like a pretty strange move in the first place. Florian Mueller (and, yes, I know that many people here don't trust Mueller, but...) has pointed out how absolutely ridiculous the EPO can be about just about anything related to how it works:
The European Patent Office is the last dictatorship on Central European soil. Local police are not allowed to enter the EPO's facilities without an invitation from the president. National court rulings cannot be enforced; compliance is voluntary. Employees and visitors are subjected to covert surveillance. And if employees are fired (or "suspended"), which just happened to several staff representatives, they won't get their day in court for about ten years.
The EPO's leaders have a rather selective attitude toward the law. When it's about their wrongdoings, they want their organization to be a lawless, autocratic island that disrespects human rights. But when the rules of the world around the EPO come in handy, the leadership of the EPO tries to leverage them against those who dare to criticize it.
I'm having trouble thinking of any other governmental agency that has ever threatened a public critic with a defamation lawsuit. Basic concepts around free speech suggest that the EPO should suck it up. If it disagrees with Schestowitz's interpretation of what it's doing, then it can come out and explain its side of the story. Threatening him with defamation actually only makes me think that perhaps his interpretation hits closer to home than I originally believed.
As you hopefully already know, we take a bit of a different view of ad blockers around here on Techdirt, recognizing that many people have very good reasons for using them, and we have no problem if you make use of them. In fact, we give you the option of turning off the ads on Techdirt separately, whether or not you use an ad blocker. And we try to make sure that the ads on Techdirt are not horrible, annoying or dangerous (and sometimes, hopefully, they're even useful). Most publications, however, continue to take a very antagonistic view towards their very own communities and readers, and have attacked ad blockers, sometimes blocking users from reading content if they have an ad blocker. Perhaps no publication has fought harder against ad blockers than German publishing giant Axel Springer, the same company that frequently blames Google for its own failure to adapt.
Axel Springer has been suing the makers of various ad blockers. So far, those cases have failed miserably, making Axel Springer look like a whiny, out-of-touch publisher that refuses to get with the times. But instead of taking the hint, it just keeps on suing. From TechCrunch:
German media giant Axel Springer, which operates top European newspapers like Bild and Die Welt, and who recently bought a controlling stake in Business Insider for $343 million, has a history of fighting back against ad-blocking software that threatens its publications’ business models. Now, it’s taking that fight to mobile ad blockers, too. According to the makers of the iOS content blocker dubbed “Blockr,” which is one of several new iOS 9 applications that allow users to block ads and other content that slows down web browsing, Axel Springer’s WELTN24 subsidiary took them to court in an attempt to stop the development and distribution of the Blockr software.
Specifically, explains the law firm representing Blockr, Axel Springer wanted to prohibit Blockr’s developers from being able to “offer, advertise, maintain and distribute the service” which can be used today to block ads on http://www.welt.de, including the website’s mobile version.
Isn't that nice. Rather than recognize that people don't like your ads, you try to sue the companies serving an actual consumer need so that you can continue to piss off your readers. It's the dinosaur strategy -- rather than innovate, you sue to try to stave off the inevitable decline.
Did you hear that story about how ISIS is so sophisticated with encryption that they have a special "opsec" manual on computer security protocols? You might have, because last week it was all over the internet. Yahoo kicked it off with a story, claiming it was the secret manual ISIS "uses to teach its soldiers about encryption." Wired followed up with its own story, as did The Telegraph. The "manual" was "discovered" by analysts at the Combating Terrorism Center, based out of the US Military Academy at West Point. Thankfully, Buzzfeed has the details, noting that the guide, created by a cybersecurity firm in Kuwait, named Cyberkov, is actually a guide for journalists and activists to protect their communications from oppressive governments. And there's nothing particularly secret about it, as apparently it's basically just repurposed stuff from the EFF's website:
“Our guide is based on publicly available tools, instructions and best practices. The guidelines in our manual are sourced from the EFF [Electronic Frontier Foundation] and other sources of privacy organizations,” wrote CyberKov CEO Abdullah AlAli to BuzzFeed News in an email. He said his organization had no idea its guide had been repurposed by ISIS. He was surprised to see it cited in articles, many of which have been updated since they were originally posted to note the document’s origin, and “even more shocked to see the Combating Terrorism Center at West Point simply Google-Translated it and claimed it as ISIS’s.”
Now, it does appear that some folks in ISIS may have sent around versions of the guide, but it sort of undermines the idea that they had created their own special set of guidelines to avoid being tracked, when all they're doing is picking up publicly available information on security best practices.
Look, everyone has known for quite some time that Senator Dianne Feinstein's big push for so-called "cybersecurity" legislation in the form of CISA had absolutely nothing to do with cybersecurity. It was always about giving another surveillance tool to her friends at the NSA. However, given that she was one of the most vocal in selling it as a "cybersecurity" bill (despite the fact that no cybersecurity experts actually thought the bill would help) it seems worth comparing her statements from just a month ago, with her new attacks on actual cybersecurity in the form of encryption.
"Millions of personal records and hundreds of billions of dollars fall victim to cyber-attacks every year, and we’ve done little to stem the tide."
Of course, CISA does nothing to protect any of that. You know what does protect against it? Better use of encryption, which keeps that information from being usable even if it is stolen.
Okay, fast forward. Following the Paris attacks, Feinstein has been among the most vocal in claiming that we need to undermine encryption, which is pretty amazing given that she represents California (and is from San Francisco), home to tons of tech companies that actually get this and think she's completely crazy for undermining actual cybersecurity.
Never mind that, though. Here she is this past weekend, on CBS's Face the Nation totally attacking encryption itself and mocking the tech companies that just a month ago she was insisting needed special government help to protect against cyberattacks. She was asked if the intelligence community has the tools it needs, and she decides to attack encryption -- even choosing to cite as a source CIA director John Brennan -- the same John Brennan who illegally spied on her staffers and then lied about it repeatedly.
"I can say this. [FBI] Director [James Comey] and, I think John Brennan, would agree, that the Achilles Heel in the internet is encryption. Because there are now... it's a black web! And there's no way of piercing it. And this is even in commercial products! PlayStation, John! Which our kids use. If the two ends communicate, that's encrypted. So terrorists can use PlayStation to be able to communication and there's nothing that can be done about it."
The host, John Dickerson, then points out that the tech industry (again, mostly based in or near Feinstein's hometown, and that she's supposed to be representing) says that backdooring encryption makes us less safe and opens us up to more attack, and Feinstein brushes it off, relying on her apparent years of computer security training...
No. I don't think so. I think with a court order, with good justification, all of that can be prevented. It can be prevented in Europe, because Europe has been a major driver for more encryption. And I think that they are now seeing the results. I have visited with all of the General Counsels of the tech companies, just to try to get them to take bomb building recipes off the internet. Recipes that have been tested and we know can explode a plane. Directions. Where to sit on the plane to blow it up. We know that there are bombs that can go through magnetometers. And to put that information out on the internet, is terrible. And I sorta got 'well, pass a law.' So, we may just have to do that. But I am hopeful that the companies, most of whom are my constituents -- not most, but many -- will understand what we're facing. And we're not crying wolf. There's good reason for this. And people are dying all over the world. And I think the Sinai-Russian airliner is a classic example of a bomb that got on a plane, that blew up that plane.
Where to start with this nonsense? First, note that she doesn't actually respond to the question concerning how undermining encryption will make us all less safe and make all that information Feinstein herself claimed was under attack just a month ago more vulnerable, other than to say that she, personally, doesn't think that what every computer security expert has been saying is true. Yikes.
Second, rather than focus on encryption, she pivots to her other pet projects, claiming that the government should force internet companies to censor The Anarchist's Cookbook. She keeps on this despite the fact that all the way back in 1997, the DOJ directly told Feinstein that this would violate the First Amendment. From the DOJ to Feinstein:
The First Amendment would impose substantial constraints on any attempt to proscribe indiscriminately the dissemination of bombmaking information. The government generally may not, except in rare circumstances, punish persons either for advocating lawless action or for disseminating truthful information -- including information that would be dangerous if used -- that such persons have obtained lawfully.
Third, there's this weird infatuation with The Anarchist's Cookbook, despite the fact that it's generally recognized as a joke for fools, and the likelihood of being able to build an actual bomb from it is minimal at best. And, while she pretends that the GCs of tech companies just sort of shrugged their shoulders about this, it's much more likely that they thought she was being ridiculous in trying to censor the internet in violation of the First Amendment. Whoever told her "well, pass a law" was almost certainly trying to get rid of her, knowing that any such law would be unconstitutional.
Fourth, this tangent about "bomb making instructions" online still has absolutely nothing to do with encryption or the question about how encryption makes us all much more vulnerable to attack and actually makes us all less safe.
Fifth, the comment about Europe is insane. Again, while the attackers may have used some encryption, it's been revealed (since long before Feinstein did this interview) that they did an awful lot of communicating in the clear, including unencrypted SMS and Facebook messenger. On top of that, what the hell does "Europe has been a major driver for more encryption" even mean? Perhaps it's true that they've been adopting more encryption to hide from the NSA's spying that Feinstein herself helped hide from everyone.
Sixth, the whole PlayStation thing has been debunked as a way that the Paris attackers communicated. They did not. Furthermore, she's just wrong that the PlayStation has end-to-end encryption. It does not.
Seventh, does she honestly believe that whoever blew up that Russian airplane downloaded bomb-making instructions from the internet? Also, if it were really so easy to get such instructions and get them through security, don't you think we'd have seen a lot more airplanes blown up by now?
In summary, Feinstein (a month ago) said we should all be deathly afraid of cyberattacks, and the only way to solve it was to give the government much greater access to companies' computer systems, via CISA. And, now, she insists that encryption is an "Achilles's heel" and that actual cybersecurity experts are lying when they say undermining encryption will put everyone at risk. Why? Because The Anarchist's Cookbook is online and Google won't take it down.
Is it really so much to ask for politicians to actually understand technology before they go off on ridiculous, ignorant, uninformed rants about it -- often leading to even more ridiculous and dangerous legislation?
Over the weekend, the Telegraph (which, really, is probably only the second or third worst UK tabloid) published perhaps the dumbest article ever on encryption, written by Clare Foges, who, until recently, was a top speechwriter for UK Prime Minister David Cameron (something left unmentioned in the article). The title of the article should give you a sense of its ridiculousness: Why is Silicon Valley helping the tech-savvy jihadists? I imagine her followups will include things like "Why is Detroit helping driving-savvy jihadists?" and "Why are farmers feeding food-savvy jihadists?"
The article is perhaps even dumber than the headline, but let's dig in.
What will it take? 129 dead on American soil? 129 killed in California? What level of atrocity, what location will it take for the Gods of Silicon Valley to wake up to the dangerous game they are playing by plunging their apps and emails ever deeper into encryption, so allowing jihadists to plot behind an impenetrable wall?
"Plunging their apps even deeper into encryption"? I don't even know what that means, but let's flip it around: How many hacked credit cards, medical records and email accounts will it take for the Gods of Silicon Valley to wake up and recognize they need to better protect user data? Because that's what's actually happening. Encryption is not about "allowing jihadists to plot behind an impenetrable wall" -- it's about protecting your data, even that of Clare Foges, from malicious attackers who want access to it. Or do Foges and her former boss David Cameron communicate out in the open, where any passerby can snoop on their messages?
Does this mean some bad people can use encryption? Yes. But it's not as "impenetrable" as she seems to think (we'll get to her knowledge of technology and encryption in a moment). Even if you're using encryption, there is still plenty of metadata revealed. Furthermore, there have always been ways to communicate in less-than-understandable or less-than-trackable ways -- and the terrorist community has used them forever. They don't need to rely on "Silicon Valley" giants.
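The metadata point is worth making concrete. Here's a minimal sketch (with hypothetical names, not any real messaging system's API) of what a relay server still observes when every payload passing through it is an opaque, end-to-end encrypted blob: who is talking to whom, when, and how much.

```python
import time

class Relay:
    """A message relay that never reads payloads but still logs metadata."""

    def __init__(self):
        self.metadata_log = []

    def forward(self, sender, recipient, ciphertext):
        # The relay cannot decrypt the payload, but routing it requires
        # knowing sender, recipient, size, and time -- all of which it records.
        self.metadata_log.append({
            "from": sender,
            "to": recipient,
            "bytes": len(ciphertext),
            "time": time.time(),
        })
        return ciphertext  # delivered unread

relay = Relay()
opaque = b"\x9f\x02..."  # stands in for an end-to-end encrypted blob
relay.forward("alice", "bob", opaque)
relay.forward("alice", "bob", opaque * 3)

for entry in relay.metadata_log:
    print(entry["from"], "->", entry["to"], f"({entry['bytes']} bytes)")
```

Traffic-analysis techniques build on exactly this kind of log, which is one reason investigators say communications patterns often matter more than content.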
But, more to the point, undermining encryption makes everyone significantly less safe. The whole idea that weakening encryption makes people more safe is profoundly ignorant. Even more ridiculously, Foges blames Ed Snowden for this:
Why? It goes back to Edward Snowden, the weaselly inadequate whose grasp for posterity has proved a boon for Isil. They should be gratefully chanting his name in Raqqa, for it was Snowden’s revelations about government surveillance methods that triggered this extraordinary race towards deeper encryption.
This, of course, is wrong. Stupidly, ignorantly, wrong. Again, studies have shown that post-Snowden, terrorists didn't change anything in how they communicate. They were already using encryption and reports suggest that they'd been using encryption going back more than a decade. Snowden's revelations only pointed out how governments were doing mass surveillance on ordinary citizens. Everyone -- including various terrorist organizations -- already assumed (correctly) that they were spying on terrorist organizations and sympathizers. So it's not clear what Foges is claiming here, other than that she's pulling a Dana Perino and shielding her ex-boss from criticism by blaming the whistleblower.
All this is making the job of the security services infinitely harder. FBI Director James Comey calls the challenge “going dark”. Leads are followed until they hit the brick wall of indecipherable data. A few years ago law enforcement agencies could approach Hotmail or Google with a warrant and get vital information to stop horrors unfolding. Now the data they salvage is often gobbledegook – a load of encrypted numbers that are impossible to read. They are trying to save lives but are being frustrated by encrypted technology.
This is also astoundingly ignorant and wrong. To date, the FBI and others have failed to present a single example of where encryption has actually been a problem in deciphering this information. Also, naming Hotmail and Google is wrong as well, as neither Hotmail nor Gmail currently offer end-to-end encryption in a manner that anyone really uses. Google does have a test version available, but the number of people using it is barely notable. So, yes, if law enforcement goes to Google with a valid warrant, it's going to turn over your emails.
This isn’t about privacy, it’s about profit
This may be the most ignorant statement of all. Encryption also means that these same companies cannot scan the contents of your email, for example to place ads against them. In fact, most people have noted that the reason Google hasn't really embraced end-to-end encryption in Gmail is that it would undermine the business model of that product. But, Foges is on a roll of ignorant bullshit and she can't let little things like facts get in the way.
And, of course she concludes with the usual ridiculousness about how she's just so sure that if they put their minds and money to it, they can figure out how to fix this "problem."
The global tech industry made around $3.7 trillion last year. They employ some of the brightest people on the planet. Apple et al could, if they wanted, employ a fraction of these resources to work out how we can simultaneously keep the good guys’ data secure and keep the bad guys in plain sight. The geniuses of Silicon Valley would be more than a match for the dunderheads in the desert.
Except, overestimating your side and underestimating the enemy seems like a pretty stupid idea -- especially when you're pushing for the impossible. And the idea that you can magically "keep the good guys’ data secure and keep the bad guys in plain sight" is pretty laughable. You don't need to be an expert to recognize the ridiculousness of that statement. How do you determine who "the good guys" are and who "the bad guys" are? Is that something you can code? Because, based on this, I'd argue that Foges is "a bad guy." Is she okay with her information being passed in plain sight? And, of course, the reality is even more ridiculous because, as has been explained in great detail in the past, encryption where "the good guys" have access is encryption that doesn't work -- and thus it's encryption that makes us all less safe.
Asking for encryption that only protects "the good guys" is publicly asking for the impossible. It's an astoundingly ignorant question -- one that anyone with any amount of expertise would tell you is not worth asking.
On Twitter, some people have been pushing back on Foges, and her response has been... well, less than inspiring. When people have pointed out that she seems ignorant of the facts, she not only misses the point, but seems proud of her ignorance.
It's fairly stunning, but Foges' article gets almost everything wrong. It doesn't understand encryption. It doesn't understand what tech companies are doing. It doesn't understand how security works. It's just... wrong. When someone on Twitter confronted her about this, she insisted that she interviewed people who felt that it was possible to create such encryption, but then went silent when lots and lots of tech experts asked her to name a single technology professional who agreed with her.
Similarly, it's somewhat bizarre that the Telegraph doesn't note that Foges spent the past few years as UK Prime Minister David Cameron's chief speech writer, and still lists herself as an advisor to Cameron. Seems like something that should have been disclosed. The newspaper isn't exactly known for its accuracy, but this is an embarrassment for both Foges and the Telegraph.
Judge Liam O'Grady -- the same guy who helped the US government take all of Kim Dotcom's stuff -- is the judge handling the wacky Rightscorp-by-proxy lawsuit against Cox Communications. The key issue: Rightscorp, on behalf of BMG and Round Hill Music, flooded Cox Communications with infringement notices, trying to shake loose IP addresses as part of its shakedown. Cox wasn't very happy about cooperating, and in response BMG and Round Hill sued Cox, claiming that 512(i) of the DMCA requires ISPs to kick people off the internet if they're found to be "repeat infringers." Historically, it has long been believed that 512(i) does not apply to internet access/broadband providers like Cox, but rather to online service providers who are providing a direct service on the internet (like YouTube or Medium or whatever). However, the RIAA and its friends have hinted for a while that they'd like a court to interpret 512(i) to apply to internet access providers, creating a de facto "three strikes and you lose all internet access" policy. Rightscorp (with help from BMG and Round Hill Music) has decided to put that to the test.
This is a big, big deal. If the case goes against Cox, then it would create a massive problem for the public on the internet. Accusations of infringement could potentially lead to you totally losing access to the internet, which could really destroy people's lives, given how important the internet is for work and life these days. The details of the case look like they should favor Cox pretty easily. After all, Cox pointed out that Rightscorp only had licenses from the publishers, meaning they had no copyright in the sound recording -- yet they admitted to downloading the sound recording, suggesting that, if anything, Rightscorp was a mass infringer. On top of that, there was pretty strong evidence that Rightscorp does not act in good faith in how it runs its shakedown practice, telling people that they have to take their computers to the police to prove their innocence (really).
Unfortunately, as Eriq Gardner reports, Judge O'Grady has ruled against Cox on a very key point: whether its current policy grants it safe harbor under the DMCA. The judge said no, though we're still waiting for the full ruling explaining why.
The bigger story is O'Grady's determination that there is "no genuine issue of material fact as to whether defendants reasonably implemented a repeat-infringer policy as is required by §512(i) of the DMCA," granting a motion that Cox is not entitled to a safe harbor defense.
Now, just because you're not protected by the safe harbor it does not mean that you are automatically guilty of infringement. There are cases where sites have not qualified for the safe harbor and still prevailed. But it does make things more difficult and complicated and, much more importantly, opens the door to lots and lots of mischief by the RIAAs and MPAAs of the world to use this to kick people off the internet entirely based on accusations of copyright infringement. That's immensely worrisome.
O'Grady doesn't seem to think that kicking people off the internet is really a big deal. We've discovered that earlier in the case, in the process of flat-out rejecting an attempt by Public Knowledge and EFF to file an amicus brief, Judge O'Grady made his views clear:
I read the brief. It adds absolutely nothing helpful at all. It is a combination of describing the horrors that one endures from losing the Internet for any length of time. Frankly, it sounded like my son complaining when I took his electronics away when he watched YouTube videos instead of doing homework. And it's completely hysterical.
That's his response to two well-known public interest groups explaining to him the "real world harmful effects" of Rightscorp's copyright shakedown trolling business. But he didn't want to hear any of it. Because protecting the ability of Americans to not be the subjects of extortion schemes, and to enable them to communicate and work, is "hysterical" and no different from kids not doing their homework because of too much YouTube.
The details here matter, but I would imagine that Cox is likely to appeal. One hopes that the appeals court is more open to listening to the concerns over copyright trolling and kicking people off the internet.
from the always-good-to-legislating-while-freaking-out,-huh? dept
The attacks in Paris were a horrible and tragic event -- and you can understand why people are angry and scared about it. But, as always, when politicians are angry and scared following a high-profile tragedy, they tend to legislate in dangerous ways. It appears that France is no exception. It has pushed through some kneejerk legislation that includes a plan to censor the internet. Specifically the Minister of the Interior will be given the power to block any website that is deemed to be "promoting terrorism or inciting terrorist acts." Of course, this seems ridiculous on many levels.
First, there are the basic concerns about free speech. Yes, I know this is France and it doesn't value free speech in the same way as the US, but it's still rather distressing just how quickly and easily the French government seems willing to adopt censorship measures. Second, what good does this actually do? If ISIS sympathizers are expressing their views publicly, doesn't that make it easier to track them and to find out what they're doing and saying? Isn't that what law enforcement should want? Focusing on censorship rather than tracking simply drives those conversations and efforts underground, where they can still be used to influence people, but where it's much harder for government and law enforcement to keep track of what's being said. It also only confirms to ISIS supporters that what they're saying must be so important and valuable if the government won't even let them say it. It's difficult to see how it does any good, and instead it opens up the possibility of widespread government censorship and the abuse of such a power.
Back in 2013, we were impressed when the folks at Automattic (the company behind WordPress) actually filed some lawsuits against people who were abusing DMCA takedown notices just to take down content they didn't like. Earlier this year, the company also took a strong stand against DMCA abuse by including a "Hall of Shame" in which it called out and shamed particularly egregious takedowns. At the time, we mentioned that other companies should pay attention. Fighting for your users' rights is important, but too many companies don't do it (and many just take things down on demand).
Now YouTube has stepped up a bit as well. There have been plenty of complaints about how YouTube -- and ContentID in particular -- deal with fair use. It's quite difficult for an algorithm to determine fair use, and that's part of the reason why we get nervous when copyright system defenders insist that you can automate takedown processes without collateral damage. However, Google has announced that it will promise to pay the legal fees (up to $1 million) of certain YouTubers where takedowns have been issued in cases where YouTube agrees that fair use applies:
We are offering legal support to a handful of videos that we believe represent clear fair uses which have been subject to DMCA takedowns. With approval of the video creators, we’ll keep the videos live on YouTube in the U.S., feature them in the YouTube Copyright Center as strong examples of fair use, and cover the cost of any copyright lawsuits brought against them.
We’re doing this because we recognize that creators can be intimidated by the DMCA’s counter notification process, and the potential for litigation that comes with it (for more background on the DMCA and copyright law, check out this Copyright Basics video). In addition to protecting the individual creator, this program could, over time, create a “demo reel” that will help the YouTube community and copyright owners alike better understand what fair use looks like online and develop best practices as a community.
It is absolutely true that even when video creators believe that their use is non-infringing because it's fair use, many still won't issue a counternotice, because the next step, if the copyright holder disagrees, is to go to court. And even if you have a slam dunk case, that can be both time consuming and incredibly expensive. And, of course, if you lose, it can be life-destroyingly expensive, thanks to the idiocy of statutory damages provisions in copyright law.
Constantine Guiliotis, who goes by Dean and whose channel dedicated to debunking sightings of unidentified flying objects has just over 1,000 subscribers, is one of the video makers YouTube will defend. Mr. Guiliotis has received three takedown notices from copyright holders of videos that he has found online and posted to his YouTube channel, U.F.O. Theater.
In his videos, Mr. Guiliotis includes the videos he found but also provides analysis and commentary, which YouTube argues is within the guidelines of fair use rules. The site reposted the videos after its review and told Mr. Guiliotis it would defend him against any future legal action. Like the other creators YouTube has selected, Mr. Guiliotis has not been sued for his videos.
“It was very gratifying to know a company cares about fair use and to single out someone like me,” Mr. Guiliotis said.
Sherwin Siy, over at Public Knowledge, notes that Google probably won't have to spend much money, as any copyright holder who realizes that Google is backstopping the videos will probably (wisely) realize that going to court is less likely to have the desired effect (which is usually just intimidating people into taking down content). However, it's still an important move in creating extra protection for fair use and in helping to establish a clear bar of what's considered to be fair use:
But while this means that Google isn’t likely to spend much, if any money, in litigating these cases, the program still does two very important things. First, it does in fact protect those uploaders. By giving these videos a stamp of approval, Google’s legal team will make the sort of person who sends a bogus or careless takedown notice think even harder about filing a bogus lawsuit. That sort of reassurance can be enough encouragement for someone to put back a video. Oftentimes, someone receiving a takedown notice can shy away from exercising her rights to have it put back because doing so exposes her to a lawsuit. With this sort of protection, much of that fear disappears.
But perhaps the more useful aspect of the program is that it sets a clear example of what fair use is. As videos are added to the program, other users will have a useful set of models that show what Google’s lawyers, at least, are confident is fair use. That information can help an everyday YouTube user in ways that more text-based and specific guides (for educators, etc.) might not.
And this collection of videos sets an example for far more than just other video creators. The set of fair uses on display can act as a living example of the predictability of fair use. Too often, the doctrine is considered hazy or indefinite or impossible to determine. And while there are lots of cases that can exist in a gray area, there’s even more cases that actually are pretty black or white. Most people have seen clearly infringing videos; this program will show a wider audience clearly non-infringing videos. That’s particularly important in the face of other countries who have yet to adopt fair use as a limit on their copyright laws, and have been told that it’s too unpredictable for them to rely upon.
This is why YouTube’s announcement is a game-changer: Copyright-based censorship strategies are no longer risk free. Now, before launching an unjustified DMCA takedown, the claimant will have to weigh the risk of going up against Google and its deep pockets in a lawsuit. (The legal environment could get even more interesting in light of a recent ruling in the Prince “dancing baby” case that could make it easier for fair use victors to claim legal fees from those who removed their videos).
I don't know if I'd go that far. Again, Google is only protecting a "handful" of videos, but at the very least it may scare off some of the more egregious abuses, and that's always a good thing. Now, we just need even more platforms to recognize that fighting for your users' fair use rights is important.
At this point, we all know that the DMCA is a tool that is widely abused for censorship purposes. We have written post upon post upon post upon post upon post upon post upon post upon post upon post upon post upon post upon post upon post upon post upon post upon post detailing this (and those were just from the first page of my search results).
Most people, once aware of this, would recognize that perhaps there's a problem with the DMCA and that it should be fixed. However, some people seem to look at that and say "hey, that's an awesome censorship tool, perhaps we should expand it to other content I don't like." That's why we see people talk about expanding it to cover revenge porn or mean people online.
Or, apparently, terrorism. Yes, terrorism. Paul Rosenzweig, who (believe it or not) really once was a high ranking official in the Department of Homeland Security thinks one way to fight ISIS is to seize their copyrights and then use the DMCA to censor them. He's not joking. Or, at least I think he's not. There's a small chance that it's really a parody, but Rosenzweig has a history of truly nutty ideas behind him, so I'm pretty sure he's serious.
That model might, with a small legislative change, be adapted to the removal of ISIS terrorist speech. All that would be required was a modification of the law to assign the copyright in all terrorist speech to a non-terrorist organization with an interest in monitoring and removing terrorist content. Here are the essential components of such a plan:
Identification of terrorist organizations to whom the law would apply;
A definition of unprotected content associated with that terrorist organization;
An extinguishing of copyright in such unprotected content; and
Transfer of that copyright to a third party.
I love that "all that would be required" because what he's really saying is that "all that would be required" is we upend basically all concepts regarding free speech and copyright just to silence some people I really don't like. No biggie.
At this point, you should probably already be banging your head on a nearby hard surface, but it gets worse. He actually then worries about how much work it would be for the government to take all these copyrights and issue all those darn takedowns, so instead he suggests handing the copyrights to a third party, which he suggests could be set up similarly to the Red Cross (?!?) and saddling them with the task of issuing takedowns. Perhaps we can name them the Silencing Cross or something along those lines.
He insists that the First Amendment isn't really a problem here because terrorist speech can be seen as "material support" of terrorism and the Supreme Court has already wiped that away.
The most salient case on point is Holder v. Humanitarian Law Project, 561 U.S. 1 (2010), a Supreme Court case that construed the USA PATRIOT Act's prohibition on providing “material support” to foreign terrorist organizations (18 U.S.C. § 2339B). The case is one of the very rare instances of First Amendment jurisprudence in which a restriction on political speech has been approved, and the only one of recent vintage.
The Humanitarian Law Project (“HLP”) had sought to provide assistance to the Kurdistan Workers’ Party in Turkey and Sri Lanka's Liberation Tigers of Tamil Eelam. According to HLP, their goal was to teach these two violent organizations how to peacefully resolve conflicts. Congress had, previously, prohibited all material aid to designated organizations that involved “training”, “expert advice or assistance,” “service,” and “personnel.” HLP argued that its assistance was protected political speech. The government countered with the argument that a categorical prohibition on speech in the form of assistance was required because even non-terrorist assistance would "legitimate" the terrorist organization, and free up its resources for terrorist activities. The Court approved the limitation on speech because it was narrowly drawn to cover only “speech to, under the direction of, or in coordination with foreign groups that the speaker knows to be terrorist organizations” and served a national interest of the highest order – combatting terrorism.
It would follow, in the wake of Humanitarian Law Project, that just as speech “to” or “under the direction of” or “in coordination” with a foreign terrorist organization may be limited, so too may the content actually published “by” the terrorist organization.
I'm not so sure that First Amendment scholars would agree with him that the shift from speech "to" to speech "by" is that simple, but that's really beside the point.
Let's go back to basics here. Congress only has limited power over creating copyright law. Here it is:
To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.
I've read that a few times now and I really am struggling to find the part that says "and to censor terrorists."
I mean, I guess the single redeeming idea in Rosenzweig's proposal here is that it's a pretty blatant admission that copyright law is about censorship much of the time. The ISIS-insanity-freakout among political types is really kinda crazy to watch in action. First they wanted to use net neutrality to censor ISIS and now they want to use copyright law? What will they think of next? Defamation law is always popular. Perhaps we can amend Section 230 to silence terrorists. Or, I know, why don't we use the ITC. Or trade agreements. Oh wait, that's basically the MPAA's playbook to censor speech... and now surveillance state apologists can make use of it too!
Meanwhile, hey, maybe instead of trying to censor the folks at ISIS, you watch what they're saying and use that for surveillance purposes. I know, I know, crazy thought. But at the very same time we're having this debate, these very same people are arguing that we need less encryption so law enforcement and the intelligence community can see what ISIS is saying. Yet here's a way to see what they're saying and the focus is on "how do we silence such speech and make it harder to track!"
But, really, Paul, congrats -- we thought we'd heard the dumbest idea in a long time with Joe Barton's "use net neutrality to censor ISIS," but you've topped it. This is the dumbest idea we've heard in a long, long time.
from the are-there-any-good-presidential-candidates? dept
Presidential candidate Hillary Clinton gave a speech yesterday all about the fight against ISIS in the wake of the Paris attacks. While most of the attention (quite reasonably so) on the speech was about her plan to deal with ISIS, as well as her comments on the ridiculous political hot potato of how to deal with Syrian refugees, she still used the opportunity to align herself with the idiotic side of the encryption debate, suggesting that Silicon Valley has to somehow "fix" the issue of law enforcement wanting to see everything. Here's what she said:
Another challenge is how to strike the right balance of protecting privacy and security. Encryption of mobile communications presents a particularly tough problem. We should take the concerns of law enforcement and counterterrorism professionals seriously. They have warned that impenetrable encryption may prevent them from accessing terrorist communications and preventing a future attack. On the other hand, we know there are legitimate concerns about government intrusion, network security, and creating new vulnerabilities that bad actors can and would exploit. So we need Silicon Valley not to view government as its adversary. We need to challenge our best minds in the private sector to work with our best minds in the public sector to develop solutions that will both keep us safe and protect our privacy.
Now is the time to solve this problem, not after the next attack.
Except that no such solution exists. Weakening encryption undermines both security and privacy. There's no "balance" to be had here: you maximize both security and privacy the same way, with strong encryption.
Also, the bit about how "Silicon Valley" has to "not view government as its adversary" is another bullshit line favored by James Comey and others, who keep insisting that when technologists explain that backdooring encryption in a manner that only "the good guys" can use is impossible, what they really mean is that they haven't tried hard enough. Once again, that's not it. What pretty much the entire tech community has been saying is that it's impossible to create such a thing without undermining the whole system and making everyone less safe. Hell, here's security expert Steve Bellovin explaining this pretty clearly, step by step: why it won't work, why it makes things more dangerous, why it will be abused, and why it will put us all at risk.
And the reason that Silicon Valley views the government as an adversary is that speeches like Clinton's set it up that way. Her speech, like Comey's past speeches, directly sets up the government as an adversary to good computer security, asking technologists to undermine their own creations and make everyone less safe for some unclear, amorphous belief that it might make a few people more safe at some point in the future. So the answer isn't scolding Silicon Valley, as Hillary has chosen to do, but rather understanding reality, and recognizing that what she is directly advocating for would harm the safety of Americans and others around the globe.
This raises serious questions about who is advising Clinton on tech policy. When she was at the State Department, it actually did a lot of really good things on encryption and on protecting the communications of people around the globe. It's pretty ridiculous for Clinton to undermine her own efforts with such a dumb statement in this speech.
from the imitation-is-more-than-just-flattery dept
We're back again with another in our weekly reading list posts of books we think our community will find interesting and thought provoking. Once again, buying the book via the Amazon links in this story also helps support Techdirt.
This week, we've got the wonderful book The Knockoff Economy: How Imitation Sparks Innovation by law professors Kal Raustiala and Chris Sprigman. We have written about the book before and have even hosted some excerpts from the book, but it's a really great and important read. We mentioned it earlier this week in our story about the attempts to lock up pot with intellectual property protections -- because that story reflected much of what's in the Knockoff Economy.
The key point of the book is to highlight that the very premise behind many calls for intellectual property protection doesn't stand up to much scrutiny. Defenders of the system usually insist that copyrights and patents are necessary for creating the incentives to create or to innovate in a market. Yet, Raustiala and Sprigman carefully detail a bunch of different industries that don't have intellectual property protection, and over and over again, they see the same thing: more competition and more innovation, rather than less. For many years, we've highlighted the fact that it is frequently competition that drives innovation, yet so much of our public policy is based on the fallacy that it's monopoly rights that drive innovation. Thus, the Knockoff Economy is a really useful work in highlighting that perhaps the very premise that so much intellectual property protection is based on is wrong.
That's not to say, necessarily, that copyrights or patents have no place at all in modern society (though I know some of you do believe that). But, at the very least, we should be looking at the actual impact of those laws, and asking whether they are really increasing innovation or doing something else entirely.
All of this is no surprise, as just a couple of months ago the intelligence community's top lawyer flat-out admitted that he and his friends planned to wait for the next terrorist attack to push their agenda.
Of course, over the past few days, the following has happened:
So that seems to be the story so far, despite what you may have seen with hand-wringing and all sorts of freakouts in the press about encryption.
Yes, preventing terrorism is important. And it would be great if the intelligence community were actually able to do that. But it seems pretty clear that mass surveillance techniques aren't doing much to help at all, though they are diminishing the privacy of everyday citizens. Perhaps, before rushing to expand the surveillance state and undermine the encryption that actually does keep us all safe, we should recognize reality, rather than the fantasy-land pronouncements of FBI Director James Comey, CIA Director John Brennan and their friends.
Famous TV news talking head Ted Koppel recently came out with a new book called Lights Out: A Cyberattack, A Nation Unprepared, Surviving the Aftermath. The premise, as you may have guessed, is that we're facing a huge risk that "cyberattackers" are going to take down the electric grid, and will be able to take it down for many weeks or months, and the US government isn't remotely prepared for it. Here's how Amazon describes the book:
Investigative reporting that reads like fiction - or maybe I just wish it was fiction. In Lights Out, Ted Koppel flashes his journalism chops to introduce us to a frightening scenario, where hackers have tapped into and destroyed the United States power grids, leaving Americans crippled. Koppel outlines the many ways our government and response teams are far from prepared for an un-natural disaster that won't just last days or weeks - but months - and also shows us how a growing number of individuals have taken it upon themselves to prepare. Whether you pick up this book to escape into a good story, or for a potentially potent look into the future, you will not be disappointed.
The book also has quotes ("blurbs" as they're called) from lots of famous people -- nearly all of whom are also famous TV news talking heads or DC insiders who have a long history of hyping up "cyber" threats. But what's not on the list? Anyone with any actual knowledge or experience in actual computer security, especially as it pertains to electric grids.
Want to know how useful the book actually is? All you really need to read is the following question and answer from an interview Koppel did with CSO Online:
Did you interview penetration testers who have experience in the electric generation/transmission sector for this book?
No, I did not.
Also in that interview, Koppel admits that he hasn't heard anything from actual information security professionals (though he concedes he may have missed it since he's been on the book tour). But, still, if you're writing an entire book whose premise rests entirely on information security practices, you'd think this would be the kind of thing you'd do before you write the book, rather than after it's been published. Instead, it appears that Koppel just spoke to DC insiders who have a rather long history of totally overhyping "cyberthreats" -- often for their own profit. In another interview, Koppel insists that he didn't want to be spreading rumors -- but doesn't explain why he didn't actually speak to any technical experts.
“Going in, what I really wanted to do was make sure I wasn’t just spreading nasty rumors,” said Koppel in a phone interview.... “After talking to all these people, I satisfied my own curiosity that this not just a likelihood but almost inevitable.”
"All these people"... who apparently did not include any computer security experts. Koppel claims that this isn't a priority because Homeland Security doesn't want to "worry" the American public:
“The public would have to understand it’s a plan that will work but if you don’t have a plan, that can be more worrisome. I just hope it becomes part of the national conversation during the presidential campaign.”
What?!? Homeland Security doesn't want to worry the American public? Which Homeland Security is he talking about? The one that manhandles the American public every time they go to an airport? The same one that is constantly fearmongering about "cyber attacks" and "cyber Pearl Harbor"? Is Koppel living in some sort of alternative universe?
Is there a chance that hackers could take down electric grids and cause serious problems? Sure. Anything's possible, but somehow we've gotten along to date without a single incident of hackers taking down any part of the electrical grid. And most actual information security professionals don't seem to think it is a "likely" scenario, as Koppel claims. The whole thing seems to fit into the usual category of cyberFUD from political insiders who are salivating over the ability to make tons and tons of money by peddling fear.
Is it important to protect infrastructure like the electric grids? Yes. Should we be aware of actual threats? Absolutely. But overhyping the actual threat doesn't help anyone and just spreads fear... and that fear is quickly lapped up by people who will use it to profit for themselves.
from the let's-put-the-blame-where-it-belongs dept
Over the past few days, we've been highlighting the fever pitch with which the surveillance state apologists and their friends have been trampling over themselves to blame Ed Snowden, blame encryption and demand (and probably get) new legislation to try to mandate backdoors to encryption.
And yet, as we noted yesterday, it now appears that the attackers communicated via unencrypted SMS and did little to hide their tracks. On top of that, as Ryan Gallagher at the Intercept notes, some of the attackers were already known to law enforcement and the intelligence community as possible problems. But they were still able to plan and carry out the attacks. Even more to the point, Gallagher points out that after looking at the 10 most recent high profile terrorist attacks, the same can be said for each of them:
The Intercept has reviewed 10 high-profile jihadi attacks carried out in Western countries between 2013 and 2015..., and in each case some or all of the perpetrators were already known to the authorities before they executed their plot. In other words, most of the terrorists involved were not ghost operatives who sprang from nowhere to commit their crimes; they were already viewed as a potential threat, yet were not subjected to sufficient scrutiny by authorities under existing counterterrorism powers. Some of those involved in last week’s Paris massacre, for instance, were already known to authorities; at least three of the men appear to have been flagged at different times as having been radicalized, but warning signs were ignored.
Nicholas Weaver, writing over at Lawfare, has a really fantastic article on "the limits of the panopticon" that basically puts all of this into perspective, noting (1) with so many "known radicals" to follow, there is no way for the intelligence community and law enforcement to actually get the information to predict these attacks and (2) there are plenty of ways for people who know each other to communicate, even without encryption, that won't increase suspicion.
First, the sheer volume of “known radicals” -- at least 5,000 -- makes prospective monitoring impossible. How does one effectively monitor 5,000 individuals and identify who among them will pose an actual threat? After all, most never will. It didn’t matter that Salah Abdeslam used his own name and credit card when booking his hotel room. Abdeslam was simply one of thousands identified as maybe or maybe not posing a threat.
Even reducing the volume of targets may be insufficient. Assuming the authorities were able to focus on 500 or 50 individuals instead of 5000, the communication patterns of a terrorist cell are remarkably similar to those of any family or group. Unless authorities are aware that an individual is actively (rather than potentially) dangerous, electronic monitoring may provide little prospective benefit, unless they can intercept the contents of a communication that makes a threat clear.
But the communication content of an even minimally proficient terrorist provides little value. Human codes are often employed. We now know that final coordination took place using unencrypted SMS, but unless one has already identified the terrorist cell and at least some basic details of a plot, tracking an SMS that says "On est parti on commence" (which roughly translates to “Let’s go, we’re starting”) provides little actionable intelligence.
In other words, all the calls for increased surveillance and less encryption really seem like a smoke screen by an intelligence community that failed. It's entirely possible that their job is an impossible one, but at the very least we should be dealing in that reality. Instead, the intelligence community that failed is doing everything possible to shift the blame to encryption and Snowden, rather than admitting the fact that they knew who these people were, that encryption wasn't the issue and that maybe doubling down on those policies won't help at all. Of course, it might take some of the pressure off of them for failing to prevent the attack.
Still, as we've noted, almost every case of a "prevented" attack hasn't involved actual plotters, but rather fake plots cooked up by the FBI itself. So we seem to have a law enforcement and intelligence community that is terrible at stopping real plots, but really good at putting unrelated people in jail for made-up ones. And now they want more power for surveillance, and to undermine the encryption that keeps us all safe?
Massively overpriced shit being marked down and called a sale yet again.
It's pay what you want -- so not sure how you can claim it's overpriced.
At this point Mike, you should just ban all non physical products from your suggested shit.
Many people (including me!) have actually found the courses pretty helpful. I'm sorry that you disagree.
Considering your site often focuses on ethical issues it tarnishes your message to have it associated with this bullshit. Which is more important, your stacksocial bucks, or not being a hypocrite? Here's a hint: being a hypocrite directly degrades your ability to be a force for good in the world.
I'm a bit confused as to what we've done here that you find so unethical?
Yes, they did. They unlinked links. That's modification.
They added context to it, which in the circumstances wasn't warranted.
They did that too. And, at the very least, they could have been more direct in that as well -- letting users know WHICH URLs were the concerning ones, which would have cleared up some of the confusion.
If I have to choose between Google warning me about malware URLs in my email and not warning me, I'm gonna choose warning me. And TechDirt *admits* that the domain *was hosting malware*, so the warning was accurate.
The site was no longer hosting malware -- as the story noted. The story was about how the domain was taken away from the malware distributor.
Asking "permission" doesn't actually solve anything though, because the artist can't prevent legal use of their music. If a politician feels that a particular song reflects the way they feel or the image they want to project, they're free to pay their money and play it to their heart's content.
Artists might not like it, and they're free to express that opinion, but the same copyright laws that they claim are indispensable for artistic creation allow their music to be used this way. Live by the copyright sword, die by the copyright sword.
Yeah, I think what Tim meant is not that they should HAVE to ask permission, but if they want to avoid giving free publicity to people who hate them, maybe ask first. It's a different sort of permission. Basically, figure out how not to give a wide open platform for someone famous to attack you.
Yes, yes, correlation does not equal causation. But sometimes it is because of a causal relationship, and in this case the precise timing of the drops -- exactly matching the copyright terms -- provides a very, very, very direct and clear match with the data. The alternative explanations don't even remotely come close to explaining the data.
So, sure, it's a theory, but it's the best one so far. If you've got a better one, present it.
And, yeah, I love that page and have pointed people to it in the past, but this data is not the same thing. It's not correlation from mapping two graphs onto each other (which that page frequently, if hilariously, games by changing the scale on each side). This data, REPEATEDLY, using three different data sets, ALL SHOWS a MAJOR SHIFT at EXACTLY the moment of the public domain cutoff. That's not just "these numbers correlate." That's evidence of a serious issue.
It seems that blaming copyright for all the reasons that "culture disappeared" ignores other factors behind the decline in new book publications post-1910.
That would be interesting if the data above was about new books being published. It is not. The first chart was about new books available from those time periods. The second chart is digitized publications. Interpreting the chart above as being about the number of new works published is just... wrong. If you look at the actual data on new works published it has generally continued to rise over time.
During this time the economic situation may have had an impact on the willingness of publishers to pay authors, but this was massively outweighed by the explosion of new technologies that enabled mass advertising to have a more profound impact on culture, and by the growth of the mass culture it in part bankrolled. Book publishing was displaced by more appealing media that could be enjoyed together and less expensively.
Again, that has nothing to do with any of the charts above, but really, nice try.
Also, if you look at the massive changes on the chart, they date EXACTLY to copyright terms, and NOT to the specific dates/events that you mention. The US data hits a cliff at exactly the public domain cutoff of 1923. The European data, you'll see, has an initial decline around the same date (countries that follow life + 70) and a second decline in the early 1940s (countries that follow life + 50).
So, yeah, nice story, but the data does not say what you think it says. You read it wrong and then made up a story about it that doesn't even fit with what you claimed. And you claim that the folks with the actual data are the desperate ones? Wow. Buy a mirror.
Of course, your quotation of the clearly non self-serving analysis of how useful using Google is compared to buying the book clearly invalidates my repeated personal experience using my lying eyes.
If Google Books snippets are enough to replace buying books for you, then I can only suggest that you must have bought a lot of books for basically no more than one or two pages of those books. That's... weird.
As noted above (did you even read my comment?), the initial fear was over the potential of a much broader ruling that would have impacted wider cloud services. However, BECAUSE of people raising those concerns, SCOTUS ruled narrowly (if bizarrely), and that has certainly helped to limit some of the initial concerns. Focusing on the "is it cable" question and avoiding the larger copyright issues helped (as noted above, and ignored by you).
That was true. The lack of a clear standard in the SCOTUS ruling left plenty of companies in the dark. As you or someone else notes below, it did show up in at least some other lawsuits already (thankfully, those have been decided correctly). Furthermore, I personally know of at least two companies that shelved plans for new cloud services because of Aereo.
Just because YOU are ignorant of the impact, doesn't mean there was none.
Still waiting for the parade of horribles you and others predicted would arise if Aereo lost to come true. I suspect I'll be waiting forever. Doesn't the FUD approach get old? I'm sure it generates clicks, but are they worth it?
It's kind of tough to demonstrate the innovations that are NOT happening thanks to fear of litigation, isn't it? The big concern is still very much there, which is that the Aereo ruling could have the potential to chill innovation in the field of cloud computing. I will say that, because of the odd way in which the SCOTUS ruling came down -- focusing so much on the issue of "looks like cable" -- it probably was not as bad as it COULD have been, where a broader ruling would have resulted in real problems for cloud providers. But it should be noted, contrary to your mocking, that a good part of the reason why SCOTUS was careful on that point was that folks raised the concerns of a bad ruling on cloud computing.
So your attack here seems fairly misguided. First, you're asking people to show you what hasn't happened as a result of the ruling (kind of an impossibility), and second, you ignore that the specifics of the ruling were at least somewhat more targeted than the issue that most of the pre-ruling concerns discussed.
So, no, it wasn't "FUD". Also, who the fuck thinks that writing about the intricacies of copyright law "generates clicks"? Are you daft?
Yes, the Constitution is a limiting document meant, by construction, to remove any ambiguity as to the powers of government. It seems strange to argue the powers granted to Congress were done without the belief and surety of their use. You make it seem as though the power to lay and collect taxes, to regulate commerce with foreign nations, to coin money, and to raise and support armies was granted without the expectation that Congress would do so. Yes, Congress could abolish the Post Office. However, there is little doubt, there was an expectation they would create one.
And yet there are things in Section 1, Article 8 that the Congress chooses not to do, such as "grant letters of marque and reprisal." Even if your Constitutional scholarly knowledge is lacking, you are still missing the point and moving the goalposts.
I was not arguing that Congress need not have any copyright law at all. You started off this conversation by insisting that a registration requirement was the same as Congress removing your Constitutionally granted right.
You were wrong on multiple counts: 1. The right is not constitutionally granted. 2. Copyright law in the US was without a registration requirement for many, many years.
You are now moving the goalposts rather than admitting that you made incorrect statements.
No, it states why Congress should act ("To promote the Progress of Science and useful Arts") and the mechanism to do so ("by securing for limited Times to Authors and Inventors the 'exclusive right' to their respective Writings and Discoveries").
You keep missing the point that all it does is grant Congress the power to do this -- not the requirement. And, furthermore, the preamble part about promoting the progress of science is there to LIMIT the power of Congress, saying that it should not issue monopolies for any other reason.
In the same section, directly preceding, Congress is granted the power "to establish post offices and post roads." Directly following, Congress is granted the power "To constitute tribunals inferior to the Supreme Court." By your rationale, Congress was not obligated to establish the post office nor tribunals inferior to the Supreme Court. I do not agree, and trust, after much debate, that items in the Constitution were placed there purposely.
You are actually quite correct that the Constitution in no way mandated that Congress must do either of those things. It simply granted them the option of doing so. Congress is free to kill off the Post Office or the lower courts if it decides that's appropriate. And, the first option may actually happen one of these days. The second... not so much.
YouTube is chock-full of infringing material. How does that help creators? Many are strong-armed into choosing between constantly sending takedown notices or monetizing their content on YouTube's take-it-or-leave-it terms. How is that fair? What if they simply don't want their stuff on YouTube? Wouldn't "good copyright law" also help them protect their copyrights in a meaningful way?
Was thinking of how best to respond to this and I think I'm going to turn it into a full post, rather than just a comment... and the more I worked on it, the bigger a project it became, so it may have to wait until I have the time to really go through it all. So... stay tuned.
So, Mike, do you believe that copyright law should permit anyone to distribute any content, with or without licenses? Serious question. Thanks.
I see you're making a stink down below about this in your usual trollish manner. I'm not sure what you think you're proving, other than making yourself look foolish.
However, to be clear, you have (intentionally?) misread what I wrote above. What I am saying is that good copyright law enables new platforms to rise up that help content creators: we see tools like YouTube and SoundCloud and Vimeo and Kickstarter and Patreon and Spotify that have been built that allow *content* creators to take control over the distribution of their work, while also offering opportunities to monetize that work in many cases.
For many, many years, copyright did not do that. Instead, what it did was set up a system whereby content creators had to beg/plead with gatekeepers (i.e., labels, studios) for the chance to *give up their own copyright* just so those labels/studios would release/promote/monetize their content. That seems like a pretty bad copyright system that harms the vast majority of content creators, helping only a tiny, tiny fraction at the top, but helping those gatekeepers tremendously in the meantime.
My point (which perhaps was not too clear for someone intent solely on attacking me) was that we're seeing wonderful new platforms these days that put the power and control back into the hands of creators, and are enabling SO MANY MORE creators to create, to release, to promote, to distribute and to monetize their works. I would think you'd agree that's a good thing, unless you really don't support content creators, as you claim.
Given that, it seems clear that any copyright law should try to support the rise of such innovative platforms that enable so much more creation... rather than burdening them with liability that makes them impossible to exist, and pushes us back towards that first world of gatekeepers.
Finally, in answer to your whining below: my TIME and ATTENTION are scarcities. You have no right to them, no matter how often you act like a spoiled child in my comments. I know you've done this for years and years at this point. Most children grow up. I'm surprised you have not.
A complaint? Sure. "Criminal charges"? Hells no. I have no doubt that he filed a complaint form, just as I have no doubt that the FLPD gave it the due respect it deserves (ie. "none"), meaning I doubt that anyone listed need have any fear of travelling to Ft. Lauderdale any time soon.