Posted on Techdirt - 30 November 2015 @ 3:32am
On Friday, the Wall Street Journal's Stacy Meichtry and Joshua Robinson published an in-depth piece of reporting on the planning and operational setup of the Paris attackers, revealing a number of previously unknown details. The key finding isn't just the total absence of anything resembling sophisticated encryption; it's the opposite: the attackers did basically nothing to hide themselves, communicating out in the open and booking houses and cars in their real names, despite some of them being on various terrorist watch lists. The piece describes how Brahim Abdeslam booked a house under his own name on Homelidays, a French service similar to Airbnb (though it predates Airbnb by a lot). So did his brother, Salah Abdeslam, who booked a hotel for a bunch of the attackers (using his real name) on Booking.com.
The piece mentions, as we noted earlier, that the attackers appeared to communicate via unencrypted SMS. It also mentions that the guy who planned the attacks, Abdelhamid Abaaoud, bragged about his plans in ISIS's English-language glossy magazine months ago. You'd think that would have prompted the intelligence community to actually watch the guy, but again, it appears he did little to hide his movements or communications.
In fact, the report notes that after Abaaoud shot up a restaurant, he went back to check out the aftermath of the attacks that he had helped put together -- and kept his mobile phone with him the whole time, making it easy to track his whereabouts:
An hour after Mr. Abaaoud finished shooting up restaurants, he emerged from a metro station in the 12th district, according to data police pulled from his cellphone. He headed west toward the sound of sirens, his path zigzagging as he returned to the scene of his crimes.
For two hours after the massacre ended, prosecutors say, Mr. Abaaoud surveyed his handiwork, at one point blending in with panicked crowds and bloodied victims streaming from the Bataclan.
You can read the entire thing and note that nowhere does the word "encryption" appear. There is no suggestion that these guys really had to hide very much at all.
So why is it that law enforcement, the intelligence community and various politicians around the globe are using the attacks as a reason to ban or undermine encryption? Again, it seems pretty clear that it's very much about diverting blame for their own failures. Given how out in the open the attackers operated, law enforcement and the intelligence community failed massively in not stopping this. No wonder they're grasping at straws to find something to blame, even if it had nothing to do with the attacks.
Posted on Techdirt - 25 November 2015 @ 12:38pm
The Montana Standard, a newspaper in Butte, Montana, has apparently decided on a new strategy for its online commenters: requiring "real names" to be associated with every comment. We've spent plenty of time arguing why this is kind of stupid; many websites falsely believe that anonymity leads to less friendly comments and that using "real names" will magically make people nice (in our experience, people with real names can still be insufferable jackasses, while some of our best comments come from anonymous users, but...). Still, that change in policy alone isn't that big of a deal. What is a big deal is that the Standard has decided to apply it retroactively. As it stands now, and as it's been in the past, when you sign up to comment, the site asks for both your real name and your "screenname," and states pretty clearly that the screenname is the one that will be displayed with your comments:
But on January 1st, all of that changes, and whatever people put in as their "real names" will show up. The Standard is allowing people who are concerned to email before December 26th to argue for why their comments should be removed before the January 1st switchover, but it seems likely that many won't even realize this is happening. Lots of people have been using the comments on that post itself to criticize this plan, and Paul Alan Levy has written a thorough post explaining why this is so problematic:
The Standard’s retroactive application of its real name policy seems to me highly irresponsible. You can easily imagine a newspaper deciding that it is not going to rely on anonymous sources in its news stories – certainly there have been media entities that have claimed to have adopted such policies. But can you imagine a paper doing so retroactively, leaving its stories online that were previously sourced anonymously but replacing such categories as “inside source” with the name of a whistleblower, or replacing “highly placed official” with the name of the conniving government official speaking “candidly” about his internal adversaries under cover of source protection? “I’m sorry, Deep Throat, we have decided to tell Nixon and his henchmen who you really are.” You could have a number of unhappy sources, not to speak of some dead ones where the sources live abroad in a society or culture where dissent is not tolerated. The source’s life could be in danger even if the source lives inside the United States, if the source was talking about the Crips, or MS-13, or some militia group.
The Standard’s editor told Davis that it is publishing notice of its new policy, including the retroactive application, in both its print editions and web site, and that it “is sending emails to prior commenters, when it has valid email addresses.” (Although as of today, when I looked at the page where the site’s users register to be allowed to comment, there was no notice of any impending policy; to the contrary, the site still promises that the screen name “is the name that will be displayed next . . . for comments, blog posts, and more. Choose wisely!”) But depending on how long it has been since the Standard started accepting registrations, it is quite possible that users may have changed their email addresses, or have moved on to a new email address without ever canceling the old one, and hence they might not see the Standard’s notice. And it is also quite possible that some of the commenters may have made comments that place their economic or even physical security at risk from the individuals or companies that they criticized in online comments. Or, their comments might have revealed something about their own experiences or past conduct that they were willing to share with the public anonymously, making a valuable contribution to a discussion, but would never have been willing to provide had they known that their own names would be attached. The Standard could be putting livelihoods and more at risk through its retroactive changes.
Levy further tested the existing commenting system, discovering that it was, in fact, easy to sign up with fake "real names" -- including a test where he signed up using the name of the Standard's editor, David McCumber.
I was able to register with a completely invented name, in which I provided a real email address but no other truthful information in the various boxes on the registration page. The comment I posted is the only one that was posted on November 23, 2015 – it appears with the screen name “notmyrealname.” As a further test, I registered again today, again providing false information throughout the registration process, but this time the “real name” I provided was the name of the Standard’s editor, David McCumber, and the street address that I provided was the Standard’s own address. The comment duly appeared on the paper’s web site a few minutes later – it is there under the screen name “NotReallytheEditor.” So, presumably, this comment will appear on January 1 as having been posted by David McCumber.
Promising to keep people's names hidden, and then retroactively changing that with little notice seems like an incredibly irresponsible thing to do. One hopes that the Standard will reconsider.
Posted on Techdirt - 25 November 2015 @ 11:39am
As you may have heard, last week actor Charlie Sheen announced that he is HIV positive, which got lots of news coverage. Related to that, In Touch magazine published the non-disclosure agreement (NDA) that it claims "Charlie Sheen had his sexual partners sign when they came to his house." I guess if you're a celebrity known for sleeping around, this is the kind of thing you have your lawyers cook up for you. But what struck me as interesting was that, beyond the basic NDA language, there was some copyright language concerning any images, videos or sound recordings. You can understand why Sheen (and his lawyers) wouldn't want anyone taking pictures of him, or even talking about the relationship to book or magazine writers, so the agreement includes some bizarre copyright transfer language for the partner to agree to:
It's a little difficult to read, so here are the relevant sections:
1.3 No Participation in Books or Articles. Without Your advance express written consent, I will not give or participate in any interviews, write or be a source for, any articles, books, programs, or stories about You or the Related Parties, whether truthful, fictionalized, on the record, or "off the record." If I breach these promises, My copyright in any such unauthorized material shall be automatically and immediately transferred by Me to You as of its creation and in perpetuity, and this Agreement shall constitute a valid transfer of copyright.
1.4 Images and Recordings. Without Your advance express written consent, I will not create any photographs, movies, videos, sound or image recordings or otherwise capture any depictions or likenesses of You, Your family, friends, associates or employees ("Images and Recordings"). If I breach these promises any images and Recordings I create shall be considered Confidential Information, and My copyright in them shall be deemed automatically and immediately transferred by Me to You as of its creation and in perpetuity, and this Agreement shall constitute a valid transfer of copyright. If you expressly direct Me to create any Images and Recordings, they will be Confidential Information in which I have no legal rights or interest whatsoever, including any copyright, trademark, "moral rights," patent, or other similar rights, and I convey, transfer and assign to You all of My right, title and interest (if any) of whatever kind or nature in all Images and Recordings as of their creation and in perpetuity, and this Agreement shall constitute a valid transfer of copyrights.
Of course, the "in perpetuity" bit is not really accurate, since you can't give up your termination rights -- the ability to take back your copyrights after 35 years -- even by contract. But, really, that's beside the point. I do wonder how valid Section 1.3 is at all. If the partner is interviewed for a book or a magazine article, there likely isn't any copyright for Sheen's partner to transfer in the first place, as nothing is "fixed" by that partner. Furthermore, in most cases, the book or magazine author/publisher would likely have a strong fair use claim if Sheen tried to have those quotes deleted via copyright. If anything, this just seems like a way to make it sound scary to go out and talk to a magazine or book author.
The transfer of copyright in the photos and videos at least seems a bit more legit, if still sketchy. Once again, though, this shows copyright being used directly for censorship purposes, entirely divorced from its supposed purpose of providing incentives to create.
Posted on Techdirt - 25 November 2015 @ 10:39am
World Intellectual Property Review (WIPR) is reporting that the European Patent Office (EPO) has threatened Roy Schestowitz with a defamation lawsuit over a blog post he wrote. Schestowitz writes the Techrights blog, which I personally think can go overboard with some of its stories at times. However, for a government agency to argue that his stories are defamation is crazy. Back in October, Schestowitz had a story claiming that the EPO was prioritizing patent applications from large companies like Microsoft to "foster a better esprit de service." The program described by the EPO doesn't actually sound that crazy, and neither does the EPO's response -- it's just about more efficiently handling certain patent applications to keep the office from getting swamped. Indeed, it does seem like Schestowitz may have overreacted with his interpretation of the memo. But misinterpreting something is hardly defamation.
In fact, threatening Schestowitz with a defamation claim is far crazier and more dangerous than anything in Schestowitz's own interpretation of the EPO's memo. If you're a government agency like the EPO, you have to be willing to accept some amount of criticism, even if you disagree with it. To claim it's defamation and to threaten a lawsuit is really, really screwed up. Frankly, this calls into question what the EPO is focused on much more than any claims of favoring large companies do. Also bizarre is the fact that WIPR edited its own story to remove any mention of what Schestowitz's original blog posts were about in the first place. It had originally included a sentence briefly describing the original Techrights blog post that got the EPO upset, but then deleted that part.
The EPO has been coming under a fair bit of criticism lately, and the entire organization appears to be astoundingly thin-skinned. A few months ago, the office apparently blocked access to Techrights altogether from within its network. That seems like a pretty strange move in the first place. Florian Mueller (and, yes, I know that many people here don't trust Mueller, but...) has pointed out how absolutely ridiculous the EPO can be about just about anything related to how it works:
The European Patent Office is the last dictatorship on Central European soil. Local police are not allowed to enter the EPO's facilities without an invitation from the president. National court rulings cannot be enforced; compliance is voluntary. Employees and visitors are subjected to covert surveillance. And if employees are fired (or "suspended"), which just happened to several staff representatives, they won't get their day in court for about ten years.
The EPO's leaders have a rather selective attitude toward the law. When it's about their wrongdoings, they want their organization to be a lawless, autocratic island that disrespects human rights. But when the rules of the world around the EPO come in handy, the leadership of the EPO tries to leverage them against those who dare to criticize it.
I'm having trouble thinking of any other government agency that has ever threatened a public critic with a defamation lawsuit. Basic concepts of free speech suggest that the EPO should suck it up. If it disagrees with Schestowitz's interpretation of what it's doing, it can come out and explain its side of the story. Threatening him with a defamation suit only makes me think that perhaps his interpretation hits closer to home than I originally believed.
Posted on Techdirt - 25 November 2015 @ 6:30am
As you hopefully already know, we take a bit of a different view of ad blockers around here on Techdirt, recognizing that many people have very good reasons for using them, and we have no problem if you make use of them. In fact, we give you the option of turning off the ads on Techdirt separately, whether or not you use an ad blocker. And we try to make sure that the ads on Techdirt are not horrible, annoying or dangerous (and sometimes, hopefully, they're even useful). Most publications, however, continue to take a very antagonistic view towards their very own communities and readers, and have attacked ad blockers, sometimes blocking users from reading content if they have an ad blocker. Perhaps no publication has fought harder against ad blockers than German publishing giant Axel Springer, the same company that frequently blames Google for its own failure to adapt.
Axel Springer has been suing the makers of various ad blockers. So far, those cases have failed miserably, making Axel Springer look like a whiny, out-of-touch publisher that refuses to get with the times. But instead of adapting, it just keeps on suing. From TechCrunch:
German media giant Axel Springer, which operates top European newspapers like Bild and Die Welt, and who recently bought a controlling stake in Business Insider for $343 million, has a history of fighting back against ad-blocking software that threatens its publications’ business models. Now, it’s taking that fight to mobile ad blockers, too. According to the makers of the iOS content blocker dubbed “Blockr,” which is one of several new iOS 9 applications that allow users to block ads and other content that slows down web browsing, Axel Springer’s WELTN24 subsidiary took them to court in an attempt to stop the development and distribution of the Blockr software.
Specifically, explains the law firm representing Blockr, Axel Springer wanted to prohibit Blockr’s developers from being able to “offer, advertise, maintain and distribute the service” which can be used today to block ads on http://www.welt.de, including the website’s mobile version.
Isn't that nice. Rather than recognize that people don't like your ads, you sue the companies serving an actual consumer need, so that you can continue to piss off your readers. It's the dinosaur strategy: rather than innovate, you sue to try to stave off the inevitable decline.
Posted on Techdirt - 24 November 2015 @ 12:45pm
Did you hear that story about how ISIS is so sophisticated with encryption that they have a special "opsec" manual on computer security protocols? You might have, because last week it was all over the internet. Yahoo kicked it off with a story, claiming it was the secret manual ISIS "uses to teach its soldiers about encryption." Wired followed up with its own story, as did The Telegraph. The "manual" was "discovered" by analysts at the Combating Terrorism Center, based out of the US Military Academy at West Point. Thankfully, Buzzfeed has the details, noting that the guide, created by a cybersecurity firm in Kuwait, named Cyberkov, is actually a guide for journalists and activists to protect their communications from oppressive governments. And there's nothing particularly secret about it, as apparently it's basically just repurposed stuff from the EFF's website:
“Our guide is based on publicly available tools, instructions and best practices. The guidelines in our manual are sourced from the EFF [Electronic Frontier Foundation] and other sources of privacy organizations,” wrote CyberKov CEO Abdullah AlAli to BuzzFeed News in an email. He said his organization had no idea its guide had been repurposed by ISIS. He was surprised to see it cited in articles, many of which have been updated since they were originally posted to note the document’s origin, and “even more shocked to see the Combating Terrorism Center at West Point simply Google-Translated it and claimed it as ISIS’s.”
Now, it does appear that some folks in ISIS may have passed around versions of the guide, but that rather undermines the idea that they had created their own special set of guidelines to avoid being tracked, when all they're doing is picking up publicly available information on security best practices.
Posted on Techdirt - 24 November 2015 @ 8:21am
Look, everyone has known for quite some time that Senator Dianne Feinstein's big push for so-called "cybersecurity" legislation in the form of CISA had absolutely nothing to do with cybersecurity. It was always about giving another surveillance tool to her friends at the NSA. However, given that she was one of the most vocal in selling it as a "cybersecurity" bill (despite the fact that no cybersecurity experts actually thought the bill would help), it seems worth comparing her statements from just a month ago with her new attacks on actual cybersecurity in the form of encryption.
Here is Feinstein just a month ago, claiming to worry about "cyberattacks" on Americans:
"Millions of personal records and hundreds of billions of dollars fall victim to cyber-attacks every year, and we’ve done little to stem the tide."
Of course, CISA does nothing to protect any of that. You know what does? Better use of encryption, which keeps that information useless to attackers even when systems are breached.
Okay, fast forward. Following the Paris attacks, Feinstein has been among the most vocal in claiming that we need to undermine encryption, which is pretty amazing given that she represents California (and is from San Francisco), home to tons of tech companies that actually get this and think she's completely crazy for undermining actual cybersecurity.
Never mind that, though. Here she is this past weekend on CBS's Face the Nation, totally attacking encryption itself and mocking the tech companies that just a month ago she was insisting needed special government help to protect against cyberattacks. Asked if the intelligence community has the tools it needs, she decided to attack encryption -- even choosing to cite as a source CIA director John Brennan, the same John Brennan who illegally spied on her staffers and then lied about it:
"I can say this. [FBI] Director [James Comey] and, I think John Brennan, would agree, that the Achilles Heel in the internet is encryption. Because there are now... it's a black web! And there's no way of piercing it. And this is even in commercial products! PlayStation, John! Which our kids use. If the two ends communicate, that's encrypted. So terrorists can use PlayStation to be able to communication and there's nothing that can be done about it."
The host, John Dickerson, then points out that the tech industry (again, mostly based in or near Feinstein's hometown, and that she's supposed to be representing) says that backdooring encryption makes us less safe and opens us up to more attack, and Feinstein brushes it off, relying on her apparent years of computer security training...
No. I don't think so. I think with a court order, with good justification, all of that can be prevented. It can be prevented in Europe, because Europe has been a major driver for more encryption. And I think that they are now seeing the results. I have visited with all of the General Counsels of the tech companies, just to try to get them to take bomb building recipes off the internet. Recipes that have been tested and we know can explode a plane. Directions. Where to sit on the plane to blow it up. We know that there are bombs that can go through magnetometers. And to put that information out on the internet, is terrible. And I sorta got 'well, pass a law.' So, we may just have to do that. But I am hopeful that the companies, most of whom are my constituents -- not most, but many -- will understand what we're facing. And we're not crying wolf. There's good reason for this. And people are dying all over the world. And I think the Sinai-Russian airliner is a classic example of a bomb that got on a plane, that blew up that plane.
Where to start with this nonsense? First, note that she doesn't actually respond to the question of how undermining encryption will make us all less safe -- and make all that information Feinstein herself claimed was under attack just a month ago more vulnerable -- other than to say that she, personally, doesn't think that what every computer security expert has been saying is true. Yikes.
Second, rather than focus on encryption, she pivots to another of her pet projects: claiming that the government should force internet companies to censor The Anarchist Cookbook. She keeps on this despite the fact that, all the way back in 1997, the DOJ directly told Feinstein that this would violate the First Amendment. From the DOJ to Feinstein:
The First Amendment would impose substantial constraints on any attempt to proscribe indiscriminately the dissemination of bombmaking information. The government generally may not, except in rare circumstances, punish persons either for advocating lawless action or for disseminating truthful information -- including information that would be dangerous if used -- that such persons have obtained lawfully.
Third, there's this weird infatuation with The Anarchist Cookbook, despite the fact that it's generally recognized as a joke for fools, and the likelihood of anyone being able to build an actual bomb from it is minimal at best. And while she pretends that the GCs of tech companies just sort of shrugged their shoulders about this, it's much more likely that they thought she was being ridiculous in trying to censor the internet in violation of the First Amendment. Whoever told her "well, pass a law" was almost certainly trying to get rid of her, knowing that any such law would be unconstitutional.
Fourth, this tangent about "bomb making instructions" online still has absolutely nothing to do with encryption, or with the question of how undermining encryption makes us all much more vulnerable to attack and actually less safe.
Fifth, the comment about Europe is insane. Again, while the attackers may have used some encryption, it's been revealed (since long before Feinstein did this interview) that they did an awful lot of communicating in the clear, including via unencrypted SMS and Facebook Messenger. On top of that, what the hell does "Europe has been a major driver for more encryption" even mean? Perhaps it's true that Europeans have been adopting more encryption -- to hide from the NSA spying that Feinstein herself helped hide from everyone.
Sixth, the whole PlayStation thing has been debunked as a way the Paris attackers communicated. They did not use it. Furthermore, she's just wrong that the PlayStation offers end-to-end encryption. It does not.
Seventh, does she honestly believe that whoever blew up that Russian airplane downloaded bomb-making instructions from the internet? Also, if it were really so easy to get such instructions and get them through security, don't you think we'd have seen a lot more airplanes blown up by now?
In summary, Feinstein (a month ago) said we should all be deathly afraid of cyberattacks, and that the only way to solve the problem was to give the government much greater access to companies' computer systems via CISA. Now she insists that encryption is an "Achilles' heel" and that actual cybersecurity experts are lying when they say undermining encryption will put everyone at risk. Why? Because The Anarchist Cookbook is online and Google won't take it down.
Is it really so much to ask for politicians to actually understand technology before they go off on ridiculous, ignorant, uninformed rants about it -- rants that often lead to even more ridiculous and dangerous policy proposals?
Posted on Techdirt - 23 November 2015 @ 10:40am
Over the weekend, the Telegraph (which, really, is probably only the second or third worst UK tabloid) published perhaps the dumbest article ever on encryption, written by Clare Foges, who until recently was a top speechwriter for UK Prime Minister David Cameron (something left unmentioned in the article). The title of the article should give you a sense of its ridiculousness: "Why is Silicon Valley helping the tech-savvy jihadists?" I imagine her follow-ups will include things like "Why is Detroit helping driving-savvy jihadists?" and "Why are farmers feeding food-savvy jihadists?"
The article is perhaps even dumber than the headline, but let's dig in.
What will it take? 129 dead on American soil? 129 killed in California? What level of atrocity, what location will it take for the Gods of Silicon Valley to wake up to the dangerous game they are playing by plunging their apps and emails ever deeper into encryption, so allowing jihadists to plot behind an impenetrable wall?
"Plunging their apps even deeper into encryption"? I don't even know what that means, but let's flip it around: How many hacked credit cards, medical information and email accounts will it take for the Gods of Silicon Valley to wake up and recognize they need to better protect
user data. Because that's what's actually happening. Encryption is not about "allowing jihadists to plot behind an impenetrable wall" it's about protecting your data
-- even that of Clare Foges -- from malicious attackers who want access to it. Or does Foges and her former boss David Cameron communicate out in the open where any passerby can snoop on their messages?
Does this mean some bad people can use encryption? Yes. But it's not as "impenetrable" as she seems to think (we'll get to her knowledge of technology and encryption in a moment). Even if you're using encryption, there is still plenty of metadata revealed. Furthermore, there have always been ways to communicate in less-than-understandable or less-than-trackable ways -- and the terrorist community has used them forever. They don't need to rely on "Silicon Valley" giants.
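To make the metadata point concrete, here's a minimal sketch (a toy, not any real messaging protocol; the names and addresses are made up, and the "cipher" is a stand-in) of what a network observer still sees when a message body is encrypted:

```python
import json
import os
import time

def encrypt_body(plaintext: bytes, key: bytes) -> bytes:
    # Stand-in one-time-pad-style XOR, purely for illustration --
    # not a real cipher, just something the observer can't read.
    return bytes(p ^ k for p, k in zip(plaintext, key))

key = os.urandom(64)  # shared secret between sender and recipient
body = b"meet at the usual place"

envelope = {
    "from": "alice@example.com",                 # visible to any observer
    "to": "bob@example.com",                     # visible
    "timestamp": int(time.time()),               # visible
    "size": len(body),                           # visible
    "ciphertext": encrypt_body(body, key).hex(), # opaque without the key
}

# The observer can't read the body, but still learns who talked to whom,
# when, and roughly how much was said -- that's the metadata.
metadata = {k: v for k, v in envelope.items() if k != "ciphertext"}
print(json.dumps(metadata))
```

This is why "impenetrable wall" overstates things: traffic analysis on exactly this kind of envelope data is how investigators tracked Abaaoud's movements from his phone.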
But, more to the point, undermining encryption makes everyone significantly less safe. The whole idea that weakening encryption makes people more safe is profoundly ignorant. Even more ridiculously, Foges blames Ed Snowden:
Why? It goes back to Edward Snowden, the weaselly inadequate whose grasp for posterity has proved a boon for Isil. They should be gratefully chanting his name in Raqqa, for it was Snowden’s revelations about government surveillance methods that triggered this extraordinary race towards deeper encryption.
This, of course, is wrong. Stupidly, ignorantly wrong. Again, studies have shown that post-Snowden, terrorists didn't change anything about how they communicate. They were already using encryption, and reports suggest they'd been using it for more than a decade. Snowden's revelations only pointed out how governments were doing mass surveillance on ordinary citizens. Everyone -- including various terrorist organizations -- already assumed (correctly) that governments were spying on terrorist organizations and sympathizers. So it's not clear what Foges is claiming here, other than that she's pulling a Dana Perino and shielding her ex-boss from criticism by blaming the whistleblower.
All this is making the job of the security services infinitely harder. FBI Director James Comey calls the challenge “going dark”. Leads are followed until they hit the brick wall of indecipherable data. A few years ago law enforcement agencies could approach Hotmail or Google with a warrant and get vital information to stop horrors unfolding. Now the data they salvage is often gobbledegook – a load of encrypted numbers that are impossible to read. They are trying to save lives but are being frustrated by encrypted technology.
This is also astoundingly ignorant and wrong. To date, the FBI and others have failed to present a single example of encryption actually preventing them from deciphering such information. Naming Hotmail and Google is wrong as well, as neither Hotmail nor Gmail currently offers end-to-end encryption in a form that anyone really uses. Google does have a test version available, but the number of people using it is barely notable. So, yes, if law enforcement goes to Google with a valid warrant, it's going to turn over your emails.
This isn’t about privacy, it’s about profit
This may be the most ignorant statement of all. Encryption means that these same companies cannot scan the contents of your email -- for example, to place ads against them. In fact, most people have noted that the reason Google hasn't really embraced end-to-end encryption in Gmail is that it would undermine the business model of that product. But Foges is on a roll of ignorant bullshit, and she can't let little things like facts get in the way.
And, of course, she concludes with the usual ridiculousness about how she's just so sure that if they put their minds and money to it, they can figure out how to fix this "problem."
The global tech industry made around $3.7 trillion last year. They employ some of the brightest people on the planet. Apple et al could, if they wanted, employ a fraction of these resources to work out how we can simultaneously keep the good guys’ data secure and keep the bad guys in plain sight. The geniuses of Silicon Valley would be more than a match for the dunderheads in the desert.
Except that overestimating your side and underestimating the enemy seems like a pretty stupid idea -- especially when you're pushing for the impossible. The idea that you can magically "keep the good guys’ data secure and keep the bad guys in plain sight" is laughable, and you don't need to be an expert to recognize it. How do you determine who "the good guys" are and who "the bad guys" are? Is that something you can code? Because, based on this, I'd argue that Foges is "a bad guy." Is she okay with her information being passed around in plain sight? And, of course, the reality is even more ridiculous because, as has been explained in great detail in the past, encryption where "the good guys" have access is encryption that doesn't work -- and thus it's encryption that makes us all less safe.
Asking for encryption that only protects "the good guys" is publicly asking for the impossible. It's an astoundingly ignorant request, and anyone with any amount of expertise will tell you so.
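To make the objection concrete, here's a deliberately toy sketch (this is NOT real cryptography -- the XOR cipher and all the names here are purely illustrative) of why "exceptional access" for "the good guys" is just another key: the escrowed copy is mathematically identical to the original, so anyone who obtains it, good guy or not, reads the same traffic.

```python
# Toy illustration ONLY -- a one-time-pad-style XOR, not real crypto.
# The point: an escrowed "good guys" key is indistinguishable from the
# real key, so the system is only as strong as its weakest key-holder.
import secrets

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # XOR is its own inverse, so this both encrypts and decrypts;
    # requires len(key) >= len(data).
    return bytes(k ^ d for k, d in zip(key, data))

message = b"meet at noon"
session_key = secrets.token_bytes(len(message))
ciphertext = xor_cipher(session_key, message)

# End-to-end case: only sender and recipient hold session_key.
assert xor_cipher(session_key, ciphertext) == message

# "Exceptional access" case: a copy of the key is escrowed for law
# enforcement. The copy decrypts exactly the same traffic, so whoever
# steals it -- foreign spies, criminals, rogue insiders -- reads it all.
escrowed_copy = bytes(session_key)
assert xor_cipher(escrowed_copy, ciphertext) == message
```

Real proposals are far more elaborate, but the structural problem is identical: any mechanism that lets a third party decrypt is itself a key, and keys can be stolen, subpoenaed, or abused.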
On Twitter, some people have been pushing back on Foges, and her response has been... well, less than inspiring. When people have pointed out that she seems ignorant of the facts, she not only misses the point, but seems proud of her ignorance.
It's fairly stunning, but Foges' article gets almost everything wrong. It doesn't understand encryption. It doesn't understand what tech companies are doing. It doesn't understand how security works. It's just... wrong. When someone on Twitter confronted her about this, she insisted that she interviewed people who felt that it was possible to create such encryption, but then went silent when lots and lots of tech experts asked her to name a single technology professional who agreed with her.
Similarly, it's somewhat bizarre that the Telegraph doesn't note that Foges spent the past few years as UK Prime Minister David Cameron's chief speech writer, and still lists herself as an advisor to Cameron. Seems like something that should have been disclosed. The newspaper isn't exactly known for its accuracy, but this is an embarrassment for both Foges and the Telegraph.
Posted on Techdirt - 20 November 2015 @ 7:39pm
Judge Liam O'Grady -- the same judge who helped the US government take all of Kim Dotcom's stuff -- is handling the wacky Rightscorp-by-proxy lawsuit against Cox Communications. The key issue: Rightscorp, on behalf of BMG and Round Hill Music, flooded Cox Communications with infringement notices, trying to shake loose IP addresses as part of its shakedown. Cox wasn't very happy about cooperating, and in response BMG and Round Hill sued Cox, claiming that 512(i) of the DMCA requires ISPs to kick people off the internet if they're found to be "repeat infringers." Historically, it has long been believed that 512(i) does not apply to internet access/broadband providers like Cox, but rather to online service providers who offer a direct service on the internet (like YouTube or Medium or whatever). However, the RIAA and its friends have hinted for a while that they'd like a court to interpret 512(i) as applying to internet access providers, creating a de facto "three strikes and you lose all internet access" policy. Rightscorp (with help from BMG and Round Hill Music) has decided to put that to the test.
This is a big, big deal. If the case goes against Cox, it would create a massive problem for the public on the internet. Accusations of infringement could potentially lead to you totally losing access to the internet, which could really destroy people's lives, given how important the internet is for work and life these days. The details of the case look like they should favor Cox pretty easily. After all, Cox pointed out that Rightscorp only had licenses from the publishers, meaning they had no copyright in the sound recordings -- yet they admitted to downloading the sound recordings, suggesting that, if anything, Rightscorp was a mass infringer. On top of that, there was pretty strong evidence that Rightscorp does not act in good faith in how it runs its shakedown practice, telling people that they have to take their computers to the police to prove their innocence (really).
Unfortunately, as Eriq Gardner reports, Judge O'Grady has ruled against Cox on a very key point: whether its current policy grants it safe harbor under the DMCA. The judge said no, though we're still waiting for the full ruling explaining why.
The bigger story is O'Grady's determination that there is "no genuine issue of material fact as to whether defendants reasonably implemented a repeat-infringer policy as is required by §512(i) of the DMCA," granting a motion that Cox is not entitled to a safe harbor defense.
Now, just because you're not protected by the safe harbor does not mean that you are automatically guilty of infringement. There are cases where sites have not qualified for the safe harbor and still prevailed. But it does make things more difficult and complicated and, much more importantly, opens the door to lots and lots of mischief by the RIAAs and MPAAs of the world, who can use this to kick people off the internet entirely based on mere accusations of copyright infringement. That's immensely worrisome.
O'Grady doesn't seem to think that kicking people off the internet is really a big deal. Earlier in the case, we've discovered, while flat-out rejecting an attempt by Public Knowledge and EFF to file an amicus brief, Judge O'Grady made his views clear:
I read the brief.
It adds absolutely nothing helpful at all. It is a combination
of describing the horrors that one endures from losing the
Internet for any length of time. Frankly, it sounded like my
son complaining when I took his electronics away when he
watched YouTube videos instead of doing homework. And it's
hysterical.
That's his response to two well known public interest groups explaining to him the "real world harmful effects" of Rightscorp's copyright shake-down trolling business. But he didn't want to hear any of it. Because protecting the ability of Americans to not be the subjects of extortion schemes and to enable them to communicate and work is "hysterical" and no different from kids not doing their homework because of too much YouTube.
The details here matter, but I would imagine that Cox is likely to appeal. One hopes that the appeals court is more open to listening to the concerns over copyright trolling and kicking people off the internet.
Posted on Techdirt - 20 November 2015 @ 12:48pm
The attacks in Paris were a horrible and tragic event -- and you can understand why people are angry and scared about it. But, as always, when politicians are angry and scared following a high-profile tragedy, they tend to legislate in dangerous ways. It appears that France is no exception. It has pushed through some kneejerk legislation that includes a plan to censor the internet. Specifically, the Minister of the Interior will be given the power to block any website that is deemed to be "promoting terrorism or inciting terrorist acts." Of course, this seems ridiculous on many levels.
First, there are the basic concerns about free speech. Yes, I know this is France and it doesn't value free speech in the same way as the US, but it's still rather distressing just how quickly and easily the French government seems willing to adopt censorship measures. Second, what good does this actually do? If ISIS sympathizers are expressing their views publicly, doesn't that make it easier to track them and to find out what they're doing and saying? Isn't that what law enforcement should want? Focusing on censorship rather than tracking simply drives those conversations and efforts underground, where they can still be used to influence people, but where it's much harder for government and law enforcement to keep track of what's being said. It also only confirms to ISIS supporters that what they're saying must be so important and valuable if the government won't even let them say it. It's difficult to see how it does any good, and instead it opens up the possibility of widespread government censorship and the abuse of such a power.
Posted on Techdirt - 20 November 2015 @ 10:41am
Back in 2013, we were impressed when the folks at Automattic (the company behind WordPress) actually filed some lawsuits against people who were abusing DMCA takedown notices just to take down content they didn't like. Earlier this year, the company also took a strong stand against DMCA abuse by including a "Hall of Shame" in which it called out and shamed particularly egregious takedowns. At the time, we mentioned that other companies should pay attention. Fighting for your users' rights is important, but too many companies don't do it (and many just take things down on demand).
Now YouTube has stepped up a bit as well. There have been plenty of complaints about how YouTube -- and ContentID in particular -- deals with fair use. It's quite difficult for an algorithm to determine fair use, and that's part of the reason why we get nervous when copyright system defenders insist that you can automate takedown processes without collateral damage. However, Google has announced that it will pay the legal fees (up to $1 million) of certain YouTubers who have been hit with takedowns, in cases where YouTube agrees that fair use applies:
We are offering legal support to a handful of videos that we believe represent clear fair uses which have been subject to DMCA takedowns. With approval of the video creators, we’ll keep the videos live on YouTube in the U.S., feature them in the YouTube Copyright Center as strong examples of fair use, and cover the cost of any copyright lawsuits brought against them.
We’re doing this because we recognize that creators can be intimidated by the DMCA’s counter notification process, and the potential for litigation that comes with it (for more background on the DMCA and copyright law check out this Copyright Basics video). In addition to protecting the individual creator, this program could, over time, create a “demo reel” that will help the YouTube community and copyright owners alike better understand what fair use looks like online and develop best practices as a community.
It is absolutely true that even when video creators believe that their use is non-infringing because it's fair use, many still won't issue a counternotice, because the next step, if the copyright holder disagrees, is to go to court. And even if you have a slam dunk case, that can be both time consuming and incredibly expensive. And, of course, if you lose, it can be life-destroyingly expensive, thanks to the idiocy of statutory damages provisions in copyright law.
The NY Times actually has more details than Google's own post
and includes some examples.
Constantine Guiliotis, who goes by Dean and whose channel dedicated to debunking sightings of unidentified flying objects has just over 1,000 subscribers, is one of the video makers YouTube will defend. Mr. Guiliotis has received three takedown notices from copyright holders of videos that he has found online and posted to his YouTube channel, U.F.O. Theater.
In his videos, Mr. Guiliotis includes the videos he found but also provides analysis and commentary, which YouTube argues is within the guidelines of fair use rules. The site reposted the videos after its review and told Mr. Guiliotis it would defend him against any future legal action. Like the other creators YouTube has selected, Mr. Guiliotis has not been sued for his videos.
“It was very gratifying to know a company cares about fair use and to single out someone like me,” Mr. Guiliotis said.
Sherwin Siy, over at Public Knowledge, notes that Google probably won't have to spend much money, as any copyright holder who realizes that Google is backstopping the videos will probably (wisely) realize that going to court is less likely to have the desired effect (which is usually just intimidating people into taking down content). However, it's still an important move
in creating extra protection for fair use and
in helping to establish a clear bar of what's considered to be fair use:
But while this means that Google isn’t likely to spend much, if any money, in litigating these cases, the program still does two very important things. First, it does in fact protect those uploaders. By giving these videos a stamp of approval, Google’s legal team will make the sort of person who sends a bogus or careless takedown notice think even harder about filing a bogus lawsuit. That sort of reassurance can be enough encouragement for someone to put back a video. Oftentimes, someone receiving a takedown notice can shy away from exercising her rights to have it put back because doing so exposes her to a lawsuit. With this sort of protection, much of that fear disappears.
But perhaps the more useful aspect of the program is that it sets a clear example of what fair use is. As videos are added to the program, other users will have a useful set of models that show what Google’s lawyers, at least, are confident is fair use. That information can help an everyday YouTube user in ways that more text-based and specific guides (for educators, etc.) might not.
And this collection of videos sets an example for far more than just other video creators. The set of fair uses on display can act as a living example of the predictability of fair use. Too often, the doctrine is considered hazy or indefinite or impossible to determine. And while there are lots of cases that can exist in a gray area, there’s even more cases that actually are pretty black or white. Most people have seen clearly infringing videos; this program will show a wider audience clearly non-infringing videos. That’s particularly important in the face of other countries who have yet to adopt fair use as a limit on their copyright laws, and have been told that it’s too unpredictable for them to rely upon.
Jeff Roberts, over at Fortune, goes even further in calling this "a game changer."
This is why YouTube’s announcement is a game-changer: Copyright-based censorship strategies are no longer risk free. Now, before launching an unjustified DMCA takedown, the claimant will have to weigh the risk of going up against Google and its deep pockets in a lawsuit. (The legal environment could get even more interesting in light of a recent ruling in the Prince “dancing baby” that could make it easier for fair use victors to claim legal fees from those who removed their videos).
I don't know if I'd go that far. Again, Google is only protecting a "handful" of videos, but at the very least it may scare off some of the more egregious abuses, and that's always a good thing. Now, we just need even more platforms to recognize that fighting for your users' fair use rights is important.
Posted on Techdirt - 20 November 2015 @ 9:22am
At this point, we all know that the DMCA is a tool that is widely abused for censorship purposes. We have written post upon post upon post upon post upon post upon post upon post upon post upon post upon post upon post upon post upon post upon post upon post upon post detailing this (and those were just from the first page of my search results).
Most people, once aware of this, would recognize that perhaps there's a problem with the DMCA and that it should be fixed. However, some people seem to look at that and say "hey, that's an awesome censorship tool, perhaps we should expand it to other content I don't like." That's why we see people talk about expanding it to cover revenge porn or mean people online.
Or, apparently, terrorism. Yes, terrorism. Paul Rosenzweig, who (believe it or not) really once was a high ranking official in the Department of Homeland Security thinks one way to fight ISIS is to seize their copyrights and then use the DMCA to censor them. He's not joking. Or, at least I think he's not. There's a small chance that it's really a parody, but Rosenzweig has a history of truly nutty ideas behind him, so I'm pretty sure he's serious.
That model might, with a small legislative change, be adapted to the removal of ISIS terrorist speech. All that would be required was a modification of the law to assign the copyright in all terrorist speech to a non-terrorist organization with an interest in monitoring and removing terrorist content. Here are the essential components of such a plan:
- Identification of terrorist organizations to whom the law would apply;
- A definition of unprotected content associated with that terrorist organization;
- An extinguishing of copyright in such unprotected content; and
- Transfer of that copyright to a third party.
I love that "all that would be required," because what he's really saying is that "all that would be required" is to upend basically every concept of free speech and copyright just to silence some people he really doesn't like. No biggie.
At this point, you should probably already be banging your head on a nearby hard surface, but it gets worse. He actually then worries about how much work it would be for the government to take all these copyrights and issue all those darn takedowns, so instead he suggests handing the copyrights to a third party, which he suggests could be set up similarly to the Red Cross (?!?) and saddling them with the task of issuing takedowns. Perhaps we can name them the Silencing Cross or something along those lines.
He insists that the First Amendment isn't really a problem here because terrorist speech can be seen as "material support" of terrorism and the Supreme Court has already wiped that away.
The most salient case on point is Holder v. Humanitarian Law Project, 561 U.S. 1 (2010), a Supreme Court case that construed the USA PATRIOT Act's prohibition on providing “material support” to foreign terrorist organizations (18 U.S.C. § 2339B). The case is one of the very rare instances of First Amendment jurisprudence in which a restriction on political speech has been approved, and the only one of recent vintage.
The Humanitarian Law Project (“HLP”) had sought to provide assistance to the Kurdistan Workers’ Party in Turkey and Sri Lanka's Liberation Tigers of Tamil Eelam. According to HLP, their goal was to teach these two violent organizations how to peacefully resolve conflicts. Congress had, previously, prohibited all material aid to designated organizations that involved “training”, “expert advice or assistance,” “service,” and “personnel.” HLP argued that its assistance was protected political speech. The government countered with the argument that a categorical prohibition on speech in the form of assistance was required because even non-terrorist assistance would "legitimate" the terrorist organization, and free up its resources for terrorist activities. The Court approved the limitation on speech because it was narrowly drawn to cover only “speech to, under the direction of, or in coordination with foreign groups that the speaker knows to be terrorist organizations” and served a national interest of the highest order – combatting terrorism.
It would follow, in the wake of Humanitarian Law Project, that just as speech “to” or “under the direction of” or “in coordination” with a foreign terrorist organization may be limited, so too may the content actually published “by” the terrorist organization.
I'm not so sure that First Amendment scholars would agree with him that the shift from speech "to" to speech "by" is that simple, but that's really beside the point.
Let's go back to basics here. Congress only has limited power over creating copyright law. Here it is:
To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.
I've read that a few times now and I really am struggling to find the part that says "and to censor terrorists."
I mean, I guess the single redeeming idea in Rosenzweig's proposal here is that it's a pretty blatant admission that copyright law is about censorship much of the time. The ISIS-insanity-freakout among political types is really kinda crazy to watch in action. First they wanted to use net neutrality
to censor ISIS, and now they want to use copyright law? What will they think of next? Defamation law is always popular. Perhaps we can amend Section 230 to silence terrorists. Or, I know, why don't we use the ITC? Or trade agreements? Oh wait, that's basically the MPAA's playbook for censoring speech... and now surveillance state apologists can make use of it too!
Meanwhile, hey, maybe instead of trying to censor the folks at ISIS, you watch what they're saying and use that for surveillance purposes
. I know, I know, crazy thought. But at the very same time we're having this debate, these very same people are arguing that we need less encryption so law enforcement and the intelligence community can see what ISIS is saying
. Yet here's a way to see what they're saying and the focus is on "how do we silence such speech and make it harder to track!"
But, really, Paul, congrats -- we thought we'd heard the dumbest idea in a long time with Joe Barton's "use net neutrality to censor ISIS," but you've topped it. This is the dumbest idea we've heard in a long, long time.
Posted on Techdirt - 20 November 2015 @ 6:08am
Presidential candidate Hillary Clinton gave a speech yesterday all about the fight against ISIS in the wake of the Paris attacks. While most of the attention on the speech (quite reasonably) focused on her plan to deal with ISIS, as well as her comments on the ridiculous political hot potato of how to deal with Syrian refugees, she still used the opportunity to align herself with the idiotic side of the encryption debate, suggesting that Silicon Valley has to somehow "fix" the issue of law enforcement wanting to see everything. Here's what she said:
Another challenge is how to strike the right balance of protecting privacy and security. Encryption of mobile communications presents a particularly tough problem. We should take the concerns of law enforcement and counterterrorism professionals seriously. They have warned that impenetrable encryption may prevent them from accessing terrorist communications and preventing a future attack. On the other hand, we know there are legitimate concerns about government intrusion, network security, and creating new vulnerabilities that bad actors can and would exploit. So we need Silicon Valley not to view government as its adversary. We need to challenge our best minds in the private sector to work with our best minds in the public sector to develop solutions that will both keep us safe and protect our privacy.
Now is the time to solve this problem, not after the next attack.
This framing is simply wrong. Weakening encryption undermines both security and privacy. There's no "balance" to be had here. You want to maximize both security and privacy, and the way you do that is with strong encryption.
Also, the bit about how "Silicon Valley" has to "not view government as its adversary" is another bullshit line favored by James Comey and others, who keep insisting that when technologists explain that backdooring encryption in a manner that only "the good guys" can use is impossible, what they really mean is that they haven't tried hard enough. Once again, that's not it. What pretty much the entire tech community has been saying is that it's impossible to create such a thing without undermining the whole system and making everyone less safe. Hell, here's security expert Steve Bellovin explaining this pretty clearly. He goes step by step through why it won't work, why it makes things more dangerous, why it will be abused, and why it will put us all at risk.
And the reason that Silicon Valley views the government as an adversary is that speeches like Clinton's set it up that way. Her speech, like Comey's past speeches, directly sets up the government as an adversary to good computer security, asking technologists to undermine their own creations and make everyone less safe in exchange for some unclear, amorphous belief that it might make a few people safer at some point in the future. So the answer isn't scolding Silicon Valley, as Hillary has chosen to do, but rather understanding reality, and recognizing that what she is directly advocating would harm the safety of Americans and others around the globe.
This raises serious questions about who is advising Clinton on tech policy. When she ran the State Department, it actually did a lot of really good things on encryption and protecting the communications of people around the globe. It's pretty ridiculous for Clinton to undermine her own efforts with such a dumb statement in this speech.
Posted on Techdirt - 19 November 2015 @ 12:54pm
We're back again with another in our weekly reading list posts of books we think our community will find interesting and thought provoking. Once again, buying the book via the Amazon links in this story also helps support Techdirt.
This week, we've got the wonderful book The Knockoff Economy: How Imitation Sparks Innovation
by law professors Kal Raustiala and Chris Sprigman. We have written about the book before and have even hosted some excerpts from the book
, but it's a really great and important read. We mentioned it earlier this week in our story about the attempts to lock up pot
with intellectual property protections -- because that story reflected much of what's in the Knockoff Economy.
The key point of the book is to highlight that the very premise behind many calls for intellectual property protection doesn't stand up to much scrutiny. Defenders of the system usually insist that copyrights and patents are necessary for creating the incentives to create or to innovate in a market. Yet, Raustiala and Sprigman carefully detail a bunch of different industries that don't
have intellectual property protection, and over and over again, they see the same thing: more competition and more innovation
, rather than less. For many years, we've highlighted the fact that it is frequently competition
that drives innovation, yet so much of our public policy is based on the fallacy that it's monopoly rights
that drive innovation. Thus, the Knockoff Economy is a really useful work in highlighting that perhaps the very premise that so much intellectual property protection is based on is wrong.
That's not to say, necessarily, that copyrights or patents have no place at all in modern society (though I know some of you do believe that). But, at the very least, we should be looking at the actual impact of those laws, and asking whether they really increase innovation or do something else entirely.
Posted on Techdirt - 19 November 2015 @ 11:48am
Okay, let's review. On Friday, a horrific and tragic series of attacks took place in Paris. And then:
- Surveillance state apologists blame Ed Snowden, insisting that he has "blood on his hands" because the terrorists must have learned how to avoid surveillance from his releases.
- Hysterical politicians blame encryption for the attacks, insisting that tech companies and basic math are clearly to blame.
- The Manhattan DA and others call for end-to-end encryption to be banned (while amusingly insisting they're not calling for a ban).
- Senator John McCain promises to outlaw end-to-end encryption despite the fact that there is still no actual evidence that encryption was the issue at all.
All of this is no surprise, as just a couple of months ago the intelligence community's top lawyer flat-out admitted that he and his friends planned to wait for the next terrorist attack
to push their agenda.
Of course, over the past few days, the following has happened:
- It turns out the attackers used unencrypted SMS to communicate. All the hand-wringing over encryption and "learning from Snowden" appears to have been exaggerated.
- There is no evidence that mass surveillance has ever stopped an attack, which seems to raise some important questions about why it's such a focus.
- It turns out some of the attackers were already known to the intelligence community and law enforcement, and yet they failed to make use of existing powers and authorities to prevent the attacks.
- And, for good measure, there still remains little actual evidence that terrorists have changed anything in how they communicate post-Snowden. That last one is from a study from a year ago, but does seem relevant.
So that seems to be the story so far, despite what you may have seen with hand-wringing and all sorts of freakouts in the press about encryption.
Yes, preventing terrorism is important. And it would be great if the intelligence community were actually able to do that. But it seems pretty clear that mass surveillance techniques aren't doing much to help at all, though it is diminishing the privacy of everyday citizens. Perhaps before rushing to expand the surveillance state and undermine the encryption that actually does keep us all safe, we should recognize reality, rather than the fantasy-land pronouncements of FBI Director James Comey, CIA Director John Brennan and their friends.
Posted on Techdirt - 19 November 2015 @ 10:43am
Famous TV news talking head Ted Koppel recently came out with a new book called Lights Out: A Cyberattack, A Nation Unprepared, Surviving the Aftermath. The premise, as you may have guessed, is that we're facing a huge risk that "cyberattackers" are going to take down the electric grid, and will be able to take it down for many weeks or months, and the US government isn't remotely prepared for it. Here's how Amazon describes the book:
Investigative reporting that reads like fiction - or maybe I just wish it was fiction. In Lights Out, Ted Koppel flashes his journalism chops to introduce us to a frightening scenario, where hackers have tapped into and destroyed the United States power grids, leaving Americans crippled. Koppel outlines the many ways our government and response teams are far from prepared for an un-natural disaster that won't just last days or weeks - but months - and also shows us how a growing number of individuals have taken it upon themselves to prepare. Whether you pick up this book to escape into a good story, or for a potentially potent look into the future, you will not be disappointed.
The book also has quotes ("blurbs" as they're called) from lots of famous people -- nearly all of whom are also famous TV news talking heads or
DC insiders who have a long history of hyping up "cyber" threats. But what's not on the list? Anyone with any actual knowledge or experience in actual computer security, especially as it pertains to electric grids.
Want to know how useful the book actually is? All you really need to read is the following question and answer from an interview Koppel did
with CSO Online:
Did you interview penetration testers who have experience in the electric generation/transmission sector for this book?
No, I did not.
Also in that interview, Koppel admits that he hasn't heard anything from actual information security professionals (though he allows he may have missed it since he's been on the book tour). But, still, if you're writing an entire book whose premise rests entirely on information security practices, you'd think that this would be the kind of thing you'd do before you write the book, rather than after it's been published. Instead, it appears that Koppel just spoke to DC insiders who have a rather long history of totally overhyping "cyberthreats" -- often for their own profit. In another interview, Koppel insists that he didn't want to be spreading rumors -- but doesn't explain why he didn't actually speak to any technical experts.
“Going in, what I really wanted to do was make sure I wasn’t just spreading nasty rumors,” said Koppel in a phone interview.... “After talking to all these people, I satisfied my own curiosity that this not just a likelihood but almost inevitable.”
"All these people"... who apparently did not include any computer security experts. Koppel claims that this isn't a priority because Homeland Security doesn't want to "worry" the American public:
“The public would have to understand it’s a plan that will work but if you don’t have a plan, that can be more worrisome. I just hope it becomes part of the national conversation during the presidential campaign.”
What?!? Homeland Security doesn't want to worry the American public? Which Homeland Security is he talking about? The one that manhandles the American public every time they go to an airport? The same one that is constantly fearmongering about "cyber attacks" and "cyber Pearl Harbor"? Is Koppel living in some sort of alternative universe?
Is there a chance that hackers could take down electric grids and cause serious problems? Sure. Anything's possible. But to date there has not been a single incident of hackers taking down any part of the electrical grid. And most actual information security professionals don't seem to think it is a "likely" scenario, as Koppel claims. The whole thing seems to fit into the usual category of cyberFUD from political insiders who are salivating over the ability to make tons and tons of money by peddling fear.
Is it important to protect infrastructure like the electric grids? Yes. Should we be aware of actual threats? Absolutely. But overhyping the actual threat doesn't help anyone and just spreads fear... and that fear is quickly lapped up by people who will use it to profit for themselves.
Posted on Techdirt - 19 November 2015 @ 9:21am
Over the past few days, we've been highlighting the fever pitch with which the surveillance state apologists and their friends have been trampling over themselves to blame Ed Snowden, blame encryption and demand (and probably get) new legislation to try to mandate backdoors to encryption.
And yet, as we noted yesterday, it now appears that the attackers communicated via unencrypted SMS and did little to hide their tracks. On top of that, as Ryan Gallagher at the Intercept notes, some of the attackers were already known to law enforcement and the intelligence community as possible problems. But they were still able to plan and carry out the attacks. Even more to the point, Gallagher points out that after looking at the 10 most recent high profile terrorist attacks, the same can be said for each of them:
The Intercept has reviewed 10 high-profile jihadi attacks carried out in Western countries between 2013 and 2015..., and in each case some or all of the perpetrators were already known to the authorities before they executed their plot. In other words, most of the terrorists involved were not ghost operatives who sprang from nowhere to commit their crimes; they were already viewed as a potential threat, yet were not subjected to sufficient scrutiny by authorities under existing counterterrorism powers. Some of those involved in last week’s Paris massacre, for instance, were already known to authorities; at least three of the men appear to have been flagged at different times as having been radicalized, but warning signs were ignored.
Nicholas Weaver, writing over at Lawfare, has a really fantastic article on "the limits of the panopticon" that puts all of this into perspective, noting (1) with so many "known radicals" to follow, there is no way for the intelligence community and law enforcement to actually get the information needed to predict these attacks, and (2) there are plenty of ways for people who know each other to communicate, even without encryption, without raising suspicion.
First, the sheer volume of “known radicals” -- at least 5000 -- makes prospective monitoring impossible. How does one effectively monitor 5000 individuals and identify who among them will pose an actual threat? After all, most never will. It didn’t matter that Salah Abdeslam used his own name and credit card when booking his hotel room. Abdeslam was simply one of thousands identified as maybe or maybe not posing a threat.
Even reducing the volume of targets may be insufficient. Assuming the authorities were able to focus on 500 or 50 individuals instead of 5000, the communication patterns of a terrorist cell are remarkably similar to those of any family or group. Unless authorities are aware that an individual is actively (rather than potentially) dangerous, electronic monitoring may provide little prospective benefit, unless they can intercept the contents of a communication that makes a threat clear.
But the communication content of an even minimally proficient terrorist provides little value. Human codes are often employed. We now know that final coordination took place using unencrypted SMS, but unless one has already identified the terrorist cell and at least some basic details of a plot, tracking an SMS that says "On est parti on commence" (which roughly translates to “Let’s go, we’re starting”) provides little actionable intelligence.
In other words, all the calls for increased surveillance and less encryption really seem like a smoke screen by an intelligence community that failed. It's entirely possible that their job is an impossible one, but at the very least we should be dealing in that reality. Instead, the intelligence community that failed is doing everything possible to shift the blame to encryption and Snowden, rather than admitting the fact that they knew who these people were, that encryption wasn't the issue and that maybe doubling down on those policies won't help at all. Of course, it might take some of the pressure off of them for failing to prevent the attack.
Still, as we've noted, almost every case of a "prevented" attack hasn't involved actual plotters, but rather fake, cooked-up plots engineered by the FBI itself. So, we seem to have a law enforcement and intelligence community that is terrible at stopping real plots, but really good at putting unrelated people in jail for made-up plots. And now they want more surveillance power, and to undermine the encryption that keeps us all safe?
Posted on Techdirt - 18 November 2015 @ 10:36am
Even though the NY Times helped kick off the stupidity by publishing a nearly fact-free article (since deleted, and then replaced with an entirely different article) claiming that the Paris attackers used encryption to communicate, it appears the editorial board of the NY Times gets things exactly right with the editorial they pushed out last night: Mass Surveillance Isn't the Answer to Fighting Terrorism. Not only does it point out why expanding mass surveillance won't help much, it also points out that the people calling for it, like CIA director John Brennan and Director of National Intelligence, James Clapper, are not exactly trustworthy -- in fact, they're known liars:
It is hard to believe anything Mr. Brennan says. Last year, he bluntly denied that the C.I.A. had illegally hacked into the computers of Senate staff members conducting an investigation into the agency’s detention and torture programs when, in fact, it did. In 2011, when he was President Obama’s top counterterrorism adviser, he claimed that American drone strikes had not killed any civilians, despite clear evidence that they had. And his boss, James Clapper Jr., the director of national intelligence, has admitted lying to the Senate on the N.S.A.’s bulk collection of data. Even putting this lack of credibility aside, it’s not clear what extra powers Mr. Brennan is seeking.
This is refreshing to see, because the mainstream press has been remarkably reluctant to call these guys out on the fact that they lied. Of course, President Obama should be faulted too. By allowing both men to keep their jobs after they were caught lying, both publicly and to Congress, he set a tone that says "it's okay to perjure yourself before Congress and to lie to the American public about how we're violating their rights." And so, it continues.
Still, the NY Times rightly also calls bullshit on the intelligence community's hand-wringing claims that its hands are tied unless it gets more surveillance powers:
Listening to Mr. Brennan and other officials, like James Comey, the head of the Federal Bureau of Investigation, one might believe that the government has been rendered helpless to defend Americans against the threat of future terror attacks....
In truth, intelligence authorities are still able to do most of what they did before — only now with a little more oversight by the courts and the public. There is no dispute that they and law enforcement agencies should have the necessary powers to detect and stop attacks before they happen. But that does not mean unquestioning acceptance of ineffective and very likely unconstitutional tactics that reduce civil liberties without making the public safer.
Now if only the views of the editorial board actually filtered down to the paper's reporters, who seem amazingly willing to simply act as stenographers for these officials as they lie to the public and push their agenda.
Posted on Techdirt - 18 November 2015 @ 9:29am
I happen to be in Washington DC this week for some events and meetings -- and it's a... ridiculous week to be here, apparently (of course, that could be true of just about any week here). Earlier this week, we noted the pathological ridiculousness of surveillance state apologists like former NSA top lawyer Stewart Baker arguing that the Paris attacks are evidence for why the NSA should not roll back its Section 215 collection. The 215 collection is, of course, the completely unconstitutional (as declared by both an appeals court and the White House's own civil liberties board) program by which the NSA slurped up basically all phone records, claiming that Section 215 of the PATRIOT Act allowed this.
Of course, the primary sponsor of the PATRIOT Act, Rep. Jim Sensenbrenner, has flat out said that Section 215 was written to prevent that kind of mass surveillance, not to enable it. And so, earlier this year, Congress finally pushed through the USA Freedom Act, which was far from perfect, but still did put an end to the Section 215 collection as it stands (while still leaving open ways for the NSA to effectively get the same data). There was a six-month "transition period" which is about to close, meaning that we're officially mere days away from ending the specific 215 bulk collection.
Or, maybe not. As Baker hinted at, the surveillance state apologists are gleefully exploiting the Paris attacks to try to claw back this very, very minor victory against mass surveillance. Senator Tom Cotton quickly rushed out a bill to "postpone" indefinitely the transition away from the 215 program, because of the Paris attacks.
"The terrorist attacks in Paris last week are a terrible reminder of the threats we face every day. And it made clear that the President’s empty policy of tough talk and little action isn’t working against ISIS. Regrettably, these policy follies also extend to the Intelligence Community, whose hands were tied by the passage of the USA FREEDOM ACT. This legislation, along with President Obama’s unilateral actions to restrict the Intelligence Community’s ability to track terrorist communications, takes us from a constitutional, legal, and proven NSA collection architecture to an untested, hypothetical one that will be less effective. And this transition will occur less than two weeks from today, at a time when our threat level is incredibly high.
"If we take anything from the Paris attacks, it should be that vigilance and safety go hand-in-hand. Now is not the time to sacrifice our national security for political talking points. We should allow the Intelligence Community to do their job and provide them with the tools they need to keep us safe. Passing the Liberty Through Strength Act will empower the NSA to uncover threats against the United States and our allies, help keep terrorists out of the United States, and track down those responsible in the wake of the Paris terrorist attacks."
Almost everything in Cotton's statement is a lie. First of all, the FREEDOM Act hasn't even gone into effect yet, so even if it did "tie the hands" of the intelligence community, it couldn't have done so yet. Second, the program has been declared unconstitutional by multiple courts and the administration's own review board; for Cotton to call it constitutional suggests he has no problem flat-out lying. And the idea that it's "proven"? Need we remind you of two facts? (1) To date, the Section 215 program has never -- not even once -- been shown to have been useful in stopping a terrorist attack. (2) It clearly made no difference in dealing with the Paris attacks (and, notably, the US has even greater surveillance powers overseas, as do the French). So to claim that this one unconstitutional and demonstrably useless program is necessary is just... weird.
Sensenbrenner was at an event I attended last evening and said that he didn't think Cotton's ridiculous bill had much of a chance, but did note that it would hardly be the end of surveillance state apologists trying to expand unconstitutional surveillance powers. Cotton's is just the first attempt; expect there to be many more.
Posted on Techdirt - 18 November 2015 @ 8:27am
Two months ago, the Obama administration came to the conclusion that mandating encryption backdoors through legislation was a non-starter. They seemed to recognize that it was mostly a bad idea and (more importantly) that Congress would not approve such legislation. Almost immediately, we pointed out that intelligence officials were (almost gleefully) noting that they really just needed to wait for the next terrorist attack to restart the campaign. Here was Robert Litt, the top lawyer in the intelligence community:
Although “the legislative environment is very hostile today,” the intelligence community’s top lawyer, Robert S. Litt, said to colleagues in an August e-mail, which was obtained by The Post, “it could turn in the event of a terrorist attack or criminal event where strong encryption can be shown to have hindered law enforcement.”
There is value, he said, in “keeping our options open for such a situation.”
Given all that, it was disappointing that the Obama administration then took the cowardly way out and refused to take an official public stance against backdooring encryption.
Either way, with the attacks in Paris last week, the anti-encryption crowd seemed almost gleeful in its response. Here, after all, was the exact terrorist attack they needed to push their agenda. And, of course, the idea of mandated backdoors is back on the table, with Senator John McCain announcing plans to introduce just such legislation:
“In the Senate Armed Services we're going to have hearings on it and we're going to have legislation,” Sen. John McCain (R-Ariz.), who chairs the committee, told reporters Tuesday, calling the status quo “unacceptable.”
Of course, that legislation was ready to go, sitting in a top drawer just waiting for this kind of situation. And now we have to waste all sorts of time responding to this idiocy, even though just months ago we went through this whole debate, during which it was made pretty clear that backdooring encryption makes us all much less safe. It puts everyone at greater risk, not less.
So the question remains: why do officials and politicians like Senator McCain want to undermine our safety and security? And, even more bizarrely, how is this the same John McCain who was on the other side during the last crypto wars?