The unintended but entirely predictable consequences of the UK's disastrous Counter-Terrorism and Security Act keep on a-coming. You will recall that this handy piece of legislation tasked teachers with weeding out possible future-terrorists amongst the young folks they are supposed to be teaching. This has devolved instead into teachers reporting children, usually children peripherally identified as Muslim, to the authorities for what aren't so much transgressions as kids being kids. It has even turned some teachers into literal grammar police, because the universe is not without a sense of humor.
And now we learn that these part-teacher-part-security-agents may be incorporating art criticism into their repertoire, having reported a four-year-old Muslim boy to the authorities because of his inability to properly illustrate a cucumber.
Concerns were raised after the youngster drew a picture of a man cutting the vegetable. [The child's mother] said she feared her children would be taken away from her and added: "But I haven't done anything wrong... It was a horrible day." Teachers and public service workers have a legal obligation to report any concerns of extremist behaviour to the authorities since July.
And here is the picture the child drew of himself cutting a cucumber.
Now, if we hold our nose and choose to forget for a moment that this is a four-year-old we're talking about, and not the re-animated corpse of Vincent Van Gogh, we might all agree that the picture on the left looks like a person holding a giant freaking sword, instead of a kitchen knife. The picture on the right will look like pretty much anything you want it to look like because, again, this is a four-year-old we're talking about. So, it appears the teachers asked the child what he was attempting to draw in the picture, and the response would have been benign, except it hit one of the terrorism buzzwords, kinda sorta.
Staff in Luton told the child's mother they believed he was saying "cooker bomb" instead of "cucumber".
"[The member of staff] kept saying it was this one picture of the man cutting the cucumber....which she said to me is a 'cooker bomb', and I was baffled," she told the BBC Asian Network.
So the child, in addition to being unable to draw a cucumber sufficiently to get teachers to understand the portrayal he was attempting, also wasn't able to properly pronounce the word cucumber, and it apparently came out of his mouth close enough to "cooker bomb" for the nursery staff to freak out and into the de-radicalization program the child goes. I can't stress enough that this child is four years old.
Nor can I stress enough that the staff's interpretation here doesn't make any sense. So they believed the child was saying he was sawing into a cooker bomb with a death-sword? And that's a more plausible scenario than this toddler doing something completely innocent and simply not articulating it properly?
One wonders, as always, just how much leeway would have been afforded the boy if he had pale skin and blue eyes.
The US government has made numerous attempts to obtain source code from tech companies in an effort to find security flaws that could be used for surveillance or investigations.
The government has demanded source code in civil cases filed under seal but also by seeking clandestine rulings authorized under the secretive Foreign Intelligence Surveillance Act (FISA), a person with direct knowledge of these demands told ZDNet. We're not naming the person as they relayed information that is likely classified.
With these hearings held in secret and away from the public gaze, the person said that the tech companies hit by these demands are losing "most of the time."
That's hardly heartening. The DOJ would only go so far as to confirm this has happened before, likely because there's no way to deny it. The documents from the Lavabit case have been made public -- with the DOJ using a formerly-sealed document to hint at what could be in store for Apple if it refuses to write FBiOS for it.
Unfortunately, because of the secrecy surrounding the government's requests for source code -- and the court where those requests have been made -- it's extremely difficult to obtain outside confirmation. Whittaker contacted more than a dozen Fortune 500 companies about the unnamed official's claims and received zero comments.
A few, however, flatly denied ever having handed over source code to the US government.
Cisco said in an emailed statement: "We have not and we will not hand over source code to any customers, especially governments."
IBM referred to a 2014 statement saying that the company does not provide "software source code or encryption keys to the NSA or any other government agency for the purpose of accessing client data." A spokesperson confirmed that the statement is still valid, but did not comment further on whether source code had been handed over to a government agency for any other reason.
Cisco is likely still stinging from leaked documents showing its unwitting participation in an NSA unboxing photo shoot and has undoubtedly decided to take a stronger stance against government meddling since that point. As for IBM, its statement is a couple of years old and contains a major qualifying statement.
Previously-leaked documents somewhat confirm the existence of court orders allowing the NSA to perform its own hardware/software surgery. Presumably, the introduction of backdoors and exploits is made much easier with access to source code. Whittaker points to Kaspersky Lab's apparent discovery of evidence that the NSA possessed "several hard drive manufacturers'" source code -- another indication that the government's history of demanding source code from manufacturers and software creators didn't begin (or end) with Lavabit.
The government may be able to talk the FISA court into granting these requests, given that its purview generally only covers foreign surveillance (except for all the domestic dragnets and "inadvertent" collections) and national security issues. The FBI's open air battle with Apple has already proceeded far past the point that any quasi-hearing in front of the FISC would have. That's the sort of thing an actually adversarial system -- unlike the mostly-closed loop of the FISA court -- tends to result in: a give-and-take played out (mostly) in public, rather than one party saying "we need this" and the other applying ink to the stamp.
In all the discussions about Apple v. the FBI, a few people occasionally ask what would happen if Apple's engineers just refused to write the code demanded (some also ask about writing the code, but purposely messing it up). And now it appears that at least some Apple engineers are thinking about just this scenario. According to the NY Times:
Apple employees are already discussing what they will do if ordered to help law enforcement authorities. Some say they may balk at the work, while others may even quit their high-paying jobs rather than undermine the security of the software they have already created, according to more than a half-dozen current and former Apple employees.
As the NY Times notes, these details certainly add some pretty hefty weight to the First Amendment arguments about "compelled speech" that Apple has made (and that the EFF doubled down on in its amicus brief). As for what then would happen... that's up to the court, but it's likely that the court would find Apple in contempt and/or start fining it. But that still leaves open the question of how Apple complies if not a single engineer is willing to help out.
This particular legal dispute gets more interesting day by day...
Not surprisingly, Oliver's take is much clearer and much more accurate than many mainstream press reports on the issues in the case, appropriately mocking the many law enforcement officials who seem to think that, just because Apple employs smart engineers, they can somehow do the impossible and "safely" create a backdoor into an encrypted iPhone that won't have dangerous consequences. He even spends a bit of time reviewing the original Crypto Wars over the Clipper Chip and highlights cryptographer Matt Blaze's contribution in ending those wars by showing that the Clipper Chip could be hacked.
But the biggest contribution to the debate -- which I hope that people pay most attention to -- is the point that Oliver made in the end with his faux Apple commercial. Earlier in the piece, Oliver noted that this belief among law enforcement that Apple engineers can somehow magically do what they want is at least partially Apple's own fault, with its somewhat overstated marketing. So, Oliver's team made a "more realistic" Apple commercial which noted that Apple is constantly fighting security cracks and vulnerabilities and is consistently just half a step ahead of hackers with malicious intent (and, in many cases, half a step behind them).
This is the key point: Building secure products is very, very difficult, and even the most secure products have security vulnerabilities that need to be constantly watched and patched. And what the government is doing here is not only asking Apple not to patch a security vulnerability that it has found, but actively forcing Apple to create a new vulnerability and then effectively forcing it to keep that vulnerability open. For all the talk of how Apple can just create the backdoor this once and throw it away, this is more like asking Apple to set off a bomb that blows the backs off all the houses in a city, and then saying, "okay, just throw away the bomb after you set it off."
Hopefully, as in cases like net neutrality, Oliver's piece does its job in informing the public about what's really going on.
This is not all that surprising, but President Obama, during his SXSW keynote interview, appears to have joined the crew of politicians making misleading statements pretending to be "balanced" on the question of encryption. The interview (the link above should start at the very beginning) talks about a variety of issues related to tech and government, but eventually the President zeroes in on the encryption issue. The embed below should start at that point (if not, it's at the 1 hour, 16 minute mark in the video). Unfortunately, the interviewer, Evan Smith of the Texas Tribune, falsely frames the issue as one of "security v. privacy" rather than what it actually is -- which is "security v. security."
In case you can't watch that, the President says he won't comment directly on the Apple legal fights, but then launches into the standard politician talking point of "yes, we want strong encryption, but bad people will use it so we need to figure out some way to break in."
If you watch that, the President is basically doing the same thing as all the Presidential candidates, stating that there's some sort of equivalency on both sides of the debate and that we need to find some sort of "balanced" solution short of strong encryption that will somehow let in law enforcement in some cases.
This is wrong. This is ignorant.
To his at least marginal credit, the President (unlike basically all of the Presidential candidates) did seem to acknowledge the arguments of the crypto community, but then tells them all that they're wrong. In some ways, this may be slightly better than those who don't even understand the actual issues at all, but it's still problematic.
Let's go through this line by line.
All of us value our privacy. And this is a society that is built on a Constitution and a Bill of Rights and a healthy skepticism about overreaching government power. Before smartphones were invented, and to this day, if there is probable cause to think that you have abducted a child, or that you are engaging in a terrorist plot, or you are guilty of some serious crime, law enforcement can appear at your doorstep and say 'we have a warrant to search your home' and they can go into your bedroom to rifle through your underwear to see if there's any evidence of wrongdoing.
Again, this is overstating the past and understating today's reality. Yes, you could always get a warrant to go "rifle through" someone's underwear, if you could present probable cause that such a search was reasonable to a judge. But that does not mean that the invention of smartphones really changed things so dramatically as President Obama presents here. For one, there has always been information that was inaccessible -- such as information that came from an in-person conversation or information in our brains or information that has been destroyed.
In fact, as lots of people have noted, today law enforcement has much more recorded evidence that it can obtain, totally unrelated to the encryption issue. This includes things like location information or information on people you called. That information used to not be available at all. So it's hellishly misleading to pretend that we've entered some new world of darkness for law enforcement when the reality is that the world is much, much brighter.
And we agree on that. Because we recognize that just like all our other rights, freedom of speech, freedom of religion, etc. there are going to be some constraints that we impose in order to make sure that we are safe, secure and living in a civilized society. Now technology is evolving so rapidly that new questions are being asked. And I am of the view that there are very real reasons why we want to make sure that government cannot just willy nilly get into everyone's iPhones, or smartphones, that are full of very personal information and very personal data. And, let's face it, the whole Snowden disclosure episode elevated people's suspicions of this.
That was a real issue. I will say, by the way, that -- and I don't want to go too far afield -- but the Snowden issue vastly overstated the dangers to US citizens in terms of spying. Because the fact of the matter is that our intelligence agencies are pretty scrupulous about US persons -- people on US soil. What those disclosures did identify were excesses overseas with respect to people who are not in this country. A lot of those have been fixed. Don't take my word for it -- there was a panel that was constituted that just graded all the reforms that we set up to avoid those charges. But I understand that that raised suspicions.
Again, at least some marginal kudos for admitting that this latest round was brought on by "excesses" (though we'd argue that it was actually unconstitutional, rather than mere overreach). And nice of him to admit that Snowden actually did reveal such "excesses." Of course, that raises a separate question: Why is Obama still trying to prosecute Snowden when he's just admitted that what Snowden did was clearly whistleblowing, in revealing questionable spying?
Also, the President is simply wrong that it was just about issues involving non-US persons. The major reform that has taken place wasn't about US persons at all, but rather about Section 215 of the PATRIOT Act, which was used almost entirely on US persons to collect all their phone records. So it's unclear why the President is pretending otherwise. The stuff outside of the US is governed by Executive Order 12333, and there's been no evidence at all that the President has changed that. I do agree, to some extent, that many do believe in an exaggerated view of NSA surveillance, and that's distracting. But the underlying issues about legality and constitutionality -- and the possibilities for abuse -- absolutely remain.
But none of that actually has to do with the encryption fight, beyond the recognition -- accurately -- that the government's actions, revealed by Snowden, caused many to take these issues more seriously. And, on that note, it would have been at least a little more accurate for the President to recognize that it wasn't Snowden who brought this on the government, but the government itself by doing what it was doing.
So we're concerned about privacy. We don't want government to be looking through everybody's phones willy-nilly, without any kind of oversight or probable cause or a clear sense that it's targeted who might be a wrongdoer.
What makes it even more complicated is that we also want really strong encryption. Because part of us preventing terrorism or preventing people from disrupting the financial system or our air traffic control system or a whole other set of systems that are increasingly digitalized is that hackers, state or non-state, can just get in there and mess them up.
So we've got two values. Both of which are important.... And the question we now have to ask is, if technologically it is possible to make an impenetrable device or system where the encryption is so strong that there's no key. There's no door at all. Then how do we apprehend the child pornographer? How do we solve or disrupt a terrorist plot? What mechanisms do we have available to even do simple things like tax enforcement? Because if, in fact, you can't crack that at all, government can't get in, then everybody's walking around with a Swiss bank account in their pocket. So there has to be some concession to the need to be able to get into that information somehow.
The answer to those questions in that final paragraph is good old fashioned detective work. In a time before smartphones, detectives were still able to catch child pornographers and disrupt terrorist plots. And, in some cases, the government failed to stop either of those things. But it wasn't because strong encryption stymied them; it's because there are always going to be some plots that people get away with. We shouldn't undermine our entire security setup just because there are some bad people out there. In fact, that makes us less safe.
Also: tax enforcement? Tax enforcement? Are we really getting to the point that the government wants to argue that we need to break strong encryption to better enforce taxes? Really? Again, there are lots of ways to go after tax evasion. And, yes, there are lots of ways that people and companies try to hide money from the IRS. And sometimes they get away with it. To suddenly say that we should weaken encryption because the IRS isn't good enough at its job just seems... crazy.
Now, what folks who are on the encryption side will argue, is that any key, whatsoever, even if it starts off as just being directed at one device, could end up being used on every device. That's just the nature of these systems. That is a technical question. I am not a software engineer. It is, I think, technically true, but I think it can be overstated.
This is the part that's most maddening of all. He almost gets the point right. He almost understands. The crypto community has been screaming from the hills for ages that introducing any kind of third party access to encryption weakens it for all, introducing vulnerabilities that ensure that those with malicious intent will get in much sooner than they would otherwise. The President is mixing up that argument with one of the other arguments in the Apple/FBI case, about whether it's about "one phone" or "all the phones."
But even assuming this slight mixup is a mistake, and that he does recognize the basics of the arguments from the tech community, to have him then say that this "can be overstated" is crazy. A bunch of cryptography experts -- including some who used to work for Obama -- laid out in a detailed paper the risks of undermining encryption. To brush that aside as some sort of rhetorical hyperbole -- to brush aside the realities of cryptography and math -- is just crazy.
Encryption expert Matt Blaze (whose research basically helped win Crypto War 1.0) responded to this argument by noting that the "nerd harder, nerds" argument fundamentally misunderstands the issue:
Figuring out how to build the reliable, secure systems required to "compromise" on crypto has long been a central problem in CS.
If you can't read that, Blaze is basically saying that all crypto includes backdoors -- they're known as vulnerabilities. And the key focus in crypto is closing those backdoors, because leaving them open is disastrous. And yet the government is now demanding that tech folks purposely put in more backdoors and not close them, without recognizing the simple fact that vulnerabilities in crypto always lead to disastrous results.
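Blaze's point -- that a backdoor is structurally just a vulnerability shared by everyone -- can be illustrated with a toy key-escrow sketch. Everything here is made up for illustration (the XOR "cipher" is a stand-in for real cryptography, not anything Apple or the NSA uses): each user encrypts under their own session key, but a copy of every session key is wrapped under one escrow key, so whoever steals that single key can read every message.

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher: XOR data with a SHA-256-derived keystream.
    # A stand-in for a real cipher, purely for illustration -- NOT secure.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(x ^ y for x, y in zip(data, out))

def encrypt_with_escrow(session_key: bytes, escrow_key: bytes, msg: bytes):
    # Normal ciphertext, plus the session key wrapped under the escrow key
    # so a "trusted third party" can get in. That wrap IS the backdoor.
    ciphertext = keystream_xor(session_key, msg)
    wrapped_key = keystream_xor(escrow_key, session_key)
    return ciphertext, wrapped_key

# Two users with different session keys; one shared escrow key.
escrow_key = b"held-by-trusted-third-party"
ct1, wk1 = encrypt_with_escrow(b"alice-session-key", escrow_key, b"alice's secrets")
ct2, wk2 = encrypt_with_escrow(b"bob-session-key!!", escrow_key, b"bob's secrets")

# An attacker who steals the ONE escrow key recovers EVERY session key,
# and therefore every message: the backdoor is a shared vulnerability.
stolen = escrow_key
assert keystream_xor(keystream_xor(stolen, wk1), ct1) == b"alice's secrets"
assert keystream_xor(keystream_xor(stolen, wk2), ct2) == b"bob's secrets"
```

The design flaw the experts keep pointing at is visible in the last two lines: the escrow key is a single point of failure whose compromise scales to every user at once, which is exactly why "close the vulnerabilities" has been the direction of crypto research rather than "add more of them."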
So the question now becomes that, we as a society, setting aside the specific case between the FBI and Apple, setting aside the commercial interests, the concerns about what could the Chinese government do with this, even if we trust the US government. Setting aside all those questions, we're going to have to make some decisions about how do we balance these respective risks. And I've got a bunch of smart people, sitting there, talking about it, thinking about it. We have engaged the tech community, aggressively, to help solve this problem. My conclusion so far is that you cannot take an absolutist view on this. So if your argument is "strong encryption no matter what, and we can and should in fact create black boxes," that, I think, does not strike the kind of balance that we have lived with for 200, 300 years. And it's fetishizing our phones above every other value. And that can't be the right answer.
This is not an absolutist view. It is not an absolutist view to say that anything you do to weaken the security of phones creates disastrous consequences for overall security, far beyond the privacy of individuals holding those phones. And, as Julian Sanchez rightly notes, it's ridiculous that it's the status quo on the previous compromise that is now being framed as an "absolutist" position:
CALEA--with obligations on telecoms to assist, but user-side encryption protected--WAS the compromise. Now that's "absolutism".
Also, the idea that this is about "fetishizing our phones" is ridiculous. No one is even remotely suggesting that. No one is suggesting -- as Obama hints -- that phones deserve protection "above and beyond" every other situation. It's entirely about the nature of computer security and how it works. It's about the risks to our security in creating deliberate vulnerabilities in our technologies. To frame that as "fetishizing our phones" is insulting.
There's a reason why the NSA didn't want President Obama to carry a Blackberry when he first became President. And there's a reason the President wanted a secure Blackberry. And it's not because of fetishism in any way, shape or form. It's because securing data on phones is freaking hard and it's a constant battle. And anything that weakens the security puts people in harm's way.
I suspect that the answer is going to come down to how do we create a system where the encryption is as strong as possible. The key is as secure as possible. It is accessible by the smallest number of people possible for a subset of issues that we agree are important. How we design that is not something that I have the expertise to do. I am way on the civil liberties side of this thing. Bill McRaven will tell you that I anguish a lot over the decisions we make over how to keep this country safe. And I am not interested in overthrowing the values that have made us an exceptional and great nation, simply for expediency. But the dangers are real. Maintaining law and order and a civilized society is important. Protecting our kids is important.
You suspect wrong. Because while your position sounds reasonable and "balanced" (and I've seen some in the press describe President Obama's position here as "realist"), it's actually dangerous. This is the problem. The President is discussing this like it's a political issue rather than a technological/math issue. People aren't angry about this because they're "extremists" or "absolutists" or people who "don't want to compromise." They're screaming about this because "the compromise" solution is dangerous. If there really were a way to have strong encryption with a secure key where only a small number of people could get in on key issues, then that would be great.
But the key point that all of the experts keep stressing is: that's not reality. So, no, the President's not being a "realist." He's being the opposite.
So I would just caution against taking an absolutist perspective on this. Because we make compromises all the time. I haven't flown commercial in a while, but my understanding is that it's not great fun going through security. But we make the concession because -- it's a big intrusion on our privacy -- but we recognize that it is important. We have stops for drunk drivers. It's an intrusion. But we think it's the right thing to do. And this notion that somehow our data is different and can be walled off from those other trade-offs we make, I believe is incorrect.
Again, this is not about "making compromises" or some sort of political perspective. And the people arguing for strong encryption aren't being "absolutist" about it because they're unwilling to compromise. They're saying that the "compromise" solution means undermining the very basis of how we do security and putting everyone at much greater risk. That's ethically horrific.
And, also, no one is saying that "data is different." There has always been information that is "walled off." What people are saying is that one consequence of strong encryption is that it has to mean that law enforcement is kept out of that information too. That does not mean they can't solve crimes in other ways. It does not mean that they don't get access to lots and lots of other information. It just means that this kind of content is harder to access, because we need it to be harder to access to protect everyone.
It's not security v. privacy. It's security v. security, where the security the FBI is fighting for is to stop the 1 in a billion attack and the security everyone else wants is to prevent much more likely and potentially much more devastating attacks.
Meanwhile, of all the things for the President to cite as an analogy, TSA security theater may be the worst. Very few people think it's okay, especially since it's been shown to be a joke. Setting that up as the precedent for breaking strong encryption is... crazy. And, on top of that, using the combination of TSA security and DUI checkpoints as evidence for why we should break strong encryption with backdoors again fails to recognize the issue at hand. Neither of those undermine an entire security setup.
We do have to make sure, given the power of the internet and how much our lives are digitalized, that it is narrow and that it is constrained and that there's oversight. And I'm confident this is something that we can solve, but we're going to need the tech community, software designers, people who care deeply about this stuff, to help us solve it. Because what will happen is, if everybody goes to their respective corners, and the tech community says "you know what, either we have strong perfect encryption, or else it's Big Brother and Orwellian world," what you'll find is that after something really bad happens, the politics of this will swing and it will become sloppy and rushed and it will go through Congress in ways that have not been thought through. And then you really will have dangers to our civil liberties, because the people who understand this best, and who care most about privacy and civil liberties have disengaged, or have taken a position that is not sustainable for the general public as a whole over time.
I have a lot of trouble with the President's line about everyone going to "their respective corners," as it suggests a ridiculous sort of tribalism in which the natural state is the tech industry against the government and even suggests that the tech industry doesn't care about stopping terrorism or child pornographers. That, of course, is ridiculous. It's got nothing to do with "our team." It has to do with the simple realities of encryption and the fact that what the President is suggesting is dangerous.
Furthermore, it's not necessarily the "Orwellian/big brother" issue that people are afraid of. That's a red herring from the "privacy v. security" mindset. People are afraid of this making everyone a lot less safe. No doubt, the President is right that if "something really bad" happens, the politics will move in one direction -- but it's pretty ridiculous for him to be saying that, seeing as the latest skirmish in this battle is being fought by his very own Justice Department, and he's the one who jumped on the San Bernardino attacks as an excuse to push this line of argument.
If the President is truly worried about stupid knee-jerk reactions following "something bad" happening, rather than trying to talk about "balance" and "compromise," he could and should be doing more to fairly educate the American public, and to make public statements about this issue and how important strong encryption is. Enough of this bogus "strong encryption is important, but... the children" crap. The children need strong encryption. The victims of crimes need encryption. The victims of terrorists need encryption. Undermining all that because just a tiny bit of information is inaccessible to law enforcement is crazy. It's giving up the entire ballgame to those with malicious intent, just so that we can have a bit more information in a few narrow cases.
President Obama keeps mentioning trade-offs, but it appears that he refuses to actually understand the trade-offs at issue here. Giving up on strong encryption is not about finding a happy middle compromise. Giving up on strong encryption is putting everyone at serious risk.
Earlier this year Netflix surprised everybody by announcing it was expanding into 130 additional countries, bringing its total footprint to 190 markets. But alongside the announcement came the less-welcome news that Netflix was also planning to crack down more on "content tourism," or the act of using a VPN to trick Netflix into letting you watch content specifically licensed for other countries. If you take a look at what's available per country, the motivation to use a VPN to watch content not available in your market becomes abundantly obvious.
And while the press and public engaged in a lot of hand-wringing about Netflix's decision to crack down on VPN use, it really wasn't much of a problem for most VPN providers to bypass Netflix's restrictions. And indeed, if you'd been paying attention, you would have noticed Netflix basically admitting that this "crack down" wouldn't be much of one, since even the company realized it was largely futile:
"We do apply industry standard technologies to limit the use of proxies,” (Netflix chief product officer Neil) Hunt says. “Since the goal of the proxy guys is to hide the source it’s not obvious how to make that work well. It’s likely to always be a cat-and-mouse game. [We] continue to rely on blacklists of VPN exit points maintained by companies that make it their job. Once [VPN providers] are on the blacklist, it’s trivial for them to move to a new IP address and evade."
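The mechanism Hunt describes is about as simple as blocking gets, which is why the cat-and-mouse game favors the mouse. A minimal sketch (all IPs and names here are hypothetical, drawn from documentation address ranges) of why a blacklist of VPN exit points is trivially evaded by rotating to a fresh address:

```python
# Hypothetical blacklist of known VPN exit-point IPs, of the kind Hunt says
# Netflix buys from third-party vendors. Addresses are illustrative only.
vpn_exit_blacklist = {"203.0.113.10", "203.0.113.11"}

def is_blocked(client_ip: str) -> bool:
    # The proxy check reduces to a set-membership test on the source IP.
    return client_ip in vpn_exit_blacklist

assert is_blocked("203.0.113.10")        # known exit point: blocked
assert not is_blocked("198.51.100.77")   # ordinary residential IP: allowed

# The cat-and-mouse: the VPN provider simply moves to a new address...
assert not is_blocked("203.0.113.99")    # ...and sails through...
vpn_exit_blacklist.add("203.0.113.99")   # ...until the vendor catches up,
assert is_blocked("203.0.113.99")        # at which point the cycle repeats.
```

Since acquiring a new IP address costs the VPN provider almost nothing while the blacklist vendor must discover each one after the fact, the blocker is permanently a step behind -- which is exactly the futility Netflix's own product chief is conceding in the quote above.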
So why is Netflix engaging in a practice it realizes is largely pointless? To try and calm global broadcasting partners terrified by the fact that Netflix is re-writing the rules of global television and eroding the power of global media empires unchallenged for the better part of a generation. Netflix still needs to strike licensing deals with many of these companies, and to do so these broadcasters need to see Netflix as a partner, not a threat. So to keep these companies' executives calm, Netflix is basically giving a used-car-salesman-esque wink and saying "sure, we'll make sure your outdated regional restrictions still hold," even though Netflix's publicly-stated goal is demolishing region restrictions completely.
All of that said, Netflix's "crack down" on VPNs still has a notably negative impact on global user privacy and security. And as Dan Gillmor at Slate noted this week, it's just downright annoying for the millions of paying customers who use a VPN every day as part of their routine security and privacy arsenal:
"No doubt this pleases the Hollywood studios, the control freaks of copyright. From this video watcher’s perspective, it’s beyond annoying. I don’t download Hollywood movies or TV shows from torrent sites. I pay, willingly, for streaming and DVD rentals and, for some special films, an outright DVD purchase. Yet I’m being punished when I stream video because I also want security. So are countless others who want to do the right thing. Tens of thousands have signed an online petition asking Netflix to reconsider."
It's unlikely that Netflix plans to do much about this in the short term. It knows most VPN providers will continue to provide workarounds for customers who know better, and it is apparently willing to alienate and annoy customers unwilling or unable to switch VPN providers for the temporary, artificial benefit of its broadcaster relationships. If Netflix continues to be successful, the good news is that regional restrictions will die; the bad news is that the company clearly doesn't give a damn about the repercussions during the decade or longer it will take to actually accomplish this.
Many have tried to frame the Apple v. FBI fight as one of "privacy v. security," but it's really about security v. security. It comes down to what you're more afraid of: the off chance that someone secretly plans a terrorist attack on an encrypted iPhone, or the much more likely scenario of millions of phones being stolen or hacked by criminals looking to swipe your private information. Apple's VP of software engineering, Craig Federighi, has now taken to the pages of the Washington Post to highlight this issue and explain that the FBI and DOJ are really trying to make everyone a lot less safe.
But the threat to our personal information is just the tip of the iceberg. Your phone is more than a personal device. In today’s mobile, networked world, it’s part of the security perimeter that protects your family and co-workers. Our nation’s vital infrastructure -- such as power grids and transportation hubs -- becomes more vulnerable when individual devices get hacked. Criminals and terrorists who want to infiltrate systems and disrupt sensitive networks may start their attacks through access to just one person’s smartphone.
And he also has a good response to those, like Manhattan DA Cyrus Vance, who insist that they just want Apple "to go back" to the way it handled security on phones prior to iOS 8. In other words, make everyone less secure. Their argument: if that was okay a few years ago, why isn't it okay now? The answer is that security holes are discovered over time, making systems less and less secure. Taking a step back is much worse than merely going back a couple of years, because by now lots of people know how to get past those older security features:
Of course, despite our best efforts, nothing is 100 percent secure. Humans are fallible. Our engineers write millions of lines of code, and even the very best can make mistakes. A mistake can become a point of weakness, something for attackers to exploit. Identifying and fixing those problems are critical parts of our mission to keep customers safe. Doing anything to hamper that mission would be a serious mistake.
That’s why it’s so disappointing that the FBI, Justice Department and others in law enforcement are pressing us to turn back the clock to a less-secure time and less-secure technologies. They have suggested that the safeguards of iOS 7 were good enough and that we should simply go back to the security standards of 2013. But the security of iOS 7, while cutting-edge at the time, has since been breached by hackers. What’s worse, some of their methods have been productized and are now available for sale to attackers who are less skilled but often more malicious.
And, as he notes, the FBI's demands in the San Bernardino case are akin to doing the same thing to the security of iOS 8: creating a vulnerability that will almost certainly "spread around the world in the blink of an eye." It's a good, straightforward piece explaining why the FBI and DOJ's demands are so dangerous here.
Among the many questions swirling around the challenge to U.S. Magistrate Judge Sheri Pym's Order that Apple create software to bypass the iPhone passcode screen, a matter of paramount public interest may have been overlooked: Even if the government prevails in compelling Apple to bypass these iPhone security features: (A) evidence for use in a criminal trial obtained in this way will be challenged under the Daubert standard (described below) and the evidence may be held to be inadmissible at trial; and (B) the Daubert challenge may require disclosure of Apple's iPhone unlocking software to a number of third parties who would require access to it in order to bring the Daubert challenge and who may not secure the new software adequately. To state that neither consequence would be in the public interest would be an understatement in the extreme.
The Daubert challenge would arise because any proffered evidence from the subject iPhone would have been obtained by methodology utilizing software that had never been used before to obtain evidence in a criminal trial. The Supreme Court, in Daubert v. Merrell Dow Pharmaceuticals, Inc., held that new methodologies from which proffered evidence is derived must, when challenged, be substantiated by expert scientific testimony in order to be admissible. In Daubert, the court stated that the criteria that must be utilized when faced with a defense challenge to scientific testimony and evidence are:
Can the methodology used to reach the expert's conclusion (the new software here) be tested and verified?
Have the methodology and software been peer-reviewed, and has the review been published in a peer-reviewed journal?
Do the techniques used to reach the conclusion (here, to obtain the evidence) have an ascertainable error rate?
Has the methodology used to generate the conclusion (the evidence) been generally accepted by the relevant scientific community?
Under the Daubert standards, introduction of evidence from the iPhone, electronic communications and data stored in the phone, would require the testimony of an expert witness to, among other things:
establish the integrity of the data (and its reliability) throughout the chain of custody;
explain whether any person or software could modify the data coming off of the phone;
verify that the data that came off the phone as delivered by Apple and held by law enforcement was the data that had originally been on the phone;
explain the technical measures, such as the digital signatures attached to the data, used to ensure that no tampering has occurred, and their likely error rates.
Such an expert would, in preparation for his or her testimony, require access to and examination of the software, as it is inconceivable that defense counsel would simply accept the testimony of the Apple personnel without also demanding that their own, third-party experts have access to it.
In addition, defense counsel would undoubtedly demand the right for their own third-party experts to have access not only to the source code, but also to simulate the testing environment and run the code on their own systems in order to confirm the veracity of the evidence. This could easily compromise the security of the new unlocking code, as argued in the amicus brief filed with Judge Pym by Jennifer Granick and Riana Pfefferkorn from Stanford's Center for Internet and Society (also covered previously by Techdirt):
There is also a danger that the Custom Code will be lost or stolen. The more often Apple must use the forensic capability this Court is ordering it to create, the more people have to have access to it. The more people who have access to the Custom Code, the more likely it will leak. The software will be valuable to anyone eager to bypass security measures on one of the most secure smartphones on the market. The incentive to steal the Custom Code is huge. The Custom Code would be invaluable to identity thieves, blackmailers, and those engaged in corporate espionage and intellectual property theft, to name a few.
Ms. Granick and Ms. Pfefferkorn may not have contemplated demands by defense counsel to examine the software on their own systems and according to their own terms, but their logic applies with equal force to evidentiary challenges to the new code: The risk of the software becoming public increases when it is examined by multiple defense counsel and their experts, on their own systems, with varying levels of technical competency. Fundamentally, then, basic criminal trial processes such as challenges to expert testimony, and to evidence that results from that testimony based on this new software, stand in direct tension with the public interest in the secrecy and security of the source code of the new iPhone unlocking software.
At best, none of these issues can be resolved definitively at this time, because the software to unlock the phone has not been written. But the government's demand that the court force Apple to write software that circumvents its own security protocols may be shortsighted as a matter of trial strategy, in that any evidence obtained by that software may be precluded following a Daubert inquiry. Further, the public interest may be severely compromised by a court order directing Apple to write the subject software, because the due process requirements for defense counsel and their experts to access the software and Apple's security protocols may compromise the secrecy necessary to prevent the proposed workaround from becoming available to hackers, foreign governments and others. No matter what safeguards are ordered by a court, the security of the new software may be at considerable risk, because it is well known that no security safeguards are impregnable.
The government may be well advised to heed the adage, "Be careful what you ask for. You may just get it." Its victory in the San Bernardino proceedings may be worse than Pyrrhic.
Kenneth N. Rashbaum is a Partner at Barton, LLP in New York, where he heads the Privacy and Cybersecurity Practice. He is an Adjunct Professor of Law at Fordham University School of Law, Chair of the Disputes Division of the American Bar Association Section of International Law, Co-Chair of the ABA Section of International Law Privacy, E-Commerce and Data Security Committee and a member of the Section Council. You can follow Ken @KenRashbaum.
Liberty McAteer is an Associate at Barton LLP. A former front-end web developer, he advises software developers and e-commerce organizations on data protection, cybersecurity and privacy, including preparation of security and privacy protocols and information security terms in licensing agreements, service level agreements and website terms of service.
from the all-in-service-of-future-writs-and-exploitations dept
The FBI's attempt to force Apple to help it break into an iPhone hasn't been going well. A lot of that has to do with the FBI itself, which hasn't exactly been honest in its portrayal of the case. It tried to fight off claims that it was trying to set precedent by claiming it was just about this one phone… which worked right up until it dropped details about twelve other phones it couldn't break into.
Comey's protestations of "no precedent" were further undermined by law enforcement groups filing briefs in support of the FBI that basically stated they, too, would like Apple to be forced to comply with orders like these. And then there was the whole thing about some "dormant cyber pathogen" that was basically laughed off the internet within hours of its appearance.
There were also claims that Apple has done this sort of thing 70 times in the past but was just being inexplicably obstinate this time for reasons the FBI could not comprehend. But that wasn't true either. Apple does provide law enforcement with access to data it can retrieve from its end -- which is nothing like writing software that would allow the FBI (and anyone else who gets their hands on it -- or who makes similar demands following an FBI win) to bypass the security features of its phones.
The FBI has been unable to make attempts to determine the passcode to access the SUBJECT DEVICE because Apple has written, or “coded,” its operating systems with a user-enabled “auto-erase function” that would, if enabled, result in the permanent destruction of the required encryption key material after 10 failed attempts at the [sic] entering the correct passcode (meaning that, after 10 failed attempts, the information on the device becomes permanently inaccessible)…
That's not what actually happens, Gillmor points out. All data is not erased once 10 failed attempts are recorded. An agency with as many technically-astute employees -- as well as access to a variety of data recovery and software forensic tools -- should know -- or likely does know -- that it doesn't work this way. The phone doesn't erase all of the data, nor does it make it "permanently inaccessible." Instead, it just destroys one of the keys to the data.
The key that is erased in this case is called the “file system key”—and (unlike the hardwired “UID” key that we discussed in our previous blog post) it is not burned into the phone’s processor, but instead merely stored in what Apple calls “Effaceable Storage,” which is just a term for part of the flash memory of the phone designed to be easily erasable.
The data is still intact. The front door isn't. But the FBI can work around this by preventing the key from being destroyed in the first place -- without Apple's help.
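To make "destroys one of the keys" concrete, here's a toy model of the distinction. The stream cipher below is a deliberately simplified stand-in (iOS actually uses AES with a hardware-entangled key hierarchy, which this sketch does not reproduce); the point is only that wiping the key leaves the ciphertext physically intact but unreadable:

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR data with a SHA-256-derived keystream.
    Illustration only -- not Apple's actual data-protection scheme."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(x ^ y for x, y in zip(data, out))

# The "file system key" lives in easily erasable flash ("Effaceable Storage").
file_system_key = secrets.token_bytes(32)
plaintext = b"photos, messages, contacts"
ciphertext = keystream_xor(file_system_key, plaintext)

# Auto-erase wipes only the key, not the data itself.
file_system_key = None

# The ciphertext is fully intact -- it just can't be decrypted anymore,
# and guessing a fresh key gets you nowhere.
assert ciphertext != plaintext
assert keystream_xor(secrets.token_bytes(32), ciphertext) != plaintext
```

In other words, "permanently inaccessible" describes the key's fate, not the data's: the encrypted bits are all still sitting in flash.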
So the file system key (which the FBI claims it is scared will be destroyed by the phone’s auto-erase security protection) is stored in the Effaceable Storage on the iPhone in the “NAND” flash memory. All the FBI needs to do to avoid any irreversible auto erase is simply to copy that flash memory (which includes the Effaceable Storage) before it tries 10 passcode attempts. It can then re-try indefinitely, because it can restore the NAND flash memory from its backup copy.
Even if the FBI fails in its attempts to brute force the code, the data on the phone remains intact. By working with a copy of the flash memory, the FBI can restore the phone to its "10 guesses" state repeatedly until it finally guesses the code.
The FBI can simply remove this chip from the circuit board (“desolder” it), connect it to a device capable of reading and writing NAND flash, and copy all of its data. It can then replace the chip, and start testing passcodes. If it turns out that the auto-erase feature is on, and the Effaceable Storage gets erased, they can remove the chip, copy the original information back in, and replace it. If they plan to do this many times, they can attach a “test socket” to the circuit board that makes it easy and fast to do this kind of chip swapping.
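The backup-and-restore loop is simple enough to simulate. Here's a toy model of the strategy (the class below is an invented illustration of the described behavior, not real iOS internals, and real NAND mirroring involves physical chip work this sketch abstracts away):

```python
import secrets

class ToyPhone:
    """Toy model of the auto-erase behavior described above."""
    def __init__(self, passcode: str):
        self.passcode = passcode
        # NAND flash holds the encrypted user data plus the Effaceable
        # Storage region containing the file system key.
        self.nand = {"file_system_key": secrets.token_bytes(32),
                     "user_data": b"encrypted blob"}
        self.failed_attempts = 0

    def try_passcode(self, guess: str) -> bool:
        if self.nand["file_system_key"] is None:
            raise RuntimeError("key erased; data inaccessible")
        if guess == self.passcode:
            return True
        self.failed_attempts += 1
        if self.failed_attempts >= 10:
            self.nand["file_system_key"] = None  # auto-erase fires
        return False

def brute_force(phone: ToyPhone, candidates):
    """Desolder-and-copy strategy: back up the NAND first, re-flash it
    from the backup whenever auto-erase wipes the key, keep guessing."""
    backup = dict(phone.nand)
    for guess in candidates:
        if phone.nand["file_system_key"] is None:
            phone.nand = dict(backup)   # restore chip from backup copy
            phone.failed_attempts = 0
        if phone.try_passcode(guess):
            return guess
    return None

phone = ToyPhone(passcode="7391")
print(brute_force(phone, (f"{i:04d}" for i in range(10000))))  # prints 7391
```

The auto-erase never costs the attacker anything, because the wiped key is always sitting in the backup. That asymmetry is exactly why the "permanent destruction" framing in the FBI's filing is misleading.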
It's literally unbelievable that the FBI doesn't have access to the tools to perform this or the expertise to get it done. Which leads Gillmor back to the inescapable conclusion: this isn't about one iPhone or even twelve of them. This is about convincing a judge to read the All Writs Act the way the FBI would like it to be read -- a reading that would not only push the envelope for what it can demand from unrelated parties in the future, but that would also give it software to modify and exploit.
If it gets to that point, device users are going to have to start eyeing software/firmware updates very suspiciously.
The FBI wants to weaken the ecosystem we all depend on for maintenance of our all-too-vulnerable devices. If they win, future software updates will present users with a troubling dilemma. When we're asked to install a software update, we won’t know whether it was compelled by a government agency (foreign or domestic), or whether it truly represents the best engineering our chosen platform has to offer.
This is the end game for the FBI, even though it doesn't appear to realize the gravity of the situation. To it, Apple is the obstacle standing between it and the wealth of information it imagines might possibly be on that phone. Even if Apple is forced into compliance and the phone contains nothing of use, the FBI will still have its precedent and its hacking tool, and we'll be headed towards a world where patch notes contain warrant canaries.
Blake Ross (boy genius Firefox founder and later Facebook product guy) has written a somewhat bizarre and meandering -- but totally worth reading -- article about the whole Apple v. FBI fight, entitled (believe it or not): Mr. Fart's Favorite Colors. There are a few very good points in there, about the nature of programming, security and the government (some of which even make that title make sense). But I'm going to skip over the farts and colors and even his really excellent description of the ridiculousness of TSA security theater in airports, and leap forward to a key point raised in the article, focused on airplane security, which presents a really good analogy for the iPhone encryption fight. He points out that the only thing that has truly helped stop another 9/11-style plane hijacking (as Bruce Schneier points out repeatedly) is not the TSA security theater, but reinforced, locked cockpit doors that make it impossible for people in the cabin to get into the cockpit.
However, Ross notes, there are scenarios in which those in the cockpit need to leave the cockpit (usually to use the bathroom), and therein lies an interesting security challenge for those designing the security of the planes. How do you let that pilot (or another crew member) back in, but not a bad guy? Here's the solution that airlines have come up with, as described by Ross (or you can read the NY Times version, which is a little drier):
When the pooping pilot wants to reenter the cockpit, he calls the flying pilot on the intercom to buzz him in.
If there’s no answer, the outside pilot enters an emergency keycode. If the flying pilot doesn’t deny the request within 30 seconds, the door unlocks.
The flying pilot can flip a switch to disable the emergency keypad for 5 to 20 minutes (repeatedly).
Like Asimov’s three laws, these checks and balances try to approximate safety while accounting for contingencies. If the flying pilot risked Delta’s gefilte fish and passed out, you want to make sure the other pilot can still re-enter. But add all the delays and overrides and backstops you want; you still have to make a fundamental decision. Who controls entry: the people on the inside, or the people on the outside?
Governments decided that allowing crew members to fully override the flying pilot using a key code would be insecure, since it would be too easy for that code to leak. Thus, there is nothing the outside pilot can do — whether electronically or violently — to open the door if the flying pilot is both conscious and malicious.
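The rules Ross lists can be collapsed into a single decision function. Here's a toy sketch (the states and timings are simplified from the description above; this is not an avionics specification):

```python
# Toy decision function for the cockpit-door rules described above.
# The 30-second deny window and the 5-to-20-minute keypad lockout are
# modeled as simple booleans rather than timers.

def door_opens(inside_grants: bool, keycode_entered: bool,
               inside_denies: bool, keypad_disabled: bool) -> bool:
    if inside_grants:
        return True       # flying pilot buzzes the door open
    if not keycode_entered or keypad_disabled:
        return False      # no emergency path: inside controls the door
    # Emergency keycode path: unlocks after 30 s unless actively denied.
    return not inside_denies

# Bathroom break, everyone healthy: buzzed back in.
assert door_opens(True, False, False, False)
# Flying pilot passed out: the emergency keycode gets the crew back in.
assert door_opens(False, True, False, False)
# Conscious, malicious pilot: denying or disabling the keypad wins.
assert not door_opens(False, True, True, False)
assert not door_opens(False, True, False, True)
```

Notice that every path ends with the inside having the last word: there is no input combination where the outside overrides a responsive flying pilot. That design choice is the whole tradeoff Ross is describing.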
And as Ross notes, this is a pretty reasonable tradeoff in nearly all circumstances. It's quite difficult for someone bad to get in, and yet those in the cockpit can mostly be okay with leaving and getting back in even if a pilot remaining in the cockpit suddenly drops dead. But there is still one scenario in which that security gets totally messed up, and it played out with Germanwings Flight 9525 almost a year ago, when a mentally ill co-pilot locked the captain out of the cockpit and then deliberately crashed the plane into a mountain.
As Time Magazine noted, this is the tricky part of security systems: "sometimes it’s important to keep people out; sometimes it’s important to get inside."
And, of course, there's a little of that in the Apple v. FBI fight. The FBI is arguing that it's important to let people in, because a husband and wife killed 14 people and wounded many more. But lots of other people are pointing out that there are much bigger security benefits in keeping people out. And that's why this is really a debate about "security v. security" rather than "security v. privacy."
Strong encryption on devices is like that locked cockpit door. Under most scenarios, it keeps people much safer. It's a useful and powerful security feature. But, yes, in some cases -- such as that of the suicidal Germanwings co-pilot -- it is less secure. And there do seem to be ways to mitigate that kind of risk without harming the wider security (many airlines now require that if someone leaves the cockpit, a second crew member must be present inside). In the end, we have to weigh the likelihood of each scenario. And it's not hard to realize that, in the grand scheme of things, locking people out protects many, many more people than it endangers in the rare instances of suicidal co-pilots or quasi-terrorist attacks.
And that's the real issue here. Strong encryption on our devices is much more likely to lead to much more protection and security for many more people than the alternative. Nearly all of us are likely to be safer because of strong encryption. But that might not include everyone. Yes, there will be some instances -- though likely few and far between -- where such encryption allows someone to secretly plan and (potentially) get away with some sort of heinous act. And it is reasonable and expected that people will complain about how the security feature got in the way of stopping that attack. But the likelihood of that is much, much smaller than the very real possibility of attacks on weakly secured phones affecting many of us.
Or, as Ross concludes (in a way that makes even more sense if you read the whole piece...):
Unfortunately it’s not that complicated, which means it’s not that simple. Unbreakable phones are coming. We’ll have to decide who controls the cockpit: The captain? Or the cabin? Either choice has problems, but — I’m sorry, Aunt Congress — you crash if you pick 2.
But when you have people like the technically ignorant San Bernardino District Attorney Michael Ramos insisting that he needs to be able to get into that iPhone, just recognize that he's arguing that we should unlock cockpit doors just in case there's a suicidal co-pilot in there, without recognizing how frequently such unlocked cockpit doors will be used by others who wish to do even more harm.