Of Cockpits And Phone Encryption: Tradeoffs And Probabilities
from the think-this-through dept
Blake Ross (boy genius Firefox founder and later Facebook product guy) has written a somewhat bizarre and meandering — but totally worth reading — article about the whole Apple v. FBI fight, entitled (believe it or not): Mr. Fart’s Favorite Colors. There are a few very good points in there, about the nature of programming, security and the government (some of which even make that title make sense). But I’m going to skip over the farts and colors and even his really excellent description of the ridiculousness of TSA security theater in airports, and leap forward to a key point raised in the article, focused on airplane security, which presents a really good analogy for the iPhone encryption fight. He points out that the only thing that has truly helped stop another 9/11-style plane hijacking (as Bruce Schneier points out repeatedly) is not the TSA security theater, but reinforced, locked cockpit doors that make it impossible for people in the cabin to get into the cockpit.
However, Ross notes, there are scenarios in which those in the cockpit need to leave the cockpit (usually to use the bathroom), and therein lies an interesting security challenge for those designing the security of the planes. How do you let that pilot (or another crew member) back in, but not a bad guy? Here’s the solution that airlines have come up with, as described by Ross (or you can read the NY Times version, which is a little drier):
- When the pooping pilot wants to reenter the cockpit, he calls the flying pilot on the intercom to buzz him in.
- If there's no answer, the outside pilot enters an emergency keycode. If the flying pilot doesn't deny the request within 30 seconds, the door unlocks.
- The flying pilot can flip a switch to disable the emergency keypad for 5 to 20 minutes (repeatedly).
Like Asimov's three laws, these checks and balances try to approximate safety while accounting for contingencies. If the flying pilot risked Delta's gefilte fish and passed out, you want to make sure the other pilot can still re-enter. But add all the delays and overrides and backstops you want; you still have to make a fundamental decision. Who controls entry: the people on the inside, or the people on the outside?
Governments decided that allowing crew members to fully override the flying pilot using a key code would be insecure, since it would be too easy for that code to leak. Thus, there is nothing the outside pilot can do, whether electronically or violently, to open the door if the flying pilot is both conscious and malicious.
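The door-entry rules quoted above amount to a small protocol, and it can help to see its logic laid out explicitly. The sketch below is purely illustrative: the class and method names are invented for this post and don't come from any real avionics system; it just models the three rules (intercom buzz-in, 30-second emergency keycode window, repeatable 5–20 minute keypad lockout).

```python
class CockpitDoor:
    """Illustrative model of the door-entry rules described by Ross.
    Times are plain seconds passed in by the caller; nothing here is
    based on an actual avionics spec."""

    EMERGENCY_DELAY = 30  # seconds the flying pilot has to deny a keycode request

    def __init__(self):
        self.keypad_disabled_until = 0.0  # timestamp until which the keypad is dead

    def intercom_request(self, flying_pilot_buzzes_in: bool) -> bool:
        """Normal path: the pilot outside calls in, and the flying pilot
        buzzes the door open (or doesn't)."""
        return flying_pilot_buzzes_in

    def emergency_keycode(self, now: float, denied_within_30s: bool) -> bool:
        """Fallback path: the keycode opens the door only if the flying
        pilot neither denies the request within 30 seconds nor has
        disabled the keypad. Silence (e.g. incapacitation) unlocks."""
        if now < self.keypad_disabled_until:
            return False  # keypad switched off from inside
        return not denied_within_30s

    def disable_keypad(self, now: float, minutes: int) -> None:
        """The flying pilot can flip a switch to kill the keypad for
        5 to 20 minutes, and can do so repeatedly."""
        minutes = max(5, min(20, minutes))
        self.keypad_disabled_until = now + minutes * 60
```

Walking through this makes the Germanwings failure mode obvious: a conscious, malicious pilot simply ignores the intercom and keeps re-flipping `disable_keypad`, and every path for the outside pilot returns `False`. The design trades away that rare case to block the far more common one, an attacker in the cabin.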
And as Ross notes, this is a pretty reasonable tradeoff in nearly all circumstances. It's quite difficult for someone bad to get in, and yet those in the cockpit can mostly be okay with leaving and getting back in even if a pilot remaining in the cockpit suddenly drops dead. But there is still one scenario in which that security gets totally messed up — and it played out on Germanwings Flight 9525 almost a year ago, when a mentally ill co-pilot locked the captain out of the cockpit and then deliberately crashed the plane into a mountain.
As Time Magazine noted, this is the tricky part of security systems: "sometimes it's important to keep people out; sometimes it's important to get inside."
And, of course, there's a little of that in the Apple v. FBI fight. The FBI is arguing that it's important to let people in, because a husband and wife killed 14 people and wounded many more. But lots of other people are pointing out that there are much bigger security benefits in keeping people out. And that's why this is really a debate about "security v. security" rather than "security v. privacy."
Strong encryption on devices is like that locked cockpit door. Under most scenarios, it keeps people much safer. It's a useful and powerful security feature. But, yes, in some cases — such as that of the suicidal Germanwings co-pilot — it is less secure. And there do seem to be ways to mitigate that kind of risk without harming the wider security (many airlines now require that even if someone leaves the cockpit, a second crew member must be present in the cockpit). But, in the end, we have to weigh the probabilities. And it's not hard to realize that, in the grand scheme of things, locking people out protects many, many, many more people than it endangers in the rare instances of suicidal co-pilots (or quasi-terrorist attacks).
And that's the real issue here. Strong encryption on our devices provides far more protection and security, for far more people, than weak encryption would. Nearly all of us are likely to be safer because of strong encryption. But that might not include everyone. Yes, there will be some instances — though likely few and far between — where such encryption allows someone to secretly plan and (potentially) get away with some sort of heinous act. And it will be reasonable and expected that people will whine and complain about how the security feature got in the way of stopping that attack. But the likelihood of that is much, much smaller than the very real possibility of attacks on weak phones affecting many of us.
Or, as Ross concludes (in a way that makes even more sense if you read the whole piece…):
Unfortunately it's not that complicated, which means it's not that simple. Unbreakable phones are coming. We'll have to decide who controls the cockpit: The captain? Or the cabin? Either choice has problems, but (I'm sorry, Aunt Congress) you crash if you pick 2.
But when you have people like the technically ignorant San Bernardino District Attorney Michael Ramos insisting that he needs to be able to get into that iPhone, just recognize that he’s arguing that we should unlock cockpit doors just in case there’s a suicidal co-pilot in there, without recognizing how frequently such unlocked cockpit doors will be used by others who wish to do even more harm.