Oooohhhhh, that makes it real easy then. Just EO-up a new branch of the military and presto! No need to worry about those pesky laws and acts that were only created to protect terrorists anyway. Why didn't I think of that?
So far they haven't been ordered; the court has given Apple five days to respond as to why compliance would be "too burdensome". I will wait until I hear the court's reasoning on the eventual order (or lack of one) before I start blaming the court system in this case.
You raise a lot of good points, but trusting the people in government - and even the government itself - to not abuse this in the future is foolish and has already been proven unsound.
I am *almost* hoping Apple loses this battle; it would just force Apple to commit to the next step of wiping the key on firmware update. I think it would be possible. Perhaps if the correct pin is entered, the decrypted data key could be reencrypted with a temporary ephemeral key and stored in the main CPU registers so it can be fed back to the firmware after the flash. This would be extremely complicated, very brittle, and therefore difficult to get right.
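The re-wrap step described above might look roughly like this toy Python sketch. To be clear, this is my own illustration, not anything Apple ships: XOR against a random pad stands in for real authenticated encryption, and the function name is invented.

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Toy stand-in for a real cipher; never use bare XOR for key wrapping."""
    return bytes(x ^ y for x, y in zip(a, b))

def reflash_preserving_key(data_key: bytes) -> bytes:
    """Hypothetical flow: wrap the decrypted data key under a fresh
    ephemeral key so it can survive the enclave wiping itself on reflash."""
    ephemeral_key = secrets.token_bytes(len(data_key))
    wrapped = xor_bytes(data_key, ephemeral_key)  # parked in main CPU registers
    del data_key                                  # enclave zeroizes its plaintext copy
    # ... firmware flash happens here; the enclave's stored keys are gone ...
    return xor_bytes(wrapped, ephemeral_key)      # new firmware unwraps the key
```

The brittleness is easy to see even in the sketch: both `wrapped` and `ephemeral_key` have to survive outside the enclave for the duration of the flash, and losing either one bricks the data.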
I worry though that in losing this battle there might be some overreaching order by the government prohibiting Apple (and any other US based company) from progressing in security. That would be the final blow. So I am still also kinda hoping they win and we can get a good precedent about how much a court can compel action. I doubt that we would get a good precedent out of this though.
Apple owns the code, the government owns the phone; the dead guy was just the user. (I need to rewatch Cory Doctorow's "Civil War on General Purpose Computing". He raised a lot of these types of questions.) I think if the court rules against Apple, it will probably be on these grounds, and by virtue of the necessity of the method of execution, compelled action will be deemed justifiable.

That would open up so many avenues for government overreach, it's frightening. Canary warnings could be made useless - the government could just compel you to sign the "everything is still OK" message. Journalists could be compelled to promulgate government propaganda to hide illegal activities. Corporations could be compelled to install spyware on their machines to spy on their employees. A pharmacist could be compelled to supply the government with the drugs necessary to execute convicts. And of course any tech company could be compelled to sign and / or install a backdoor into the software or devices they produce and / or package.

Hopefully, some of these things may be explicitly deemed out of bounds. But how much will fall in bounds as collateral damage? Because if compelled action is deemed justifiable across the board - or even to a significant extent - then this experiment in self-governance is over. Once you combine compelled action with our existing framework of secret courts and gag orders, there is no way for the government to regain the trust and legitimacy necessary to govern by the will of the people. It will have been irrevocably shattered. Compelled inaction has done enough damage already.
I am kinda glad this is happening though; this is one of the key debates that has needed to happen for a while now. Unfortunately it's coming up during the election circus, so none of the talking heads with the most focus - the candidates - can be trusted to speak the truth on this. I am extremely disappointed in Sanders, but not terribly surprised. I don't recall which issue it was, but he has been silent like this before.
I would love to see that. I've seen a few presentations (DEFCON and the like) about defeating shoddy TPM chips. It's quite fascinating. The trust zone is essentially a TPM chip, so I'd imagine the same sorts of hardware tamper resistance have been built into it (which is why you can't just pop the cover off and hook up a JTAG probe).
I don't know whether their trust zone is breakable, but the consensus is that if you physically tamper with it to "read" the bits directly, the data retrieval rate is less than 100%. And of course the process physically destroys the chip. No second chances. And the trust zone is actually part of the main CPU die, so that may complicate matters further.
But as far as software / firmware goes, yeah the trust zone is the weak link. This whole article talks about how we were all mistaken in one assumption. The common wisdom would say that the trust zone should wipe the data encryption key if told to reflash itself. But it doesn't. Normal operation wouldn't be affected, because the flash rewriter could feed the data encryption key back to it.
Ok; here's a quick (probably a little oversimplified) rundown. [I'm only really speaking to the modern implementation, there are likely details that are wrong about the specific phone in question.] It doesn't really matter that it knows or doesn't know that it is running in an emulator.
There are two keys involved: the key encryption key and the data encryption key. The data encryption key is itself encrypted and stored in the secure portion of the CPU - the "trust zone" or "secure enclave". The data encryption key is used to encrypt and decrypt all the "main" information on the phone.
The key encryption key is used to encrypt / decrypt the data encryption key. The key encryption key is generated using several inputs - the PIN and some hardware identifiers. Some of these hardware identifiers are only stored in the trust zone. The boot code calls upon the trust zone and feeds it user input (the PIN code), the trust zone combines that with hardware identifiers, and runs it through some super expensive (in terms of time and RAM) math to recreate the key encryption key. The key encryption key is used to decrypt the data encryption key and some known string of bits. The data encryption key is handed back to the boot code after the known string of bits is verified as decrypted correctly. If the verification fails, the trust zone doesn't hand back anything. 10 wrong attempts and the trust zone wipes the encrypted data encryption key.
So far everyone agrees that it is practically impossible to extract the necessary hardware identifiers and/or the encrypted data encryption key from the trust zone. In short, cloning / emulating / simulating will not work, because components necessary to recreating the key encryption key will not be available. This is why it doesn't matter whether the code knows it's being emulated. All it has to do is attempt to decrypt something known beforehand; if the inputs were wrong (user PIN and / or machine identifiers), the attempt simply fails to decrypt the known value.
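The whole derive-decrypt-verify-wipe loop above can be sketched in a few dozen lines of toy Python. Caveats: PBKDF2 stands in for the actual "super expensive" derivation, XOR against the derived keystream stands in for the real cipher, and every name here (`Enclave`, `derive_kek`, `KNOWN_BITS`) is mine, not Apple's.

```python
import hashlib
import secrets

KNOWN_BITS = b"decryption-check"  # known value verified after decryption

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Toy stand-in for the real cipher."""
    return bytes(x ^ y for x, y in zip(a, b))

def derive_kek(pin: str, hw_id: bytes) -> bytes:
    # PBKDF2 as a stand-in for the expensive (time + RAM) key derivation.
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), hw_id, 100_000,
                               dklen=len(KNOWN_BITS) + 32)

class Enclave:
    MAX_TRIES = 10

    def __init__(self, pin: str, data_key: bytes):
        self.hw_id = secrets.token_bytes(16)  # identifiers that never leave the enclave
        # Store the known value + data key, encrypted under the derived KEK.
        self.blob = xor_bytes(KNOWN_BITS + data_key, derive_kek(pin, self.hw_id))
        self.tries = 0

    def unlock(self, pin: str):
        if self.blob is None:
            return None  # encrypted data key already wiped
        plain = xor_bytes(self.blob, derive_kek(pin, self.hw_id))
        if plain[:len(KNOWN_BITS)] == KNOWN_BITS:
            self.tries = 0
            return plain[len(KNOWN_BITS):]  # hand the data key back to the boot code
        self.tries += 1
        if self.tries >= self.MAX_TRIES:
            self.blob = None  # 10 strikes: wipe the encrypted data encryption key
        return None
```

Note why emulation buys nothing here: without `hw_id` and `blob`, which never leave the enclave, an emulator can run the code all day and the check against `KNOWN_BITS` will simply keep failing.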
Running an old coinop is in no way comparable, because those old coinops did not have 1) data that could not feasibly be replicated, nor 2) code whose math would only yield a correct value when run in the correct environment. If someone could clone the trust zone (with all the data), then this whole argument would be moot - the government could take any similar iPhone and clone the trust zone (and data) to it.
Yes, that would be the same Apple. And you can point out the hypocrisy. But that doesn't change the fact that *this* argument is about whether or not Apple should be forced to wrest control from the user. I don't blame you for being cynical, but your viewpoint contradicts the official narrative in this instance:
Mr. Cook and other Apple executives resolved not only to lock up customer data, but to do so in a way that would put the keys squarely in the hands of the customer, not the company.
It would have been interesting to know what Jobs would have done about a great many things starting with the Snowden revelations.
As I understand it, the court cannot extend the punishment beyond what is required to coerce the individual into complying, correct? So if the person refuses and also wipes their access from the system such that they cannot comply (basically nukes their profile from whatever database is used to store their login and permissions), what happens? Surely there would be some minimal sentence involved, but how severe would it be?
That's an interesting article to be sure, but I think it's falling into the false dichotomy fallacy. Apple isn't fighting to gain more control - "to become our lord and master" (if I may abuse the wording therein), but to allow the individual to *retain* control. Apple is arguing that it should not be forced to wrest control out of the hands of an individual.
And that is a true dichotomy. Either the individual has complete control, or they don't. They can, individually, choose to cede control, but it should still be their control to cede.
And if I understand correctly, there are still things that the US government cannot just 'writ' to obtain. If a person is ordered by a judge to divulge information, and the person refuses, the judge may hold the individual in contempt and impose a sentence for the purpose of coercion. But no amount of legal coercion can *force* a person to divulge the information. That information remains out of the judge's hands for as long as the contemnor resists the coercion. To say that nothing should be out of reach of the judge is to allow the suspension of the rights of the individual and the application of greater coercion than our current legal system affords.
DRM suffers from a different logical flaw in that it is impossible to both show someone something (the movie, the movie decryption key) and subsequently hide it from them. In this case, the FBI et al have not been "shown" anything yet.
Part of me desperately wants to educate these clueless politicians enough so they understand that a hyperlink is nothing more than the name of a webpage. That would make this law like enforcing "copyright protection" against people talking about movies, songs, books, etc etc.
But then I recoil in horror to think they might try to build this backwards philosophy predicated on publicity rights....
This is exactly the danger when people say not to worry about ridiculous laws "because it's not enforceable" or "they won't bust you unless you're an asshole". Shit like this becomes a convenient tool to be used for petty vengeance.
that's not quite right. if the system works properly, both sides receive justice, it's just that one side generally doesn't like it. but you are right that equating the outcome of "the victims not being vindicated" with that of "the victims not receiving justice" is redefining "justice" to mean "vengeance".
that's a nice theory, but the reason encryption is so easily tied to horrible things is because people naturally want to hide the horrible things they do from others (notably law enforcement), and encryption is the tool to use to hide details of communication. I'm having a hard time seeing how to tie the *lack* of encryption to any existing natural tendency in connection with people doing horrible things.