I would love to see that. I've seen a few presentations (DEFCON and the like) about defeating shoddy TPM chips. It's quite fascinating. The trust zone is essentially a TPM chip, so I'd imagine the same sorts of hardware tamper resistance have been built into it (which is why you can't just pop the cover off and hook up a JTAG probe).
I don't know if their trust zone is or is not breakable, but the consensus is that if you physically tamper with it to physically "read" the bits, the data retrieval is less than 100%. And of course the process physically destroys the chip. No second chances. And the trust zone is actually part of the main CPU die, so that may or may not complicate matters a bit.
But as far as software / firmware goes, yeah the trust zone is the weak link. This whole article talks about how we were all mistaken in one assumption. The common wisdom would say that the trust zone should wipe the data encryption key if told to reflash itself. But it doesn't. Normal operation wouldn't be affected, because the flash rewriter could feed the data encryption key back to it.
Ok; here's a quick (probably a little oversimplified) rundown. [I'm only really speaking to the modern implementation, there are likely details that are wrong about the specific phone in question.] It doesn't really matter that it knows or doesn't know that it is running in an emulator.
There are two keys involved: the key encryption key and the data encryption key. The data encryption key is itself encrypted and stored in the secure portion of the CPU - the "trust zone" or "secure enclave". The data encryption key is used to encrypt and decrypt all the "main" information on the phone.
The key encryption key is used to encrypt / decrypt the data encryption key. The key encryption key is generated using several inputs - the PIN and some hardware identifiers. Some of these hardware identifiers are only stored in the trust zone. The boot code calls upon the trust zone and feeds it user input (the PIN code), the trust zone combines that with hardware identifiers, and runs it through some super expensive (in terms of time and RAM) math to recreate the key encryption key. The key encryption key is used to decrypt the data encryption key and some known string of bits. The data encryption key is handed back to the boot code after the known string of bits is verified as decrypted correctly. If the verification fails, the trust zone doesn't hand back anything. 10 wrong attempts and the trust zone wipes the encrypted data encryption key.
So far everyone agrees that it is practically impossible to extract the necessary hardware identifiers and/or the encrypted data encryption key from the trust zone. In short, cloning / emulating / simulating will not work, because the components necessary for recreating the key encryption key will not be available. This is why it doesn't matter if the code knows whether or not it's being emulated. All it has to do is attempt to decrypt something known beforehand; if the inputs (user PIN and/or machine identifiers) were wrong, the known value simply fails to decrypt.
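The flow above can be sketched as a toy model. To be clear, everything here is my own invention for illustration - the names, the XOR "cipher", and scrypt standing in for whatever expensive KDF the secure element actually uses. This is not Apple's implementation, just the shape of the argument: wrong inputs don't produce an error from a gatekeeper, they simply fail to decrypt the known bits.

```python
import hashlib

# Hypothetical constants -- in a real device the hardware secret is fused
# into the trust zone and never leaves it.
HW_SECRET = b"hardware-identifiers-fused-into-the-trust-zone"
MAGIC = b"KNOWN_BITS"  # the "known string of bits" verified after decryption


def derive_kek(pin: str) -> bytes:
    """Deliberately expensive (time and RAM) derivation of the key
    encryption key from the PIN plus hardware identifiers."""
    return hashlib.scrypt(pin.encode(), salt=HW_SECRET,
                          n=2**14, r=8, p=1, dklen=32)


def xor_stream(key: bytes, data: bytes) -> bytes:
    """Toy symmetric 'cipher': XOR with a SHA-256-derived keystream.
    (A real implementation would use AES; this keeps the sketch stdlib-only.)"""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))


class TrustZone:
    """Models only the wrap/unwrap behavior described above."""

    def __init__(self, pin: str, dek: bytes):
        kek = derive_kek(pin)
        # Only the *encrypted* DEK (with the known bits prepended) is stored.
        self.blob = xor_stream(kek, MAGIC + dek)
        self.attempts_left = 10

    def unwrap_dek(self, pin: str):
        if self.blob is None:
            return None  # encrypted DEK already wiped -- no second chances
        plain = xor_stream(derive_kek(pin), self.blob)
        if plain[:len(MAGIC)] != MAGIC:
            # The known bits failed to decrypt: wrong PIN (or wrong hardware).
            self.attempts_left -= 1
            if self.attempts_left == 0:
                self.blob = None  # wipe the encrypted data encryption key
            return None
        self.attempts_left = 10
        return plain[len(MAGIC):]  # hand the DEK back to the boot code
```

Note that a cloned or emulated trust zone without `HW_SECRET` derives the wrong KEK on every guess, so every decryption attempt fails the `MAGIC` check - exactly why emulation detection is irrelevant.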
Running an old coinop is in no way comparable, because those old coinops did not have 1) data that could not feasibly be replicated, and therefore 2) code whose mathematics would only yield a correct value when run in the correct environment. If someone could clone the trust zone (with all the data), then this whole argument would be moot - the government could take any similar iPhone and clone the trust zone (and data) onto it.
Yes, that would be the same Apple. And you can point out the hypocrisy. But that doesn't change the fact that *this* argument is about whether or not Apple should be forced to wrest control from the user. I don't blame you for being cynical, but your viewpoint contradicts the official narrative in this instance:
Mr. Cook and other Apple executives resolved not only to lock up customer data, but to do so in a way that would put the keys squarely in the hands of the customer, not the company.
It would have been interesting to know what Jobs would have done about a great many things starting with the Snowden revelations.
As I understand it, the court cannot extend the punishment beyond what is required to coerce the individual into complying, correct? So if the person refuses and also wipes their access from the system such that they cannot comply (basically nukes their profile from whatever database is used to store their login and permissions), what happens? Surely there would be some minimal sentence involved, but how severe would it be?
That's an interesting article to be sure, but I think it's falling into the false dichotomy fallacy. Apple isn't fighting to gain more control - "to become our lord and master" (if I may abuse the wording therein), but to allow the individual to *retain* control. Apple is arguing that it should not be forced to wrest control out of the hands of an individual.
And that is a true dichotomy. Either the individual has complete control, or they don't. They can, individually, choose to cede control, but it should still be their control to cede.
And if I understand correctly, there are still things that the US government cannot just 'writ' to obtain. If a person is ordered to divulge information by the judge, and the person refuses, the judge may hold the individual in contempt and impose a sentence for the purpose of coercion. But no amount of legal coercion can *force* a person to divulge the information. That information is out of the judge's hands for so long as the contemnor resists the coercion. To say that nothing should be out of reach of the judge is to allow the suspension of the individual's rights and the application of greater coercion than our current legal system affords.
DRM suffers from a different logical flaw in that it is impossible to both show someone something (the movie, the movie decryption key) and subsequently hide it from them. In this case, the FBI et al have not been "shown" anything yet.
Part of me desperately wants to educate these clueless politicians enough so they understand that a hyperlink is nothing more than the name of a webpage. That would make this law like enforcing "copyright protection" against people talking about movies, songs, books, etc etc.
But then I recoil in horror to think they might try to build this backwards philosophy predicated on publicity rights....
This is exactly the danger when people say not to worry about ridiculous laws "because it's not enforceable" or "they won't bust you unless you're an asshole". Shit like this becomes a convenient tool to be used for petty vengeance.
that's not quite right. if the system works properly, both sides receive justice, it's just that one side generally doesn't like it. but you are right that equating the outcome of "the victims not being vindicated" with that of "the victims not receiving justice" is redefining "justice" to mean "vengeance".
that's a nice theory, but the reason encryption is so easily tied to horrible things is because people naturally want to hide the horrible things they do from others (notably law enforcement), and encryption is the tool to use to hide details of communication. I'm having a hard time seeing how to tie the *lack* of encryption to any existing natural tendency in connection with people doing horrible things.
unfortunately that's rarely how it works. bad laws based on bad philosophy almost never get repealed - they just get "refined" to not apply to this one hyperspecific circumstance, or they get broadened and create the potential for much more collateral damage. it reduces to trying to convince the legislative body that they made a mistake (or acted maliciously).
I believe it's already been decided that FOIA requests are themselves not subject to FOIA - otherwise you could request all the documents relating to the strategy notes of how they planned to foil your FOIA.
> The crime is not failing to get a warrant. The crime is lying about the situation on the stand, or to the judge when you give a sworn statement to ask for a warrant.
As this act was carried out by a member of the law enforcement community, I think, and correct me if I am wrong, that it is, technically, by definition, not a crime. At least that's the only justification I can think of for charges not being immediately filed against this fine upstanding officer of the law.
> But that could easily be solved by posting the same document in its original format at the agency's website.
They don't even have to go that far. All they need to do is sign the damn thing. You would think (if you are a sane, rational being) that there should be a government agency charged with the mandate of improving safe and effective information technology practices (perhaps some sort of national cybersecurity division headed by some sort of national security agency), and could easily train other government agencies on how to use strong encrypti--- oh, I think I see the problem.