You are implying that the two are somehow exclusive, as if setting an impossible goal somehow means that not achieving it is not a failure.
It's both. The reason it's a failure is that it's impossible. Unfortunately, beyond simple statistics, this is not a mathematical conundrum; the only way we know it's impossible is by looking at the results of every attempt to achieve it. Eventually, we can hope, everyone will conclude that it is indeed impossible; until then, all we can do is point out the failures and make reasoned arguments that the only outcome of the current course of action is failure.
> > While today’s data is encouraging, the challenges facing us are significant. The consumption of music is skyrocketing, but revenues for creators have not kept pace.
There you have it. They want you to pay every time you hear a piece of music. If the technology existed that would let them constantly monitor your brain and extract a fee from your account every time your auditory cortex registered a covered work, they would fight tirelessly to subject every person to it.
> Being forced to hand over the keys so FBI-OS can sign its code with them is equivalent to being forced to endorse FBI-OS. That would be compelled speech....
That's arguable. It remains to be seen whether a court would see it that way. Remember Ladar Levison. The fact that HTTPS connections are designed to guarantee authenticity, and rely on digital signatures to attest to that authenticity, might be enough to equate a digital signature with consent or endorsement. If the court buys that equivalency, and the precedent stands, then compelled speech is a done deal.
If we transfer from cyberspace to meatspace, the government could compel you to sign a false confession. Hell, they may as well be allowed to compel you to endorse or vote for someone of their choosing. Truly terrifying.
On the other hand, if the courts block this type of compelled speech, I can see the FBI demanding that all future systems that accept software updates must not be able to refuse an unsigned update, or must accept one signed by a different key. Imagine all future TPM chips having a government key baked in.
On a third hand, the court may not make that leap to equivalency, in which case we (the People) could lose this battle. To combat that legal attack in the future, the tech community could add a cert flag that makes the endorsement explicit (sketched below). Kinda like how Linux kernel authors signal their intent as to which license a symbol is usable under (EXPORT_SYMBOL vs EXPORT_SYMBOL_GPL). These methods have not been tested in court, however, and they may be shot down.
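To make that concrete, here's a minimal sketch (Python, using the `cryptography` package) of what an explicit-endorsement flag could look like: the signed payload carries a declared intent alongside the code hash, so a signature can attest to integrity without implying endorsement. Every field name and intent string here is hypothetical; nothing like this is standardized, and it's just as untested in court.

```python
# Hypothetical sketch: a signature whose signed payload declares intent,
# so "integrity only" and "endorsed" are cryptographically distinguishable.
# Field names and intent strings are made up for illustration.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()

def sign_with_intent(code_hash: bytes, intent: str) -> tuple[bytes, bytes]:
    payload = json.dumps({
        "code_sha256": code_hash.hex(),
        "intent": intent,  # e.g. "authored-and-endorsed" vs "integrity-only"
    }).encode()
    return payload, signing_key.sign(payload)

# A vendor's normal release carries an explicit endorsement...
release_payload, release_sig = sign_with_intent(b"\x01" * 32, "authored-and-endorsed")
# ...while a compelled signature could at most attest to integrity.
compelled_payload, compelled_sig = sign_with_intent(b"\x02" * 32, "integrity-only")

# Verification checks both the signature and the declared intent.
verify_key.verify(release_sig, release_payload)  # raises InvalidSignature on failure
assert json.loads(release_payload)["intent"] == "authored-and-endorsed"
```

The point being that a verifier (or a judge) could then no longer conflate "this signature proves the bits are intact" with "this signature means the vendor stands behind the code."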
> This argument is particularly maddening: basically continuing the ridiculous line of thinking that protecting user privacy is some sort of deliberate marketing strategy against the government
At this point, the two are kinda synonymous; protecting my privacy necessarily pits all involved against the government, and it's entirely due to tactics like this.
> But again, Apple has always been willing to respond to legitimate government requests for information that it has access to. That's why that same chart shows that it complied with 81% of US requests as well. But that says absolutely nothing about the requirement to build a special system to hack in and access data that it does not currently have access to.
The tech companies are getting raped here, and the perpetrator is claiming to the judge that it's not rape 'cause "they gave me a handjob before; consent was already given!".
I hesitate a bit to draw a comparison to rape here. But seriously. We, The People, are getting fucked.
You know what, he's got a point. Think about that next time the words "slippery slope fallacy" spring to mind when you see an argument about government overreach. Slippery slope? Yeah. Fallacy? Not when the government greases the skids and hops behind the wheel....
Oooohhhhh, that makes it real easy then. Just EO-up a new branch of the military and presto! No need to worry about those pesky laws and acts that were only created to protect terrorists anyway. Why didn't I think of that?
So far Apple hasn't been ordered; the court has given it 5 days to respond as to why compliance would be "too burdensome". I will wait till I hear the court's reasoning on the eventual order (or lack of one) before I start blaming the court system in this case.
You raise a lot of good points, but trusting the people in government - and even the government itself - to not abuse this in the future is foolish and has already been proven unsound.
I am *almost* hoping Apple loses this battle; it would just force Apple to commit to the next step: wiping the key on firmware update. I think it would be possible. Perhaps, if the correct PIN is entered, the decrypted data key could be re-encrypted with a temporary ephemeral key and stored in the main CPU registers so it can be fed back to the firmware after the flash (roughly as sketched below). This would be extremely complicated, very brittle, and therefore difficult to get right.
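Purely to illustrate the shape of that idea (this is my speculation, not anything Apple has said it implements), the handoff could look roughly like this toy model, where a one-time ephemeral key stands in for "state that lives only in CPU registers":

```python
# Speculative sketch of a key handoff across a reflash: the decrypted data
# key is wrapped under a one-time ephemeral key that exists only in volatile
# state (standing in for CPU registers), then unwrapped after the flash.
# Nothing here reflects any real firmware; it's a toy model of the idea.
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def before_flash(data_key: bytes) -> tuple[bytes, bytes]:
    ephemeral = secrets.token_bytes(len(data_key))  # would live only in registers
    wrapped = xor_bytes(data_key, ephemeral)        # one-time pad, never reused
    return ephemeral, wrapped

def after_flash(ephemeral: bytes, wrapped: bytes) -> bytes:
    return xor_bytes(wrapped, ephemeral)            # feed the key back to new firmware

dek = secrets.token_bytes(32)
eph, blob = before_flash(dek)
assert after_flash(eph, blob) == dek
```

The brittleness is exactly where you'd expect: lose power mid-flash and the ephemeral key is gone, along with the data.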
I worry, though, that in losing this battle there might be some overreaching order by the government prohibiting Apple (and any other US-based company) from making progress in security. That would be the final blow. So I am still also kinda hoping they win and we get a good precedent about how much a court can compel action. I doubt we would get a good precedent out of this, though.
Apple owns the code, the government owns the phone; the dead guy was just the user. (I need to rewatch Cory Doctorow's "The Coming Civil War over General Purpose Computing". He raised a lot of these types of questions.) I think if the court rules against Apple, it will probably be on these grounds, and by virtue of the necessity of the method of execution, compelled action will be deemed justifiable.

That would open up so many avenues for government overreach, it's frightening. Warrant canaries could be made useless - the government could just compel you to sign the "everything is still OK" message. Journalists could be compelled to promulgate government propaganda to hide illegal activities. Corporations could be compelled to install spyware on their machines to spy on their employees. A pharmacist could be compelled to supply the government with the drugs necessary to execute convicts. And of course any tech company could be compelled to sign and / or install a backdoor into the software or devices it produces and / or packages.

Hopefully, some of these things will be explicitly deemed out of bounds. But how much will fall in bounds as collateral damage? Because if compelled action is deemed justifiable across the board - or even to a significant extent - then this experiment in self-governance is over. Once you combine compelled action with our existing framework of secret courts and gag orders, there is no way for the government to regain the trust and legitimacy necessary to govern by the will of the people. It will have been irrevocably shattered. Compelled inaction has done enough damage already.
I am kinda glad this is happening though; this is one of the key debates that has needed to happen for a while now. Unfortunately it's coming up on the election circus show, so none of the talking heads with the most focus - the candidates - can be trusted to speak the truth on this. I am extremely disappointed in Sanders, but not terribly surprised. I don't recall which issue it was, but he has been silent on it.
I would love to see that. I've seen a few presentations (DEF CON and the like) about defeating shoddy TPM chips; it's quite fascinating. The trust zone is essentially a TPM, so I'd imagine the same sorts of hardware tamper resistance have been built into it (which is why you can't just pop the cover off and hook up a JTAG probe).
I don't know if their trust zone is or is not breakable, but the consensus is that if you physically tamper with it to physically "read" the bits, the data retrieval is less than 100%. And of course the process physically destroys the chip. No second chances. And the trust zone is actually part of the main CPU die, so that may or may not complicate matters a bit.
But as far as software / firmware goes, yeah the trust zone is the weak link. This whole article talks about how we were all mistaken in one assumption. The common wisdom would say that the trust zone should wipe the data encryption key if told to reflash itself. But it doesn't. Normal operation wouldn't be affected, because the flash rewriter could feed the data encryption key back to it.
Ok; here's a quick (probably somewhat oversimplified) rundown. [I'm really only speaking to the modern implementation; some details here are likely wrong for the specific phone in question.] It doesn't really matter whether the code knows it is running in an emulator.
There are two keys involved: the key encryption key and the data encryption key. The data encryption key is itself encrypted and stored in the secure portion of the CPU - the "trust zone" or "secure enclave". The data encryption key is used to encrypt and decrypt all the "main" information on the phone.
The key encryption key is used to encrypt / decrypt the data encryption key. It is generated from several inputs - the PIN and some hardware identifiers, some of which are stored only in the trust zone. The boot code calls upon the trust zone and feeds it the user input (the PIN); the trust zone combines that with the hardware identifiers and runs the result through some very expensive (in terms of time and RAM) math to recreate the key encryption key. That key is then used to decrypt the data encryption key along with a known string of bits. Only after the known string is verified to have decrypted correctly is the data encryption key handed back to the boot code. If verification fails, the trust zone hands back nothing; 10 wrong attempts and it wipes the encrypted data encryption key.
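Here's a toy model of that flow, under some loud assumptions: scrypt stands in for whatever expensive KDF is actually used, XOR-plus-HMAC stands in for real authenticated encryption, and the MAC check plays the role of verifying the "known string of bits". All names and constants are invented for illustration.

```python
# Toy model of the unlock flow described above; not Apple's actual design.
import hashlib
import hmac
import secrets

MAX_ATTEMPTS = 10

def derive_kek(pin: bytes, hardware_uid: bytes) -> bytes:
    # Deliberately expensive in time and RAM, so guessing PINs is slow.
    return hashlib.scrypt(pin, salt=hardware_uid, n=2**14, r=8, p=1, dklen=32)

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

class ToyTrustZone:
    """Holds the hardware identifier and the encrypted data encryption key;
    wipes the latter after ten wrong PIN attempts."""

    def __init__(self, pin: bytes):
        self.hardware_uid = secrets.token_bytes(32)  # never leaves this object
        self.attempts = 0
        data_key = secrets.token_bytes(32)
        kek = derive_kek(pin, self.hardware_uid)
        # Wrap the data key plus a MAC, so a wrong KEK is detectable.
        self.wrapped = (xor_bytes(data_key, kek)
                        + hmac.new(kek, data_key, "sha256").digest())

    def unwrap_data_key(self, pin: bytes) -> bytes | None:
        if self.wrapped is None:
            return None  # already wiped; the data is gone for good
        kek = derive_kek(pin, self.hardware_uid)
        body, tag = self.wrapped[:32], self.wrapped[32:]
        candidate = xor_bytes(body, kek)
        if hmac.compare_digest(tag, hmac.new(kek, candidate, "sha256").digest()):
            self.attempts = 0
            return candidate  # hand the data key back to the boot code
        self.attempts += 1
        if self.attempts >= MAX_ATTEMPTS:
            self.wrapped = None  # wipe the encrypted data encryption key
        return None

tz = ToyTrustZone(pin=b"1234")
assert tz.unwrap_data_key(b"0000") is None      # wrong PIN: verification fails
assert tz.unwrap_data_key(b"1234") is not None  # right PIN: key handed back
```

The point the sketch makes: without `hardware_uid`, no amount of emulation can recompute the key encryption key, so brute force has to happen on the device itself.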
So far everyone agrees that it is practically impossible to extract the necessary hardware identifiers and/or the encrypted data encryption key from the trust zone. In short, cloning / emulating / simulating will not work, because the components necessary to recreate the key encryption key will not be available. This is why it doesn't matter whether the code knows it's being emulated: all it has to do is attempt to decrypt something known beforehand, and if the inputs were wrong (user PIN and / or machine identifiers), that decryption of the known value simply fails.
Running an old coinop is in no way comparable, because those old coinops did not have 1) data that could not feasibly be replicated, and therefore 2) code whose math would only yield a correct value when run in the correct environment. If someone could clone the trust zone (with all its data), then this whole argument would be moot - the government could take any similar iPhone and clone the trust zone (and data) onto it.
Yes, that would be the same Apple. And you can point out the hypocrisy. But that doesn't change the fact that *this* argument is about whether or not Apple should be forced to wrest control from the user. I don't blame you for being cynical, but your viewpoint contradicts the official narrative in this instance:
> Mr. Cook and other Apple executives resolved not only to lock up customer data, but to do so in a way that would put the keys squarely in the hands of the customer, not the company.
It would have been interesting to know what Jobs would have done about a great many things, starting with the Snowden revelations.