Flaw Discovered In Apple iMessage Encryption, Reminding Us That Compelled Backdoors Are Idiotic

from the encryption-is-hard dept

One of the points that seems to be widely misunderstood by people who don’t spend much time in the computer security world is that building secure encryption systems is really hard, and almost everything has some sort of vulnerability somewhere. That’s why security researchers, cryptographers and security engineers are in a constant struggle to poke holes in encryption systems and then patch them up. It’s also why the demand for backdoors is idiotic: holes probably already exist in some form, and purposely building in backdoors that, by law, can’t be closed almost certainly blasts open much larger ones for those with nefarious intent to get in.

Case in point: over the weekend, computer science professor Matthew Green and some other researchers announced that they’d discovered a serious hole in the encryption used for Apple’s iMessage platform, allowing a sophisticated hacker to access encrypted messages and pictures. And Green, who has been vocal about the ridiculousness of the DOJ’s request against Apple, notes how this is yet more evidence that the DOJ’s request is a bad idea:

“Even Apple, with all their skills - and they have terrific cryptographers - wasn’t able to quite get this right,” said Green, whose team of graduate students will publish a paper describing the attack as soon as Apple issues a patch. “So it scares me that we’re having this conversation about adding back doors to encryption when we can’t even get basic encryption right.”

It’s worth noting that the flaw that he and his team found would not have helped the FBI get what it wants off of Syed Farook’s iPhone, but it’s still a reminder of just how complex cryptography currently is, at a time when people are trying to keep everyone out. Offer up any potential backdoor, and you’re almost certainly blasting major holes throughout the facade.

Apple is getting ready to push out a software update that will fix the flaw shortly. And this, alone, is yet another reason why the DOJ’s case is so dangerous — since the method it wants to use to get into Farook’s phone is via its capabilities to push software updates. Patching software holes is a major reason to accept regular software updates, but the FBI is now trying to co-opt that process to install unsafe code. That, in turn, may prompt people to avoid software updates altogether, which in most cases will make them less safe.
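To make concrete why that update channel is the prize here: devices only accept updates carrying a valid signature from the vendor’s private key, which is exactly why the FBI needs Apple rather than writing the code itself. Below is a rough, hypothetical sketch of that check, using Ed25519 via the third-party cryptography package; the names and the specific algorithm are illustrative assumptions, not Apple’s actual mechanism.

```python
# Requires the third-party "cryptography" package (an assumption for this sketch).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Vendor side: the private signing key never leaves the vendor.
signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()      # the public half is baked into every device

update_image = b"firmware: fixes iMessage attachment flaw"
signature = signing_key.sign(update_image)

# Device side: refuse any image whose signature does not verify.
def install(image: bytes, sig: bytes) -> bool:
    try:
        verify_key.verify(sig, image)
    except InvalidSignature:
        return False      # unsigned or tampered update: rejected
    return True           # signature checks out: proceed with install

print(install(update_image, signature))                          # True
print(install(b"firmware with the safeguards removed", signature))  # False
```

The whole security of that scheme rests on users trusting, and therefore installing, signed updates; conscripting the signer to ship weakened code undermines exactly that trust.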


Comments on “Flaw Discovered In Apple iMessage Encryption, Reminding Us That Compelled Backdoors Are Idiotic”

9 Comments
DannyB (profile) says:

It's still not the same thing

In this case, a software implementation can be patched to make it more secure. Or less secure.

The issue with the FBI is that they want to conscript Apple to build something unprecedented. That can only be done because it is possible to patch the firmware of the component which holds the secret keys.

I’m sure Apple and others are working on ways to ensure that the hardware component which keeps the secrets secret cannot be patched at all. The simpler this component, the easier it is to get right the first time, so that patches are never needed. I don’t think Apple ever believed it would need to patch this component of the system, yet patching it turned out to be possible.

If the secure component that imposes the time delays and the maximum number of bad password attempts cannot be patched, then what will the FBI do in the future when there really is no way to fix this with a software update?

Will the world come to an end because a few bad people can use iPhones? Probably not any more likely than if they used other devices to communicate privately.
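For illustration, here is a minimal sketch, in Python with made-up numbers, of the kind of policy such a secure component enforces: an attempt counter, escalating delays, and key erasure after too many failures. The parameters are hypothetical, not Apple’s actual values.

```python
import hmac
import time

class ToySecureElement:
    """Toy model of a retry-limiting secure component (hypothetical parameters)."""

    MAX_ATTEMPTS = 10                                   # erase keys after this many failures
    DELAYS = [0, 0, 0, 0, 1, 5, 15, 60, 180, 600]       # seconds of delay per prior failure

    def __init__(self, passcode: str):
        self._passcode = passcode
        self._failures = 0
        self._wiped = False

    def try_unlock(self, guess: str) -> bool:
        if self._wiped:
            raise RuntimeError("keys erased; device must be restored")
        # Enforce an escalating delay based on how many failures so far.
        time.sleep(self.DELAYS[min(self._failures, len(self.DELAYS) - 1)])
        if hmac.compare_digest(guess, self._passcode):   # constant-time comparison
            self._failures = 0
            return True
        self._failures += 1
        if self._failures >= self.MAX_ATTEMPTS:
            self._wiped = True                           # real hardware: erase key material
        return False

se = ToySecureElement("1234")
print(se.try_unlock("0000"), se.try_unlock("1234"))      # False True (with growing delays)
```

The FBI’s request only works because, today, the code enforcing that policy can be replaced by a signed update.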

Anonymous Coward says:

Re: It's still not the same thing

… what will the FBI do in the future when there really is no way to fix this with a software update?

Nicholas Weaver claims that the baseband processor has the capability to overwrite iOS in memory.

“How the Feds Could Get Into iPhones Without Apple’s Help” by Kim Zetter, Wired, Mar 2, 2016

“Once you have the baseband exploited you’re able to bypass all that bruteforce protection and just try all the passwords that you want,” Weaver says. “If you take over the baseband, you have the ability to write to memory, which means you can take over the running operating system. And because the phone is running but locked, you take over that running but locked operating system and now you can do what the FBI wants to do, where you just keep trying PINs against the secure enclave until you get in…So you corrupt the root operating system to say, don’t do these protections.”

What Weaver doesn’t say is that the baseband processor is on physically separate silicon from the application processor. It is attached to the application processor by a physical, external bus.

When an attacker has complete physical control over the hardware, the baseband processor may be physically removed from the iPhone, and another device connected in its place. That replacement device might be built by that attacker, who would then naturally have full physical and logical control over it.

Apple might counter by encrypting and/or authenticating the signals on the external bus. But the iPhone is a battery-operated, consumer device.
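As a rough sketch of what “authenticating the signals on the external bus” could look like (a toy, not Apple’s actual design): both chips hold a shared key provisioned at manufacture, and every bus frame carries a counter plus a MAC, so a swapped-in replacement chip, which lacks the key, can neither forge nor replay traffic. All names and the frame layout below are made up for illustration.

```python
import hmac
import hashlib
import struct

# Hypothetical shared secret provisioned into both chips at manufacture.
BUS_KEY = bytes.fromhex("00112233445566778899aabbccddeeff")

def seal(counter: int, payload: bytes, key: bytes = BUS_KEY) -> bytes:
    """Frame a bus message: 8-byte counter || payload || 32-byte HMAC-SHA256 tag."""
    header = struct.pack(">Q", counter)
    tag = hmac.new(key, header + payload, hashlib.sha256).digest()
    return header + payload + tag

def open_frame(frame: bytes, last_counter: int, key: bytes = BUS_KEY):
    """Verify the tag and reject replays; return (counter, payload) or raise ValueError."""
    header, payload, tag = frame[:8], frame[8:-32], frame[-32:]
    expected = hmac.new(key, header + payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("bad MAC: frame not produced by a key-holding chip")
    counter = struct.unpack(">Q", header)[0]
    if counter <= last_counter:
        raise ValueError("replayed or stale frame")
    return counter, payload

# A replacement baseband without BUS_KEY cannot produce a frame that verifies.
frame = seal(1, b"unlock-attempt")
print(open_frame(frame, last_counter=0))
```

The comment’s caveat still applies: doing this on a battery-operated consumer device adds key management, cost and power overhead.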

Median Wilfred says:

Knock-on consequences of key escrow or similar

Suppose that western civilization decides that some kind of Golden Key cryptography and/or Legal Assistance backdoors are a Grrreat Idea(TM). I was thinking about the likely consequences of such practices.

First, open source goes away legally. Especially open source crypto implementations – they might not have the key escrow feature or the law enforcement access feature. I believe that if you follow this logically, all but the most trivial open source becomes illegal or legally questionable. Do we want this? I’m certain that Microsoft, Apple and Cisco would like it.

Second, if communications have to be decryptable on demand, with key escrow or a “golden key” or something, then ultimately law enforcement agencies will end up randomly sampling, or even universally checking, that what looks like golden-key communications actually is golden-key or escrowed-key communications, and not some other, illegal form of encrypted communication. This also seems like a bad idea. Getting interoperating DES implementations was notoriously difficult – what keeps SOCA or the FBI from charging some inept programmer with Illegal Cryptography just because said programmer made an implementation of some crypto algorithm that didn’t handle parity correctly, or initially set some, but not all, bits to zero? Those are items not explicitly spelled out in the Official Algorithm.
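For context on the parity remark: each byte of a 64-bit DES key uses its low-order bit as an odd-parity bit, so only 56 bits are actual key material – exactly the sort of detail an implementation can get subtly wrong. A quick illustrative sketch (plain Python, not tied to any particular DES library):

```python
def set_odd_parity(key: bytes) -> bytes:
    """Return an 8-byte DES key with each byte's low bit set for odd parity."""
    assert len(key) == 8, "DES keys are 64 bits (8 bytes)"
    out = bytearray()
    for b in key:
        high7 = b & 0xFE                                 # the 7 actual key bits
        ones = bin(high7).count("1")
        out.append(high7 | (0 if ones % 2 else 1))       # low bit makes the byte odd-parity
    return bytes(out)

def has_odd_parity(key: bytes) -> bool:
    """True if every byte of the key has an odd number of 1 bits."""
    return all(bin(b).count("1") % 2 == 1 for b in key)

raw = bytes.fromhex("0022446688aaccee")                  # parity bits all wrong
fixed = set_odd_parity(raw)
print(fixed.hex(), has_odd_parity(fixed))                # 0123456789abcdef True
```

The commenter’s worry is that a mandate could criminalize exactly this kind of easy-to-get-wrong detail.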

I see key escrow or golden key cryptography or mandated backdoors as ways to (maybe inadvertently) shut down any innovation not approved by governments and/or big corporations. Am I off base here?

Anonymous Coward says:

Not the messages themselves

They weren’t able to decrypt the messages themselves. The problem appears to be that Apple used a 64-bit key to encrypt the pictures/videos, although even that has largely changed in iOS 9, which now uses a 256-bit key. 9.3 will probably change that as well. I’m interested to see how they did it, though. Hopefully it wasn’t just brute forcing a 64-bit key.
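For a sense of scale, taking the comment’s 64-bit figure at face value, here is a back-of-the-envelope calculation of what pure brute force would cost; the guess rate is a hypothetical round number, not a measured one.

```python
keyspace_64 = 2 ** 64          # ~1.8e19 possible keys
guesses_per_second = 10 ** 9   # hypothetical: one billion trial decryptions per second

seconds_full = keyspace_64 / guesses_per_second
years_expected = seconds_full / 2 / (60 * 60 * 24 * 365)   # expect success halfway through
print(f"expected time: ~{years_expected:.0f} years")        # roughly 292 years

# A 256-bit key is not "4x harder" – it is 2**192 times harder still:
print(2 ** 192)
```

Which is why a protocol-level shortcut, like the one Green’s team found, matters far more than raw key length.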

ECA (profile) says:

IT'S ALWAYS ENTERTAINED ME

Over many years of using computers..
Over all the past claims in movies and by the Gov and telcos.. that everything was being hacked..

There is a lot of work placed in MOST software to protect devices from hacking, FOR A REASON..
Hardware can be defeated most times… because of 1 fact: customers @#$@ things up, and there has to be a way to reset the device so it can be USED again.
A hardware device that could NOT be reset would be a BRICK if someone forgets a password..

Software always has FLAWS.. If you could protect software from every form of hacking, you would have a BLOATED, SLOW piece of garbage.. The developers place code in it for testing, and for running around a game to see/do and do/fix STUFF, but SELDOM remove this code. And if they did, it would make it HARDER to fix/update a game or program..

THERE ARE tricks… using augmented hardware with good software can protect very well, until someone figures it out.. Do not THINK you are smarter than the person NEXT DOOR.. THINGS have to change on a regular basis to keep the system SAFE..

DON'T even think about DRM.. very little of it has ever worked.

Anonymous Coward says:

Chosen Ciphertext Attacks on Apple iMessage

Via Matthew Green

Dancing on the Lip of the Volcano: Chosen Ciphertext Attacks on Apple iMessage.
Christina Garman, Matthew Green, Ian Miers, Gabriel Kaptchuk, Michael Rushanan.

Abstract

Apple’s iMessage is one of the most widely-deployed end-to-end encrypted messaging protocols. Despite its broad deployment, the encryption protocols used by iMessage have never been subjected to rigorous cryptanalysis. In this paper, we conduct a thorough analysis of iMessage to determine the security of the protocol against a variety of attacks. Our analysis shows that iMessage has significant vulnerabilities that can be exploited by a sophisticated attacker. In particular, we outline a novel chosen ciphertext attack on Huffman compressed data, which allows retrospective decryption of some iMessage payloads in less than 2^18 queries. The practical implication of these attacks is that any party who gains access to iMessage ciphertexts may potentially decrypt them remotely and after the fact. We additionally describe mitigations that will prevent these attacks on the protocol, without breaking backwards compatibility. Apple has deployed our mitigations in the latest iOS and OS X releases.
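The actual attack against Huffman-compressed payloads is intricate, but the underlying failure mode – malleable, unauthenticated ciphertext combined with a receiver that leaks whether the result “parsed” – can be shown with a deliberately simplified toy. The stream cipher, the “message must end in a newline” format check, and every name below are made up for illustration; this is not the paper’s attack.

```python
import hashlib
import os

KEY = os.urandom(16)

def keystream(key: bytes, n: int) -> bytes:
    """Toy counter-mode keystream from SHA-256 (illustration only, not secure)."""
    out = bytearray()
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(out[:n])

def encrypt(key: bytes, data: bytes) -> bytes:
    return bytes(p ^ k for p, k in zip(data, keystream(key, len(data))))

decrypt = encrypt   # XOR stream cipher: same operation in both directions

def format_oracle(ciphertext: bytes) -> bool:
    """Receiver-side check the attacker can observe: does the frame end in '\\n'?"""
    pt = decrypt(KEY, ciphertext)
    return len(pt) > 0 and pt[-1] == 0x0A

# The attacker only sees this ciphertext and the oracle's yes/no answers.
secret = b"passcode reset token: 4815162342\n"
intercepted = encrypt(KEY, secret)

recovered = bytearray()
for i in range(len(intercepted)):
    # Truncate so position i is the last byte, then flip it until the check passes.
    for mask in range(256):
        forged = intercepted[:i] + bytes([intercepted[i] ^ mask])
        if format_oracle(forged):
            recovered.append(mask ^ 0x0A)   # P[i] ^ mask == 0x0A  =>  P[i] == mask ^ 0x0A
            break

print(bytes(recovered))   # full plaintext, at most 256 oracle queries per byte
```

The real attack needed far more care – the Huffman structure is what drives the roughly 2^18 query count – but the lesson is the same: without ciphertext authentication, even a one-bit “did it parse” signal can be fatal.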

Jim says:

Huh?

Then, answer me this. Apple acknowledged they did the same as they were asked to do on other phones they produced. So there is a back door into the system, and they know what and where it is. Now that is security through obscurity. But they openly acknowledged that fact. That means, to researchers and hackers: try it. So now we have a research paper, and a person walking into one of the offices. That’s two back doors, and one of them they may not know about. Interesting. And then giving the OS to the Chinese? I wonder if it’s as secure as SSB shortwave?
