CIA Leak Shows Mobile Phones Vulnerable, Not Encryption

from the and-cia-isn't-helping dept

As you've probably heard by now, this morning Wikileaks started releasing a new cache of information regarding CIA hacking tools. This is interesting on a variety of levels, but many of the reports focus on the claims that encrypted chat apps like Signal, WhatsApp and Telegram may be compromised. See the top two links in this screenshot.

Wikileaks itself may have contributed to this view with the following paragraph in its release:

These techniques permit the CIA to bypass the encryption of WhatsApp, Signal, Telegram, Wiebo, Confide and Cloackman by hacking the "smart" phones that they run on and collecting audio and message traffic before encryption is applied.

But the details don't seem to show that those apps are compromised, so much as that Android and iOS devices are compromised. It's always been true that if someone can get into your phone, the encryption scheme you use doesn't matter, because they can just pull keystrokes or grab data before you encrypt it -- in the same way that someone looking over your shoulder can read your messages as well. That's not a fault of the encryption or the app, but of the environment in which you're using the app itself.

And that should really be the bigger concern here. Over the years, nearly all of the focus on hacking mobile phones has been on the NSA and its capabilities, rather than the CIA. But it's now clear that the CIA has its own operations, akin to the NSA's hacking operations (kinda makes you wonder why we need that overlap). Except that the CIA's hacking team seems almost entirely unconcerned with following the federal government's rules on letting private companies know about vulnerabilities they've discovered.

Remember, the Obama White House put in place what it called the Vulnerabilities Equities Process, under which the intelligence community is supposed to default to letting private companies know about vulnerabilities. And, yes, this was always something of a joke, as there was a giant loophole involving "except for a clear national security or law enforcement need" that the NSA basically used to withhold vulnerabilities all the time. Still, at least the NSA appeared to get around to revealing some vulnerabilities eventually (probably once they were no longer useful).

Here, however, it looks like the CIA was hoarding some really serious vulnerabilities with wild abandon. In a chart released by Wikileaks, you can see that the CIA is getting these vulnerabilities from a variety of sources. Some it's finding itself, some it's purchasing, and some are shared by other agencies, such as the NSA or the UK's GCHQ. As Ed Snowden notes, there is now clear evidence (which many suspected, but which had not been proven) that the US government was secretly paying to keep US software unsafe and vulnerable. That's really dangerous. It puts basically everyone in much more serious danger, just so the CIA, NSA and others can get in when they want to.

This is why the whole conversation about mandating backdoors and "going dark" was so dangerous in the first place. Those were plans to force even more of these vulnerabilities into the wild, just for the very very rare cases where they were needed by law enforcement or intelligence.

At a time when the President is suddenly acting as if he's concerned about domestic surveillance (at least of himself), perhaps now would be a good time to crack down on this kind of stuff. I'm not holding my breath -- but, for now, we're getting a lot more insight into the CIA's electronic surveillance methods, and it sounds like there's more to come.

Filed Under: cia, encryption, hacking, nsa, phones, surveillance, vep, vulnerabilities, vulnerabilities equities program
Companies: wikileaks

Reader Comments


Thad, 7 Mar 2017 @ 1:59pm


    Yeah, there are a lot of reasons why security simply isn't the fundamental priority in software design that it should be. I'm hoping that, now that we've got languages like Rust and Go that can match C's performance without adopting its 1970-vintage approach to memory management, devs will start slowly making the transition, but a fully-functional OS based on those foundations is a long way off.

    (When was the last time a new, built-from-the-ground-up OS got a foothold? Windows NT? I don't think we can count OSX (based on FreeBSD) or Android or ChromeOS (both use the Linux kernel), and lesser-used OS's like Blackberry, WebOS, BeOS, and Tizen all seem like also-rans.)

    I think we're likely to see formal verification start to be adopted for highly secure, special-purpose OS's, but by its nature it's incredibly labor-intensive and has serious issues with scalability.

    Meanwhile, thanks to Android and the IoT, Linux-based OS's have proven not to be nearly the secure workhorses in consumer electronics that they are in the server market. Torvalds and the other core kernel developers have always focused on compatibility over security, and that's not likely to change. And honestly they kind of have a point -- it doesn't matter how secure you make your kernel if some jackass is going to stick it on a router that uses a hardcoded root password and an open telnet port and call it a day.
