Why The NSA's Vulnerability Equities Process Is A Joke (And Why It's Unlikely To Ever Get Better)

from the 'national'-security-still-the-best-kind-of-security,-apparently dept

Two contributors to Lawfare -- offensive security expert Dave Aitel and former GCHQ information security expert Matt Tait -- take on the government's Vulnerability Equities Process (VEP), which is back in the news thanks to a group of hackers absconding with some NSA zero-days.

The question is whether the VEP is being used properly. If the NSA discovered its exploits had been accessed by someone other than its own TAO (Tailored Access Operations) team, why did it choose to keep those exploits secret rather than inform the affected developers? The vulnerabilities exposed so far seem to date back as far as 2013, but only now, after the Shadow Brokers published the details, are companies like Cisco actually aware of these issues.

According to Lawfare's contributors, there are several reasons why the NSA would have kept quiet, even when confronted with evidence that these tools might be in the hands of criminals or antagonistic foreign powers. They claim the entire process -- which is supposed to push the NSA, FBI, et al. toward disclosure -- is broken. But not for the reasons you might think.

The Office of the Director of National Intelligence claimed last year that the NSA divulges 90% of the exploits it discovers. Nowhere in that statement was there any detail about what the NSA considers an acceptable timeframe for disclosure. It's always been assumed the NSA turns these exploits over to developers only after they're no longer useful. The Obama administration may have reiterated the presumption of openness when reacting to yet another Snowden leak, but it also made clear that national security concerns will always trump personal security concerns -- even if the latter have the potential to affect far more people.

The main thrust of the Lawfare article is that the "broken" part of the equities process is the presumption of disclosure itself. The authors point out that it might take years to discover or develop a useful exploit and that -- given the nature of the NSA's business -- the agency should be under no pressure to make timely disclosures to the developers whose software or hardware it is exploiting.

[F]rom an operational standpoint, it takes about two years to fully utilize and integrate a discovered vulnerability. For the intelligence officer charged with managing the offensive security process, the VEP injects uncertainty by requiring inexpert intergovernmental oversight of the actions of your offensive teams, effectively subjects certain classes of bugs to time limits and eventual public exposure—all without any strategic or tactical thought governing the overall process.


Individual exploitable software vulnerabilities are difficult to find in the first place. But to engineer the discovered vulnerability into an operationally deployable exploit that can bypass modern anti-exploit defenses is far harder. It is a challenge to get policymakers to appreciate how rare the skills are for building operationally reliable exploits. The skillset exists almost exclusively within the IC and in a small set of commercial vendors (many of whom were originally trained in intelligence). This is not an area where capacity can be easily increased by throwing money at it—meaningful development here requires monumental investment of time and resources in training and cultivating a workforce, as well as crafting mechanisms to identify traits of innate talent.

The authors do point out that disclosure can also be useful to intelligence services. If these disclosures result in safer computing for everyone else, then that's apparently an acceptable side effect.

[T]here are three major, non-technical reasons for vulnerability disclosure.

First, disclosure can provide cover in the event that an OPSEC failure leads you to believe a zero-day has been compromised—if there is a heightened risk of malicious use, it allows the vendor time to patch. Second, disclosing to vendors allows the government to out an enemy’s zero-day vulnerability without disclosing how it was found. And third, government disclosure can form the basis of building a better relationship with Silicon Valley.

Saddling intelligence agencies with a presumption of disclosure is possibly a dangerous idea. Less-than-useful exploits that could be divulged to developers might be tied to other exploits still being deployed by intelligence services. Any suggested timeframe for mandatory disclosure would likely cause further harm by forcing the NSA, FBI, etc. to turn over exploits just as they're generating optimal results. On top of that, the authors point out that a push towards disclosure hamstrings US intelligence services as agencies in unfriendly nations will never be constrained by requirements to put the public ahead of their own interests.

But the process is definitely broken, no matter whose side of the argument you take. The NSA says it discloses 90% of the vulnerabilities it discovers, but former personnel involved in these operations note they've never seen a vulnerability disclosed during their years in the agency.

It's unlikely that the process will ever be fixed to everyone's satisfaction. The most likely scenario is that the VEP will continue to trundle along doing absolutely nothing while being ineffectually attacked by those opposing intelligence community secrecy. As it stands now, the presumption of disclosure is completely subject to any national security concerns raised by intelligence and law enforcement agencies. Occasional political climate shifts may provoke transparency pledges from various administrations, but those should be viewed as sympathetic noises -- presidential pats on the head meant to fend off troubling questions and legislative pushes to put weight behind the administration's words.

Filed Under: 0days, exploits, nsa, sharing, surveillance, vep, vulnerabilities, vulnerabilities equities process, zero days

Reader Comments


    That One Guy (profile), 20 Aug 2016 @ 11:32am

    Re: Two years?

Yeah, there's no way it takes them that long to exploit a found vulnerability; I imagine that's just an excuse not to report it sooner.

    'It takes us two years to really begin to fully exploit a vulnerability, and after that it might be good for a few more years, which means reporting it sooner would take a valuable tool away from us before we can really use it. You don't want us to be unable to protect the public, do you?'

    Alternatively they're all so incredibly incompetent that it does indeed take them that long to figure out how to use an exploit, though that's not much better really.
