Given Spy Agencies' Love For Exploits And Malware, It's Never Been More Dangerous To Be A Security Researcher

from the only-acceptable-form-of-security-is-'national,'-apparently dept

There’s probably never been a great time to be a security researcher, what with the attacks they suffer in response to their work, not to mention far too many corporations greeting the discovery of vulnerabilities with legal threats and criminal charges.

The widening adoption of the Wassenaar Arrangement's export controls threatens to treat the products of security research like weapons-grade plutonium, making every outbound flight with a laptop or USB drive akin to smuggling weapons out of the country.

But that’s not the end of it. Security researchers work in the same shadowy areas as national security agencies, as well as other government entities focused on espionage. These agencies not only create their own malware and exploits, but also purchase them from companies in the vulnerabilities business.

Security researchers generally want to expose holes and dangerous software. Almost every other entity involved would prefer to keep these hidden. Any malicious software exposed and patched into irrelevance could carry national security implications. This puts researchers in the crosshairs of both governments and their exploit suppliers. And the system — as it is — has very little in the way of built-in protections for legitimate security research.

A paper published by Juan Andrés Guerrero-Saade of Kaspersky Lab points out the inherent dangers of security research in an age when one form of security (national) routinely takes precedence over another form (general computing). [h/t Slashdot]

Though private research teams and intelligence agencies will follow similar intelligence production cycles, we must not conflate their attributes.

(1) Intelligence agencies benefit from cover for action, meaning that other governmental institutions do not find the agencies’ intelligence production activities suspect.
(2) Agency employees enjoy legal protections, even those involved in network exploitation activities. And finally,
(3) their work is shielded from political blowback or geopolitical incongruousness.

Each point is inversely applicable to security researchers and thus sets the tone for the power asymmetry:

1. Security researchers enjoy no cover for action for their production of intelligence reports into what may or may not constitute legitimate intelligence operations…
2. Security researchers are afforded no explicit legal protections for the grey areas regularly visited throughout the course of an investigation…
3. The companies too lack a cover for action and are in no way insulated from the political blowback that arises from the public disclosure of sensitive operations. They suffer from a further dimension of ‘guilt by association’ as research into sensitive operations and subsequent reporting is misconstrued as an act of geopolitical aggression when the victim and perpetrator are involved in any form of international tension…

On top of that, the “work cycles” are also inverted. Security researchers may not know they’re treading on the cyber-toes of state operatives until long after the research has begun. Contrast that with the operations of intelligence agencies/exploit marketers, who are only seeking holes rather than fixes and know with certainty what they’re deploying and whom they’re targeting. When security researchers stumble across vulnerabilities, they’re not always aware of their origin and may find themselves facing prosecution or, at the very least, government interference and/or additional surveillance.

But those outcomes are the least worrying of the possibilities. Security researchers may find themselves involved with governments willing to deploy far more severe tactics.

The researcher as a private individual faces unique challenges when in the cross-hairs of a nation-state actor determined to enact some form of retribution. The operator of an espionage campaign is not a common criminal nor a simple citizen and his resources are truly manifold. As a special class of government insider responsible for a sensitive operation, the attacker can go so far as to legitimize special recourse in order to neutralize the threat posed by the meddling security researcher. The options available slide relative to the nature of the attacker, ranging from civilized to unscrupulous, and include: subtle pressure, patriotic enlistment, bribery, compromise and blackmail, legal repercussions, threat to livelihood, threat to viability of life in the actor’s area of influence, threat of force, or elimination.

With no built-in protections for researchers, the potential negative outcomes of their work may outweigh the intangible rewards of the work itself — more secure computing.

With this in mind, a rather interesting contract award was posted to the Federal Business Opportunities (FBO) website. Kudu Dynamics — which has secured previous DARPA/Dept. of Defense contracts related to cybersecurity — apparently landed a $500,000 contract for a project that appears to give the company permission to spy on security researchers.

ACLU technologist Chris Soghoian phrased it this way:

DARPA awards 500k grant to spy on security vuln researchers. Seriously.

Here’s the synopsis of the project (emphasis added):

The goal of Kudu’s proposed effort, named “Internet Cyber Early Warning of Adversary Research and Development (ICEWARD)”, is to determine whether it is possible to gain actionable insight into the intent of a cyber adversary by observing specific behaviors. In particular, the proposers hypothesize that vulnerability researchers make use of public information and resources (such as search engines and websites) that are relevant to their missions, targets, and techniques in such a way that it is possible to glean part of their intent if only we could observe such use and differentiate it from noise (e.g., search engine crawlers). The basis for this hypothesis is both the proposers’ own experience as vulnerability researchers and a little-noticed incident in 2010-2011. The proposed approach will investigate the feasibility of creating highly tailored information resources whose access via the public network will be inherently highly correlated with vulnerability research. A second aspect of the proposed work entails the creation of an evaluation methodology for proving or disproving this hypothesis.

What it seems to suggest is that Kudu would be allowed to read over the shoulders of security researchers as they used public resources, separate that use from the “noise” of bot activity, and extrapolate the researchers’ intended goals.
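Stripped of the contracting language, the hypothesized approach sounds a lot like a canary-token scheme: plant an unlinked, tailored resource, log who fetches it, and triage the hits. Here's a rough sketch of that idea; to be clear, every name and heuristic below is my own illustrative assumption, not anything from Kudu's actual proposal:

```python
# Sketch of the ICEWARD-style hypothesis: plant a tailored resource,
# log accesses, and separate crawler "noise" from hits that might
# indicate deliberate vulnerability research. Purely illustrative.

KNOWN_CRAWLER_TOKENS = ("googlebot", "bingbot", "yandexbot", "baiduspider")

def classify_hit(user_agent: str, referrer: str) -> str:
    """Crude triage of a single access-log entry for a canary URL."""
    ua = user_agent.lower()
    if any(tok in ua for tok in KNOWN_CRAWLER_TOKENS):
        return "crawler-noise"
    # A referrer-less hit on an unlinked, tailored URL suggests the
    # visitor found it through targeted searching, not casual browsing.
    if not referrer:
        return "possible-researcher"
    return "unclassified"

hits = [
    ("Mozilla/5.0 (compatible; Googlebot/2.1)", ""),
    ("curl/8.4.0", ""),
    ("Mozilla/5.0 (X11; Linux x86_64)", "https://example.com/index"),
]
print([classify_hit(ua, ref) for ua, ref in hits])
# → ['crawler-noise', 'possible-researcher', 'unclassified']
```

Even this toy version shows why the "differentiate it from noise" step matters: a naive filter mislabels any scripted client as a researcher, so the real work would be in the correlation, not the logging.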

I’m not as convinced as Soghoian that this means Kudu will be allowed to spy on/tap into security researchers’ web browsing. It could mean that Kudu plans to use its own experience in the security research field to help refine processes and programs put in place to counter malicious activity — a lot of which can seem almost indistinguishable from legitimate security research. This determination of intent would allow government agencies to focus on actual threats, rather than wasting resources chasing down legitimate security research.

Then again, certain government agencies would greatly benefit from this advance knowledge. It would give them a heads-up if researchers were probing exploits, vulnerabilities or malware they’d rather keep under wraps and in working condition.

What makes this murkier than it perhaps should be is the fact that the FBO scrubbed the synopsis from the listing a few hours after people began talking about it. (The EFF preserved the original post as a PDF.) There may be any number of non-nefarious reasons for doing so, but considering most of the discussion centered around the theory that a government contractor was getting paid to spy on security researchers, the sudden burial of this contract info seems slightly suspicious.



Comments on “Given Spy Agencies' Love For Exploits And Malware, It's Never Been More Dangerous To Be A Security Researcher”

Anonymous Coward says:

The very purpose of free speech is so that we can say things the government doesn’t want us to say. Otherwise what’s the point? If security weaknesses are speech the government doesn’t want expressed, then that’s exactly the type of speech free speech laws are designed to protect, in order to make the public aware of any security vulnerabilities that could negatively impact them and to encourage companies to fix those vulnerabilities.

Anonymous Coward says:

It's not just spy agencies

Oracle’s CSO has blasted security researchers — of course she has, Oracle’s products are insecure junk and she’s well-paid to help cover that up. The automakers are desperately trying to halt independent research into the crappy hardware, firmware, and software that they’re installing in vehicles. The MPAA and RIAA and their counterparts elsewhere are busy equating security research with piracy. The voting machine makers are covering up their massive failures and systemic corruption by attacking security researchers.

Pretty much everyone who’s making money off security failures is pushing hard to have their critics turned into criminals so that they can use the power of the state to silence them. And thanks to the combination of Congressional ignorance and stupidity with the power of lobbying and the reach of campaign contributions (i.e., bribes) it’s working.

Ninja (profile) says:

Re: It's not just spy agencies

It will work this way till such security vulnerabilities start costing tons of money. Money speaks louder than anything, including human rights and ethics. So the path to solve this issue (and others) is to help make the problems cause as much economic damage as possible. See environmental issues. When extreme events started threatening the money flow, companies started to worry a bit more about them.

Today the most effective way to change something is to cause a disruption to the money flow. Preferably one that cannot be stopped and does not depend on human interference to keep growing if the problem is not tackled.

any moose cow word says:

Re: Re: It's not just spy agencies

It’s already costing tons of money, just usually not to those who are actually responsible for the slipshod security that causes damages and costs to others. Until the financial burden is shifted to companies that fail to sensibly secure their systems, not a single damn thing will change.
