When code is treated like weapons, bad things happen. Governing bodies have previously treated encryption as weaponry, ensuring that only the powerful will have access to strong encryption while the general public must make do with weaker or compromised variants.
More recently, the US government went after the creator of a 3D-printed gun, claiming the very existence of printing instructions violated international arms regulations. So, it's not just the end result (the actual weapon) that's potentially covered under this ban, but the data and code itself. That's currently being fought in court, carrying with it some potentially disturbing implications for several Constitutional rights.
Now, it appears the conflation of physical weapons with weaponized code may make things much, much worse. The EFF notes that the US government's adoption of recommended changes to an international arms trafficking agreement (the Wassenaar Arrangement) will likely cause very serious problems for security researchers and analysts in the future.
The Bureau of Industry and Security's (BIS) version of the Wassenaar Arrangement's 2013 amendments contains none of the recommended security research exceptions and vastly expands the amount of technology subject to government control.
Specifically, the BIS proposal would add to the list of controlled technology:
Systems, equipment, components and software specially designed for the generation, operation or delivery of, or communication with, intrusion software include network penetration testing products that use intrusion software to identify vulnerabilities of computers and network-capable devices.
Technology for the development of intrusion software includes proprietary research on the vulnerabilities and exploitation of computers and network-capable devices.
On its face, it appears that BIS has just proposed prohibiting the sharing of vulnerability research without a license.
As if things weren't already dangerous enough for security researchers -- what with companies responding with threats, rather than apologies and appreciation, when informed of security holes, and the US government always resting its finger on the CFAA trigger -- violating the terms of this agreement could see researchers facing fines of up to $1 million and/or 20 years in prison.
Wassenaar was originally limited to physical items used in conventional weapons, like guns, landmines and missiles. It was amended in December 2013 to include surveillance tech, mainly in response to stories leaking out about Western companies like Gamma (FinFisher) and Hacking Team selling exploits and malware to oppressive governments, which then used these tools to track down dissidents and journalists.
The push to regulate the distribution of these tools had its heart in the right place, but the unintended consequences will keep good people from doing good things, while doing very little to prevent bad people from acquiring and deploying weaponized software.
The Wassenaar Arrangement's attempt to wrestle a mostly ethereal problem into regulatable form was, for the most part, handled well. It defined the software it intended to control very narrowly and provided some essential exceptions:
Notably, the controls are not intended to apply to software or technology that is generally available to the public, in the public domain, or part of basic scientific research.
But, even so, it still contained the potential to do more harm than good.
We have significant problems with even the narrow Wassenaar language; the definition risks sweeping up many of the common and perfectly legitimate tools used in security research.
Either interpretation (Wassenaar's or the BIS's) is a problem. The BIS version is much worse, but both will result in a less-secure computing world, despite being implemented with an eye on doing the opposite, as Robert Graham at Errata Security points out:
[G]ood and evil products are often indistinguishable from each other. The best way to secure your stuff is for you to attack yourself.
That means things like bug bounties that encourage people to find 0-days in your software, so that you can fix them before hackers (or the NSA) exploit them. That means scanning tools that hunt for any exploitable conditions in your computers, to find those bugs before hackers do. Likewise, companies use surveillance tools on their own networks (like intrusion prevention systems) to monitor activity and find hackers.
Thus, while Wassenaar targets evil products, they inadvertently catch the bulk of defensive products in their rules as well.
And the results will disproportionately harm those who need these protections the most. This is the end result of controls written with physical items in mind (items that originate from physical manufacturing plants and travel on physical means of conveyance) but copy-pasted to handle "items" that can traverse the internet with no known originating point.
That's not to say export controls would have no leverage. For example, these products usually require an abnormally high degree of training and technical support that can be tracked. However, the little good export controls provide is probably outweighed by the harm -- such as preventing dissidents in the affected countries from being able to defend themselves. We know they do little good now because we watch Bashar Al Assad brandish the latest iPhone that his wife picked up in Paris. Such restrictions may stop the little people in his country from getting things -- but they won't stop him.
The "open-source" exception in Wassenaar can be useful, up to a point. Researchers could post their findings to Github, as Graham points out, to ensure they're still protected. This, of course, means the Arrangement is still mostly useless, as the moment it's put into the public domain, any entity cut out of the distribution loop by this agreement can immediately make use of posted vulnerabilities and exploits. It also makes research destined to be open-sourced forbidden weaponry until the point it's actually made public. So, a laptop full of research is a prohibited weapon, while a Github post containing the same is not:
When security researchers discover a 0-day, they typically write a proof-of-concept exploit, then present their findings at the next conference. That means they have unpublished code on their laptop, code that they may make public later, but which is not yet technically open-source. If they travel outside the country, they have technically violated both the letter and the spirit of the export restrictions, and can go to jail for 20 years and be forced to pay a $1 million fine.
Thus, make sure you always commit your latest changes to GitHub before getting on a plane.
Statements made by the BIS aren't exactly comforting. The BIS's implementation doesn't include an open-source exception, but supposedly a project's open-source status will still be taken into consideration when the US government starts throwing around fines and prison sentences. Randy Wheeler of the BIS:
"We generally agree that vulnerability research is not controlled, nor is the technology related to choosing a target or finding a target, controlled." However, she undermined her message by stating that any software that is used to help develop 0-day exploits for sale would be covered by the proposal.
Again, bad for researchers. This gives the government leeway to impute intent when prosecuting, because the allowed and the forbidden look very similar while still in their formative stages.
[T]he only difference between an academic proof of concept and a 0-day for sale is the existence of a price tag.
Even if the exploit is not on the market at the point the government steps in, it would take very little to insinuate that it would have been headed to market, if not for the speedy intervention of regulators.
There is some good news, however. The BIS is accepting comments on its proposed adoption (and partial rewrite) of the amendments to the Wassenaar Arrangement. The comment period ends on July 20, 2015, so sooner rather than later would be good if you're interested in steering the government away from doing further damage to the livelihoods of security researchers.