The past year has been pretty rough for security -- with Heartbleed, Superfish, remotely hacked cars and all sorts of personal information getting into the wrong hands. Encryption and security are quickly becoming mainstream topics, as it gets harder and harder to keep your head in the sand about technology risks. Of course, perfect security doesn't really exist, but I'm sure we'll have some folks demanding it soon.
The United States spends more than $50 billion a year on spying and intelligence, while the folks who build important defense software — in this case a program called OpenSSL that ensures that your connection to a website is encrypted — are four core programmers, only one of whom calls it a full-time job.
In a typical year, the foundation that supports OpenSSL receives just $2,000 in donations. The programmers have to rely on consulting gigs to pay for their work. "There should be at least a half dozen full time OpenSSL team members, not just one, able to concentrate on the care and feeding of OpenSSL without having to hustle commercial work," says Steve Marquess, who raises money for the project.
Is it any wonder that this Heartbleed bug slipped through the cracks?
Dan Kaminsky, a security researcher who saved the Internet from a similarly fundamental flaw back in 2008, says that Heartbleed shows that it's time to get "serious about figuring out what software has become Critical Infrastructure to the global economy, and dedicating genuine resources to supporting that code."
The Obama Administration has said it is doing just that with its national cybersecurity initiative, which establishes guidelines for strengthening the defense of our technological infrastructure — but it does not provide funding for the implementation of those guidelines.
Instead, the National Security Agency, which has responsibility to protect U.S. infrastructure, has worked to weaken encryption standards. And so private websites — such as Facebook and Google, which were affected by Heartbleed — often use open-source tools such as OpenSSL, where the code is publicly available and can be verified to be free of NSA backdoors.
The federal government spent at least $65 billion between 2006 and 2012 to secure its own networks, according to a February report from the Senate Homeland Security and Government Affairs Committee. And many critical parts of the private sector — such as nuclear reactors and banking — follow sector-specific cybersecurity regulations.
But private industry has also failed to fund its critical tools. As cryptographer Matthew Green says, "Maybe in the midst of patching their servers, some of the big companies that use OpenSSL will think of tossing them some real no-strings-attached funding so they can keep doing their job."
In the meantime, the rest of us are left with the unfortunate job of changing all our passwords, which may have been stolen from websites that were using the broken encryption standard. It's unclear whether the bug was exploited by criminals or intelligence agencies. (The NSA says it didn't know about it.)
It's worth noting, however, that the risk of your passwords being stolen via Heartbleed is still lower than the risk of your passwords being taken from a website that failed to protect them properly. Criminals have so many ways to obtain your information these days — by sending you a fake email from your bank or hacking into a retailer's unguarded database — that it's unclear how many would have gone through the trouble of exploiting this encryption flaw.
The problem is that if your passwords were hacked by the Heartbleed bug, the hack would leave no trace. And so, unfortunately, it's still a good idea to assume that your passwords might have been stolen.
So, you need to change them. If you're like me, you have way too many passwords. So I suggest starting with the most important ones — your email passwords. Anyone who gains control of your email can click "forgot password" on your other accounts and get a new password emailed to them. As a result, email passwords are the key to the rest of your accounts. After email, I'd suggest changing banking and social media account passwords.
But before you change your passwords, you need to check whether the website has been patched. You can test whether a site has been patched by typing the URL here. (Look for the green highlighted "Now Safe" result.)
If the site has been patched, change your password. If it hasn't, wait until it has been patched before changing your password.
A reminder about how to make passwords: Forget all the password advice you've been given about using symbols and not writing down your passwords. Only two things matter: don't reuse passwords across websites, and the longer the password, the better.
I suggest using password management software, such as 1Password or LastPass, to generate the vast majority of your passwords. And for email, banking and the password to your password manager, I suggest a method of picking random words from a dictionary, called Diceware. If that seems too hard, just make your password super long — at least 30 or 40 characters, if possible.
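To make the Diceware idea concrete, here's a rough sketch in Python. Real Diceware rolls five dice per word against a published list of 7,776 words; the small word list below is a placeholder for illustration only, and the function name is mine, not part of any library.

```python
# A rough sketch of Diceware-style passphrase generation.
# Real Diceware uses five dice rolls per word against a published
# 7,776-word list; this tiny placeholder list is illustrative only.
import secrets

WORDLIST = [
    "correct", "horse", "battery", "staple", "orbit", "velvet",
    "cactus", "lantern", "pebble", "walrus", "quiver", "maple",
]

def diceware_passphrase(num_words: int = 6, wordlist=WORDLIST) -> str:
    """Pick words uniformly at random; each additional word adds entropy."""
    return " ".join(secrets.choice(wordlist) for _ in range(num_words))

phrase = diceware_passphrase()
print(phrase)  # e.g. "walrus maple orbit cactus pebble horse"
```

With the real 7,776-word list, each word contributes about 12.9 bits of entropy, so a six-word passphrase lands near 77 bits — long, memorable, and resistant to brute force.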
Given the speed of the arrest, it would not appear that Solis-Reyes did very much to cover his tracks. In fact, reports say he did nothing to hide his IP address. He's a computer science student -- and his father is a CS professor with a specialty in data mining. It seems at least reasonably likely that the "hack" was more of a "test" to see what could be done with Heartbleed and (perhaps) an attempt to show off how risky the bug could be, rather than anything malicious. It will be interesting to see how he is treated by Canadian officials, compared to, say, the arrests of Aaron Swartz and weev.
Well, this is interesting. I naturally assumed that when the various researchers first discovered Heartbleed, they told the government about it. While I know that some people think this is crazy, it is fairly standard practice, especially for a bug as big and as problematic as Heartbleed. However, the National Journal has an article suggesting that Google deliberately chose not to tell the government about Heartbleed. No official reason is given, but assuming this is true, it wouldn't be difficult to understand why. Google employees (especially on the security side) still seem absolutely furious about the NSA hacking into Google's data centers, and various other privacy violations. When a National Journal reporter contacted Google about the issue, note the response:
Asked whether Google discussed Heartbleed with the government, a company spokeswoman said only that the "security of our users' information is a top priority" and that Google users do not need to change their passwords.
Here's the thing: if the NSA hadn't become so focused on hacking everyone, it wouldn't be in this position. The NSA's dual offense and defense role has poisoned the waters, such that no company can or should trust the government to do the responsible thing and help secure vulnerable systems any more. And for that, the government only has itself to blame.
Somewhat late to the game (by about a week) after the Heartbleed vulnerability was publicly revealed, and a few days after it was reported and denied that the NSA was already well aware of Heartbleed and exploiting it, the NSA has put out a one-page PDF about Heartbleed. This seems like something of a too-little, too-late effort by the NSA to live up to its semi-promise of a "bias" towards revealing vulnerabilities over exploiting them. However, that leads to the simple question that plenty of people should be asking: given everything you've learned about the NSA recently (or, well, for years), would you trust the NSA's advice on how to deal with Heartbleed? Not that I think the NSA would publicly suggest anything bad, but at this point, the NSA has a serious trust problem in convincing anyone engaged in computer security that it has their best interests in mind.
The web is a dangerous place these days. Akamai, which many large companies rely on for hosting as a CDN, has admitted that its Heartbleed patch was faulty, meaning that it was possible that the SSL keys "could have been exposed to an adversary exploiting the Heartbleed vulnerability." Akamai had already noted that it was more protected against Heartbleed than others, because of custom code it had used for its own OpenSSL deployment. However, as researchers looked through that custom code, they found some significant defects in it. Some people have been arguing that the Heartbleed bug highlights a weakness in open source software -- but that's not necessarily true. Pretty much all software has vulnerabilities. And, sometimes, by open sourcing stuff you can find those vulnerabilities faster.
We've already been discussing how President Obama has told the NSA it can continue exploiting computer security flaws, rather than fixing them, and also how the NSA's offensive and defensive roles are incompatible with each other. However, I wanted to highlight a more concerning point raised by Julian Sanchez in the article about the NSA's dual role: even granting that the NSA might not have known about Heartbleed until it became public, the agency could still use it to its advantage, in part because it has so much old encrypted data stored up:
Here, however, is the really crucial point to recognize: NSA doesn't need to have known about Heartbleed all along to take advantage of it.
The agency's recently-disclosed minimization procedures permit "retention of all communications that are enciphered." In other words, when NSA encounters encryption it can't crack, it's allowed to – and apparently does – vacuum up all that scrambled traffic and store it indefinitely, in hopes of finding a way to break into it months or years in the future. As security experts recently confirmed, Heartbleed can be used to steal a site's master encryption keys – keys that would suddenly enable anyone with a huge database of encrypted traffic to unlock it, at least for the vast majority of sites that don't generate new keys as a safeguard against retroactive exposure.
If NSA moved quickly enough – as dedicated spies are supposed to – the agency could have exploited the bug to steal those keys before most sites got around to fixing the bug, gaining access to a vast treasure trove of stored traffic.
As Sanchez notes, this creates a dilemma for those who discover such flaws. Normally, they should want to reveal such things to the NSA to help with protecting networks. But doing so now might expose more risk. And, in fact, it seems likely that the NSA was aware of the bug prior to its revelation to the public. Note that in its denial of the Bloomberg story, it just says it wasn't aware prior to "April 2014," but not on which date in April it found out about it. Thus, it's likely the NSA had a heads up, and could collect a bunch of private keys to use against its encrypted data store for a few days before everyone else was informed to fix the vulnerability.
Last week there was some confusion as Bloomberg published a story claiming that the NSA was well aware of the Heartbleed bug and had been exploiting it for "at least" two years. That seemed fairly incredible, given that the bug had only been around for slightly over two years. The NSA came out with a pretty strongly worded denial -- which left out much of the usual equivocation and tricky wording that the NSA normally uses in denying things. The general consensus seems to be that it is, in fact, unlikely that the NSA knew about Heartbleed (though that makes some wonder if some team at the NSA is now in trouble for not figuring it out). If anything, it seems likely that the Bloomberg reporters got confused by other programs that the NSA is known to have to break parts of SSL, something it's supposedly been able to do since around 2010.
However, the NY Times had a story this weekend about how this move has forced the administration to clarify its position on zero day exploits. It's already known that the NSA buys lots of zero day exploits and makes the internet weaker as a result. Though, in the past, the NSA has indicated that it only makes use of exploits requiring such immense computing power that no one outside the NSA is likely to be able to use them. However, the NY Times article notes that, following the White House's intelligence review task force recommendation that the NSA stop weakening encryption and other technologies, President Obama put in place an official rule that the NSA should have a "bias" towards revealing the flaws and helping to fix them, but leaves open a massive loophole:
But Mr. Obama carved a broad exception for “a clear national security or law enforcement need,” the officials said, a loophole that is likely to allow the N.S.A. to continue to exploit security flaws both to crack encryption on the Internet and to design cyberweapons.
Amusingly, the NY Times initially had a title on its story saying that President Obama had decided that the NSA should "reveal, not exploit, internet security flaws," but the title then changed to the much more accurate: "Obama Lets N.S.A. Exploit Some Internet Flaws, Officials Say."
Of course, the cold war analogy used by people in the article seems... wrong:
“We don’t eliminate nuclear weapons until the Russians do,” one senior intelligence official said recently. “You are not going to see the Chinese give up on ‘zero days’ just because we do.”
Except, it's meaningless that no one expects the Chinese (or the Russians or anyone else) to give up zero days. The simple fact is that if the NSA were helping to stop zero days that would better protect everyone against anyone else using those zero days. In fact, closing zero days is just like disarming both sides, because it takes the vulnerability out of service. It's not about us giving up our "weapons," it's about building a better defense for the world. And yet the NSA isn't willing to do that. Because they're not about protecting anyone -- other than themselves.
Update: The NSA has denied the Bloomberg report, briefly stating that the agency "was not aware of the recently identified Heartbleed vulnerability until it was made public." We'll continue to update as more information emerges.
While it's not news that the NSA hunts down and utilizes vulnerabilities like this, the extreme nature of Heartbleed is going to draw more scrutiny to the practice than ever before. As others have noted, failing to reveal the bug so it could be fixed is contrary to at least part of the agency's supposed mission:
Ordinary Internet users are ill-served by the arrangement because serious flaws are not fixed, exposing their data to domestic and international spy organizations and criminals, said John Pescatore, director of emerging security trends at the SANS Institute, a Bethesda, Maryland-based cyber-security training organization.
“If you combine the two into one government agency, which mission wins?” asked Pescatore, who formerly worked in security for the NSA and the U.S. Secret Service. “Invariably when this has happened over time, the offensive mission wins.”
There is, in fact, a massive hypocrisy here: the default refrain of NSA apologists is that all these questionable things they do are absolutely necessary to protect Americans from outside threats, yet they leave open a huge security hole that is just as easily exploited by foreign entities. Or consider the cybersecurity bill CISPA, which was designed to allow private companies to share network security information with the intelligence community, and vice versa, supposedly to assist in detecting and fixing security holes and cyber attacks of various kinds. But, especially after this revelation about Heartbleed, can there be any doubt that the intelligence community is far more interested in using backdoors than it is in closing them?
It's not too surprising that one of the first questions many people have been asking about the Heartbleed vulnerability in OpenSSL is whether or not it was a backdoor placed there by intelligence agencies (or other malicious parties). And, even if that wasn't the case, a separate question is whether or not intelligence agencies found the bug earlier and have been exploiting it. So far, the evidence is inconclusive at best -- and part of the problem is that, in many cases, it would be impossible to go back and figure it out. The guy who introduced the flaw, Robin Seggelmann, seems rather embarrassed about the whole thing but insists it was an honest mistake:
Mr Seggelmann, of Munster in Germany, said the bug which introduced the flaw was "unfortunately" missed by him and a reviewer when it was introduced into the open source OpenSSL encryption protocol over two years ago.
"I was working on improving OpenSSL and submitted numerous bug fixes and added new features," he said.
"In one of the new features, unfortunately, I missed validating a variable containing a length."
After he submitted the code, a reviewer "apparently also didn’t notice the missing validation", Mr Seggelmann said, "so the error made its way from the development branch into the released version." Logs show that reviewer was Dr Stephen Henson.
Mr Seggelmann said the error he introduced was "quite trivial", but acknowledged that its impact was "severe".
Later in that same interview, he insists he has no association with intelligence agencies, and also notes that it is "entirely possible" that intelligence agencies had discovered the bug and had made use of it.
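The "missing validation" Seggelmann describes is a bounds check on a length field supplied by the remote peer. A small Python simulation (not the actual OpenSSL C code — the names and structure here are mine, for illustration) shows why forgetting that one check leaks memory:

```python
# Illustrative simulation of the Heartbleed flaw -- NOT the real OpenSSL code.
# A heartbeat request carries a payload plus a *claimed* payload length;
# the buggy responder trusts the claimed length and reads past the payload.
import struct

def parse_heartbeat(record: bytes):
    """Split a heartbeat record into (type, claimed length, actual payload)."""
    hb_type = record[0]
    (claimed_len,) = struct.unpack(">H", record[1:3])
    return hb_type, claimed_len, record[3:]

def respond_buggy(record: bytes, process_memory: bytes) -> bytes:
    # Vulnerable: echoes back claimed_len bytes even when the actual payload
    # is shorter, leaking whatever happens to sit next to it in memory.
    _, claimed_len, data = parse_heartbeat(record)
    buffer = data + process_memory  # simulate adjacent heap contents
    return buffer[:claimed_len]

def respond_fixed(record: bytes, process_memory: bytes) -> bytes:
    # Fixed: drop the record if the claimed length exceeds the real payload.
    _, claimed_len, data = parse_heartbeat(record)
    if claimed_len > len(data):
        return b""  # silently discard the malformed heartbeat
    return data[:claimed_len]

secrets_in_memory = b"user=alice&password=hunter2"
# Heartbeat type 1 (request), claimed length 100, actual payload "hi" (2 bytes)
malicious = b"\x01" + struct.pack(">H", 100) + b"hi"

leaked = respond_buggy(malicious, secrets_in_memory)
print(leaked)  # b"hi" plus the simulated adjacent secrets
print(respond_fixed(malicious, secrets_in_memory))  # empty: request rejected
```

The fix that went into OpenSSL did essentially what `respond_fixed` does: compare the claimed payload length against the actual record length and silently discard mismatches.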
Another oddity in all of this is that, even though the flaw itself was introduced two years ago, two separate parties appear to have discovered it on the exact same day. Vocativ, which has a great story giving the behind-the-scenes on the discovery by Codenomicon, mentions the following in passing:
Unbeknownst to Chartier, a little-known security researcher at Google, Neel Mehta, had discovered and reported the OpenSSL bug on the same day. Considering the bug had actually existed since March 2012, the odds of the two research teams, working independently, finding and reporting the bug at the same time was highly surprising.
Highly surprising. But not necessarily indicative of anything. It could be a crazy coincidence. Kim Zetter, over at Wired, explores the "did the NSA know about Heartbleed" angle, and points out accurately that while the bug is catastrophic in many ways, what it's not good for is targeting specific accounts. The whole issue with Heartbleed is that it "bleeds" chunks of memory that are on the server. It's effectively a giant crapshoot as to what you get when you exploit it. Yes, it bleeds all sorts of things: including usernames, passwords, private keys, credit card numbers and the like -- but you never quite know what you'll get, which makes it potentially less useful for intelligence agencies. As that Wired article notes, at best, using the Heartbleed exploit would be "very inefficient" for the NSA.
But that doesn't mean there aren't reasons to be fairly concerned. Peter Eckersley, over at EFF, has tracked down at least one potentially scary example that may very well be someone exploiting Heartbleed back in November of last year. It's not definitive, but it is worth exploring further.
The second log seems much more troubling. We have spoken to Ars Technica's second source, Terrence Koeman, who reports finding some inbound packets, immediately following the setup and termination of a normal handshake, containing another Client Hello message followed by the TCP payload bytes 18 03 02 00 03 01 40 00 in ingress packet logs from November 2013. These bytes are a TLS Heartbeat with contradictory length fields, and are the same as those in the widely circulated proof-of-concept exploit.
Koeman's logs had been stored on magnetic tape in a vault. The source IP addresses for the attack were 126.96.36.199 and 188.8.131.52. Interestingly, those two IP addresses appear to be part of a larger botnet that has been systematically attempting to record most or all of the conversations on Freenode and a number of other IRC networks. This is an activity that makes a little more sense for intelligence agencies than for commercial or lifestyle malware developers.
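The contradiction in those eight bytes is easy to verify by hand. A short decode (following the TLS record layout and the heartbeat format from RFC 6520) shows the record claims to carry only 3 bytes, while the heartbeat inside claims a 16,384-byte payload:

```python
# Decoding the eight TCP payload bytes reported in the November 2013 logs.
import struct

packet = bytes.fromhex("1803020003014000")

# TLS record header: content type, version major/minor, record length
content_type, ver_major, ver_minor, record_len = struct.unpack(">BBBH", packet[:5])
assert content_type == 0x18            # 24 = heartbeat content type (RFC 6520)
assert (ver_major, ver_minor) == (3, 2)  # TLS 1.1
assert record_len == 3                 # the record body is only 3 bytes

# Heartbeat message inside the record: type and claimed payload length
hb_type, payload_len = struct.unpack(">BH", packet[5:8])
assert hb_type == 1                    # heartbeat_request
assert payload_len == 0x4000           # claims 16,384 bytes of payload...

# ...but the 3-byte record body is fully consumed by the heartbeat header,
# leaving zero actual payload bytes. A vulnerable server would reply with
# up to 16KB of whatever sits in memory -- the same contradiction as the
# widely circulated proof-of-concept exploit.
actual_payload = record_len - 3
print(payload_len, actual_payload)     # claimed 16384, actual 0
```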
EFF is asking people to try to replicate Koeman's findings, while also looking for any other possible evidence of Heartbleed exploits being used in the wild. As it stands now, there doesn't seem to be any conclusive evidence that it was used -- but that doesn't mean it wasn't being used. After all, it's been known that the NSA has a specific program designed to subvert SSL, so there's a decent chance that someone in the NSA could have discovered this bug earlier, and rather than doing its job and helping to protect the security of the internet, chose to use it to its own advantage first.